Conference Keynote: Assessing Critical Infrastructure Dependencies and Interdependencies
Scott F. Breor
Deputy Assistant Secretary (Acting), Office of Infrastructure Protection
U.S. Department of Homeland Security
Today’s infrastructure is connected to many other assets, systems, and networks on which it depends for normal day-to-day operations. These connections, or dependencies, may be geographically limited or may span great distances. The many points of connection, and their geographic distribution, make the infrastructure environment much more complex. The U.S. Department of Homeland Security (DHS) works to strengthen critical infrastructure security and resilience by generating greater understanding and action across a (largely) voluntary partnership landscape. This is achieved by working with private and public infrastructure stakeholders to close infrastructure security and resilience knowledge gaps, inform infrastructure risk management decisions, identify resilience-building opportunities and strategies, and improve information sharing among stakeholders through a collaborative partnership approach. This talk highlights the Department’s efforts to present a more comprehensive picture of security and resilience through a “system of systems” approach.
Titans of Simulation
Peter I. Frazier
Associate Professor, Cornell University; Staff Data Scientist and Data Science Manager at Uber
Bridging the Gap from Academic Research to Industry Impact
Academic methodological research is often done with the hope of creating mathematical methods that will be used in practice. At the same time, there is a significant gap between publishing papers and having the methods described actually be used in industry. In this talk, we offer advice for bridging this gap. We discuss challenges arising from a difference in focus between academic and industry research, and also an incomplete awareness within academia of the full context in which methods are deployed in industry. We then discuss strategies for overcoming these challenges, describing them using examples from the presenter’s experiences as a data science manager at Uber working on Uber’s carpooling product, UberPOOL, and as an academic developing Bayesian optimization algorithms for use at Yelp and the Bayesian optimization startup company SigOpt. This talk is aimed at academics who want their research to be used in industry, soon-to-graduate PhD students who are making a leap into an industry career, and practitioners interested in exploring ways to be more effective.
Emeritus Professor, University of Southampton
Creating a Real Impression: Visual Statistical Analysis
Many powerful statistical methods available for studying simulation output are under-appreciated, and consequently under-used, because they are considered hard to understand, arcanely mathematical, and difficult to implement. Such methods can invariably be implemented using data-driven resampling methods, making their underlying rationale quite transparent. Little formal mathematics is needed, and what there is can be made visually obvious, with methods and results explained and presented using figures and graphs, often with dynamic animation. This approach to studying simulation output will be illustrated by a number of examples drawn from real applications and simulation studies. A bonus of the approach is that it is quite easy to create one’s own ‘bespoke’ method of analysis tailored to a particular problem. Such an example will be presented and analyzed in ‘real time’ during the talk itself, enabling the results to be displayed immediately.
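As a minimal illustration of the resampling idea described above (a sketch, not material from the talk itself), a percentile bootstrap can turn an otherwise mathematical procedure, such as a confidence interval for the mean of simulation output, into a transparent recompute-and-sort exercise. The waiting-time data below are hypothetical, generated only to make the example self-contained.

```python
import random
import statistics

def bootstrap_ci(output, n_resamples=5000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean of
    simulation output: resample with replacement, recompute the
    statistic each time, and read off the empirical quantiles."""
    rng = random.Random(seed)
    n = len(output)
    means = sorted(
        statistics.fmean(rng.choices(output, k=n))
        for _ in range(n_resamples)
    )
    low = means[int((alpha / 2) * n_resamples)]
    high = means[int((1 - alpha / 2) * n_resamples) - 1]
    return low, high

# Hypothetical simulation output: 30 replicated customer waiting times.
rng = random.Random(0)
waits = [rng.expovariate(1 / 4.0) for _ in range(30)]
low, high = bootstrap_ci(waits)
print(f"95% bootstrap CI for mean wait: [{low:.2f}, {high:.2f}]")
```

The sorted list of resampled means is exactly the kind of object that lends itself to the visual presentation the talk advocates: plotted as a histogram, the interval endpoints become visible quantile cut-offs rather than the output of a formula.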
Professor Lars Mönch
University of Hagen, Germany
Reflections on Reference Modeling, Simulation Testbeds, and Reproducibility
This presentation will discuss requirements for reaching the long-standing goal of designing a reference model for planning and control functions in semiconductor supply chains. A recently proposed simulation testbed for semiconductor supply chains will be described as an intermediate step towards reaching this goal. Some applications of the testbed will be presented. The discussion of the testbed will be related to recent initiatives for replicated computational results and in more general terms to reproducibility of scientific results and open research efforts.
Dr. Reiner Huber
Emeritus Professor, University of the German Armed Forces, Munich, Germany
Military Modeling and Simulation—A Recollection and Perspective
The roots of today’s military modeling and simulation approaches date back to 1938, when operations research (OR) emerged after a disappointing exercise conducted by the Royal Air Force (RAF) to test the effectiveness of the newly developed radar. It is fair to say that OR support in WWII was decisive in winning the Battle of Britain in 1940 and the Battle of the Atlantic in 1943.
After WWII, when the Cold War began, NATO re-awakened military OR by facilitating the build-up of national military OR institutions to support defense planners and militaries in sustaining a NATO force structure capable of deterring Soviet aggression. During the decade of cooperation with Russian analysts after the end of the Cold War, we learned that Soviet leaders had concluded, based on the results of war games and battle simulations, that the risk of not meeting the operational objectives of a successful attack on NATO was too high. Given Putin’s revisionist policy, NATO’s problem today is how to re-establish deterrence in an ever more complex environment characterized by cyber threats and hybrid warfare. Hopefully, modeling and simulation will again help stabilize the situation.
PhD Colloquium Keynote
Dean and Professor of Management Science, School of Business and Economics, Loughborough University, United Kingdom
Are You Building and Using the Best Model?
Simulation models provide a powerful means for representing, understanding, and improving the real world. But how do you know that you are building and using the best model? As modelers and analysts, we tend to focus on model accuracy as a means for improving the chances that a model is valid. We assume that a more accurate model and set of results is better, and therefore more likely to be believed and acted upon. But is this assumption true? Does the credibility of the results depend upon the accuracy of the model? Can a wrong model still be useful? In this talk we shall explore the relation between accuracy, validity, credibility, and usefulness. If our ultimate aim is to provide analyses that are useful, this may challenge our assumptions about what is the best model.