TutORials

TutORials 2018 – Recent Advances in Optimization and Modeling of Contemporary Problems

2018 TutORials in Operations Research

All attendees receive free access to the INFORMS 2018 TutORials in Operations Research online content concurrently with the meeting. Registrants of the 2018 INFORMS Annual Meeting have online access to the 2018 chapters, written by select presenters, beginning on November 3, 2018. Access this content using the link provided to all attendees by email or, if you are a 2018 INFORMS member, simply log in to INFORMS PubsOnLine.

The TutORials in Operations Research series is published annually by INFORMS as an introduction to emerging and classical subfields of operations research and management science. These chapters are designed to be accessible for all constituents of the INFORMS community, including current students, practitioners, faculty, and researchers. The publication allows readers to keep pace with new developments in the field, and serves as augmenting material for a selection of the tutorial presentations offered at the INFORMS Annual Meetings.

Volume Co-Chairs

Esma Gel, Arizona State University

Lewis Ntaimo, Texas A&M University

Editor

Douglas Shier, Clemson University

Sunday, Nov. 4

8-9:30AM

Change Detection and Prognostics for Transient Real-World Processes Using Streaming Data

Ashif Sikandar Iquebal and Satish Bukkapatnam, Texas A&M University
  • Recent advances in sensor arrays and imaging systems have spurred interest in analyzing high-dimensional, streaming time series data from real-world complex systems. These time series data capture the dynamic behaviors and causalities of the underlying processes and provide a computationally efficient means to predict and monitor system state evolution. More pertinently, they can provide the ability to detect incipient and critical changes in a process, which is essential for real-time system integrity assurance. However, effective harnessing of information from these data sources is currently impeded by the mismatch between the key assumption of stationarity underlying most change detection methods and the nonlinear and nonstationary (transient) dynamics of most real-world processes. The current approaches are slow or simply unable to detect qualitative changes in the behaviors that lead to anomalies. For most real-world systems, the vector field of state dynamics is a nonlinear function of the state variables, i.e., the relationship connecting intrinsic state variables with their autoregressive terms and exogenous variables is nonlinear. Time series emerging from such complex systems exhibit aperiodic (chaotic) patterns even under steady state. Also, since real-world systems often evolve under transient conditions, the signals so obtained tend to exhibit myriad forms of nonstationarity. This tutorial presents a delineation of these diverse transient behaviors, and a review of advancements in change detection and prognostication methods for nonlinear and nonstationary time series. We also provide a comparison of their performances in certain real-world manufacturing and health informatics applications.
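    For readers who want a concrete baseline to contrast with the methods surveyed here, the minimal Python sketch below implements a classical one-sided CUSUM detector; it is not a method from the tutorial, and its decision logic presumes a stationary in-control mean, which is precisely the assumption that transient real-world signals violate.

        import numpy as np

        def cusum(stream, mu0, k=0.5, h=5.0):
            """One-sided CUSUM for an upward mean shift.
            mu0: assumed in-control (stationary) mean; k: allowance; h: decision threshold."""
            s = 0.0
            for t, x in enumerate(stream):
                s = max(0.0, s + (x - mu0) - k)
                if s > h:
                    return t          # index of the first alarm
            return None               # no alarm raised

        rng = np.random.default_rng(0)
        signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])
        print(cusum(signal, mu0=0.0))  # alarms shortly after the mean shift at t = 200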

11AM-12:30PM

Nature-Inspired Heuristics: Overview and Critique

Craig Tovey, Georgia Tech
  • Evolutionary Programming, Genetic Algorithms, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization are among the earliest optimization heuristics inspired by animate and inanimate phenomena in the natural world. Some of the more recently invented methods have exotic names such as Roach Infestation, Shuffled Frog-Leaping, Invasive Weed Optimization, and Cuckoo Search. This tutorial is a guide to the bewildering, burgeoning menagerie of such heuristics, which now comprises more than 100 algorithms, and whose accompanying publications number in the hundreds of thousands. Their underlying principles include populations, recombination, exploration, reinforcement, encoding, selection, randomness and perturbation. They have been successful in many implementations, ofttimes winning out against classical OR/CS methods for implementation or a user’s acceptance. They have been less successful, sometimes spectacularly so, in many computational test comparisons with classical methods. I speculate as to why these success levels differ so greatly. On the one hand, I critique the insularity and mathematical naiveté of this heuristic research, particularly its limited nature-versus-nature comparisons and self-contradictory stance on algorithm parameter tuning. On the other hand, I critique the OR community’s having implicitly ceded important optimization territory to others, and our failure to face what is now there. In an attempt to spur our community to thoroughly engage with nature-inspired heuristics, I conclude by observing a misalignment between individual and community incentives, identifying a few potentially powerful nature-inspired heuristic ideas for optimization, and proposing some specific research questions that may attract operations researchers.
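    For context, the short Python sketch below shows a bare-bones simulated annealing loop, one of the earliest heuristics named above; the toy objective, neighborhood, and cooling schedule are illustrative placeholders chosen only to expose the shared ingredients of perturbation, randomness, and probabilistic acceptance.

        import math
        import random

        def simulated_annealing(objective, x0, neighbor, T0=1.0, alpha=0.995, iters=10_000):
            """Generic minimization: perturb the incumbent, accept worse moves
            with a temperature-dependent probability (the 'annealing' analogy)."""
            x, fx = x0, objective(x0)
            best, fbest = x, fx
            T = T0
            for _ in range(iters):
                y = neighbor(x)
                fy = objective(y)
                if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                T *= alpha                      # geometric cooling schedule
            return best, fbest

        # toy usage: minimize a one-dimensional multimodal function
        obj = lambda x: x ** 2 + 10 * math.sin(x)
        nbr = lambda x: x + random.gauss(0, 0.5)
        print(simulated_annealing(obj, x0=5.0, neighbor=nbr))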

1:30-3PM

Behavioral Challenges in Policy Analysis with Conflicting Objectives

Sponsored by INFORMS Multi Criteria Decision Making Section

Gilberto Montibeller, Loughborough University
  • Public policy problems are rife with conflicting objectives: efficiency versus fairness, technical criteria versus political goals, costs versus multiple benefits. Multi-Criteria Decision Analysis provides robust methodologies to support policy makers in making tough choices and in designing better policy alternatives when considering these conflicting objectives. However, there are important behavioral challenges in developing these models.

    Policy analysis works with groups of policy makers, modeling their decisions, facilitating their discussions, and representing preferences and priorities. The overarching goal is to improve decision processes and provide support to evidence-based decision making, taking into account public priorities and the inherent uncertainties that long-term horizons and complex systems present.

    Key challenges in those interventions are the use of expert judgments whenever evidence is not available, the elicitation of preferences and priorities from policy makers and communities, and the effective management of group decision processes. Human behavior has a major influence on each of these challenges: experts might be biased in their estimates, individuals may be unable to express their preferences clearly, and groups may present dysfunctional dynamics. Extensive developments in behavioral decision research, social psychology, facilitated decision modeling, and incomplete preference models shed light on how decision analysts should address these issues to provide better decision support and develop high-quality decision models. This tutorial discusses the main findings of these extensive, but rather fragmented, literatures.

    These guidelines are illustrated using policy analysis interventions conducted over the last decade for several organizations, such as the evaluation of the capabilities of health systems against rabies for the World Health Organization (WHO), the prioritization of low-moisture foods for the Food and Agriculture Organization of the United Nations (FAO), the assessment of bio-security threats for the UK Department for Environment, Food and Rural Affairs (DEFRA), the evaluation of malaria treatment kits for the Malaria Consortium/USAID, and the prioritization of value-for-money studies for the UK National Audit Office.

4:30-6PM

Machine Learning and Data Mining with Combinatorial Optimization Algorithms

Dorit Hochbaum, University of California, Berkeley
  • The dominant algorithms for machine learning tasks fall most often in the realm of AI or continuous optimization of intractable problems. This tutorial presents combinatorial algorithms for machine learning, data mining, and image segmentation that, unlike the majority of existing machine learning methods, utilize pairwise similarities. These algorithms are efficient and reduce the classification problem to a network flow problem on a graph. One of these algorithms addresses the problem of finding a cluster that is as dissimilar as possible from the complement, while having as much similarity as possible within the cluster. These two objectives are combined either as a ratio or with linear weights. This problem is a variant of normalized cut, which is intractable. The problem and the polynomial-time algorithm solving it are called HNC.

    It is demonstrated here, via an extensive empirical study, that incorporating pairwise similarities improves the accuracy of classification and clustering. However, a drawback of using similarities is that their number grows quadratically with the size of the data. A methodology called “sparse computation” has been devised to address and eliminate this quadratic growth. It is demonstrated that the technique of “sparse computation” enables the scalability of similarity-based algorithms to very large-scale data sets while maintaining high levels of accuracy. We demonstrate several applications of variants of HNC for data mining, medical imaging, and image segmentation tasks, including a recent one in which HNC is among the top-performing methods in a benchmark for cell identification in calcium imaging movies for neuroscience brain research.
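    As a rough illustration (not the tutorial's own formulation) of how such a similarity-based problem can reduce to a network flow computation, the sketch below solves a simplified linear-weights variant by a single minimum s-t cut using networkx: find a set S that contains given seed nodes, excludes others, and minimizes the similarity cut to the complement minus lambda times the total node weight inside S. The node attribute names and the toy graph are assumptions for the example, and the ratio objective would additionally require a parametric sweep over lambda.

        import networkx as nx

        def hnc_like_cut(G, seeds_in, seeds_out, lam):
            """Sketch: find S containing seeds_in and excluding seeds_out that
            minimizes  cut(S, complement) - lam * (sum of node 'benefit' over S),
            via one min s-t cut."""
            D = nx.DiGraph()
            for i, j, data in G.edges(data=True):
                w = data.get("weight", 1.0)
                D.add_edge(i, j, capacity=w)   # cutting a similarity edge costs its weight
                D.add_edge(j, i, capacity=w)
            for i in G.nodes:
                # leaving node i out of S forfeits lam * benefit_i
                D.add_edge("s", i, capacity=lam * G.nodes[i].get("benefit", 1.0))
            big = sum(d["capacity"] for _, _, d in D.edges(data=True)) + 1.0
            for i in seeds_in:                 # forced into S
                D["s"][i]["capacity"] = big
            for i in seeds_out:                # forced out of S
                D.add_edge(i, "t", capacity=big)
            _, (source_side, _) = nx.minimum_cut(D, "s", "t")
            return source_side - {"s"}

        # toy usage on a small similarity graph
        G = nx.Graph()
        G.add_weighted_edges_from([(1, 2, 3.0), (2, 3, 0.2), (3, 4, 3.0)])
        print(hnc_like_cut(G, seeds_in=[1], seeds_out=[4], lam=0.1))  # {1, 2}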

Monday, Nov. 5

8-9:30AM

Optimization of Sequential Decision Making for Chronic Diseases: From Data to Decisions

Sponsored by INFORMS Health Applications Society

Brian Denton, University of Michigan
  • Rapid advances in medical interventions for chronic diseases such as cardiovascular disease, cancer, and diabetes have made it possible to detect diseases at early stages and tailor treatment pathways to individual patients based on their risk factors, including gender, race/ethnicity, and disease-specific factors. However, the large number of relevant risk factors to be considered, combined with uncertainty in future health outcomes and treatment side effects, makes optimizing these decisions challenging. Randomized trials are the gold standard for selecting treatment interventions, but the large number of possible decisions and their high cost make these trials infeasible. Data-driven operations research methods are showing great promise in helping patients and medical doctors improve decisions about health interventions. Observational data that is now routinely collected in many health systems is a valuable resource for fitting and validating stochastic models for chronic diseases. Moreover, optimization methods for sequential decision making, including Markov decision processes, partially observable Markov decision processes, and reinforcement learning methods, exploit these models to optimize treatment policies that can balance competing criteria such as the harms and benefits associated with treatment of chronic diseases. This tutorial provides an introduction to some of the most commonly used approaches for using raw data for the purpose of optimizing sequential treatment policies. Special attention is paid to the challenges associated with using observational data and the influence of model parameter uncertainty in this context.
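    As a purely illustrative taste of the sequential-decision machinery mentioned above, the snippet below runs finite-horizon backward induction (value iteration) on a hypothetical three-state disease-progression Markov decision process with a treat/wait choice; all states, transition probabilities, and rewards are invented for the sketch and do not come from the tutorial.

        import numpy as np

        # hypothetical states: 0 = healthy, 1 = sick, 2 = severe; actions: 0 = wait, 1 = treat
        P = np.array([  # P[a, s, s'] transition probabilities (illustrative numbers)
            [[0.90, 0.08, 0.02],   # wait
             [0.05, 0.75, 0.20],
             [0.00, 0.10, 0.90]],
            [[0.92, 0.07, 0.01],   # treat
             [0.20, 0.70, 0.10],
             [0.05, 0.25, 0.70]],
        ])
        r = np.array([              # r[a, s]: one-period benefit minus treatment burden
            [1.00, 0.60, 0.10],
            [0.95, 0.65, 0.20],
        ])

        T, gamma = 40, 0.97          # horizon (e.g., 40 annual decisions) and discount factor
        V = np.zeros(3)
        for _ in range(T):           # backward induction
            Q = r + gamma * P @ V    # Q[a, s]: value of action a in state s
            V = Q.max(axis=0)
        policy = Q.argmax(axis=0)
        print(V.round(2), policy)    # policy[s] = 1 means "treat" in state s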

11AM-12:30PM

Tabu and Scatter Search: Principles and Practice

Manuel Laguna, University of Colorado
  • This tutorial focuses on the metaheuristics known as tabu search and scatter search. Tabu search has dramatically changed our ability to solve a host of problems in applied science, business, and engineering. The adaptive memory designs of tabu search have provided useful alternatives and supplements to the types of memory embodied in other metaheuristic approaches. We also explore the evolutionary approach called scatter search, which originated from strategies for creating composite decision rules and surrogate constraints. Numerous studies have demonstrated the practical advantages of this approach for solving a diverse array of optimization problems. Scatter search contrasts with other evolutionary procedures, such as genetic algorithms, by providing unifying principles for joining solutions based on generalized path constructions and by utilizing strategic designs where other approaches resort to randomization. Additional advantages are provided by intensification and diversification mechanisms that exploit adaptive memory, drawing on foundations that link scatter search to tabu search. We show connections between tabu search and scatter search by demonstrating how they can be applied to many optimization problems found in practice.

    This tutorial also discusses the search strategy called path relinking, relevant to both tabu and scatter search. Features added to both tabu and scatter search by extension of their basic philosophy are captured in the path relinking framework. From a spatial orientation, the process of generating linear combinations of a set of reference solutions (as typically done in scatter search) may be characterized as generating paths between and beyond these solutions, where solutions on such paths also serve as sources for generating additional paths. This leads to a broader conception of the meaning of creating combinations of solutions. By natural extension, such combinations may be conceived to arise by generating paths between and beyond selected solutions in the neighborhood space. Finally, we highlight key ideas and research issues associated with tabu search, scatter search, and path relinking that offer promise of yielding future advances.
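    To make the adaptive-memory idea concrete, here is a bare-bones tabu search skeleton for a generic minimization problem, with a recency-based tabu list on move attributes and a simple aspiration criterion; the toy objective and neighborhood are placeholders, and the richer memory structures, intensification, and diversification strategies discussed above are omitted.

        from collections import deque

        def tabu_search(objective, x0, moves, apply_move, tenure=7, iters=500):
            """moves(x) yields candidate move attributes; apply_move(x, m) returns the
            neighbor. Recently used move attributes are tabu unless they improve the best."""
            x, fx = x0, objective(x0)
            best, fbest = x, fx
            tabu = deque(maxlen=tenure)                # short-term (recency) memory
            for _ in range(iters):
                candidates = []
                for m in moves(x):
                    y = apply_move(x, m)
                    fy = objective(y)
                    if m not in tabu or fy < fbest:    # aspiration criterion
                        candidates.append((fy, m, y))
                if not candidates:
                    break
                fy, m, y = min(candidates, key=lambda c: c[0])   # best admissible move
                x, fx = y, fy
                tabu.append(m)
                if fx < fbest:
                    best, fbest = x, fx
            return best, fbest

        # toy usage: minimize a quadratic over the integers with +/-1 moves
        obj = lambda x: (x - 13) ** 2
        print(tabu_search(obj, x0=0, moves=lambda x: [+1, -1],
                          apply_move=lambda x, m: x + m))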

1:30-3PM

A Guide to Optimization Based Multi-Period Planning

Linus Schrage, University of Chicago and LINDO Systems
  • Many organizations use multi-period planning models that involve optimization to decide, for example, the best production or investment levels for multiple periods into the future. There are a wide variety of features a user would like to have in such models. How those features are represented affects both the usefulness of the results and the solvability of these models as you add more periods, more products, or, in general, more detail to the model. This tutorial describes how to best represent some important features that are common to most long-range planning models.

    (a) Planning horizon length.
    (b) Ending conditions. The final period of the planning model frequently needs special treatment. In some situations you may be able to actually use an infinite horizon plan.
    (c) Period length.
    (d) Uncertainty. What is the best way of representing it? Variance, downside risk, Value-at-Risk, a utility function of some sort?
    (e) “Nervousness” and “sliding” scheduling. Most planning models are used in a “rolling” or “sliding” fashion, e.g., solve a 12-period model this month, implement the first period, and then next month slide things forward and repeat. When this is done, “nervousness” may be a problem, i.e., the plan made in February for what to do in June may differ substantially from what the plan published in January suggested for June. (A schematic of this rolling process is sketched after this list.)
    (f) Changeover, startup and shutdown costs.
    (g) Precedence constraints.
    (h) Scarce resource constraints.
    (i) Taxes. These can be important in some planning models. How these are properly calculated, or at least approximated, in an optimization model can be a challenge in the presence of features such as depreciation and choice of FIFO vs. LIFO inventory valuation.
    (j) Nonlinearities.
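    As a schematic of the rolling use described in item (e), the sketch below solves a T-period model each month, implements only the first period, and slides the horizon forward; the solve and advance functions are toy stand-ins for the real optimization model, included only to show where plan changes (nervousness) between successive solves can be observed.

        def rolling_horizon(solve, advance, state0, forecasts, T=12, months=6):
            """Each month: solve a T-period model, implement period 1, slide forward.
            solve(state, window) returns a length-T plan; advance applies the first decision."""
            state, implemented, published = state0, [], []
            for m in range(months):
                plan = solve(state, forecasts[m:m + T])   # data visible this month
                published.append(plan)
                state = advance(state, plan[0])           # implement the first period only
                implemented.append(plan[0])
            return implemented, published

        # toy stand-ins ("produce to forecast, net of inventory"), just to run the loop
        toy_solve = lambda inv, window: [max(d - inv, 0) if t == 0 else d
                                         for t, d in enumerate(window)]
        toy_advance = lambda inv, produced: 0             # assume demand consumes everything
        demand = [10, 12, 9, 14, 11, 10, 13, 12, 9, 15, 11, 10, 12, 14, 9, 11, 10]
        done, plans = rolling_horizon(toy_solve, toy_advance, 0, demand, T=6, months=4)
        # nervousness check: what successive monthly plans say about the same future month
        print([plans[m][3 - m] for m in range(4)])        # plans made in months 0..3 for month 3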

4:30-6PM

Coalescing Data and Decision Sciences for Analytics

Yunxiao Deng, Junyi Liu, and Suvrajeet Sen, University of Southern California
  • The dream of analytics is to work from common, clean, and consistent data sources in a manner such that all of its facets (descriptive, predictive, and prescriptive) are supported via a coherent vision of data and decision sciences. To the extent that data and decision sciences work within logically/mathematically consistent frameworks, and that these paradigms operate within computationally realistic structures, analytics will help organizations thrive. Done right, analytics promises to plant more OR/MS resources at C-suites where major corporate decisions are made. Done poorly, the OR/MS community will look back at these days as a lost opportunity to make a long-term difference to both the theory and practice of analytics. With this tutorial, we invite the OR/MS community to join an intellectually vibrant endeavor to integrate seemingly disparate pillars, namely predictive and prescriptive analytics. Just as importantly, we show how OR/MS strengths in modeling, algorithms, and applications can facilitate coalescing the data and decision sciences. We draw upon several examples that pair data science tools, such as filtering and regression, with decision science tools, such as Monte Carlo tree search and stochastic programming. The marriage of these two paradigms (data and decision sciences) is what we refer to as Learning Enabled Optimization (LEO). This tutorial covers fundamental concepts for the fusion of statistical and optimization modeling, statistically approximate optimality, sampling-based algorithms, and finally, model assessment and selection.
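    As one concrete, purely illustrative pairing in the spirit described above, the sketch below fits a regression to historical demand and then feeds the fitted model plus its residual scenarios into a sample-average newsvendor decision; the data, cost parameters, and names are assumptions for the example and are not the LEO framework itself.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        # data science step: learn demand as a function of a feature (e.g., price)
        x = rng.uniform(5, 15, size=200).reshape(-1, 1)
        demand = 100 - 4 * x.ravel() + rng.normal(0, 8, size=200)
        model = LinearRegression().fit(x, demand)
        residuals = demand - model.predict(x)            # empirical error scenarios

        # decision science step: newsvendor order quantity for tomorrow's feature value
        x_new, c, p = np.array([[9.0]]), 3.0, 7.0        # unit cost and selling price
        scenarios = model.predict(x_new)[0] + residuals  # predictive demand scenarios
        # sample-average solution: order the (p - c)/p quantile of scenario demand
        q_star = np.quantile(scenarios, (p - c) / p)
        print(round(q_star, 1))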

Tuesday, Nov. 6

7:30-9AM

Optimization Modeling and Techniques for Systemic Risk Assessment and Control in Financial Networks

John Birge, University of Chicago; Aein Khabazian, University of Houston; Jiming Peng, University of Houston
  • Since the crisis in 2007–2008, the assessment and control of systemic risk in financial networks has become one of the most important and active research areas in economics and finance. In this tutorial, we give a basic introduction to various optimization models, including linear optimization problems (LO) with uncertain data on both sides of the constraints, linear optimization with non-convex quadratic constraints (QCLO), mixed integer linear optimization (MILO), and stochastic optimization, that arise from the following three topics in the risk analysis of financial networks:

    • vulnerability analysis of a financial network when only limited information of the network is available and the assets are subject to market shocks;
    • identification of the least stable network structure for fixed assets and exploration of cascading failure in the identified least stable network;
    • optimization modeling and techniques for risk mitigation.

    Specifically, we demonstrate how classical LO theory can be used to assess contagion in the network when complete information on the network is available, and we introduce new optimization theory to deal with the scenario where only limited information on the financial network is exposed. We show how conventional optimization techniques, such as alternative convex search (ACS) and successive linear approximation (SLO), can help to identify the least stable network structure, and we discuss under what circumstances these conventional optimization techniques may fail and new optimization techniques need to be developed. We will also discuss how to combine advanced analysis in continuous optimization and graph modeling to identify the network structure. Open questions and challenges from both an optimization perspective and a financial perspective will be discussed as well.
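    For readers who want a concrete picture of contagion assessment under complete information, the sketch below iterates the classical Eisenberg-Noe clearing-payment map on a made-up three-bank network (the same fixed point can be obtained by linear programming); this is a standard textbook formulation chosen here only for illustration and is not necessarily the model used in the tutorial.

        import numpy as np

        # illustrative 3-bank network: L[i, j] = nominal liability of bank i to bank j
        L = np.array([[0.0, 4.0, 2.0],
                      [3.0, 0.0, 3.0],
                      [1.0, 1.0, 0.0]])
        e = np.array([0.5, 1.0, 4.0])          # outside assets after a market shock
        p_bar = L.sum(axis=1)                  # total obligations of each bank
        Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L), where=p_bar[:, None] > 0)

        p = p_bar.copy()                       # fictitious-default iteration to the clearing vector
        for _ in range(100):
            p_new = np.minimum(p_bar, e + Pi.T @ p)   # pay all you owe, or all you have
            if np.allclose(p_new, p):
                break
            p = p_new
        print(p.round(3), "defaults:", np.where(p < p_bar - 1e-9)[0])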

10:30AM-12PM

How to Influence and Improve Decisions Through Optimization Models

Jeffrey D. Camm, Wake Forest University
  • Industry’s recent increased focus on data-driven decision making and the use of analytics in all sectors, from sports to financial services to technology and healthcare, has led to a resurgence of interest in traditional operations research tools such as optimization, simulation, and decision analysis. As organizations mature analytically, it seems likely that we will see a further increase in interest in prescriptive analytics, including optimization modeling, which is the focus of this tutorial.

    With massive amounts of data being routinely collected in real time and an increased awareness on the part of management of the value of data, the availability of data is typically no longer the bottleneck in the optimization modeling process. Increased computing speed, improved algorithms, parallel processing, and cloud computing have increased the size of optimization problems that can be solved to optimality. Given better data availability and the dramatic increase in our ability to solve problems, what impediments keep us from having significant influence and impact on decision making? Going forward, it is possible that limitations in our ability to (1) structure a messy decision problem into a useful optimization framework, (2) properly use the model to deliver valuable insights for management, and (3) communicate to management the value proposition of our insights, will become the new reasons we fail to have the impact we know is possible.

    In this tutorial, we review types of optimization models and the art of modeling, that is, the process of going from mess to model. We discuss how to use an optimization model to provide not simply “the answer” but insights that will be useful to managers and influence their decision making. We discuss the importance of communication in exerting influence, and provide examples and best practices relevant for optimization. We conclude with thoughts on how optimization modeling is important to the bustling fields of data science and artificial intelligence.

12:05-1:35PM

Bayesian Optimization

Sponsored by INFORMS Applied Probability Society

Peter Frazier, Cornell University
  • Bayesian optimization is an approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. It is best suited for optimization over continuous domains of less than 20 dimensions, and tolerates stochastic noise in function evaluations. It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample. In this tutorial, we describe how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient. We also explain practical tricks that are important for making it work well in practice, and provide an overview of software packages. We then survey more advanced techniques, including expensive-to-evaluate constraints, running multiple function evaluations in parallel, the inclusion of derivative information, and high-dimensional domains.
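    A compact sketch of one such loop, using scikit-learn's Gaussian process regressor and the expected-improvement acquisition function on a one-dimensional toy objective, is shown below; the objective, kernel, and candidate grid are illustrative choices, and the dedicated software packages surveyed in the tutorial handle these details far more carefully.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def objective(x):                       # stand-in for an expensive black box
            return np.sin(3 * x) + 0.1 * x ** 2

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(4, 1))     # a few initial evaluations
        y = objective(X).ravel()
        grid = np.linspace(-3, 3, 500).reshape(-1, 1)

        for _ in range(15):                     # Bayesian optimization loop (minimization)
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
            mu, sigma = gp.predict(grid, return_std=True)
            best = y.min()
            z = (best - mu) / np.maximum(sigma, 1e-9)
            ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
            x_next = grid[np.argmax(ei)]        # sample where the acquisition is largest
            X = np.vstack([X, x_next])
            y = np.append(y, objective(x_next)[0])
        print(X[np.argmin(y)], y.min())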

2-3:30PM

Stochastic Gradient Descent: Recent Trends

Sponsored by INFORMS Simulation Society

David Newton, Purdue University; Farzad Yousefian, Oklahoma State University; and Raghu Pasupathy, Purdue University
  • Stochastic Gradient Descent (SGD), also known as stochastic approximation, refers to certain simple iterative structures used for solving stochastic optimization and root finding problems. The identifying feature of SGD is that, much like in gradient descent for deterministic optimization, each successive iterate in the recursion is determined by adding an appropriately scaled gradient estimate to the prior iterate. Owing to several factors, SGD has become the leading method to solve optimization problems arising within large-scale machine learning and “big data” contexts such as classification and regression. This tutorial will cover the basics of SGD with an emphasis on modern developments. The tutorial starts with stochastic optimization examples and problem variations where SGD is applicable, and then details important flavors of SGD that are currently in use. A highlight of the tutorial is a table that includes SGD variations, alongside their unique features and any reported (oracle/work) complexity. The presentation of this tutorial will include numerical examples.
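    The basic recursion is easy to state in code: the sketch below runs a plain mini-batch SGD loop for least-squares linear regression with a Robbins-Monro step size decaying as 1/k. This is only the vanilla flavor, written to make the update "new iterate = old iterate minus step size times gradient estimate" concrete; the variations tabulated in the tutorial modify the step size, batching, and averaging.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 10_000, 5
        A = rng.normal(size=(n, d))
        x_true = rng.normal(size=d)
        b = A @ x_true + 0.1 * rng.normal(size=n)

        x = np.zeros(d)
        batch = 32
        for k in range(1, 2001):
            idx = rng.integers(0, n, size=batch)               # sample a mini-batch
            grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch    # unbiased gradient estimate
            x -= (1.0 / k) * grad                              # step size a_k = 1/k
        print(np.linalg.norm(x - x_true))                      # distance to the true solution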

4:35-6:05PM

Risk-Averse Stochastic Modeling and Optimization

Nilay Noyan, Sabancı University
  • The ability to compare random outcomes based on the decision makers’ risk preferences is crucial to modeling decision-making problems under uncertainty. In this tutorial, the primary focus is on preference relations based on second-order stochastic dominance (SSD) and the widely applied risk measure conditional value-at-risk (CVaR). We present single- and two-stage stochastic optimization problems that feature univariate SSD and CVaR-based relations. Moreover, for many decision-making problems, it may also be essential to consider multiple and possibly conflicting stochastic performance measures. In contrast to univariate comparisons, such a multicriteria approach requires specifying preference relations among random vectors, where each dimension of a vector corresponds to a decision criterion. This is usually accomplished by extending the univariate stochastic preference relations to the multivariate case by utilizing scalarization functions. In this context, we present optimization models with multivariate stochastic benchmarking constraints based on CVaR and SSD. We discuss the main computational challenges arising in solving the problems of interest, and for finite probability spaces we describe effective solution methods.
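    As a quick numerical reference for the central risk measure, the sketch below evaluates CVaR at level alpha from loss samples via the Rockafellar-Uryasev representation, CVaR_alpha(X) = min over t of { t + E[(X - t)+] / (1 - alpha) }, using the sample quantile as the (approximate) minimizer; the scenario data are made up.

        import numpy as np

        def cvar(losses, alpha=0.95):
            """Sample CVaR: the average of the worst (1 - alpha) fraction of losses,
            computed as t + mean(max(losses - t, 0)) / (1 - alpha) with t = VaR_alpha."""
            losses = np.asarray(losses)
            var = np.quantile(losses, alpha)                   # Value-at-Risk
            return var + np.mean(np.maximum(losses - var, 0)) / (1 - alpha)

        rng = np.random.default_rng(0)
        loss_scenarios = rng.normal(0.0, 1.0, size=100_000)
        print(round(np.quantile(loss_scenarios, 0.95), 3),     # VaR_0.95  (about 1.645)
              round(cvar(loss_scenarios, 0.95), 3))            # CVaR_0.95 (about 2.063)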