The TutORials in Operations Research series is published annually by INFORMS as an introduction to emerging and classical subfields of operations research and management science. These chapters are designed to be accessible to all constituents of the INFORMS community, including current students, practitioners, faculty, and researchers. The publication allows readers to keep pace with new developments in the field and serves as augmenting material for a selection of the tutorial presentations offered at the INFORMS Annual Meeting.
All attendees receive free access to the INFORMS 2021 TutORials in Operations Research online content concurrently with the meeting. Registrants of the 2021 INFORMS Annual Meeting have online access to the 2021 chapters, written by select presenters, beginning on October 24, 2021.
SUNDAY 7:45-9:15am PDT – Virtual
Learning and Information in Stochastic Networks and Queues
We review the role of information and learning in the stability and optimization of queueing systems. In recent years, techniques from supervised learning, online learning and reinforcement learning have been applied to queueing systems supported by the increasing role of information in decision making. We present observations and new results that help rationalize the application of these areas to queueing systems. We prove that the MaxWeight policy is an application of Blackwell’s Approachability Theorem. This connects queueing theoretic results with adversarial learning. We then discuss the requirements of statistical learning for service parameter estimation. As an example, we show how queue size regret can be bounded when applying a perceptron algorithm to classify service. Next, we discuss the role of state information in improved decision making. Here we contrast the roles of epistemic information (information on uncertain parameters) and aleatoric information (information on an uncertain state). Finally, we review recent advances in the theory of reinforcement learning and queueing, as well as provide discussion of current research challenges.
Authors: Neil Walton, Kuang Xu
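As a minimal illustration of the MaxWeight policy discussed in this abstract, the sketch below simulates a single server attending several queues, always serving the queue with the largest product of queue length and service rate; the arrival and service rates are illustrative, not taken from the tutorial.

```python
import random

def maxweight_step(queues, service_rates, arrival_rates, rng):
    """One time slot under the MaxWeight policy: serve the queue
    maximizing queue_length * service_rate."""
    # Pick the queue with the largest weight q_i * mu_i.
    i = max(range(len(queues)), key=lambda j: queues[j] * service_rates[j])
    # The attempted service succeeds with probability mu_i.
    if queues[i] > 0 and rng.random() < service_rates[i]:
        queues[i] -= 1
    # Bernoulli arrivals to every queue.
    for j in range(len(queues)):
        if rng.random() < arrival_rates[j]:
            queues[j] += 1
    return queues

rng = random.Random(0)
q = [0, 0]
for _ in range(10_000):
    q = maxweight_step(q, service_rates=[0.6, 0.6], arrival_rates=[0.25, 0.25], rng=rng)
print(q)
```

Because the arrival-rate vector lies inside the capacity region, the queue lengths remain bounded over the run, which is the stability property MaxWeight is known to guarantee.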
SUNDAY 7:45-9:15am PDT – In-person
Response-guided Dosing in Cancer Radiotherapy
The goal in radiotherapy for cancer is to maximize tumor-kill while limiting toxic effects on nearby healthy anatomies. This is attempted via spatial localization of radiation dose, temporal dispersion of radiation dose, and radiation modality selection. The spatial component involves prescribing a high dose to the tumor and putting upper limits on the dose delivered to the healthy anatomies. The radiation intensity profile is then optimized to meet this treatment protocol as closely as possible. This is called fluence-map optimization. The temporal component of the problem involves breaking the total planned dose into several treatment sessions called fractions, which are administered over multiple weeks. This gives the healthy tissue some time to recover between sessions, as it possesses better damage-repair capabilities than the tumor. The key challenge on this temporal side is to choose an optimal number of fractions and the corresponding dosing schedule. This is called the optimal fractionation problem, and has been studied clinically for over a hundred years. Radiotherapy can be administered using different modalities such as photons, protons, and carbon ions. The choice of a modality depends on its physical characteristics and its radiobiological power to damage cells. This tutorial provides a detailed account of mathematical models that utilize the ubiquitous linear-quadratic (LQ) dose-response framework to guide decisions in the fractionation and modality selection problems. The tutorial emphasizes efficient exact solution methods developed in the last five years, and touches upon diverse methodological techniques from linear, nonlinear, convex, inverse, robust, and stochastic dynamic optimization. A brief overview of work that integrates the spatial and temporal components of the problem, and also of mathematical methodology designed to adapt doses to the tumor’s observed biological condition, is included. 
Potential directions for future research are outlined. Since treatment decisions in this tutorial are driven by a dose-response model, it fits within a paradigm called response-guided dosing, interpreted in a broad sense.
Author: Archis Ghate
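The linear-quadratic framework mentioned above is commonly summarized by the biologically effective dose, BED = n d (1 + d / (alpha/beta)), for n fractions of dose d. A small sketch, with textbook-style illustrative parameter values rather than clinical prescriptions, shows the fractionation trade-off: hypofractionation raises tumor BED but raises normal-tissue BED even more.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear-quadratic (LQ) model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Conventional 30 x 2 Gy vs. a hypofractionated 20 x 3 Gy schedule (same 60 Gy total).
for n, d in [(30, 2.0), (20, 3.0)]:
    tumor = bed(n, d, alpha_beta=10.0)   # tumors: typically a high alpha/beta ratio
    normal = bed(n, d, alpha_beta=3.0)   # late-responding healthy tissue: low alpha/beta
    print(f"{n} x {d} Gy: tumor BED = {tumor:.1f} Gy, normal-tissue BED = {normal:.1f} Gy")
```

For 30 x 2 Gy the tumor BED is 72 Gy against a normal-tissue BED of 100 Gy; for 20 x 3 Gy these rise to 78 Gy and 120 Gy, which is the tension the optimal fractionation problem formalizes.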
SUNDAY 2:45-4:15pm PDT – Virtual
Discrete Choice Models and Applications in Operations Management
Modeling decision behavior among multiple choices has been an active research area for several decades. In this tutorial, we review the classic discrete choice models that are widely used in studying purchase behavior for consumers faced with multiple substitutable products. In addition, many other choice models have also been proposed to capture new features that arise in the choice process, such as network effects, consideration sets, sequential choice and bounded rationality. We provide an overview of a variety of operations management problems under discrete choice models. Pricing is a widely used marketing strategy to attract consumers and compete in the market. In pricing problems, firms determine prices for all their products to maximize the aggregate expected revenue or profit. We characterize the structure of the optimal prices under various choice models. Assortment management is viewed as another effective retailing strategy. In assortment problems, sellers are not allowed to change retail prices; for example, some products must be sold at the manufacturer-suggested retail price. However, a seller can decide which products should be carried in its store or presented to the arriving consumers. We find the optimal solution to the assortment optimization problems under mild conditions for some discrete choice models, and present efficient approximation algorithms for other problems that are NP-hard. To implement the discrete choice models in practice, a critical step is to calibrate the models using real data. We provide the general estimation procedure for discrete choice models using sales data with different structures, and discuss how to develop algorithms that address issues in choice modeling and data availability.
Author: Ruxian Wang
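A small sketch of the assortment problem described above, under the multinomial logit (MNL) model with illustrative utilities and prices: brute-force enumeration recovers the classical structural result that an optimal assortment is revenue-ordered, i.e., consists of the k highest-price products for some k.

```python
import math
from itertools import combinations

def mnl_revenue(assortment, utilities, prices):
    """Expected revenue per arriving customer under the multinomial logit
    (MNL) model; the no-purchase option has utility 0."""
    weights = {i: math.exp(utilities[i]) for i in assortment}
    denom = 1.0 + sum(weights.values())
    return sum(prices[i] * weights[i] / denom for i in assortment)

utilities = [1.0, 0.8, 0.5, 0.2]     # illustrative mean utilities
prices = [10.0, 8.0, 6.0, 4.0]       # products indexed in decreasing price order
items = range(len(prices))

# Brute force over all nonempty assortments.
best = max((frozenset(s) for r in range(1, 5) for s in combinations(items, r)),
           key=lambda s: mnl_revenue(s, utilities, prices))

# Classical result: under MNL with fixed prices, a revenue-ordered assortment
# (the k highest-price products) is optimal.
revenue_ordered = max((frozenset(range(k + 1)) for k in items),
                      key=lambda s: mnl_revenue(s, utilities, prices))
assert mnl_revenue(best, utilities, prices) == mnl_revenue(revenue_ordered, utilities, prices)
print(sorted(best))
```

Here the optimum carries only the two highest-price products; adding cheaper products would divert demand from them and lower expected revenue.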
MONDAY 7:45-9:15am PDT – Virtual
Surrogate-based Simulation Optimization
Simulation models are widely used in practice to facilitate decision-making in a complex, dynamic and stochastic environment. But they are computationally expensive to execute and optimize, due to a lack of analytical tractability. Simulation optimization is concerned with developing efficient sampling schemes—subject to a computational budget—to solve such optimization problems. To mitigate the computational burden, surrogates are often constructed using simulation outputs to approximate the response surface of the simulation model. In this tutorial, we provide an up-to-date overview of surrogate-based methods for simulation optimization with continuous decision variables. Typical surrogates, including linear basis function models and Gaussian processes, are introduced. Surrogates can be used either as a local approximation or a global approximation. Depending on the choice, one may develop algorithms that converge to either a local optimum or a global optimum. Representative examples are presented for each category. Recent advances in large-scale computation for Gaussian processes are also discussed.
Authors: Xiaowei Zhang, L. Jeff Hong
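As a minimal sketch of the surrogate idea, the code below fits a linear basis function surrogate (a quadratic in one decision variable) to noisy simulation outputs by least squares, then minimizes the surrogate instead of the expensive simulator. The toy simulator and design points are assumptions for illustration only.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def quadratic_surrogate(xs, ys):
    """Least-squares fit of y ~ a*x^2 + b*x + c via the normal equations."""
    feats = [[x * x, x, 1.0] for x in xs]
    AtA = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
    Atb = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(3)]
    return solve(AtA, Atb)

# Toy stochastic simulator with true optimal decision x = 2.
rng = random.Random(1)
def simulate(x):
    return (x - 2.0) ** 2 + rng.gauss(0.0, 0.1)

xs = [i * 0.5 for i in range(9)]      # design points on [0, 4]
ys = [simulate(x) for x in xs]        # one expensive simulation run per point
a, b, c = quadratic_surrogate(xs, ys)
x_star = -b / (2 * a)                 # minimizer of the fitted surrogate
print(round(x_star, 2))
```

In practice a Gaussian process would replace the quadratic basis to give a global surrogate with uncertainty estimates; the budget-allocation question (where to place the next simulation run) is exactly what the sampling schemes in the tutorial address.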
MONDAY 11am-12:30pm PDT – In-person
Good and Bad Optimization Models: Insights from Rockafellians
A basic requirement for a mathematical model is often that its solution (output) shouldn’t change much if the model’s parameters (input) are perturbed. This is important because the exact values of parameters may not be known and one would like to avoid being misled by an output obtained using incorrect values. Thus, it is rarely enough to address an application by formulating a model, solving the resulting optimization problem and presenting the solution as the answer. One would need to confirm that the model is suitable, i.e., “good,” and this can, at least in part, be achieved by considering a family of optimization problems constructed by perturbing parameters as quantified by a Rockafellian function. The resulting sensitivity analysis uncovers troubling situations with unstable solutions, which we refer to as “bad” models, and indicates better model formulations. Embedding an actual problem of interest within a family of problems via Rockafellians is also a primary path to optimality conditions as well as computationally attractive, alternative problems, which under ideal circumstances, and when properly tuned, may even furnish the minimum value of the actual problem. The tuning of these alternative problems turns out to be intimately tied to finding multipliers in optimality conditions and thus emerges as a main component of several optimization algorithms. In fact, the tuning amounts to solving certain dual optimization problems. In this tutorial, we’ll discuss the opportunities and insights afforded by Rockafellians.
Author: Johannes Royset
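A toy numerical sketch of the "tuning" idea above, with an illustrative problem of my own choosing (not from the tutorial): embed the constrained problem min x^2 subject to x >= 1 into the family of penalized problems min x^2 + lambda * max(0, 1 - x). Under-tuned penalties undershoot the true minimum value of 1, while any lambda at or above the multiplier (here 2) recovers it exactly.

```python
def penalty_value(lam, grid):
    """Minimum of the penalized problem  min_x  x^2 + lam * max(0, 1 - x),
    a perturbation family for: minimize x^2 subject to x >= 1
    (evaluated by brute force over a fine grid)."""
    return min(x * x + lam * max(0.0, 1.0 - x) for x in grid)

grid = [i / 1000.0 for i in range(-2000, 3001)]  # x in [-2, 3]
for lam in [0.0, 1.0, 2.0, 4.0]:
    # lam = 0 ignores the constraint; lam = 2 is the multiplier value
    print(lam, round(penalty_value(lam, grid), 3))
```

The printed values climb from 0 (constraint ignored) through 0.75 (under-tuned) and then plateau at the true minimum 1 once lambda reaches 2, illustrating how tuning the alternative problem amounts to solving a dual problem for the multiplier.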
MONDAY 2:45-4:15pm PDT – In-person
Storytelling with Sports Analytics
As the use of analytics grows in the sports industry, debates about the usefulness of analytical models in sports have also grown. There is no doubt that analytics have impacted the sports industry in many positive ways, but it is an evolving story as analysts seek better models of player/team performance evaluation, forecasting, and decision-making. Communicating new results in these areas requires analysts to connect with organizations and fans by putting the results in context to tell a more complete story. In this work, we give examples from our own work and the work of others showing how to frame analytics within a story. At the same time, we give a brief history of the evolution in the descriptive, predictive, and prescriptive areas of sports analytics. While this work is not meant to be exhaustive, it highlights some of the major issues that analysts face in building useful models in these areas. The paper also represents a decade-long collaboration between academics and sports writers, and we highlight some of the lessons we’ve learned from that collaboration.
Authors: Elizabeth L. Bouzarth, Benjamin C. Grannan, John M. Harris, Kevin Hutson, Peter J. Keating
MONDAY 2:45-4:15pm PDT – Virtual
Game Theory and the COVID-19 Pandemic
The world is now faced with the COVID-19 pandemic, a healthcare disaster not limited to any single time or location. The COVID-19 pandemic has demonstrated the importance of operations research and related analytical tools, with the research and practitioner communities channeling and harnessing their expertise. It has inspired associated investigations and modeling and methodological advances in order to support deeper insights and enhanced decision-making as well as the provision of guidance to policymakers. In this tutorial, I provide an overview of some of the novel advances and applications, inspired by the COVID-19 pandemic, utilizing game theory. The focus of the tutorial is on supply chain networks, although the scope is broader. The tutorial first presents an overview of variational inequality theory, which is the methodology utilized for the formulation, qualitative analysis, and solution of the described models. The supply chain network models presented are recently introduced ones that capture, respectively: the inclusion of labor into supply chain networks, enabling the quantitative assessment of disruptions to labor; the fierce competition among entities for medical supplies in the pandemic, from PPE to, now, vaccines; and, finally, the calculation of the potential synergy associated with the teaming, that is, the cooperation, among organizations in the pandemic, under cost and demand uncertainty, to provide needed supplies. Suggestions for future research are provided.
Author: Anna Nagurney
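The variational inequality methodology mentioned above asks for x* in a feasible set K with F(x*)^T (y - x*) >= 0 for all y in K. A minimal sketch of the standard projection method, on a small strongly monotone map of my own construction (not one of the tutorial's supply chain network models):

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clipping)."""
    return [min(max(v, lo), hi) for v in x]

def solve_vi(F, x, step=0.1, iters=2000, lo=0.0, hi=1.0):
    """Projection method for the variational inequality VI(F, K):
    find x* in K with F(x*)^T (y - x*) >= 0 for all y in K,
    iterating x <- Proj_K(x - step * F(x))."""
    for _ in range(iters):
        x = project_box([xi - step * fi for xi, fi in zip(x, F(x))], lo, hi)
    return x

# An illustrative strongly monotone map on K = [0, 1]^2; its unconstrained
# zero is (1.5, 0.4), so the VI solution is that point projected onto K.
def F(x):
    return [2 * x[0] - 3.0, x[1] - 0.4]

x_star = solve_vi(F, [0.0, 0.0])
print([round(v, 3) for v in x_star])  # converges to [1.0, 0.4]
```

For strongly monotone, Lipschitz F a small enough fixed step guarantees convergence; the supply chain models in the tutorial use the same machinery with F encoding the competing firms' marginal costs and constraints.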
TUESDAY 7:45-9:15am PDT – Virtual
Machine Learning for Optimal Power Flows
Optimal power flow is a cornerstone of electrical power system operations: it is solved repeatedly every five minutes in the real-time market. It is also a critical building block for the market-clearing and reliability optimizations that decide lookahead, reliability, and day-ahead commitments. With increasing uncertainty coming from renewable generation, distributed energy resources, and extreme weather events, the optimal operating point of the electrical power system may rapidly change during real-time operations. Similarly, reliability and risk assessments are increasingly computationally demanding due to the same increase in uncertainty. This tutorial examines the role of machine learning in addressing these challenges. The availability of massive historical and synthesized data, and the repeated need to solve related problems, make machine learning a promising technology for approaching these challenges. However, power system operations also raise fundamental challenges for machine learning, which must capture the nonlinear, nonconvex physical and engineering constraints of this complex and large-scale infrastructure. The tutorial reviews recent progress in this direction and examines two types of approaches: end-to-end learning and learning to optimize. The goal of end-to-end learning is to provide optimization proxies that approximate, with high fidelity, the optimal solutions of optimal power flow problems. In contrast, the goal of the learning-to-optimize approach is to accelerate existing optimization algorithms for solving optimal power flow problems.
Author: Pascal Van Hentenryck
TUESDAY 11am-12:30pm PDT – In-person
Statistical Analysis of Wasserstein Distributionally Robust Estimators
We consider statistical methods which invoke a min-max distributionally robust formulation to extract good out-of-sample performance in data-driven optimization and learning problems. Acknowledging the distributional uncertainty in learning from limited samples, the min-max formulations introduce an adversarial inner player to explore unseen covariate data. The resulting Distributionally Robust Optimization (DRO) formulations, which include Wasserstein DRO formulations (our main focus), are specified using optimal transportation phenomena. Upon describing how these infinite-dimensional min-max problems can be approached via a finite-dimensional dual reformulation, the tutorial moves into its main component, namely, explaining a generic recipe for optimally selecting the size of the adversary’s budget. This is achieved by studying the limit behavior of an optimal transport projection formulation arising from an inquiry on the smallest confidence region that includes the unknown population risk minimizer. Incidentally, this systematic prescription coincides with those in specific examples in high-dimensional statistics and results in error bounds that are free from the curse of dimensionality. Equipped with this prescription, we present a central limit theorem for the DRO estimator and provide a recipe for constructing compatible confidence regions that are useful for uncertainty quantification. The rest of the tutorial is devoted to insights into the nature of the optimizers selected by the min-max formulations and additional applications of optimal transport projections.
Authors: Jose Blanchet, Karthyek Murthy, Viet Anh Nguyen
TUESDAY 2:45-4:15pm PDT – Virtual
Evolutionary Computation: An Emerging Framework for Practical Single and Multi-Criterion Optimization and Decision-Making
Many optimization problems from engineering, science and business involve complex objective and constraint functions and other practicalities which violate the assumptions typically required for provably convergent optimization algorithms. Differentiability, convexity and other regularity properties cannot be expected to be present in most practical problems. While classical gradient-based and convex programming methods are the best approaches when the problems satisfy the assumptions, there is a growing need for alternate methods which can be generically applied to any problem to achieve an optimal or a near-optimal solution. In this chapter, we introduce an emerging search and optimization method—evolutionary computation (EC)—which uses a population of solutions in every iteration and employs a series of operators that mimic natural evolutionary principles in arriving at better populations through generations. The population approach, the flexibility of their operators for customization to different problem classes, and their direct search approach make EC methods applicable to a wide variety of optimization problems. This chapter discusses their working principles, presents case studies involving single and multi-criterion optimization problems, and discusses a few current research directions in the context of multi-criterion optimization and decision-making.
Author: Kalyanmoy Deb
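A minimal sketch of the population-based loop described above: a (mu + lambda)-style evolutionary method with truncation selection and Gaussian mutation, applied to a smooth test objective. The operators and parameter values are illustrative simplifications; practical EC methods (and the multi-criterion algorithms in the chapter) use far richer recombination and selection schemes.

```python
import random

def evolve(fitness, dim, pop_size=30, generations=200, sigma=0.3, rng=None):
    """A minimal (mu + lambda) evolutionary loop: keep the best half of the
    population (selection), then refill it with Gaussian mutations of
    randomly chosen survivors (variation)."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # rank by objective value
        parents = pop[: pop_size // 2]             # truncation selection
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                   # elitist replacement
    return min(pop, key=fitness)

def sphere(x):
    """A smooth test objective; note EC needs no gradient information."""
    return sum(v * v for v in x)

best = evolve(sphere, dim=3)
print(round(sphere(best), 4))
```

Because the search only compares objective values, the same loop applies unchanged to nondifferentiable, noisy or black-box objectives, which is the source of the generality the chapter emphasizes.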
TUESDAY 2:45-4:15pm PDT – Virtual
Exactness in SDP Relaxations of QCQPs: Theory and Applications
Quadratically constrained quadratic programs (QCQPs) are a fundamental class of optimization problems. In a QCQP, we are asked to minimize a (possibly nonconvex) quadratic function subject to a number of (possibly nonconvex) quadratic constraints. Such problems arise naturally in many areas of operations research, computer science, and engineering. Although QCQPs are NP-hard to solve in general, they admit a natural convex relaxation via the standard (Shor) semidefinite program (SDP) relaxation. In this tutorial, we will study the SDP relaxation for general QCQPs, present various exactness concepts related to this relaxation and discuss conditions guaranteeing such SDP exactness. In particular, we will define and examine three notions of SDP exactness: (i) objective value exactness—the condition that the optimal value of the QCQP and the optimal value of its SDP relaxation coincide, (ii) convex hull exactness—the condition that the convex hull of the QCQP epigraph coincides with the (projected) SDP epigraph, and (iii) the rank-one generated (ROG) property—the condition that a particular conic subset of the positive semidefinite matrices related to a given QCQP is generated by its rank-one matrices. Our analysis for objective value exactness and convex hull exactness stems from a geometric treatment of the projected SDP relaxation and crucially considers how the objective function interacts with the constraints. The ROG property complements these results by offering a sufficient condition for both objective value exactness and convex hull exactness which is oblivious to the objective function. By analyzing the geometry of the associated sets, we will give a variety of sufficient conditions for these exactness conditions and discuss settings where these sufficient conditions are additionally necessary. Throughout, we will highlight implications of our results for a number of example applications.
Authors: Fatma Kılınç-Karzan, Alex L. Wang
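A small numerical illustration of objective value exactness, on a toy instance chosen for this sketch: for the single-constraint QCQP min x^T Q x subject to x^T x = 1, the Shor SDP relaxation is known to be exact, and both optimal values equal the smallest eigenvalue of Q (attained by a rank-one solution X = vv^T).

```python
import math

# QCQP: minimize x^T Q x subject to x^T x = 1 (a nonconvex quadratic constraint).
# For this single-constraint QCQP the Shor SDP relaxation is exact, and the
# optimal values of both problems equal the smallest eigenvalue of Q.
Q = [[2.0, 1.0], [1.0, 3.0]]

# Smallest eigenvalue of the symmetric 2x2 matrix (closed form via trace/determinant).
tr = Q[0][0] + Q[1][1]
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4 * det)) / 2

# Direct minimization of the QCQP over the unit circle x = (cos t, sin t).
def quad(t):
    x = [math.cos(t), math.sin(t)]
    return sum(Q[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

qcqp_val = min(quad(2 * math.pi * k / 100_000) for k in range(100_000))
print(round(lam_min, 6), round(qcqp_val, 6))
```

The two printed values agree, confirming objective value exactness on this instance; with two or more quadratic constraints such agreement can fail, and characterizing when it survives is precisely the subject of the tutorial.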