All attendees receive free access to the INFORMS 2016 TutORials in Operations Research online content concurrently with the meeting. Registrants of the 2016 INFORMS Annual Meeting have online access to the 2016 chapters, written by select presenters, beginning on November 12, 2016. Access this content using the link provided to all attendees by email or, if you are a 2016 INFORMS member, simply log in to INFORMS PubsOnLine.

The TutORials in Operations Research series is published annually by INFORMS as an introduction to emerging and classical subfields of operations research and management science. These chapters are designed to be accessible for all constituents of the INFORMS community, including current students, practitioners, faculty, and researchers. The publication allows readers to keep pace with new developments in the field, and serves as augmenting material for a selection of the tutorial presentations offered at the INFORMS Annual Meetings.

All TutORials are held in the Music City Center, Room 106C, Level 1.


Storage and Read-Optimized Data Placement Structures for High Performance Analysis
Edmon Begoli and Pragnesh Patel, University of Tennessee-Knoxville; and J. Blair Christian, PYA Analytics

  • We present state-of-the-art structures and methods for efficient data preparation and representation for analysis. Our intent is to introduce the data science and analytics communities to open source data placements, structures, and methods. These practices can make the foundational processes of data preparation and access dramatically more efficient than typical raw-file or database representations, while using storage more conservatively. To illustrate this, we introduce two highly efficient data placement structures. We then present a tutorial, supported by step-by-step examples, of how to create, use, and access data structured by Parquet or ORC using Apache Spark. Finally, we illustrate the benefits of using these structures with computational and storage-volume benchmarks.
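
    A minimal, self-contained sketch (ours, not the authors'; real Parquet and ORC add column encodings and compression on top of the columnar idea) illustrates why column-oriented placement reduces the data scanned by an analytical query:

```python
# Illustrative sketch: serialize a 3-column table row-wise and column-wise,
# then compare the bytes that must be scanned to answer "sum of one column".
import struct

rows = [(i, float(i) * 1.5, i % 10) for i in range(1000)]  # (id, value, bucket)

# Row-oriented layout: each record packed together, as in a CSV or row-store.
row_blob = b"".join(struct.pack("<idq", *r) for r in rows)

# Column-oriented layout: each column stored contiguously, as in Parquet/ORC.
col_value = b"".join(struct.pack("<d", r[1]) for r in rows)

# Summing `value` from the row layout forces a scan of every full record;
# from the column layout we read only the `value` column.
bytes_scanned_row = len(row_blob)   # 20 bytes/record x 1000
bytes_scanned_col = len(col_value)  # 8 bytes/record x 1000
print(bytes_scanned_row, bytes_scanned_col)
```

Here the columnar layout scans 2.5x fewer bytes; with the dictionary encoding and compression that Parquet and ORC apply per column, the gap in practice is typically far larger.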

Methods and Applications of Network Sampling
Mohammad Al Hasan, Indiana University and Purdue University

  • Network data appears in various domains, including the social, communication, and information sciences. Analysis of such data is crucial for making inferences and predictions about these networks and, moreover, for understanding the different processes that drive their evolution. However, a major bottleneck to performing such analysis is the massive size of real-life networks, which makes modeling and analyzing them in full simply infeasible. Further, many networks, specifically those in the social and communication domains, are not visible to the public due to privacy concerns, and other networks, such as the Web, are only accessible via crawling. Therefore, to overcome these challenges, researchers overwhelmingly rely on network sampling as a key statistical approach for selecting a sub-population of interest that can be studied thoroughly. In this tutorial, we aim to cover a diverse collection of methodologies and applications of network sampling. We will frame the discussion of network sampling in terms of the population of interest (vertices, edges, motifs) and the sampling methodologies (such as Metropolis-Hastings, random walk, and importance sampling). We will also present a number of applications of these methods.
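
    To give a flavor of the methodologies listed above, here is a minimal sketch (our illustration, not the tutorial's code) of a plain random walk and its Metropolis-Hastings correction on a toy graph:

```python
import random

def random_walk_sample(adj, start, n_samples, seed=0):
    """Plain random walk on an undirected graph. Its stationary distribution
    is proportional to vertex degree, so unbiased population estimates need
    a 1/degree reweighting (an importance-sampling correction)."""
    rng = random.Random(seed)
    v, sample = start, []
    for _ in range(n_samples):
        v = rng.choice(adj[v])
        sample.append(v)
    return sample

def mh_walk_sample(adj, start, n_samples, seed=0):
    """Metropolis-Hastings walk: accept a proposed neighbor u of v with
    probability min(1, deg(v)/deg(u)), which makes the stationary
    distribution uniform over vertices instead of degree-biased."""
    rng = random.Random(seed)
    v, sample = start, []
    for _ in range(n_samples):
        u = rng.choice(adj[v])
        if rng.random() < min(1.0, len(adj[v]) / len(adj[u])):
            v = u
        sample.append(v)
    return sample

# Toy undirected graph: hub vertex 0 plus a triangle 0-1-2.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
plain = random_walk_sample(adj, 0, 20000)
mh = mh_walk_sample(adj, 0, 20000)
# The plain walk over-samples the high-degree hub relative to the MH walk.
print(plain.count(0) / len(plain), mh.count(0) / len(mh))
```

The hub's visit frequency is near its degree share (4/10) under the plain walk but near the uniform share (1/5) under the Metropolis-Hastings walk, which is exactly the degree-bias issue that sampling-design choices must address.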

Novel Dimension Reduction Techniques for High Dimensional Data Using Information Complexity 
Hamparsum Bozdogan, The University of Tennessee-Knoxville and Esra Pamukcu, Firat University

  • This tutorial introduces and develops two computationally feasible, intelligent feature extraction techniques that address potentially daunting statistical and combinatorial problems. The first part of the tutorial employs a three-way hybrid between: Probabilistic Principal Component Analysis (PPCA), to reduce the dimensionality of the dependent variables; multivariate regression (MVR) models that account for misspecification of the distributional assumption, to determine a predictive operating model for glass composition for automobiles; and the genetic algorithm (GA) as the optimizer, with the misspecification-resistant form of Bozdogan’s ICOMP as the fitness function. The second part of the tutorial is devoted to dimension reduction via a novel Adaptive Elastic Net (AEN) regression model, used to reduce the dimension of a Japanese stock index, TOPIX, as the response and to build a best predictive model for the “large p, small n” problem. Our results show remarkable dimension reduction in both of these real-life examples of wide datasets, demonstrating the versatility and utility of the two proposed novel statistical data modeling techniques.
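
    For orientation, a standard form of the adaptive elastic net objective (stated here as background; the tutorial's AEN variant may differ in its choice of weights and tuning) is

```latex
\hat{\beta}_{\mathrm{AEN}}
  = \Bigl(1+\tfrac{\lambda_2}{n}\Bigr)\,
    \arg\min_{\beta}
    \Bigl\{ \lVert y - X\beta \rVert_2^2
          + \lambda_2 \lVert \beta \rVert_2^2
          + \lambda_1 \sum_{j=1}^{p} \hat{w}_j \lvert \beta_j \rvert \Bigr\},
\qquad
\hat{w}_j = \bigl(\lvert \hat{\beta}_j^{\,\mathrm{enet}} \rvert\bigr)^{-\gamma},
```

where the weights are built from an initial elastic net estimate with some γ > 0; the data-driven weights drive small coefficients to zero, which is the mechanism behind the dimension reduction described above.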

A Unified Framework for Optimization under Uncertainty
Warren B. Powell, Princeton University

  • Stochastic optimization, also known as optimization under uncertainty, is studied by over a dozen communities, often (but not always) with different notational systems and styles, typically motivated by different problem classes (or sometimes different research questions) that often lead to different algorithmic strategies. The resulting “jungle of stochastic optimization” has produced a highly fragmented set of research communities, which complicates the sharing of ideas. This tutorial unifies the modeling of a wide range of problems, from dynamic programming to stochastic programming to multiarmed bandit problems to optimal control, in a common mathematical framework centered on the search for policies. We then identify two fundamental strategies for finding effective policies, which lead to four fundamental classes of policies that span every field of research in stochastic optimization.
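
    In Powell's notation, the common framework centers on the canonical objective

```latex
\min_{\pi}\; \mathbb{E}\left\{ \sum_{t=0}^{T} C\bigl(S_t, X^{\pi}(S_t)\bigr) \right\},
\qquad
S_{t+1} = S^{M}\bigl(S_t, X^{\pi}(S_t), W_{t+1}\bigr),
```

where S_t is the state, X^π the policy, W_{t+1} the exogenous information, and S^M the transition function; the four classes of policies are, in essence, four different ways of constructing the decision function X^π.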


Healthcare Analytics: Big Data, Little Evidence
Joris van de Klundert, Erasmus University 

  • While the healthcare sector contributes more than ten percent of GDP in most developed countries, and is approaching twenty percent in the U.S., it remains a relatively modest area in the field of operations research, management science, and analytics. There is considerable room for a larger and more valuable contribution, especially in view of the important advancements in information technology taking place in healthcare across the globe, which are already contributing to reducing the global burden of disease. For analytics professionals and scientists to reach the full contribution potential of their discipline, it is beneficial to understand the dominant research paradigms and results of clinical and health sciences research. These sciences are rooted in empirical evidence and empirical data, and thus offer opportunities for connection. In this tutorial we review the current position of analytics as covered in the operations research and management science literature, and outline a path for the science of analytics to enlarge its contribution to the health of populations.

Research and Teaching Opportunities in Project Management
Nicholas G. Hall, The Ohio State University

  • One-fifth of the world’s economic activity, with an annual value of $12 trillion, is organized using the business process of project management. This process has exhibited dramatic growth in business interest in recent years, with a greater than 1000% increase in Project Management Institute membership since 1996. Contributing to this growth are many new applications of project management. These include IT implementations, research and development, software development, corporate change management, and new product and service development. However, the very different characteristics of these modern projects present new challenges. The partial resolution of these challenges within project management practice over the last 20 years defines numerous interesting opportunities for academic researchers. These research opportunities make use of a remarkably broad range of methodologies, including robust optimization, cooperative and noncooperative game theory, nonlinear optimization, predictive analytics, empirical studies, and behavioral modeling. Furthermore, the $4.5 trillion that is annually at risk from a shortage of skilled project managers, and the 15.7 million new jobs in project management expected by 2020, provide great opportunities for contributions to project management education. These educational opportunities include the integration of case studies, analytics challenges, online simulations, in-class games, self-assessment exercises, videos, and guest speaker presentations, which together form an appealing course for both business and engineering schools.

Systemic Risk, Policies, and Data Needs
Agostino Capponi, Columbia University

  • The study of financial system stability is of fundamental importance in modern economies. The failure or distress experienced by systemically important financial institutions can have contagious effects on the rest of the financial system. This may in turn result in deteriorating macroeconomic conditions and price instability, with negative consequences and spillover effects for other sectors of the real economy. This tutorial surveys the different approaches put forward in the academic and practitioner literature to systemic risk modeling and measurement. We analyze the relevant economic forces at play and the mechanisms leading to systemic instabilities, and discuss the methodologies used in the analysis. We discuss macroprudential, monetary, and resolution policies targeting financial stability. We highlight the supervisory authorities of the different financial institutions, as well as barriers to data sharing.

Recent Developments in Multistage Stochastic Programming
Alan J. King, IBM Thomas J. Watson Research Center

  • Multistage stochastic programming is a framework for applying large scale optimization technologies to multiperiod decision making under uncertainty. This talk will review the past decade and a half’s developments in multistage stochastic programming, including risk functionals, Stochastic Dual Dynamic Programming, time-consistent risk measures, and quantization of scenario trees.


Mathematical Finance, Models, Simulation and Today’s Pressing Problem
J. M. Pimbley, Maxwell Consulting, LLC

  • Financial markets are awash in information ranging in form from numerical data to unstructured news reports to nebulous narratives of executives and regulators. Investors, fiduciaries, intermediaries and other “market actors” apply an exceedingly broad spectrum of human skill and ingenuity to the interpretation of this streaming information. Mathematical techniques and analysis, in particular, are notable tools in which mathematical advances and discoveries may improve markets’ liquidity, efficiency and pace. This article outlines the origin and techniques of mathematical finance and associated models and simulations. We note strengths and shortcomings of these mathematical tools. The greatest challenge today is to learn and teach to the financial world the necessary judgment to avoid and rescind destructive deployment of financial models.

Robust Multiobjective Optimization for Decision Making Under Uncertainty and Conflict
Margaret M. Wiecek and Garrett M. Dranichak, Clemson University

  • Many real-life problems in engineering, business, and management are characterized by multiple, conflicting objectives, as well as the presence of uncertainty. The conflicting criteria originate from various ways to assess system performance and the multiplicity of decision makers, while uncertainty results from inaccurate or unknown data due to imperfect models and measurements, lack of knowledge, and volatility of the global environment.

    In this tutorial, the deterministic approaches to uncertainty that are integrated with multiobjective optimization to address decision making under uncertainty and conflict are discussed. The approaches are based on robust optimization and parametric optimization, both developed for single-objective settings. Six sources of uncertainty are presented, and each type of uncertainty is placed in the multiobjective optimization problem (MOP), yielding several types of uncertain MOPs (UMOPs). Some of the sources are adopted from earlier studies in (single-objective) engineering optimization, while the others result from the multiobjective optimization modus operandi. The UMOP models are classified first according to the location of the uncertainty in their formulation, second with respect to the undertaken optimization approach, and third on the basis of the proposed definition of robust efficient solutions. The models are presented along with the accompanying results on solution concepts, properties, methods, and applications that are specific to each case.

    It is expected that the topics selected in this tutorial and their organization may help beginners to become familiar with the area of robust multiobjective optimization while serving as a reference to researchers.
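
    As one concrete instance of these robustness concepts, a frequently used point-based notion (stated here for orientation; the tutorial surveys several alternative definitions) is minmax robust efficiency: for an uncertain MOP with objectives f_1, …, f_k and uncertainty set 𝒰, a feasible x* is minmax robust efficient if there is no feasible x with

```latex
\sup_{\xi \in \mathcal{U}} f_i(x, \xi) \;\le\; \sup_{\xi \in \mathcal{U}} f_i(x^{*}, \xi)
\quad \text{for all } i = 1, \dots, k,
```

with strict inequality for at least one i; that is, x* is efficient with respect to the worst-case values of the objectives.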

Multiagent Systems Modeling
Sanmay Das, Washington University in St. Louis

  • A multiagent system is one in which multiple autonomous agents with potentially different goals interact. Viewing agents through the computational lens provides a powerful yet principled method for understanding the behaviors of complex systems, including economic and financial markets, online social networks, etc. In this tutorial, I discuss general principles for such modeling and best practices for handling the simplicity/complexity tradeoff, and present examples of predictive and useful models.

Optimality Conditions for Inventory Control
Eugene A. Feinberg, Stony Brook University

  • This tutorial describes recently developed general optimality conditions for Markov Decision Processes that have significant applications to inventory control. In particular, these conditions imply the validity of optimality equations and inequalities. They also imply the convergence of value iteration algorithms. For total discounted-cost problems, only two mild conditions on the continuity of transition probabilities and lower semi-continuity of one-step costs are needed. For average-cost problems, a single additional assumption on the finiteness of relative values is required. The general results are applied to periodic-review inventory control problems with discounted- and average-cost criteria, without any assumptions on demand distributions. The case of partially observable states is also discussed.
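
    To make the discounted-cost setting concrete, here is a minimal value-iteration sketch for a toy periodic-review, lost-sales inventory model (the capacity, costs, and demand distribution are our illustrative assumptions, not an example from the tutorial):

```python
# Toy periodic-review inventory model: state x = on-hand stock, action
# q = order quantity (delivered immediately), unmet demand is lost.
CAP = 5                               # storage capacity
GAMMA = 0.95                          # discount factor
ORDER, HOLD, PENALTY = 2.0, 1.0, 4.0  # per-unit costs
DEMAND = {0: 0.3, 1: 0.4, 2: 0.3}     # demand distribution

def step_cost(x, q):
    """Expected one-period cost of ordering q units with x on hand."""
    c = ORDER * q
    for d, p in DEMAND.items():
        c += p * (HOLD * max(x + q - d, 0) + PENALTY * max(d - x - q, 0))
    return c

def value_iteration(tol=1e-8):
    """Iterate the Bellman operator until the sup-norm change is below tol;
    with GAMMA < 1 and bounded costs this is a contraction, so it converges."""
    V = [0.0] * (CAP + 1)
    while True:
        V_new = [
            min(
                step_cost(x, q)
                + GAMMA * sum(p * V[max(x + q - d, 0)] for d, p in DEMAND.items())
                for q in range(CAP + 1 - x)   # orders respecting capacity
            )
            for x in range(CAP + 1)
        ]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = value_iteration()  # V[x] = optimal expected discounted cost from stock x
```

The general conditions in the tutorial guarantee exactly this kind of convergence, for much broader state, action, and demand models than this discrete toy.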


Mining Qualitative Attributes to Assess Corporate Performance
Ananda Swarup Das, Gagandeep Singh, and L. Venkata Subramaniam; IBM India Research Labs and Aparna Gupta, Rensselaer Polytechnic Institute

  • We present an overview of systems and methods to track ongoing events, from sources such as corporate filings, financial articles, expert or analyst reports, press releases, customer feedback, and news articles, that have an effect on corporate performance. In this paper we discuss text analytics and sentiment mining approaches to determine quantitative attributes that can serve as indicators of corporate performance. For example, strengths, weaknesses, opportunities, and threats (SWOT) analysis is a well-known structured planning method widely applied to identify the factors determining the success or failure of an enterprise. This analysis can be strongly indicative of the business or financial health of the enterprise. It can also provide broader indicators of the firm’s business environment, in terms of the ease of doing business in the country and government policies helping (or hurting) that environment.
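
    A toy lexicon-based scorer (our illustration of the general approach, with a made-up word list; not the authors' system) shows how such text becomes a quantitative signal:

```python
# Tiny hand-built sentiment lexicon; real systems use far richer lexicons,
# negation handling, and learned models over the sources listed above.
POSITIVE = {"growth", "profit", "strength", "opportunity", "record"}
NEGATIVE = {"loss", "lawsuit", "weakness", "threat", "decline"}

def sentiment_score(text):
    """Score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

print(sentiment_score("Record profit growth despite one lawsuit."))  # 0.5
```

Aggregating such scores over filings or news streams for a firm yields the kind of quantitative performance indicator discussed above.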

Assets and Structured Hedges in Energy Markets – Severe Incompleteness and Methods for Dealing with It
Glen Swindle, Scoville Risk Partners

  • Risks in energy markets are inherently high dimensional due to the large number of delivery locations and physical attributes, stochastic demand, and seasonality. In contrast, the number of instruments with sufficient liquidity to support hedging activities is relatively small and has never been able to span the set of risks sustained by market participants. This mismatch has spawned an interesting and arguably unique set of challenges related to the valuation and hedging of energy portfolios. Here we survey examples of these challenges, including variable-quantity swaps and generation and structured-asset hedges.

Understanding the U.S. Index Futures Stock Market Using Research
William T. Ziemba, University of British Columbia, Vancouver and London School of Economics

  • I begin with five views, or camps of beliefs, concerning the U.S. stock market: efficient markets, where prices are correct except for transaction costs; risk premium, where excess returns can be made only by bearing additional risk; “efficient markets is hogwash”; great investors exist, but you cannot tell who they are in advance; and the study of anomalies and other research. Edges arise from cash flows, institutional practices, and behavioral biases. These include the turn-of-the-year effect, the turn-of-the-month effect, presidential election effects, and mispriced options. I describe these effects and explain why they exist, and then discuss their use in trading, considering operational risks, the effect of volatility, prediction of stock market crashes, slippage, risk management, and optimal bet sizing. I won the 2015 futures trading contest of the Battle of the Quants in New York and have been able to obtain very good risk-adjusted returns from July 2013 to May 2016 in the Alpha Z Futures Fund.