Tracks

  • The Analytics Leadership Track was formed to bring together leaders of analytics efforts and analytics groups to discuss: i) how to build a world-class analytics team for your company; ii) how to define, solve, deliver, and communicate an analytics solution; and iii) what an analytics leader must do to succeed within the organization and help the organization/company succeed.


    Setting Up the Analytics Leader for Success

    Noha Tohamy
    Successful analytics adoption hinges upon strong leadership. In this session, Gartner will share a proven framework that analytics leaders can use to guide their organizations to analytics success. To demonstrate the framework in action, Gartner will discuss its stages, best practices and lessons learned from a supply chain analytics leader’s perspective.


    Data: The Fuel of Tomorrow

    Aziz Safa
    Data is the new fuel that powers the experiences of the future and gives businesses critical insights we once thought were impossible. Data is disrupting and transforming all industries; it truly is the fuel of tomorrow. Control and security will be key for businesses that need to know what’s being collected, how it’s being used, and how personal data can be deleted as required. End-to-end computing, powered by technology like 5G, sets the foundation for delivering critical insights and the incredible experiences of tomorrow. Discover how to drive business transformation with a smart data strategy that is optimized for artificial intelligence (AI) to deliver faster time-to-insight. Learn how to unlock the power trapped in immense data volumes, how quality data enables better application of data science for effective decisions, and about the next-generation, future-ready infrastructure platforms needed to make the impossible possible.

  • Operations research and analytics are driving advancements in government that touch nearly every part of our lives. From improving disaster relief efforts following a storm, to enhancing access to healthcare, to reforming criminal justice and immigration, to ensuring our national security, analytics is saving lives, reducing costs, and improving productivity across the private and public sectors. Just as business leaders have used O.R. and analytics to make smart business decisions, policymakers in government have increasingly turned to these modern tools to analyze important policy questions. Come see how the latest applications of analytics are solving public policy problems.


    Process Mining: The Capability Every Organization Needs

    John Bicknell
    Process Mining is an emerging AI/ML technique that may be thought of as an x-ray capability for your organization’s processes. It allows you to see where process challenges reside, simulate change assumptions, make corrections with confidence, and quickly re-measure the upgraded ecosystem — capturing return on investment every step of the way. Your organization has next-level strategic advantage hidden within your IT systems. All systems create “data exhaust” that is rich with process activity trails documenting the actions of users or machines performing business activities. When process ecosystems are not optimized toward meaningful goals, your organization hemorrhages costs unnecessarily. Failing to adopt cutting-edge artificial intelligence to optimize your processes places you at a competitive disadvantage. In this session, you will learn process mining fundamentals, hear impactful cross-industry use cases, and understand why it is the capability your organization needs to compete and transform continually.
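
    The core mechanic can be sketched in a few lines. Below is a minimal illustration, with a hypothetical event log, of how the activity trails in that “data exhaust” become a directly-follows process map, the starting point of most process-mining analyses.

```python
from collections import Counter

import pandas as pd

# Hypothetical event log: one row per activity execution, in the shape
# process mining tools typically ingest (case id, activity, timestamp).
log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 2],
    "activity": ["receive", "approve", "pay",
                 "receive", "approve", "rework", "pay"],
    "timestamp": pd.to_datetime([
        "2019-04-01 09:00", "2019-04-01 10:30", "2019-04-02 08:00",
        "2019-04-01 11:00", "2019-04-01 15:00", "2019-04-02 09:00",
        "2019-04-03 10:00"]),
})

# Count directly-follows pairs within each case: these are the edges of
# the process map that expose loops, rework, and bottleneck hand-offs.
edges = Counter()
for _, case in log.sort_values("timestamp").groupby("case_id"):
    acts = case["activity"].tolist()
    edges.update(zip(acts, acts[1:]))

for (a, b), n in edges.most_common():
    print(f"{a} -> {b}: {n}")
```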

  • The Decision & Risk Analysis track describes effective ways to aid those who must make complex decisions. In particular, the talks reference systematic, quantitative, and interactive approaches to address choices, considering the likelihood and impact of unexpected, often adverse consequences.


    Analyzing Social Media Data To Identify Cybersecurity Threats: Decision Making With Real-time Data

    Theodore “Ted” Allen
    In 2018, 27.9% of businesses experienced a cybersecurity breach, losing over 10,000 documents and $3M according to the Ponemon Institute. Of breaches known to Ponemon, 77% involve the exploitation of existing bugs or vulnerabilities. In our work, we found that incidents occur in narrow time windows around when vulnerabilities are publicized. Can you optimally adjust your cybersecurity policies and decisions to address emerging threats? Analyzing social media will help you preemptively identify medium-severity vulnerabilities, which managers often ignore but which contribute to a large fraction of incidents and warnings. Success requires transforming textual information into numbers, and I present a method, called K-means latent Dirichlet allocation, that identified the Heartbleed vulnerability. I will describe a Bayesian approach as well, and with both methods, you can adjust your cybersecurity as social media identifies new hazards. Related opportunities for closed-loop control using Fast Bayesian Reinforcement Learning are also briefly described. These methods also lend themselves to experimentation, which enables improved maintenance options.
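
    As a flavor of the text-to-numbers step, here is a minimal sketch built from off-the-shelf scikit-learn pieces (a topic model followed by k-means on the topic weights); the posts are invented, and Allen’s actual K-means latent Dirichlet allocation method is more integrated than this two-stage stand-in.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented security-related posts standing in for a social media feed.
posts = [
    "openssl heartbeat bug leaks server memory",
    "patch your tls library now, heartbeat exploit in the wild",
    "new phishing campaign targets payroll logins",
    "credential phishing emails spoof the helpdesk",
]

# Turn text into token counts, then into low-dimensional topic weights.
counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)  # one topic-mixture row per post

# Cluster the topic weights; a spike in a cluster's volume over time can
# flag an emerging vulnerability worth a policy adjustment.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(weights)
print(labels)
```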


    Can We Do Better than Garbage-In – Garbage-Out?

    Dennis Buede
    Can an analytics approach that receives poor-quality (aggregated) data produce useful outputs, outputs that have low mean squared error and are calibrated? A team from IDI confronted this question as part of a research project with the Intelligence Advanced Research Projects Activity (IARPA) to mitigate insider threats. Here, insider threats are people driven by rage, national loyalty, or profit to steal, destroy, or sabotage an organization’s data. This talk describes the motives and behaviors of insider threats and the details of our multi-modeling solution, which includes data elicitation activities to address missing data (e.g., correlations). The modeling techniques used range from discrete event simulation to copulas to stochastic optimization for simulating populations, and from random forests to support vector machines to naïve Bayesian networks to neural networks for down-selecting the potential threats.

  • The purpose of the Franz Edelman competition is to bring forward, recognize, and reward outstanding examples of operations research, management science, and advanced analytics in practice in the world. Finalists will compete for the top prize in this “Super Bowl” of O.R., showcasing analytics projects that had major impacts on their client organizations.
  • The profession of operations research and advanced analytics is constantly developing, growing, and expanding. This track is intended to bring together practitioners and researchers who are working at the edges of the profession to share new areas, explain open problems, formalize new problem areas that are just coming to the fore, and define challenging questions for further development.


    Interpretable AI

    Dimitris Bertsimas
    We introduce a new generation of machine learning methods that provide state-of-the-art performance and are very interpretable. We introduce optimal classification trees (OCT) and optimal regression trees (ORT) for prediction and prescription, with and without hyperplanes. We show that (a) the trees are very interpretable, (b) they can be computed at large scale in practical times, and (c) on a large collection of real-world data sets they give comparable or better performance than random forests or boosted trees. Their prescriptive counterparts have a significant edge on interpretability and comparable or better performance than causal forests. Finally, we show that optimal trees with hyperplanes have at least as much modeling power as (feedforward, convolutional, and recurrent) neural networks and comparable performance on a variety of real-world data sets. These results suggest that optimal trees are interpretable, practical to compute at large scale, and provide state-of-the-art performance compared to black-box methods. We apply these methods to a large collection of examples in personalized medicine, financial services, and organ transplantation, among others.
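
    To see what tree interpretability means in practice, here is a minimal sketch on a public data set. It uses scikit-learn’s greedy CART as a stand-in, since the optimal trees described in the talk are fit by mixed-integer optimization in dedicated software; only the interpretability point carries over: the fitted model is simply a readable set of splits.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Greedy CART stands in for optimal trees (OCT/ORT), which are instead
# fit globally by mixed-integer optimization; the shared point is that
# the model can be printed and audited as a handful of rules.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.3f}")
print(export_text(tree, feature_names=list(X.columns)))  # readable splits
```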


    A Tutorial on Robust Optimization

    Dick den Hertog
    In this presentation we explain the core ideas in robust optimization and show how to successfully apply them in practice.

    Real-life optimization problems often contain parameters that are uncertain, due to, e.g., estimation or implementation errors. The idea of robust optimization is to find a solution that is immune to these uncertainties. Over the last two decades, efficient methods have been developed to find such robust solutions. The underlying idea is to formulate an uncertainty region for the uncertain parameters against which one would like to safeguard the solution. In the robust paradigm it is then required that the constraints hold for all parameter values in this uncertainty region. It can be shown that, e.g., for linear programming, for the most important choices of the uncertainty region, the final problem can be reformulated as a linear or conic quadratic optimization problem, for which very efficient solvers are available nowadays. Robust optimization is valuable in practice, since it can solve large-scale uncertain problems and only requires crude information on the uncertain parameters. Some state-of-the-art modeling packages have already incorporated robust optimization technology.

    In this tutorial we restrict ourselves to linear optimization. We will treat the basics of robust linear optimization, and also show the huge value of robust optimization in (dynamic) multistage problems. Robust optimization has already shown its high practical value in many fields: logistics, engineering, finance, medicine, etc. In this tutorial we will discuss some of these applications. We will also highlight some of the most important (recent) papers on Robust Optimization.
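
    For a concrete feel of the reformulation the tutorial describes, here is a minimal sketch (with made-up data) of a single linear constraint protected against an ellipsoidal uncertainty set, written directly as its conic-quadratic robust counterpart in the cvxpy modeling package.

```python
import cvxpy as cp
import numpy as np

# Nominal data for: max c^T x  s.t.  a^T x <= b, x >= 0 (invented numbers).
c = np.array([3.0, 2.0])
a_bar = np.array([1.0, 1.0])   # nominal constraint coefficients
b, rho = 10.0, 0.5             # budget and uncertainty-ball radius

x = cp.Variable(2, nonneg=True)

# Robust counterpart for the ellipsoid {a : ||a - a_bar||_2 <= rho}:
# requiring a^T x <= b for ALL such a is exactly the conic-quadratic
# constraint a_bar^T x + rho * ||x||_2 <= b, which standard solvers handle.
prob = cp.Problem(cp.Maximize(c @ x),
                  [a_bar @ x + rho * cp.norm(x, 2) <= b])
prob.solve()
print(x.value, prob.value)
```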


    Behavioral Influences in Procurement Auctions and Pricing Decisions

    Wedad Elmaghraby
    This tutorial will cover research in market design for procurement and competitive bidding, with an emphasis on designing procurement auctions and understanding how human behavior affects their performance. We will then explore how online marketplaces have enlarged the set of market design challenges, and discuss recent research in online Business-to-Business and Business-to-Consumer electronics markets, and key insights that are relevant for other market sectors.


    Quantum Computing – Why, What, When, How

    Yianni Gamvros
    Why does Quantum Computing have the potential to significantly disrupt Business Analytics? When will real-world Business Analytics tasks move from classical computers to quantum computers? What are the use cases that can be addressed by Quantum Computing in the short and medium term? What are current industry and thought leaders working on today? How do Quantum Computers solve optimization problems?


    Intuition is Unreliable, Analytics is Incomplete

    Karl Kempf
    In today’s environment, there are many cases where the difference between a good decision and a poor one can be hundreds of millions if not billions of dollars. Decision makers strive to apply their intuition, but intuition is unreliable. Sometimes it is useful, other times misleading. Analytics practitioners want to apply their computational tools, but given the complexity of these cases their models are inescapably incomplete.

    Nobel Laureates have explored one side of this situation from the perspective of human psychology. Simon pointed out the bounded rationality available to decision makers, while Kahneman described the plethora of biases afflicting that same population. But they failed to supply answers to two important questions crucial to analytics professionals. 1) How bad are the decision makers left to rely only on their intuition? Stated a different way, when we start to apply analytics, how much benefit can we reliably expect? 2) Can we benefit from utilizing the intuition? Can analytics inform intuition AND intuition inform analytics to supply a solution superior to either technique applied alone?

    We briefly supply an answer to the first question based on projects at Intel Corporation over the past 30+ years. Our answer to the second question occupies the bulk of our presentation. This includes evaluation of ideas from the literature, including “pre-mortems” and “nudges”, but will focus on two related approaches we have found to be especially powerful. At one extreme, we will describe support systems for decision makers in operations, with examples drawn from manufacturing and supply chain. At the other extreme, we address systems that support senior management in deciding product development funding to maximize profits.


    Explainable AI

    Jari Koister
    Financial services firms are increasingly deploying AI models and services for a wide range of applications. These applications span the credit life cycle, including credit onboarding, transaction fraud, and identity fraud. In order to confidently deploy such models, these organizations require models to be interpretable and explainable. They also need to be resilient to adversarial attacks. In some situations, regulatory requirements apply and prohibit the application of black-box machine learning models.

    This talk describes tools and infrastructure that FICO has developed as part of its platform to support these needs. The support is uniquely forward-looking, and the platform is one of the first to support these aspects of applying AI and ML for any customer.

    What we will cover: (1) examples of financial services applications of AI/ML; (2) specific needs for explainability and resiliency; (3) approaches for solving explainability and resiliency; (4) regulatory requirements, and how to meet them; (5) a platform that provides support for xAI and mission-critical AI; (6) further research and product development directions.


    Robotic Process Automation

    Russell Malz
    The rate of change of new disruptive technologies continues to accelerate, and the impact of AI and ML will be bigger and much faster than the impact the Internet had on business. While startups can be built from the ground up based on new AI and ML capabilities, established companies face unique challenges when they look to leverage AI and ML offerings to innovate. Nowhere is this more true than in the analytics domain, where both skills and task automation are critical for success. In this talk, learn how innovative companies have successfully used Robotic Process Automation (RPA) to improve efficiency, and how the most forward-looking companies are accelerating innovation by establishing the right organizational mindset and leveraging enabling technologies to gain competitive advantage. Working with big data analytics and AI strategies at Acxiom, Ayasdi, and Blue Prism, Russ Malz has helped dozens of F500 clients establish and accelerate cognitive strategies.


    Analyzing Everyday Language to Understand People

    James Pennebaker
    The words people use in everyday language reveal parts of their social and psychological thoughts, feelings, and behaviors. An increasing number of studies demonstrate that the analysis of the most common and forgettable words in English — such as pronouns (I, she, he), articles (a, an, the), and prepositions (to, of, for) — can signal honesty and deception, engagement, threat, status, intelligence, and other aspects of personality and social behaviors. The social psychology of language goes beyond machine learning and, instead, identifies the underlying links between word use and thinking styles. Implications for using text analysis to understand and connect with customers, employees, managers, friends, and even yourself will be discussed.
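
    The measurement behind this line of research is disarmingly simple: per-document rates of function-word categories. Here is a minimal, LIWC-style sketch; the word lists are truncated illustrations, not the validated dictionaries the research actually uses.

```python
import re
from collections import Counter

# Tiny, illustrative category lists; real dictionaries are far larger.
PRONOUNS = {"i", "she", "he", "we", "you", "they", "me", "her", "him"}
ARTICLES = {"a", "an", "the"}
PREPOSITIONS = {"to", "of", "for", "in", "on", "with", "at", "by"}

def function_word_rates(text: str) -> dict:
    """Percent of words falling in each function-word category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    rate = lambda cat: 100 * sum(counts[w] for w in cat) / total
    return {"pronouns": rate(PRONOUNS),
            "articles": rate(ARTICLES),
            "prepositions": rate(PREPOSITIONS)}

print(function_word_rates("I told her the plan; she agreed to it for my sake."))
```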


    Detecting Tax Evasion: A Co-evolutionary Approach

    Sanith Wijesinghe
    We present an algorithm that can anticipate tax evasion by modeling the co-evolution of tax schemes with auditing policies. Malicious tax non-compliance, or evasion, accounts for billions of dollars of lost revenue each year. Unfortunately, when tax administrators change the tax laws or auditing procedures to eliminate known fraudulent schemes, another potentially more profitable scheme takes its place. Modeling both the tax schemes and auditing policies within a single framework can therefore provide major advantages. In particular, we can explore the likely forms of tax schemes in response to changes in audit policies. This can serve as an early warning system to help focus enforcement efforts. In addition, the audit policies can be fine-tuned to help improve tax scheme detection. We demonstrate our approach using the iBOB tax scheme and show it can capture the co-evolution between tax evasion and audit policy. Our experiments show the expected oscillatory behavior of a biological co-evolving system.
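
    The co-evolutionary loop is easy to caricature. The sketch below is a deliberately stylized toy, with one-dimensional schemes and audit policies and invented parameters, unrelated to the actual iBOB models; what it does share with them is that each population’s fitness depends on the other, which is what produces the oscillatory arms-race behavior.

```python
import random

random.seed(0)

# Stylized toy: a tax scheme and an audit policy are each a point in
# [0, 1]; an audit "catches" any scheme within its detection radius.
RADIUS = 0.15

def evolve(population, fitness, sigma=0.1):
    """Keep the fitter half, refill with mutated copies of the survivors."""
    survivors = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
    children = [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in survivors]
    return survivors + children

schemes = [random.random() for _ in range(20)]
audits = [random.random() for _ in range(20)]

for gen in range(31):
    # Each side's fitness depends on the other's current population:
    # schemes score by evading audits, audits score by catching schemes.
    scheme_fit = lambda s: sum(abs(s - a) > RADIUS for a in audits)
    audit_fit = lambda a: sum(abs(s - a) <= RADIUS for s in schemes)
    schemes = evolve(schemes, scheme_fit)
    audits = evolve(audits, audit_fit)
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean scheme {sum(schemes) / 20:.2f}, "
              f"mean audit {sum(audits) / 20:.2f}")
```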

  • Too many analytics projects never get implemented or used. In most cases the analytics and recommendations were based on great work. But impact comes from effective implementation. This track’s speakers will share their experiences in implementing analytic solutions in their organizations and tips for success.


    Analytics Impact: Like A Marathon, The Last Mile Is The Hardest

    Jeffrey Camm
    Data availability, data storage, processing speed, and algorithms are not what is keeping analytics from reaching its full potential. It is the less technical side of analytics that hinders impact and adoption. In this session, we discuss success factors and impediments to analytics impact. We review the operations research/management science literature on this topic, discuss what is different in the age of analytics, and also draw on our own research and experience with analytics impact.

  • INFORMS grants several prestigious institute-wide prizes and awards for meritorious achievement each year. This track will feature presentations on the Wagner Prize, INFORMS Prize, and the UPS George D. Smith Prize winner. Innovative Applications in Analytics Award (IAAA) and Hackathon finalists will also present. Special sessions will include presentations about INFORMS Education & Industry Outreach and the CAP program.


    2019 Innovative Applications in Analytics Award Finalist

    Transparent Machine Learning Models for Predicting Seizures in ICU Patients from cEEG Signals

    Duke University
    Continuous electroencephalography (cEEG) technology was developed in the 1990s and 2000s to provide real-time monitoring of brain function in hospitalized patients, such as critically ill patients suffering from traumatic brain injury or sepsis. cEEG technology has permitted physicians to characterize electrical patterns that are abnormal but are not seizures. As it turns out, these subtle signals recorded by cEEG monitoring are indicative of damage to the brain and worse outcomes in the future, and in particular, true seizures. If we can detect in advance that a patient is likely to have seizures, preemptive treatment is likely to prevent additional brain injury and improve the patient’s overall condition. However, predicting whether a patient is likely to have a seizure (and trusting a predictive model well enough to act on that recommendation) is a challenge for analytics, and in particular, for machine learning. This project is a collaboration of computer scientists from Duke and Harvard with expertise in transparent machine learning, and neurologists from the University of Wisconsin School of Medicine and Public Health and the Massachusetts General Hospital. The predictive model developed from this collaboration for predicting seizures in ICU patients is currently in use, and it stands to have a substantial impact in practice. Our work is the first serious effort to develop predictive models for seizures in ICU patients.


    2019 Innovative Applications in Analytics Award Finalist

    Machine Learning: Multi-site Evidence-based Best Practice Discovery

    Georgia Institute of Technology and the Care Coordination Institute
    This study establishes interoperability among electronic medical records from 737 healthcare sites and performs machine learning for best practice discovery. A novel mapping algorithm is designed to disambiguate free-text entries and provide a unique and unified way to link content to structured medical concepts, despite the extreme variations that can occur during clinical diagnosis documentation. Redundancy is reduced through concept mapping. A SNOMED-CT graph database is created to allow for rapid data access and queries. These integrated data can be accessed through a secured web-based portal. A classification machine learning model (DAMIP) is then designed to uncover discriminatory characteristics that can predict the quality of treatment outcome. We demonstrate system usability by analyzing the Type II diabetic patients among the 2.7 million patients in the integrated data. DAMIP establishes a classification rule on a training set that results in greater than 80% blind predictive accuracy on an independent set of patients. By including features obtained from structured concept mapping, the predictive accuracy is improved to over 88%. The results facilitate evidence-based treatment and optimization of site performance through best practice dissemination and knowledge transfer.


    2019 Innovative Applications in Analytics Award Finalist

    A Machine Learning Approach to Shipping Box Design

    jet.com/Walmart Labs
    Having the right assortment of shipping boxes in the fulfillment warehouse to pack and ship customers’ online orders is an indispensable and integral part of today’s eCommerce business, as it not only helps maintain a profitable business but also creates great experiences for customers. However, it is an extremely challenging operations task to strategically select the best combination of tens of box sizes, from thousands of feasible ones, to serve hundreds of thousands of orders placed daily on millions of inventory products. We present a machine learning approach that formulates the box design problem prescriptively as a generalized version of the weighted k-medoids clustering problem, where the parameters are estimated through a variety of descriptive analytics. The resulting assortment of box sizes is also thoroughly tested on both real and simulated customer orders before deployment into production. Our machine learning approach to designing shipping box sizes has been adopted quickly and widely across the Walmart eCommerce family. Within a year, the methodology was applied to jet.com, walmart.com, and samsclub.com. The new box assortments have achieved a 1%-2% reduction in the number of boxes, a 5%-8% increase in overall utilization rate, a 7%-12% reduction in order split rate, and 3%-5% savings in transportation cost.
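
    For rough intuition about the formulation, here is a minimal weighted k-medoids sketch on invented order dimensions. It is only the clustering skeleton; the generalized version described above would, among other refinements, use an asymmetric cost so that a chosen box must actually fit every order assigned to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented order "sizes" (sorted L/W/H, say in inches) with demand weights.
orders = rng.uniform(4, 30, size=(500, 3))
demand = rng.integers(1, 100, size=500).astype(float)
k = 5  # number of box sizes to carry

def weighted_kmedoids(X, w, k, iters=20):
    """Plain demand-weighted k-medoids; medoids play the role of box sizes."""
    medoids = rng.choice(len(X), size=k, replace=False)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = np.where(assign == j)[0]
            if len(members) == 0:
                continue
            # New medoid: the member minimizing demand-weighted distance
            # to all other members of its cluster.
            dj = np.linalg.norm(X[members][:, None] - X[members][None, :], axis=2)
            medoids[j] = members[(dj * w[members][None, :]).sum(axis=1).argmin()]
    return medoids, assign

box_ids, assign = weighted_kmedoids(orders, demand, k)
print("candidate box sizes (cluster medoids):")
print(np.round(orders[box_ids], 1))
```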


    2019 Innovative Applications in Analytics Award Finalist

    InnoGPS: Innovation Global Positioning System

    Singapore University of Technology and Design
    Traditionally, the ideation and exploration of innovation opportunities and directions rely on human expertise or intuition and face high uncertainty. Many historically successful firms (e.g., Kodak, Motorola) lost direction for innovation and declined. To de-risk innovation ideation, we have developed a cloud-based, data-driven, computer-aided ideation system, InnoGPS, at the Data-Driven Innovation Lab at the SUTD-MIT International Design Centre. InnoGPS integrates an empirical network map of all technology domains in the patent database with map-based functions to position innovators, explore neighbourhoods, and find directions to far fields in the technology space. Our inspiration comes, by analogy, from Google Maps for positioning, nearby search, and direction finding in the physical space. The descriptive, predictive, and prescriptive analytics in InnoGPS fuse innovation, information, and network sciences with interactive visualization. InnoGPS is the first of its kind and may disrupt the intuitive tradition of innovators (e.g., individuals, companies) in innovation ideation by providing rapid, data-driven, scientifically grounded, and visually engaging computer aids.
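
    The Google Maps analogy translates naturally into graph operations. Below is a minimal sketch on an invented six-domain map (the real InnoGPS map is derived empirically from the patent database): positioning is a node, nearby search is a neighborhood query, and direction finding is a weighted shortest path.

```python
import networkx as nx

# Stylized "technology space": nodes are technology domains and edge
# weights are distances (invented values, not the patent-derived map).
G = nx.Graph()
G.add_weighted_edges_from([
    ("optics", "image sensors", 1.0),
    ("image sensors", "machine vision", 1.5),
    ("machine vision", "robotics", 2.0),
    ("optics", "lasers", 1.2),
    ("lasers", "additive manufacturing", 2.5),
])

position = "optics"  # where a firm's patent portfolio sits today

# "Nearby search": the immediate neighborhood of the current position.
print(sorted(G.neighbors(position)))

# "Direction finding": a weighted shortest path toward a far-afield
# domain, read as a stepwise diversification route.
print(nx.shortest_path(G, position, "robotics", weight="weight"))
```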


    2019 Innovative Applications in Analytics Award Finalist

    Taking Assortment Optimization from Theory to Practice: Evidence from Large Field Experiments on Alibaba

    Washington University in St. Louis
    We compare the performance of two approaches for finding the optimal set of products to display to customers landing on Alibaba’s two online marketplaces, Tmall and Taobao. Both approaches were placed online simultaneously and tested on real customers for one week. The first approach we test is Alibaba’s current practice. This procedure embeds hundreds of product and customer features within a sophisticated machine learning algorithm that is used to estimate the purchase probabilities of each product for the customer at hand. The products with the largest expected revenue (revenue * predicted purchase probability) are then made available for purchase. The downside of this approach is that it does not incorporate customer substitution patterns; the estimates of the purchase probabilities are independent of the set of products that eventually are displayed. Our second approach uses a featurized multinomial logit (MNL) model to predict purchase probabilities for each arriving customer. In this way we use less sophisticated machinery to estimate purchase probabilities, but we employ a model that was built to capture customer purchasing behavior and, more specifically, substitution patterns. We use historical sales data to fit the MNL model and then, for each arriving customer, we solve the cardinality-constrained assortment optimization problem under the MNL model online to find the optimal set of products to display. Our experiments show that despite the lower prediction power of our MNL-based approach, it generates 28% higher revenue per visit compared to the current machine learning algorithm with the same set of features. We also conduct various heterogeneous-treatment-effect analyses to demonstrate that the MNL-based approach performs best for sellers whose customers generally make only a single purchase. In addition to developing the first full-scale, choice-model-based product recommendation system, we also shed light on new directions for improving such systems for future use.
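
    To make the substitution point concrete, here is a toy version of the cardinality-constrained problem with invented utilities and revenues: under MNL, each product’s purchase probability depends on everything else displayed, so the assortment must be chosen by searching over sets rather than by scoring products one at a time.

```python
from itertools import combinations
from math import exp

# Toy MNL assortment optimization (all numbers invented). With utilities
# u_i, the purchase probability of product i in assortment S is
#   exp(u_i) / (1 + sum_{j in S} exp(u_j)),
# where the "1" is the no-purchase option; adding a product to S lowers
# every other product's probability, which is the substitution effect.
revenue = {"A": 50.0, "B": 40.0, "C": 30.0, "D": 20.0}
utility = {"A": 0.2, "B": 0.8, "C": 1.0, "D": 1.4}

def expected_revenue(S):
    denom = 1.0 + sum(exp(utility[i]) for i in S)
    return sum(revenue[i] * exp(utility[i]) / denom for i in S)

# Cardinality-constrained problem: best assortment of at most 2 products,
# found here by brute force since the toy instance is tiny.
best = max((S for r in (1, 2) for S in combinations(revenue, r)),
           key=expected_revenue)
print(best, round(expected_revenue(best), 2))
```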


    2019 Innovative Applications in Analytics Award Finalist

    Using Advanced Analytics to Rationalize Tail Spend Suppliers at Verizon

    Verizon
    The Verizon Global Supply Chain organization currently governs thousands of active supplier contracts. These contracts account for several billions of dollars of annualized Verizon spend. Managing thousands of suppliers, controlling spend, and achieving the best price per unit (PPU) through negotiations are costly and labor-intensive tasks within Verizon strategic sourcing teams. Large organizations often engage a plethora of suppliers for many reasons – best price, diversity, short-term requirements, etc. While managing a few larger-spend suppliers can be done manually by dedicated sourcing managers, managing thousands of smaller suppliers in the tail spend is challenging, can often introduce risk, and can be expensive. At Verizon, we leveraged a unique blend of descriptive, predictive, and prescriptive analytics, as well as Verizon-specific sourcing acumen, to tackle this problem and rationalize tail spend suppliers. Through the creative application of Operations Research, Machine Learning, Text Mining, Natural Language Processing, and Artificial Intelligence, Verizon eliminated multiple millions of dollars of spend and achieved the lowest price per unit (PPU) for the sourced products and services. Other benefits realized are centralized and transparent contract and supplier relationship management, overhead cost reduction, decreased contract execution lead time, and service quality improvement for Verizon’s strategic sourcing teams.


    2018 Wagner Prize Winner Reprise

    Analytics and Bikes: Cornell Rides Tandem with Motivate to Improve Mobility

    Bike-sharing systems are now ubiquitous across the United States. We have worked with Motivate, the operator of the systems in, for example, New York, Chicago, and San Francisco, to innovate a data-driven approach both to manage their day-to-day operations and to provide insight into several central issues in the design of their systems. This work required the development of a number of new optimization models, characterizing their mathematical structure, and using this insight in designing algorithms to solve them. Here, we focus on two particularly high impact projects, an initiative to improve the allocation of docks to stations, and the creation of an incentive scheme to crowdsource rebalancing. Both of these projects have been fully implemented to improve the performance of Motivate’s systems across the country; for example, the Bike Angels program in New York City yields a system-wide improvement comparable to that obtained through Motivate’s traditional rebalancing efforts, at far less financial and environmental cost.

  • This track features leaders from companies and academia who share the application of analytics in marketing functions such as promotions, pricing, advertising, market forecasting, and best practices of analytics in overall marketing, in addition to addressing emerging technologies. The track provides an open forum for participants to connect with their peers and the invited speakers. Come learn from industry and academic experts how to use advanced analytics and operations research, share and network, and take away valuable techniques to grow your marketing analytics capability.


    Avnet

    Nishant Nishant
    The ‘Ask Avnet’ intelligent agent was conceived to solve two complex business problems. Firstly, how do you connect an ecosystem comprising different websites without destroying value; and secondly, how do you leverage analytics to provide a better customer experience with finite resources? Join Nishant to learn how what began as an idea on a post-it note has morphed into a successful customer service channel powered by continuous analysis of customer interaction data.


    Managing All Pricing Levers

    Maarten Oosten
    One of the challenges of price optimization is that the price that the manufacturer lists for a product is not the same as the price the buyer pays (sales price) or the price that the seller receives (net price). Besides various types of costs related to delivery and sales, there can be many price levers in play. Examples are end-customer rebates, distributor discount programs, distributor charge-backs, royalties, channel rebates, and sales commissions. These price levers are also controlled by the sellers, but at different levels, not necessarily the transaction level. For example, a distributor rebate applies to a subset of products for a specific period. When optimizing prices, all these levers should be taken into account.

    After a brief discussion of the general concepts, we will illustrate them by means of an example: trade promotion optimization. Trade promotion optimization is similar to promotion optimization, except that it approaches the problem from the perspective of the manufacturer. The manufacturer negotiates the promotions with the various retailers. Therefore, the manufacturer should model the behavior of the end users as well as that of the retailer. After all, if the promotion is not attractive for the retailer, there won’t be a promotion. In this paper we discuss the challenges this poses in both the estimation of the promotion effects and the optimization of the promotion schedule, and propose models that address these challenges.
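
    A toy pocket-price waterfall (all lever values invented) shows how far the net price can drift from the list price once every lever is applied, which is why the optimization has to see all levers jointly:

```python
# Toy pocket-price waterfall: list price minus each lever gives the
# price the seller actually nets. All amounts are invented examples.
list_price = 100.00
levers = {
    "distributor discount": 12.00,
    "end-customer rebate": 5.00,
    "channel rebate": 3.00,
    "sales commission": 4.00,
    "freight allowance": 2.50,
}

net = list_price - sum(levers.values())
print(f"list price:         {list_price:7.2f}")
for name, amount in levers.items():
    print(f"  - {name}: {amount:.2f}")
print(f"net (pocket) price: {net:7.2f}  ({100 * net / list_price:.0f}% of list)")
```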


    Pricing and Revenue Management: Different Flavors for Different Industries

    Daniel Reaume
    Pricing and revenue management analytics have driven billions of dollars of increased profits across every business sector. But business and technical challenges differ greatly between industries and companies and success depends on tailoring solutions appropriately. This talk presents an overview of such solutions for six verticals – service retail, B2B, automotive (and other OEMs), media, hotels, and cruises. For each vertical, it will address some of the key challenges and present examples of analytics used to address them. Moreover, it will highlight how further company-specific tailoring is often critical to maximizing value.

  • This track features analytics leaders from top companies in traditional and nontraditional industries such as travel, hospitality, transportation, entertainment, high tech, media, and retail. The track promotes and disseminates the latest developments in pricing, revenue management, and distribution by providing an open forum that allows participants to connect with their peers and the invited speakers. Come learn from industry experts how to use advanced analytics and operations research to better understand and target your customers, improve your pricing practices and demand forecasts, and drive revenue and market share growth.


    Dynamic Pricing of Omni-Channel Inventories

    Pavithra Harsha
    Omnichannel retail refers to a seamless integration of an e-commerce channel and a network of brick-and-mortar stores. An example is cross-channel fulfillment, which allows a store to fulfill online orders in any location. Another is price transparency, which allows customers to compare the online price with store prices. This paper studies a new and widespread problem resulting from omnichannel retail: price optimization in the presence of cross-channel interactions in demand and supply, where cross-channel fulfillment is exogenous. We propose two pricing policies that are based on the idea of “partitions” of the store inventory that approximate how this shared resource will be utilized. These policies are practical because they rely on solving computationally tractable mixed-integer programs that can accept various business and pricing rules. In extensive simulation experiments, they achieve a small optimality gap relative to theoretical upper bounds on the optimal expected profit. The good observed performance of our pricing policies results from managing substitutive channel demands in accordance with partitions that rebalance inventory in the network. A proprietary implementation of the analytics that also includes demand estimation is commercially available as part of the IBM Commerce markdown price solution. The system resulted in an estimated 13.7% increase in clearance-period revenue, based on causal model analysis of the data from a pilot implementation for clearance pricing at a large U.S. retailer.


    Fandango 360: Data Driven Movie Marketing and Content Recommendations Platform

    Reeto Mookherjee
    Fandango has built a targeting, attribution and personalization platform, Fandango360. In this talk, we outline the building blocks of this platform. These blocks fall into three groups:

    1. Stitching together cross-device cross platform digital and offline interactions of users and creation of a probabilistic user graph
    2. Surfacing predictive behavioral and affinity segments with millions of micro-segments at scale, and
    3. Generation of propensity scores for known and unknown (cookie) users to future movie slates.

    We will conclude the talk with some of the performance marketing results, insights, and learnings stemming from studios using this platform over the past 12 months or so, across 70+ movie marketing campaigns.


    Intelligent Retailing Decision Support – A Bold New Vision for the Industry

    Hunkar Toyoglu
    Customers today are empowered by the internet more than ever. They have new expectations and hence are forcing pricing and revenue management business strategies to change. As customers become more demanding, intelligent retailing and decision support are increasingly becoming essential to optimizing revenue in an end-to-end personalized retailing environment. Innovation is required to optimize beyond only the room/seat to include total revenue optimization. Applying Artificial Intelligence and Machine Learning models to enable smarter retailing decision support could unlock a novel set of insights on the market that companies can leverage to grow revenue and share. Learn how to leverage data inside and outside your organization to augment traditional retailing wisdom with Machine Learning capabilities. Specifically, we will talk about designing recommender systems to find the best items to display from a large list of candidate retail products and to determine their display order based on the customer segments.

  • INFORMS grants several prestigious institute-wide prizes and awards for meritorious achievement each year. This track will feature presentations on the Wagner Prize, INFORMS Prize, and the UPS George D. Smith Prize winner. Innovative Applications in Analytics Award (IAAA) and Hackathon finalists will also present. Special sessions will include presentations about INFORMS Education & Industry Outreach and the CAP program.


    Optimizing Army Cyber Branch Readiness and Manning Under Uncertainty: Stochastic and Robust Goal Programming Approaches

    Colonel Andrew Hall
    The Department of Defense (DoD) Cyber Mission Force (CMF) was established in 2012 to carry out DoD’s cyber missions. The CMF consists of cyber operators with the mission to augment traditional defensive measures and defend priority DoD networks and systems against priority threats; defend the US and its interests against cyberattacks of significant consequence; and support combatant commands by generating integrated cyberspace effects in support of operational plans and contingency operations.

    Given the unique expertise required of military personnel to execute the DoD cyber mission, the US Army created the Army Cyber Branch (ACB) to establish managed career fields for Army cyber warriors, while providing a force structure with successive opportunities for career development and talent management via leadership and broadening positions, technical training, and advanced education. In order to optimize readiness and manning levels across the Army’s operating and generating forces, the Army Cyber Proponent (Office Chief of Cyber) at the Cyber Center of Excellence sought analytical decision-support to project the optimal number of accessions, promotions and personnel inventory for each cyber specialty across the Army cyber enterprise needed to support a 30-year career life cycle.

    We proffer the Cyber Force Manning Model (CFMM), an advanced analytics framework that uses stochastic and robust goal programming approaches to enable the modeling, experimentation and optimization necessary to help solve the Army’s Cyber Workforce Planning Problem under uncertainty. The stochastic and robust optimization variants of the CFMM provide tremendous value by enabling useful decision-support to senior cyber leaders and force management technicians, while optimizing ACB readiness by effectively projecting the optimal number of personnel needed to meet the demands of the current force structure.
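
    As a feel for the deterministic skeleton underneath the CFMM’s stochastic and robust variants, here is a minimal one-specialty goal program, written in the cvxpy modeling package with invented figures: accessions are chosen so projected inventory tracks authorized manning, with under-manning penalized more heavily than excess.

```python
import cvxpy as cp
import numpy as np

# Stylized one-specialty goal program (all figures invented): choose yearly
# accessions so projected inventory tracks authorized manning over a horizon.
T = 10
goal = np.full(T, 300.0)       # authorized manning by year
retention = 0.88               # fraction of the force retained year over year
start_inventory = 250.0

accessions = cp.Variable(T, nonneg=True)
under = cp.Variable(T, nonneg=True)    # shortfall vs. the manning goal
over = cp.Variable(T, nonneg=True)     # excess vs. the manning goal

inventory, constraints = start_inventory, []
for t in range(T):
    inventory = retention * inventory + accessions[t]
    constraints += [inventory - goal[t] == over[t] - under[t],
                    accessions[t] <= 80]       # training-pipeline capacity

# Under-manning hurts readiness more than excess, hence the heavier weight.
prob = cp.Problem(cp.Minimize(cp.sum(3 * under + over)), constraints)
prob.solve()
print(np.round(accessions.value, 1))
```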

  • Emerging concepts and innovative technologies are disrupting business and driving new policies, products, services, and channels for increased revenue. Supply chain leaders are evolving their businesses to keep pace and, during this track, our accomplished speakers will share practical applications of new concepts and technologies being implemented within their supply chain analytics programs to maximize their efficiencies while minimizing business disruptions.


    ROMEO: A Fast and Transparent Framework for Multi-Echelon Inventory Analytics in Chemical Industries

    Baptiste Lebreton
    Defining the right level of inventory in multi-echelon supply chains is a key issue for commodity as well as specialty chemical companies. In the past 15 years, the Guaranteed Service Model (GSM) has gained wide adoption in planning software. While GSM-based approaches bring valuable insights in retail or discrete-manufacturing supply chains, they fall short in chemical supply chains, where production wheels, tight manufacturing and warehousing capacity constraints, and variable recipes exist. We present a simulation/optimization approach called ROMEO (Rolling Optimizer for Multi-Echelon Operations) that replicates daily supply chain operations (Order Promising/ATP, Supply Planning) and hence provides analysts with more tractable inventory recommendations that users can relate to. After a quick overview of the literature and the problem statement, we’ll describe ROMEO’s logic and show how it is currently applied at Eastman Chemical Company to drive inventories down.
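
    For context on what ROMEO moves beyond, here is the single-stage guaranteed-service calculation in miniature (illustrative numbers): clean and fast, but blind to exactly the production wheels and capacity limits named above.

```python
from math import sqrt

from scipy.stats import norm

# Single-stage guaranteed-service calculation (illustrative numbers): the
# GSM covers demand over the stage's net replenishment time
#   tau = inbound service time + processing time - outbound service time
# with safety stock z * sigma * sqrt(tau).
mu, sigma = 100.0, 30.0          # daily demand mean and std dev
s_in, proc, s_out = 5, 3, 2      # days
z = norm.ppf(0.95)               # safety factor for a 95% service level

tau = s_in + proc - s_out
base_stock = mu * tau + z * sigma * sqrt(tau)
print(f"net replenishment time: {tau} days, base stock: {base_stock:.0f} units")
```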


    LLamasoft

    Steve Sommer
    LLamasoft will discuss how its Applied Research group combines cutting-edge applications of classical operations research techniques with newer advanced analytical techniques to solve a broad spectrum of supply chain business problems. The talk will explore the classical applications within network optimization, vehicle route optimization, inventory optimization, and supply chain simulation to solve detailed supply chain problems at scale. Additionally, it will discuss the application of machine learning and artificial intelligence to improve demand forecasting. Finally, it will touch on the intersection between the classical and newer analytical techniques and how they can work together to solve more problems.