Research Staff Member & Manager, Cognitive Commerce Group
IBM TJ Watson Research Center
A Rapid What-if Simulator for E-commerce Order Fulfillment
Motivation: As online demand grows rapidly, retailers are increasingly looking to leverage their store networks to fulfill online orders. Today many retailers use options such as ship-from-store (SFS) or buy-online-pickup-in-store (BOPUS) to fulfill online orders. To this end, retailers use order management system (OMS) solutions, such as those from IBM, Manhattan Associates, and Oracle, to manage various aspects of order processing, including fulfillment. To make order fulfillment decisions, OMS solutions rely on a set of user-defined priority rules built from combinations of product categories, customer groups, and geographic distribution groups of stores and DCs. These systems typically make order-by-order fulfillment assignments broadly in first-in-first-out (FIFO) order. During real-time order fulfillment decision making, the system checks inventory availability across the network locations (nodes) and uses priority-based rules to select the set of nodes (stores and/or DCs) that form the fulfillment solution. These systems may further be enhanced to perform optimization on top of the candidate nodes selected through rules, trading off multiple conflicting business objectives such as shipping cost, load balancing, and inventory performance. While the fulfillment solution space is rich, to the best of our knowledge no tool exists that allows a retailer to perform what-if simulations to understand how certain fulfillment policies will pan out or what kind of benefits sophisticated optimization techniques will yield. Retailers are interested in the impact of several what-if scenarios, such as: stores first or DC first? 200 stores or 800 stores? Currently, retailers rely on simplistic models to make guesses and run in-production experiments. Some policy changes may have far-reaching implications; trying them in production over short durations can be expensive and may not reflect the full range of possibilities.
Solution: To address these challenges, we develop a discrete event simulation system to perform rapid what-if simulations. The system uses a retailer's historical data on inventory, sales, orders, and other relevant sources, performs FIFO order-by-order fulfillment simulation, keeps track of system state changes, and generates a report of key performance indicators (KPIs). The system allows a user to easily set up multiple what-if simulation scenarios, perform rapid fulfillment simulations, and compare KPIs across scenarios to infer the next best action. Typical KPIs include units assigned across the DC and store networks, capacity utilization, number of packages per order, shipping and labor cost per order, network backlog, and delivery delays.
There are several challenges in building an order-by-order fulfillment simulation system; the main ones are managing the scale of the data and running the simulations fast enough for the tool to be useful. After every order fulfillment decision, the network state, including available inventory and used capacity, gets updated, and this affects future order fulfillment decisions. Hence, any parallel simulation approach that ignores the network state update is not applicable. The simulations across orders need to be done largely sequentially while continuously updating the network state, and this closely reflects real-world OMS behavior. This means that to speed up the simulations, we need to speed up both the fulfillment decision making and the network state update. In terms of scale, for one of the retailers in our case study, a few days' worth of data including inventory, point of sales, and online orders is truly Big data. For this retailer, in 2016, the number of item-node combinations representing available inventory was more than 180 million, the number of item-node level point-of-sale transactions on Black Friday was more than 40 million, and the number of online orders on Cyber Monday was more than 1 million. At this scale, the network state update needs to be fast. The real-time fulfillment decision making itself is also challenging: for a typical 4-item order with inventory available across 200 nodes in the network, the number of candidate assignments is 200^4, i.e., above 1.6 billion inventory look-ups – a challenge for an OMS solution as well. To speed up the simulations, we establish processes to reduce the Big data to a small, optimized data set and use highly optimized in-memory data structures. The fulfillment decision logic itself closely mimics the rule-based approach of typical OMS solutions or our recently offered multi-objective optimization-based approach.
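As a simplified illustration of the sequential FIFO loop described above – not the actual IBM system – the following sketch processes orders one at a time against an in-memory inventory state, so that each decision is visible to all later orders. The node names, the greedy priority rule, and the KPI set are hypothetical:

```python
from collections import defaultdict

class FulfillmentSimulator:
    """Minimal sketch of a FIFO order-by-order fulfillment simulation."""

    def __init__(self, inventory, node_priority):
        # inventory: dict mapping (item, node) -> available units (in-memory state)
        # node_priority: ordered list of nodes, e.g. stores before DCs
        self.inventory = defaultdict(int, inventory)
        self.node_priority = node_priority
        self.kpis = {"units_assigned": defaultdict(int), "unfulfilled": 0}

    def fulfill(self, order):
        """Assign each order line to the highest-priority node with stock,
        updating the network state so later orders see the new availability."""
        for item, qty in order:
            remaining = qty
            for node in self.node_priority:
                if remaining == 0:
                    break
                take = min(remaining, self.inventory[(item, node)])
                if take:
                    self.inventory[(item, node)] -= take  # network state update
                    self.kpis["units_assigned"][node] += take
                    remaining -= take
            self.kpis["unfulfilled"] += remaining

    def run(self, orders):
        # Orders must be processed sequentially (FIFO): each decision changes
        # the state that every subsequent decision depends on.
        for order in orders:
            self.fulfill(order)
        return self.kpis
```

In this sketch a stores-first policy is encoded simply by listing stores before DCs in `node_priority`; reordering the list simulates a DC-first or weighted-priority policy without changing the loop.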
In this work, we describe the simulation system, the details of the rule-based and optimization-based fulfillment logic, and two real-world case studies. The simulation system is offered to a retail business user through a browser-based user interface. Case studies: (1) A large retailer with 1100+ stores and a handful of DCs used our simulator for the peak season of 2014 to simulate two major fulfillment policies – keep-it-together (where as many items from an order as possible are kept together) vs. split-mixed (orders are split into SFS-eligible and SFS-non-eligible parts, and stores are preferred for the SFS-eligible part). (2) The same retailer used our simulator to find an optimal set of parameters for its optimizer-based fulfillment model for the peak season of 2016. (3) A mid-size retailer with around 150 stores and a few DCs used our simulator with the rule-based model to simulate different fulfillment policies, including stores-first, DC-first, and weighted-priority-based combinations. (4) The same retailer used the simulator to compare the performance of the optimizer (optimizing shipping cost, labor cost, and markdown cost) against its baseline on historical data and to plan next steps.
Dr. Ajay Deshpande is a Research Staff Member and Manager of the Cognitive Commerce group at the IBM T. J. Watson Research Center. Prior to joining IBM, he worked as a Postdoctoral Associate in the Laboratory for Manufacturing and Productivity at the Massachusetts Institute of Technology (MIT). Dr. Deshpande received his Ph.D. degree in Mechanical Engineering from MIT in 2008, where he was a recipient of the MIT Presidential Fellowship. He also earned double Master's degrees in Mechanical Engineering, and Electrical Engineering and Computer Science, from MIT in 2006. He graduated from the Indian Institute of Technology Bombay in 2001 with his B.Tech. and M.Tech. degrees. Dr. Deshpande is a Master Inventor at IBM and has received multiple IBM Manager's Choice Awards, several IBM Invention Plateau Awards, and the Best Paper Award at the CHI 2013 conference. He has co-authored several papers in the areas of commerce, smarter cities, and the Internet of Things.
Associate Professor of Operations Research
University of Klagenfurt
Optimizing a Fleet of Vehicles for Efficient and Customer-friendly Grocery Delivery
Nowadays, all major supermarket chains provide online shopping services, where customers select groceries on the supermarket's website, as well as a delivery time window. The goal of our joint research project with one of the world's largest supermarket chains was to design a stable and fast optimization approach that produces efficient vehicle routes for a large number of customers and vehicles in real time. More efficient tours not only save money but also make it possible to insert additional customer orders into the tours and to show more available time windows to each new customer, leading to higher customer satisfaction. Our optimization approach improved the efficiency of the routes by nearly 10% compared to our client's previous system, which translates to cost reductions of several million euros per year. Additionally, we were able to increase customer satisfaction, as customers are now more likely to obtain their preferred delivery time window.
Philipp is an Associate Professor of Operations Research at the University of Klagenfurt in Austria. He has received several awards from scientific societies and from the Austrian president for his research. For the past two years he has led a research team of 10 people funded by an industrial cooperation with the English technology firm Satalia (NPComplete Ltd), which provides artificial intelligence solutions to solve industry's hardest problems. The primary goal of this industrial cooperation is to build state-of-the-art heuristics, exact methods, and machine learning approaches for solving combinatorial optimization problems lying at the heart of real-world applications. Core application areas are routing and scheduling. Currently Philipp's team conducts projects with one of the world's largest supermarket chains and one of the world's largest accounting companies, among others.
Manager, Advanced Analytics
UPM-Kymmene Corporation
What To Do When Planners Deviate From Optimized Plans? Case Hydropower Planning
Planning tools are often based on optimization models whose recommendations the tool users may, or may not, follow. Empirical evidence suggests that decision makers can deviate from the optimal recommendations provided by a model even in problems that are recurrent, for instance in inventory management or production planning. In this presentation, we describe a case study of a hydropower producer that uses an in-house tool for the optimization of daily production plans, driven by electricity price forecasts and constrained by environmental limits and river flow dynamics. We study hundreds of daily plans with a focus on deviations between the optimized and actual plans, complemented by corresponding planner feedback collected each day via a web survey. By analyzing this data, we are able to identify when planners' deviations are beneficial and when they are harmful. We also identify improvement potential relating to i) de-biasing the electricity price forecast, ii) improving the hydrological model, and iii) adding new model constraints that allow the planner to control the amount of hour-to-hour change in the plan. After detailing the case study, we provide insights on how to identify key challenges in the use of optimization models by collecting both quantitative and subjective data from the decision process. We then present multiple ideas for corrective actions that pertain to input data, model development, and decision maker behavior. The insights are widely applicable to decision processes that are supported by optimization and based on business data.
Anssi Käki manages a team dedicated to analytics and operations research at Finland-based UPM-Kymmene Corporation. At UPM, he has been working on hydropower production planning, power market bidding, paper machine scheduling and wood flow planning, for instance. Before joining UPM in 2013, Anssi worked in supply chain consulting at ROCE Partners (currently part of Chainalytics), where he led projects related to logistics process management, supply chain planning and demand forecasting in consumer electronics and technical wholesales industries. He has a D.Sc. degree in Operations Research from the Aalto University. His research has been published in scientific journals such as Journal of Business Logistics and IEEE Transactions on Engineering Management.
Operations Research Analyst
Ford Motor Company
Human Guided Optimization
Optimization has broad applications in problems like scheduling, routing, and investing. However, it can be difficult to capture the essence of a problem mathematically. Problem formulation is more difficult when there are multiple objectives, nonlinear relationships, subjective constraints, or incomplete data. How should we proceed when it is hard to capture the important features? The traditional approach assigns an optimization specialist to carefully craft a model that is realistic and solvable. But this process takes time and expertise, and it may under-utilize the domain knowledge of subject matter experts (SMEs) when their insights are hard to capture mathematically. As a result, project teams can lose time to frustration when "optimal" solutions for the model turn out to be suboptimal or infeasible in application. In this session we will explore how to partially overcome these challenges by having SMEs interact with the algorithm and guide it toward practical solutions. We will discuss:
• How SMEs can interact with optimization models through Decision Support Systems.
• How to create decomposition methods led by human guidance.
• How initial formulations change when the solution process will be human-guided.
• How to communicate to SMEs why the algorithm recommends certain decisions.
We will present a case study around production planning for stampings at Ford Motor Company. The manufacturer makes decisions years in advance about what to outsource and where to produce parts. The case study looks at how to adjust plans when conditions start deviating from the original assumptions, such as when demand is higher or lower than expected. The problem has multiple objectives, subjective constraints, and a lack of data to determine which of the 10^15 potential solutions are really feasible. We will use this problem as a backdrop to illustrate general challenges and demonstrate solutions.
We will demonstrate how human-guided optimization can reduce the development time needed to reach successful results. We will cover a decomposition approach that is analogous to Benders decomposition, but where the cuts are generated from human evaluation rather than by mathematical means. In this process the algorithm searches the solution space while SMEs guide it around nuances that may be hard to represent mathematically. Because SMEs are able to correct model inaccuracies in real time, less time is needed to prepare, up front, a robust formulation that is realistic for all corners of the solution space. Rather, SME inputs become part of a dynamic model that grows more accurate over time. This environment allows decision makers to gradually gain insight into, and trust in, the model without a lengthy model-development phase. We demonstrate through the case study how these strategies can lead to better business outcomes compared to unsupervised optimization.
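A minimal sketch of the idea – not Ford's actual system – is a loop that proposes a cost-minimal plan, asks an SME to accept or reject it, and turns each rejection into a no-good cut that excludes that plan from later search passes, analogous to how Benders decomposition adds cuts from a subproblem. The tiny plant-assignment instance, the brute-force search, and the reviewer stub are all hypothetical:

```python
import itertools

def best_plan(parts, plants, cost, cuts):
    """Enumerate assignments of parts to plants and return the cheapest
    plan not yet excluded by an SME-generated cut. (Brute force stands in
    for the master-problem solver of a real decomposition.)"""
    candidates = (
        dict(zip(parts, choice))
        for choice in itertools.product(plants, repeat=len(parts))
    )
    feasible = [p for p in candidates if frozenset(p.items()) not in cuts]
    return min(feasible,
               key=lambda p: sum(cost[part, plant] for part, plant in p.items()))

def guided_search(parts, plants, cost, sme_accepts, max_rounds=10):
    """Alternate between algorithmic search and human evaluation:
    each rejected plan becomes a no-good cut for the next pass."""
    cuts = set()
    for _ in range(max_rounds):
        plan = best_plan(parts, plants, cost, cuts)
        if sme_accepts(plan):              # human evaluation replaces the subproblem
            return plan
        cuts.add(frozenset(plan.items()))  # SME rejection -> no-good cut
    return None
```

For example, an SME rule encoding tacit knowledge such as "part p2 cannot run at plant B" can be passed as `sme_accepts=lambda plan: plan["p2"] != "B"`: the algorithm's first, cheapest proposal is cut, and the next pass returns the cheapest plan respecting the unmodeled constraint.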
Joshua is an Operations Research Analyst at Ford Motor Company. He received his PhD from Arizona State University in Industrial Engineering, with research in Energy Markets.
Software Engineer, Distributed Systems
Yelp
Super-charging Dev Ops with Analytics: How Yelp Runs Millions of Tests a Week and Saves Money Doing It
Over the last two years, Yelp has developed an in-house, scalable, and reliable parallel task execution system called Seagull. This system drives Yelp's enormous testing infrastructure: Seagull enables Yelp developers to run nearly 1.5 million tests of the Yelp website and platform every day, ensuring that changes and improvements to Yelp's application are safe to deploy to a production setting. In this talk, we will describe how Yelp uses advanced analytics techniques to solve two related scheduling problems. In so doing, Yelp is able to ensure that its developers can perform continuous development on the Yelp application while keeping infrastructure costs low and maintaining the stability of its systems. We will also share lessons that we have learned on our journey toward powering DevOps with analytic techniques.
Dr. David R. Morrison is a software engineer working in scheduling and optimization on the distributed systems team at Yelp. He has developed and improved auto-scaling code for one of Yelp’s most expensive compute clusters. Previously, David worked at Inverse Limit, where he received federal funding from DARPA as well as money from Google’s ATAP program to do research and development on a wide range of projects. David received his PhD in computer science (with a focus in Operations Research) from the University of Illinois, Urbana-Champaign under the supervision of Dr. Sheldon Jacobson. David has an established track record as a public speaker; his most recent presentation was to a large audience at AWS re:Invent 2016. He has also given multiple presentations at the INFORMS Annual Meetings, as well as other venues.