Predictive Analytics/Forecasting


Galina Andreeva

Senior Lecturer, Management Science
The University of Edinburgh Business School

Psychology Of Consumer Credit And Reasons For Default: Potential Value In Modeling Loss Given Default

Risk assessment in consumer lending relies on automated credit scoring, i.e. models that estimate customers’ creditworthiness. Yet the reasons behind default and the customer’s personality are left out of the equation, with standard predictors being observable proxies such as age, length of employment, etc. After a brief introduction to credit scoring, this talk will describe approaches used for modeling Loss Given Default (LGD), one of the three regulatory credit risk components, usually measured via the Recovery Rate (the proportion of debt recovered from defaulted borrowers). A summary of the limited literature on reasons for default, personality and credit behavior will follow, together with the results of a project that models LGD using a unique Brazilian dataset combining standard loan characteristics with survey data, including borrowers’ risk-taking attitudes, their financial knowledge and their reasons for missing payments. The talk will outline potential applications of personality measures in the credit area, the associated advantages and the practical difficulties.
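
For a concrete flavor of the modeling problem, here is a minimal sketch (not from the talk) of one standard LGD approach: a fractional-logit regression of Recovery Rate on loan characteristics augmented with survey-based personality measures. The dataset and column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical defaulted-loan data including survey-based personality measures
    df = pd.read_csv("defaulted_loans.csv")

    # Standard observable proxies plus survey measures as predictors
    X = sm.add_constant(df[["age", "employment_years", "loan_amount",
                            "risk_attitude_score", "financial_literacy_score"]])
    y = df["recovery_rate"]  # proportion of defaulted debt recovered, in [0, 1]

    # A binomial GLM with a logit link handles a fractional response (recovery rate)
    result = sm.GLM(y, X, family=sm.families.Binomial()).fit(scale="X2")
    print(result.summary())

Comparing the fit of this model with and without the survey-based predictors is one simple way to assess the incremental value of personality information, which is the question the talk addresses.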

Bio

Dr Galina Andreeva (MA, MSc, PhD) is a Senior Lecturer in Management Science at The University of Edinburgh Business School (UEBS) and a member of the Credit Research Centre. Her research interests involve risk assessment in consumer credit and SME lending, and the value of different types of information in predictive modeling. She has published in reputable journals, including the Journal of the Operational Research Society and the European Journal of Operational Research, and has presented at multiple international conferences. Her research has been funded by the Economic and Social Research Council, the Abbey Santander Research Fund, the Royal Society of Edinburgh and the UEBS Venture Fund. Prior to joining UEBS, Galina worked at one of the major UK banks. She maintains close contacts with the credit industry and has collaborated on several consultancy projects. She teaches ‘Credit Risk Management’ to postgraduate students and supervises a number of PhD projects in the credit risk area.


Alex Cosmas

Chief Scientist, Strategic Innovation Group
Booz Allen Hamilton

So You Can Predict The Future. Big Deal. Now Change It.

It’s time for us to push the envelope as a data science community. We’ve proven our ability to find the most obscure of facts (e.g., humans share 50% of their DNA with bananas). We’ve uncovered patterns in untamable datasets that lead to ground-breaking insights. We’ve even learned how to predict the future.
But that is no longer enough. Our audiences want to know why things are happening, and more notably, how to change the futures we predict. They want the identification of root causes, typically the product of human reasoning and experience. Influence requires intervention – and intervention requires an understanding of the rules of a system. That requires causal – rather than simply observational – inference.
While we always heed the old adage “correlation does not imply causation,” many data scientists lack the understanding of the specific conditions under which it is acceptable to derive causal interpretations from correlations in observational data.
How can we isolate the effect of a drug treatment, a marketing campaign, or a policy from observational data? The Neyman-Rubin causal model provides a baseline framework for causal inference. However, given the unavoidable imperfections of experimental design and data collection, we must address the prevalence of covariates and confounders. We can treat covariates through statistical matching methods, such as propensity score matching or Mahalanobis distance matching. Furthermore, Judea Pearl’s do-calculus enables the treatment of confounders through statistical “intervention.”
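To illustrate the matching idea, here is a minimal sketch of propensity score matching on a hypothetical observational dataset (measured covariates, a binary treatment flag, and an outcome); it is an illustration, not Alex’s implementation.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical observational data: covariates, a treatment flag, an outcome
    df = pd.read_csv("campaign_data.csv")
    covariates = ["age", "income", "prior_spend"]

    # 1. Estimate each unit's propensity to receive treatment from its covariates
    ps_model = LogisticRegression().fit(df[covariates], df["treated"])
    df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

    # 2. Match each treated unit to the control unit with the closest propensity
    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. The mean outcome difference over matched pairs estimates the
    #    average treatment effect on the treated (ATT)
    att = treated["outcome"].mean() - matched_control["outcome"].mean()
    print(f"Estimated effect on the treated: {att:.3f}")
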
The million-dollar question is: can we machine-learn causality? Getting causal is no small feat, but it’s not insurmountable. Today, we can employ heuristics alongside machine-learned models to eliminate non-causal candidates. For example:
1. Causes and effects must demonstrate mutual information (or Pearson’s correlation in linear models): observation of a random variable X (the cause) must reduce the uncertainty of Y (the effect). (A minimal check is sketched below.)
2. Temporalization: causes must occur before effects. This is obvious, but incredibly powerful in ruling out candidate causes. Simultaneous observation of random variables (e.g., rainfall and a wet lawn) is often simply a limitation in the fidelity of data collection.
3. Networks of cause and effect: multiple effects of the same cause cannot demonstrate partial information flow. For example, we cannot observe multiple lawns – some wet and some not – and conclude that rainfall is the exclusive root cause. Instead, there is either an additional (perhaps latent) cause or rainfall is not the cause.
If we don’t have mutual information, proper temporalization, and a consistent causal structure, we know we can’t claim causality. But, as we recall from standardized exams, the process of elimination often leads to the correct answer.
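Here is a minimal sketch of heuristic 1 on synthetic data: a variable with near-zero mutual information with the effect can be struck from the list of candidate causes. The data-generating process is invented purely for illustration.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    # Invented data-generating process: lawn wetness is driven by rainfall only
    rng = np.random.default_rng(0)
    rainfall = rng.exponential(1.0, 5000)               # candidate cause
    noise = rng.normal(0.0, 1.0, 5000)                  # unrelated variable
    lawn_wetness = rainfall + rng.normal(0, 0.3, 5000)  # true effect of rainfall

    X = np.column_stack([rainfall, noise])
    mi = mutual_info_regression(X, lawn_wetness, random_state=0)
    print(f"MI(rainfall, wetness) = {mi[0]:.2f}")  # clearly positive
    print(f"MI(noise, wetness)    = {mi[1]:.2f}")  # near zero: eliminate noise as a cause
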
Alex will introduce causal inference in the context of Bayesian Belief Networks (BBNs). BBNs produce accurate predictive forecasts, but with appropriate modeler input they can also identify causal relationships between variables and pinpoint drivers of desired targets. With causal relationships identified, BBNs may be used in a prescriptive fashion to make actionable decisions. Alex will dive into a case study in the aviation space that identifies causal effects of daily flight operations on flight delays and allows us to prescribe delay-reduction plans by acting on controllable drivers.
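To make the BBN idea concrete, here is a minimal sketch using the pgmpy library (a recent version is assumed), with a toy three-node network and invented probabilities; the case study’s actual variables and estimates will of course differ.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Toy network: ground-crew staffing and weather both drive flight delay
    bbn = BayesianNetwork([("staffing", "delay"), ("weather", "delay")])
    bbn.add_cpds(
        TabularCPD("staffing", 2, [[0.7], [0.3]]),  # P(low)=0.7, P(high)=0.3
        TabularCPD("weather", 2, [[0.8], [0.2]]),   # P(clear)=0.8, P(storm)=0.2
        # P(delay | staffing, weather); columns ordered (low,clear), (low,storm),
        # (high,clear), (high,storm)
        TabularCPD("delay", 2,
                   [[0.6, 0.2, 0.9, 0.5],           # P(no delay | ...)
                    [0.4, 0.8, 0.1, 0.5]],          # P(delay | ...)
                   evidence=["staffing", "weather"], evidence_card=[2, 2]),
    )

    infer = VariableElimination(bbn)
    # Predictive query: delay probability under currently low staffing
    print(infer.query(["delay"], evidence={"staffing": 0}))
    # Prescriptive query: act on the controllable driver. Staffing has no parents in
    # this toy graph, so conditioning coincides with the intervention do(staffing=high).
    print(infer.query(["delay"], evidence={"staffing": 1}))
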
We, as data scientists, will continue to be called upon to solve the toughest problems. We cannot get away with relying upon machine-learning to do machine-thinking. To earn our place in the science community, we must become stewards of the scientific method – of theory formulation and scientific reasoning. In other words, we must become truth-seekers, not simply fact-seekers.

Bio

Alex Cosmas is Chief Scientist in Booz Allen Hamilton’s Strategic Innovation Group, specializing in predictive analytics across the consumer sectors. He is an expert in the use of probabilistic Bayesian Belief Networks to perform both deductive and inductive reasoning over large datasets. He has consulted for Fortune 100 companies, both domestically and internationally, in the areas of demand modeling, consumer choice, network modeling, revenue management and pricing. He earned a Bachelor’s in Applied Physics from Columbia University and a Master’s in Aerospace Engineering from the Massachusetts Institute of Technology.


Robert Fildes

Distinguished Professor of Management Science; Director
Lancaster University School of Management; Lancaster Centre for Forecasting

Improving Forecast Quality In The Supply Chain

Demand forecasts drive supply chain planning for both manufacturers and retailers. Their accuracy affects service levels, inventory and distribution, and the bottom lines of profitability and service. The forecasting process in most organizations is complex, with the resulting forecast incorporating information contributed by sales, marketing and logistics. For manufacturers and suppliers it may also include downstream information from retailers. This Sales and Operations Planning (S&OP) process adds market intelligence, leading to a judgmentally adjusted ‘final’ operational forecast. Every aspect of the process can introduce biases and unnecessary inaccuracies. This presentation discusses recent research that aims to improve the quality of the forecasting function. Much academic research focuses on statistical or economic models, but organizations seldom use the more complicated models provided by OR/Analytics researchers, even where there is evidence that they provide benefits and they are readily available in commercial software such as SAP/APO. A lot more is involved in achieving forecast improvement than software and statistical methodology, important though they are; it is also about removing organizational impediments, developing appropriate performance benchmarks and motivational incentives, and improving data reliability and flow within the organization. The typical supply chain forecasting activity involves three key components: the forecasting staff, with their expertise and knowledge; the statistical methods employed; and the forecasting support system itself. Drawing on extended survey and analytical work, illustrated with real-life case studies from supply chain companies such as Beiersdorf, Bayer and Sanofi-Aventis, where inventory savings in the millions of dollars have been made, I present results that identify the key barriers to quality improvements. The presentation will examine four areas where improvements can be achieved:

    • The quality dimensions on which an organization can evaluate its performance
    • The priorities given by companies to different types of information depending on their organizational collaborations
    • Statistical forecasting methods based on promotional and downstream information and the role of judgment
    • The link between organizational change and forecast quality improvement to account for the needs of demand forecasters.
Forecasters (and their managers) attending will learn:

    • The information typically used by manufacturers in producing their forecasts
    • How to measure forecast quality: accuracy and value added, and the biases and misunderstandings seen in a typical S&OP process
    • How to achieve the benefits of incorporating additional information, including expert judgment, into standard supply chain forecasting models through the effective combination of judgment with models (see the sketch after this list)
    • The components needed for a successful organizational change program: demand forecasting is rarely part of an OR degree program or available as an ‘extension’, so tailored courses are needed to match the particular characteristics of the organization
    • How commercial software should be designed to help the forecasting team work effectively
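
As one illustration of the bullet on incorporating additional information, here is a minimal sketch (not from the talk) of a regression-based demand model that brings promotional and downstream retailer information into the statistical forecast; the dataset and column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical weekly history: columns week, sales, promo (0/1), retailer_orders
    df = pd.read_csv("weekly_sales.csv")
    df["t"] = range(len(df))  # linear trend term
    df["week_of_year"] = pd.to_datetime(df["week"]).dt.isocalendar().week

    # Promotional uplift and downstream (retailer) information enter as regressors
    # alongside trend and weekly seasonality
    model = smf.ols("sales ~ t + C(week_of_year) + promo + retailer_orders",
                    data=df).fit()
    print(model.params[["promo", "retailer_orders"]])  # estimated uplift effects

Estimating promotional effects inside the model, rather than adding them judgmentally afterwards, gives a baseline against which judgmental adjustments can themselves be evaluated, one of the talk’s themes.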

The talk’s overall message is that major improvements in forecast quality require an organizational change program. This should focus not just on the technical characteristics of the forecasts but also on the organizational linkages and on developing the expertise of the forecasting team.

Bio

Robert Fildes is Distinguished Professor of Management Science in the School of Management, Lancaster University, and Director of the Lancaster Centre for Forecasting. The Centre carries out applied research and training for industry and government aimed at improving forecasting performance. He was co-founder in 1981 of the International Institute of Forecasters and in 1985 of the International Journal of Forecasting. He has published widely, including in Management Science and Interfaces, on almost all aspects of forecasting. Topics include global warming, the combination of judgmental and statistical forecasts, organizational issues, and the statistical modelling of sales from the perspectives of both retailers and manufacturers. His major concern is that, despite all the research, organisations still stay with old-fashioned systems and methods and do not validate their forecasts. The solution, he thinks, is better-designed forecasting systems, better-trained forecasters and more discriminating consumers. Robert has served on the Edelman Awards Panel. In 2015 he was awarded the UK OR Society’s Beale Medal, its highest accolade.


Curry Weston Hilton

PhD student, Applied Statistics
University of Alabama

Demand Forecasting

This session is aimed at delivering best practices in demand forecasting for supply chain decisions to forecasting managers, demand planning teams, and executives. The audience will be exposed to an in-depth review of practices from companies that successfully leverage robust forecasts and business processes to reduce inventory costs, make educated vendor purchases, schedule lean manufacturing, manage promotional and seasonal demand, and more. Emphasis will be placed on forecasting techniques for modeling time series demand data.
The presentation will explore the following topics and answer the underlying questions:
I. Importance of Quality Forecasting
    a. Why should an organization invest in forecasting?
    b. What types of actual benefits can be reaped from demand forecasting?
        i. Case-study results from two companies (a retailer and a manufacturer) demonstrating gains in forecast accuracy, bias, and inventory holding costs
II. Effective Forecasting Techniques
    a. When are certain forecasting models appropriate?
        i. Seasonal, trend, business life cycle, promotional, intermittent, etc.
        ii. Code provided for modeling multiple scenarios
            1. SAS® code
            2. R® code
    b. What is the most frequently used holdout or training period for model selection?
    c. What benefits do SAS® and R® provide in demand forecasting?
        i. IDEs and GUIs
    d. How do you handle sparse data and intermittent demand?
    e. What is the best hierarchical level for generating forecasts, and what is the best practice for distribution/reconciliation among levels?
    f. At what frequency should demand forecasts be produced?
    g. What is an “accurate” forecast horizon?
III. Measurement and Reporting
    a. Why use a naïve baseline forecast?
    b. Why is forecast value added (FVA) the only true means to measure forecast accuracy?
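
To make III(a) and III(b) concrete, here is a minimal sketch of computing forecast value added (FVA) against a naïve baseline; all numbers are synthetic.

    import numpy as np

    # Synthetic demand history and a (hypothetical) model forecast for the same periods
    actuals = np.array([102.0, 98, 110, 105, 99, 120, 115, 108])
    model_forecast = np.array([100.0, 101, 104, 107, 103, 112, 117, 111])

    # Naive baseline: forecast each period with the previous period's actual
    naive_forecast = actuals[:-1]
    actuals_eval = actuals[1:]
    model_eval = model_forecast[1:]

    def mape(actual, forecast):
        """Mean absolute percentage error."""
        return np.mean(np.abs(actual - forecast) / actual) * 100

    # FVA: how much accuracy the model adds over the naive baseline
    fva = mape(actuals_eval, naive_forecast) - mape(actuals_eval, model_eval)
    print(f"Naive MAPE: {mape(actuals_eval, naive_forecast):.1f}%")
    print(f"Model MAPE: {mape(actuals_eval, model_eval):.1f}%")
    print(f"FVA: {fva:+.1f} percentage points")  # positive means the model adds value
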
Participants will take away practical guidance on how to create forecast models in two statistical languages based on the data available, how to effectively measure forecast performance, and how to communicate the value added by such forecasts for more informed supply chain decisions.
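
As a small taste of that model building (in Python here, rather than the SAS® and R® code the session will provide), a Holt-Winters sketch on synthetic monthly demand with trend and seasonality:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Synthetic monthly demand: trend plus annual seasonality plus noise
    rng = np.random.default_rng(1)
    t = np.arange(48)
    demand = 200 + 2 * t + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 8, 48)
    series = pd.Series(demand,
                       index=pd.date_range("2012-01-01", periods=48, freq="MS"))

    # Additive trend and additive seasonality (Holt-Winters)
    fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                               seasonal_periods=12).fit()
    print(fit.forecast(6))  # six-month-ahead demand forecast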

Bio

Curry W. Hilton is a PhD student in Applied Statistics at the University of Alabama. He holds a B.S. in Business Economics from the University of North Carolina, a B.A. in Mathematics and Statistics from High Point University, and an M.A. in Economics from Clemson University.
He is an avid SAS® and R® user, well versed in best practices of predictive analytics and data science. His research interests center on computational statistics in the areas of statistical quality control (SQC) and supply chain analytics. In the SQC arena, he is developing an R repository of frequently used statistical process control charts. In addition, he has made numerous contributions to the literature on forecasting and pricing strategy. He has led discussions and break-out sessions at well-known conferences such as SAS Global Forum, INFORMS, and the Professional Pricing Society. He has authored multiple articles in the Journal of Professional Pricing, the Pricing Advisory Newsletter, and the Wiglaf Journal, and co-authored a textbook in the field of pricing strategy.
Hilton has held multiple teaching appointments in Statistics, Business Analytics, and Economics at the University of Alabama, Lipscomb University, and Elon University, and has twice been recognized for teaching excellence at Elon University. He previously managed a demand planning implementation and a team of demand planners at a premier Fortune 500 retailer.


Leonard Oppenheimer

IBM

Out Of The Black Box: Advanced Analytics Merges Into Planning And Forecasting

The advent of friendlier advanced-analytics software is enabling a critical transition for businesses: predictive and optimization capabilities are being democratized and incorporated into more traditional sales and operations planning and forecasting. Our presentation will explore the trends underneath this shift, such as:

  • Psychological factors: Business professionals are granting algorithmic scenarios equal status with those created by their human analysts.
  • Analytics software improvements: How has the software changed to permit non-statisticians to work with complex internal and external datasets?
  • Planning and forecasting software improvements: How are software vendors and businesses integrating a single OLAP planning and forecasting hub with optimization and predictive algorithms, enabling full-circle support for baseline demand and expense planning and for production and sales optimization?

Our leading-edge planning and forecasting stories will include Huffy Bicycle and Pabst Brewing (predictive demand planning), IBM (predictive expense planning), and Osram Opto Semiconductors (production optimization). We will explore how these businesses are merging previously available advanced capabilities into their business-critical, day-to-day planning and forecasting cycles.

Bio

Blending experience from his education, careers, & avocations, Leonard Oppenheimer helps organizations improve their financial & performance management, planning & forecasting, and strategy & governance. Leonard earned a BA & an MBA from the University of Chicago. He spent 22 years in public/non-profit financial management. He joined the software industry 15 years ago, helping develop advanced planning, forecasting & analytics software & solutions. He is a co-inventor on two patents. He has served on governing boards & in leadership roles of three non-profits. He has participated in NSF-funded research in areas as diverse as urban transportation pricing and crowd-sourced synthetic RNA creation.