Senior Lecturer, Management Science
The University of Edinburgh Business School
|Risk assessment in consumer lending relies on automated credit scoring, i.e. models that estimate customers’ creditworthiness. Yet the reasons behind default and the customer’s personality are left out of the equation, with standard predictors being observable proxies such as age, length of employment, etc. After a brief introduction to credit scoring, this talk will describe approaches used for modeling Loss Given Default (LGD), one of the three regulatory credit risk components, usually measured as the Recovery Rate (the proportion of debt recovered from defaulted borrowers). A summary of the limited literature on reasons for default, personality and credit behavior will follow, together with the results of a project that models LGD using a unique Brazilian dataset combining standard loan characteristics with survey data, including borrowers’ risk-taking attitudes, their financial knowledge and their reasons for missing payments. The talk will outline potential applications of personality measures in the credit area, along with the associated advantages and practical difficulties.|
Chief Scientist, Strategic Innovation Group
Booz Allen Hamilton
It’s time for us to push the envelope as a data science community. We’ve proven our ability to find the most obscure of facts (e.g., humans share 50% of their DNA with bananas). We’ve uncovered patterns in untamable datasets that lead to ground-breaking insights. We’ve even learned how to predict the future.
But that is no longer enough. Our audiences want to know why things are happening, and more notably, how to change the futures we predict. They want the identification of root causes, typically the product of human reasoning and experience. Influence requires intervention – and intervention requires an understanding of the rules of a system. That requires causal – rather than simply observational – inference.
While we always heed the old adage “correlation does not imply causation,” many data scientists lack the understanding of the specific conditions under which it is acceptable to derive causal interpretations from correlations in observational data.
How can we isolate the effect of a drug treatment, a marketing campaign, or a policy from observational data? The Neyman-Rubin causal model provides us with a baseline framework for causal inference. However, given the unavoidable imperfections of experimental design and data collection, we must address the prevalence of covariates and confounders. We can treat covariates through statistical matching methods, such as propensity score matching or Mahalanobis distance matching. Furthermore, Judea Pearl’s do-calculus enables the treatment of confounders through statistical “intervention.”
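The matching idea above can be sketched in a few lines of Python. This is an illustrative toy, not the speaker's method: the data are synthetic, and the covariates (`age`, `income`), the true treatment effect of +2.0, and all coefficients are made up for the example. The sketch estimates each unit's propensity to be treated from its covariates, then compares each treated unit with its closest-scoring control.

```python
# Minimal sketch of nearest-neighbour propensity score matching on
# synthetic, confounded data (all names and numbers are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(40, 10, n)       # covariate
income = rng.normal(50, 15, n)    # covariate
# Treatment assignment depends on the covariates (a confounded design).
p_treat = 1 / (1 + np.exp(-(-4 + 0.05 * age + 0.03 * income)))
treated = rng.binomial(1, p_treat)
# True treatment effect is +2.0; the outcome also depends on the covariates.
outcome = 2.0 * treated + 0.1 * age + 0.05 * income + rng.normal(0, 1, n)

# Step 1: estimate propensity scores P(treated | covariates).
X = np.column_stack([age, income])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the closest score.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: the average treated-minus-matched-control outcome estimates the
# effect on the treated; the naive difference of means is biased upward.
att = (outcome[t_idx] - outcome[matches]).mean()
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"naive difference: {naive:.2f}, matched estimate: {att:.2f}")
```

Because treated units have systematically higher ages and incomes, the naive difference in means overstates the effect; the matched estimate lands much closer to the true +2.0.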
The million-dollar question is: can we machine-learn causality? Getting causal is no small feat. But it’s not insurmountable. Today, we are able to employ heuristics alongside machine-learned models to eliminate candidate causal relationships. For example:
1. Mutual information: causes and effects must demonstrate mutual information (or Pearson’s correlation in linear models), i.e., observing the random variable X (cause) must reduce the uncertainty of Y (effect)
2. Temporalization: causes must occur before effects. This is obvious, but incredibly powerful in eliminating candidate causal relationships. Simultaneous observation of random variables (e.g., rainfall and a wet lawn) is often simply a limitation in the fidelity of data collection.
3. Networks of cause and effect: multiple effects of the same cause cannot demonstrate partial information flow. For example, we cannot observe multiple lawns – some wet and some not – and conclude that rainfall is the exclusive root cause. Instead, there is either an additional (perhaps latent) cause or rainfall is not the cause.
If we don’t have mutual information, proper temporalization, and a single cause, we know we can’t approach causality. But, as we recall from standardized exams, the process of elimination often leads to the correct answer.
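As an illustration of the first heuristic, a simple binned mutual-information estimate can screen candidate cause-effect pairs. This is a sketch only: the variables (`rainfall`, `wet_lawn`, `lottery`), the bin count, and the data are all invented for the example.

```python
# Sketch of heuristic 1: eliminate candidate causes that carry no mutual
# information about the effect. Histogram-based MI estimate, NumPy only.
import numpy as np

def mutual_information(x, y, bins=10):
    """Binned estimate of I(X; Y) in nats from two continuous samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
rainfall = rng.exponential(1.0, 5000)
wet_lawn = rainfall + rng.normal(0, 0.1, 5000)  # driven by rainfall
lottery = rng.normal(0, 1, 5000)                # unrelated variable

print(mutual_information(rainfall, wet_lawn))   # high: cannot eliminate
print(mutual_information(rainfall, lottery))    # near zero: eliminate
```

A near-zero estimate lets us strike the pair from the candidate list; a high estimate keeps it alive for the temporalization and network checks above.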
Alex will introduce causal inference in the context of Bayesian Belief Networks (BBNs). BBNs produce accurate predictive forecasts, but with appropriate modeler input they are also able to identify causal relationships between variables and pinpoint drivers of desired targets. With causal relationships identified, BBNs may be used in a prescriptive fashion to make actionable decisions. Alex will dive into a case study in the aviation space which identifies the causal drivers of flight delays among daily flight operations and allows us to prescribe delay-reduction plans by acting on controllable drivers.
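To make the BBN idea concrete, here is a toy rain/sprinkler/wet-lawn network queried by brute-force enumeration. This is not the aviation model from the case study; the structure and every probability in it are invented for illustration.

```python
# Toy Bayesian Belief Network: Rain -> WetLawn <- Sprinkler.
# Inference by exhaustively summing the joint distribution.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
# P(wet lawn | rain, sprinkler)
P_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.01}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    p = P_rain[rain] * P_sprinkler[sprinkler]
    p_w = P_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1 - p_w)

# Diagnostic query: P(rain | lawn is wet) = P(rain, wet) / P(wet).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet) = {num / den:.3f}")
```

Observing a wet lawn raises the probability of rain well above its 0.2 prior. Real BBN tooling replaces the brute-force sum with efficient inference, but the reasoning pattern is the same one the prescriptive case study exploits.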
We, as data scientists, will continue to be called upon to solve the toughest problems. We cannot get away with relying upon machine-learning to do machine-thinking. To earn our place in the science community, we must become stewards of the scientific method – of theory formulation and scientific reasoning. In other words, we must become truth-seekers, not simply fact-seekers.
|Alex Cosmas is Chief Scientist in Booz Allen Hamilton’s Strategic Innovation Group, specializing in predictive analytics across the consumer sectors. He is an expert in the use of probabilistic Bayesian Belief Networks to perform both deductive and inductive reasoning from large datasets. He has consulted for Fortune 100 companies both domestically and internationally in the areas of demand modeling, consumer choice, network modeling, revenue management and pricing. He earned his Bachelor’s in Applied Physics from Columbia, and a Master’s in Aerospace Engineering from the Massachusetts Institute of Technology.|
Distinguished Professor of Management Science; Director
Lancaster University School of Management; Lancaster Centre for Forecasting
Improving Forecast Quality In The Supply Chain
|Demand forecasts drive supply chain planning for both manufacturers and retailers. Their accuracy affects service levels, inventory and distribution, and the bottom lines of profitability and service. The forecasting process in most organizations is complex, with the resulting forecast incorporating information from contributors in sales, marketing and logistics. For a manufacturer and its suppliers it may also include downstream information from retailers. This Sales and Operations Planning (S&OP) process adds market intelligence, leading to a judgmentally adjusted ‘final’ operational forecast. Every aspect of the process potentially introduces biases and unnecessary inaccuracies. This presentation discusses recent research that aims to improve the quality of the forecasting function. Much academic research focuses on statistical or economic models, but organizations seldom use these more complicated models provided by OR/Analytics researchers, even where there is evidence that they provide benefits and they are readily available in commercial software such as SAP/APO. Much more is involved in achieving forecast improvement than software and statistical methodology, important though they are: it is also about removing organizational impediments, developing appropriate performance benchmarks and motivational incentives, and improving data reliability and flow within the organization. The typical supply chain forecasting activity in an organization involves three key components: the forecasting staff, with their expertise and knowledge; the statistical methods employed; and the forecasting support system itself. Drawing on extended survey and analytical work, illustrated with real-life case studies from supply chain companies such as Beiersdorf, Bayer and Sanofi-Aventis, where inventory savings in the millions of dollars have been made, I present results that identify the key barriers to quality improvements. The presentation will examine four areas where improvements can be achieved:
The talk’s overall message is that for major improvements in forecasting quality an organizational change program is needed. This should focus not just on the technical characteristics of the forecasts but also on the organizational linkages and developing the expertise of the forecasting team.
|Robert Fildes is Distinguished Professor of Management Science in the School of Management, Lancaster University, and Director of the Lancaster Centre for Forecasting. The Centre carries out applied research and training for industry and government aimed at improving forecasting performance. He was co-founder in 1981 of the International Institute of Forecasters and in 1985 of the International Journal of Forecasting. He has published widely, including in Management Science and Interfaces, on almost all aspects of forecasting. His diverse topics include global warming, the combination of judgmental and statistical forecasts, organizational issues, and the statistical modelling of sales from the perspectives of both retailers and manufacturers. His major concern is that, despite all the research, organisations still stay with old-fashioned systems and methods and do not validate their forecasts. The solution, he thinks, is better designed forecasting systems, better trained forecasters and more discriminating consumers. Robert has served on the Edelman Awards Panel. In 2015 he was awarded the UK OR Society’s Beale Medal, its highest accolade.|
PhD student, Applied Statistics
University of Alabama
|This session is aimed at delivering best practices in demand forecasting for supply chain decisions to forecasting managers, demand planning teams, and executives. The audience will get an in-depth look at resources from companies who successfully leverage robust forecasts and business processes to reduce inventory costs, make educated vendor purchases, schedule lean manufacturing, manage promotional and seasonal demand, etc. Emphasis will be placed on forecasting techniques for modeling time series demand data.
The presentation will explore the following topics and answer the underlying questions:
I. Importance of Quality Forecasting a. Why should an organization invest in forecasting?
b. What types of actual benefits can be reaped from demand forecasting? i. Case study results from two companies (Retailer and Manufacturer) that demonstrate such gains in forecast accuracy, bias, and inventory holding costs
II. Effective Forecasting Techniques a. When are certain forecasting models appropriate? i. Seasonal, Trend, Business Life Cycle, Promotional, Intermittent, etc. ii. Code provided for modeling multiple scenarios 1. SAS® Code 2. R® Code
b. What is the most frequently used holdout or training period for model selection?
c. What benefits do SAS® and R® provide in demand forecasting? i. IDE’s and GUI’s
d. How do you handle sparse data and intermittent demand?
e. What is the best hierarchical level for generating forecasts and what is the best practice for distribution/reconciliation among levels?
f. How frequently should demand forecasts be produced?
g. What is an “accurate” forecast horizon?
III. Measurement and Reporting a. Why use a naïve baseline forecast?
b. Why is forecast value add (FVA) the only true means to measure forecast accuracy?
Participants will take away meaningful nuggets of valuable information on how to create forecast models in two statistical languages based on the data available, how to effectively measure forecast performance, and how to communicate the value add of such forecasts for more informed supply chain decisions.
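The naïve baseline and forecast value add (FVA) ideas from section III can be sketched in a few lines. This is a Python illustration under invented assumptions (the session itself provides SAS® and R® code): the demand series is synthetic, and the "statistical" model is a simple least-squares trend-plus-seasonality fit standing in for whatever model is under evaluation.

```python
# Sketch: naive (no-change) baseline vs. a fitted model, with FVA measured
# as the baseline's MAPE minus the model's MAPE on a holdout period.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(60)   # 60 months of synthetic demand: trend + annual season
demand = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 60)
train, holdout = demand[:48], demand[48:]

# Naive baseline: each month's one-step-ahead forecast is the prior actual.
naive_fc = demand[47:59]

# Stand-in statistical model: least squares on trend and annual harmonics.
X = np.column_stack([np.ones(60), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
beta, *_ = np.linalg.lstsq(X[:48], train, rcond=None)
model_fc = X[48:] @ beta

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# FVA: MAPE points of accuracy the model adds over doing nothing (naive).
fva = mape(holdout, naive_fc) - mape(holdout, model_fc)
print(f"naive MAPE {mape(holdout, naive_fc):.1f}%, "
      f"model MAPE {mape(holdout, model_fc):.1f}%, FVA {fva:.1f} pts")
```

A positive FVA means the modeling effort beats the no-change baseline; a negative FVA means the process is subtracting value, which is exactly the diagnostic the session advocates.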
Hilton is an avid SAS® and R® user and well versed in best practices of predictive analytics and data science. His research interests center on computational statistics in the areas of Statistical Quality Control (SQC) and supply chain analytics. In the SQC arena, he is developing an R repository for frequently used statistical process control charts. In addition, he has made numerous contributions to the body of literature in forecasting and pricing strategy. He has provided several quality discussions and break-out sessions at well-known conferences such as the SAS Global Forum, INFORMS, and the Professional Pricing Society. He has authored multiple articles in the Journal of Professional Pricing, the Pricing Advisory Newsletter, and the Wiglaf Journal, and co-authored a textbook in the field of pricing strategy.
Hilton has held multiple teaching appointments in Statistics, Business Analytics, and Economics at the University of Alabama, Lipscomb University, and Elon University. He has been awarded twice for excellence in teaching at Elon University. He previously managed a demand planning implementation and a team of demand planners at a premier Fortune 500 retailer.
Out Of The Black Box: Advanced Analytics Merges Into Planning And Forecasting
The advent of friendlier advanced analytics software is enabling a critical transition for businesses: predictive and optimization capabilities are being democratized and incorporated into more traditional sales and operations planning and forecasting. Our presentation will explore the trends underneath this changeover, such as:
- Psychological factors: Business professionals are granting algorithmic scenarios equal status to those created by their human analysts.
- Analytics software improvements: How has the software changed to permit non-statisticians to work with complex internal and external datasets?
- Planning and forecasting software improvements: How are software vendors and businesses integrating a single OLAP planning and forecasting hub with optimization and predictive algorithms to enable full-circle support for baseline demand and expense planning and for production and sales optimization?
Our leading-edge planning and forecasting stories will include: Huffy Bicycle and Pabst Brewing (for predictive demand planning), IBM (for predictive expense planning), and Osram Opto Semiconductors (for production optimization). We will explore how these businesses are merging previously available, advanced capabilities into their business-critical day-to-day planning and forecasting cycles.
Blending experience from his education, careers, & avocations, Leonard Oppenheimer helps organizations improve their financial & performance management, planning & forecasting, and strategy & governance. Leonard earned a BA & an MBA from the University of Chicago. He spent 22 years in public/non-profit financial management. He joined the software industry 15 years ago, helping develop advanced planning, forecasting & analytics software & solutions. He is a co-inventor of two patents. He has served on governing boards & in leadership roles of three non-profits. He has participated in NSF-funded research areas as diverse as urban transportation pricing and crowd-sourced synthetic RNA creation.
Speakers organized by Track
2016 Franz Edelman Award Competition
Analytics Leadership & Soft Skills
Decision & Risk Analysis
Fraud Detection & Cyber Security
Health Care & Life Sciences
INFORMS Prizes & Special Sessions
Internet of Things
Revenue Management & Pricing
Sports & Entertainment
Supply Chain Analytics
Technology Workshops – Sunday