Boeing Professor of Operations Research at MIT
Dimitris Bertsimas is currently the Boeing Professor of Operations Research, co-director of the Operations Research Center, and faculty director of the Master of Business Analytics program at MIT. He received his SM and PhD in Applied Mathematics and Operations Research from MIT in 1987 and 1988, respectively, and has been on the MIT faculty since 1988. His research interests include optimization, machine learning, and applied probability, with applications in health care, finance, operations management, and transportation. He has co-authored more than 200 scientific papers and four graduate-level textbooks. He is the editor-in-chief of the INFORMS Journal on Optimization and a former department editor in Optimization for Management Science and in Financial Engineering for Operations Research. He has supervised 67 doctoral students and is currently supervising 25 more. He has been a member of the National Academy of Engineering since 2005, is an INFORMS fellow, and has received numerous research and teaching awards, including the Morse prize (2013), the Pierskalla award for best paper in health care (2013), the best paper award in Transportation (2013), the Farkas prize (2008), the Erlang prize (1996), the SIAM prize in optimization (1996), the Bodossaki prize (1998), and the Presidential Young Investigator award (1991–1996).
Track: Emerging Analytics
Monday, April 15, 10:30–11:20am
We introduce a new generation of machine learning methods that deliver state-of-the-art performance while remaining highly interpretable. We present optimal classification trees (OCT) and optimal regression trees (ORT), with and without hyperplane splits, for both prediction and prescription. We show that (a) these trees are highly interpretable, (b) they can be computed at large scale in practical times, and (c) on a large collection of real-world data sets they match or outperform random forests and boosted trees. Their prescriptive counterparts have a significant edge in interpretability and comparable or better performance than causal forests. Finally, we show that optimal trees with hyperplanes have at least as much modeling power as (feedforward, convolutional, and recurrent) neural networks and achieve comparable performance on a variety of real-world data sets. These results suggest that optimal trees are interpretable, practical to compute at large scale, and competitive with state-of-the-art black-box methods. We apply these methods to a large collection of examples in personalized medicine, financial services, and organ transplantation, among others.
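The trade-off the abstract describes can be illustrated with a minimal sketch. Note the caveat: scikit-learn's greedy CART tree is used here only as a stand-in, not the mixed-integer-optimization-based optimal trees (OCT) the talk presents; the sketch merely contrasts a single shallow, readable tree with a black-box ensemble of many trees.

```python
# Hedged sketch: a single interpretable tree vs. a random forest ensemble.
# This uses greedy CART, NOT the optimal trees (OCT) of the talk.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data for illustration only.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single shallow tree: every prediction traces to a handful of
# axis-aligned splits a human can read off the tree diagram.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# A random forest: typically stronger out of the box, but an opaque
# vote over hundreds of trees.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tree_acc = tree.score(X_te, y_te)
forest_acc = forest.score(X_te, y_te)
```

The talk's claim is that globally optimized trees close the accuracy gap to ensembles like the forest above while retaining the single-tree interpretability shown by `tree`.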