Tutorials

Alexandra Newman

Colorado School of Mines, USA

Practical Guidelines for Solving Difficult Mixed Integer Programs, Version 2.0

The paper “Practical Guidelines for Solving Difficult Mixed Integer Programs” by Klotz and Newman was written in 2012, and significant parts of that material first appeared in 2007. Mixed-integer solvers have evolved significantly since then: two of the three models the paper describes were unsolvable at the time, yet some solvers have handled them easily for the last five years. This tutorial updates the guidelines, considers additional functionality in the solvers, and examines a new set of challenging models.

  • Alexandra Newman is a professor in the Mechanical Engineering Department at the Colorado School of Mines (CSM). Prior to joining CSM, she was a research assistant professor at the Naval Postgraduate School in the Operations Research Department. She obtained her BS in applied mathematics at the University of Chicago and her PhD in industrial engineering and operations research at the University of California at Berkeley. She specializes in deterministic optimization modeling, especially as it applies to energy and mining systems, and to logistics, transportation, and routing. She received a Fulbright Fellowship to work with industrial engineers on mining problems at the University of Chile in 2010 and was awarded the Institute for Operations Research and the Management Sciences (INFORMS) Prize for the Teaching of Operations Research and Management Science Practice in 2013.

Javiera Barrera

Universidad Adolfo Ibáñez, Chile
Javiera Barrera is an assistant professor in the Faculty of Engineering and Science at Universidad Adolfo Ibáñez. She obtained her undergraduate degree in mathematical engineering at Universidad de Chile, and earned her PhD in mathematics jointly at the Laboratoire MAP5 of l’Université Paris Descartes and at the School of Engineering of Universidad de Chile. She specializes in the asymptotic convergence of Markov chains and in stochastic optimization methods for network design. Her most recent research has developed models that incorporate failure dependency into network reliability analysis.

Mariel Lavieri

University of Michigan, USA

Medical Decision Making: The Past, the Present, and the Future

Abstract to come

  • Mariel Lavieri is an Associate Professor in the School of Industrial and Operations Engineering at the University of Michigan. In her work, she applies operations research to healthcare topics. Her most recent research focuses on medical decision making, in particular on determining optimal screening, monitoring and treatment by explicitly modeling the stochastic disease progression. She has also developed models for health workforce planning, which take into account training requirements, workforce attrition, capacity planning, promotion rules and learning. Among others, Mariel is the recipient of the MICHR Distinguished Mentor Award, the National Science Foundation CAREER Award, the International Conference on Operations Research Young Participant with Most Practical Impact Award, and the Bonder Scholarship. She has also received the Pierskalla Best Paper Award, and an honorary mention in the George B. Dantzig Dissertation Award. She has been selected to participate in the Frontiers of Engineering Symposium organized by the National Academy of Engineering. Mariel has guided work that won, among others, the Medical Decision Making Lee Lusted Award, the INFORMS Doing Good with Good OR Award, and the Production and Operations Management Society College of Healthcare Operations Management Best Paper Award.

Andre Luiz Diniz

Eletrobras Cepel, Brazil

Emilio Carrizosa

SEIO

Emilio Carrizosa is President of SEIO, the Spanish Statistics and Operations Research Society, Professor of Statistics and Operations Research, and Director of IMUS, the Institute of Mathematics of the University of Seville, Spain. Throughout his career, he has combined academic research (he has published more than 100 papers on mathematical optimization and its applications to data science) with industrial OR and data science activities.

Ivan Contreras

Concordia University, Canada
Ivan Contreras is an Associate Professor and Research Chair in Transportation and Logistics Network Optimization in the Mechanical, Industrial, and Aerospace Engineering Department at Concordia University, Canada. He is a regular member of the Interuniversity Research Centre on Enterprise Networks, Logistics and Transportation (CIRRELT). He obtained his B.Sc. and M.Sc. in Industrial Engineering from the University of Americas, Mexico, and his Ph.D. in Operations Research from the Technical University of Catalonia, Spain. His research focuses on the development of models and algorithms for large-scale optimization problems arising in transportation and logistics, scheduling and production, and health-care planning. In 2015, he was awarded the Institute for Operations Research and the Management Sciences (INFORMS) Chuck ReVelle Rising Star Award for his contributions to location science.

Juan Pablo Vielma

MIT, USA

Martin Savelsbergh

Georgia Tech, USA

Meisam Razaviyayn

University of Southern California, USA

Solving “Nice” Non-Convex Optimization Problems and Its Connections to Training Deep Neural Networks

When is solving a non-convex optimization problem easy? Despite significant research efforts to answer this question, most existing results are problem specific and cannot be applied even after simple changes in the objective function. In this talk, we provide theoretical insights into this question by answering two related questions: 1) Are all local optima of a given non-convex optimization problem globally optimal? 2) When can we compute a local optimum of a given non-convex constrained optimization problem efficiently? In the first part of the talk, motivated by the non-convex training problem of deep neural networks, we provide simple sufficient conditions under which any local optimum of a given highly composite optimization problem is globally optimal. Unlike existing results in the literature, our sufficient condition applies to many non-convex optimization problems, such as the training problems of non-convex multi-linear neural networks and of non-linear neural networks with pyramidal structures.

In the second part of the talk, we consider the problem of finding a local optimum of a constrained non-convex optimization problem under the strict saddle point property. We show that, unlike in the unconstrained setting, the vanilla projected gradient descent algorithm fails to escape saddle points even in the presence of a single linear constraint. We then propose a trust region algorithm that converges to second-order stationary points of optimization problems with a small number of linear constraints. Our algorithm is the first optimization procedure with polynomial per-iteration complexity that converges to $\epsilon$-first-order stationary points of a non-manifold constrained optimization problem in $O(\epsilon^{-3/2})$ iterations, while also escaping saddle points under the strict saddle property.
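As a toy illustration of the saddle-point phenomenon discussed above (this is not the trust region method from the talk, just a minimal sketch on an assumed example function), consider $f(x, y) = x^2 - y^2$, whose origin is a strict saddle: gradient descent initialized exactly on the $x$-axis never acquires a $y$-component and converges to the saddle, while any tiny perturbation in $y$ grows geometrically and escapes.

```python
# Toy sketch (not the talk's algorithm): gradient descent on
# f(x, y) = x^2 - y^2, which has a strict saddle point at (0, 0).

def grad_descent(x, y, lr=0.1, steps=200):
    """Run plain gradient descent from (x, y) and return the final iterate."""
    for _ in range(steps):
        gx, gy = 2 * x, -2 * y          # gradient of f at (x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

# Initialized with y = 0 exactly, the y-coordinate stays 0 forever,
# so the iterates converge to the saddle point:
print(grad_descent(1.0, 0.0))

# With any nonzero y, the update y <- (1 + 2*lr) * y amplifies the
# perturbation geometrically, and the iterates escape the saddle:
print(grad_descent(1.0, 1e-8))
```

The escape relies on the random perturbation breaking the measure-zero stable manifold of the saddle; the abstract's point is that with even one linear constraint, the projection step can keep iterates trapped on such a manifold, which is why a different mechanism (the proposed trust region step) is needed.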

  • Meisam Razaviyayn is an assistant professor in the Department of Industrial and Systems Engineering at the University of Southern California. Prior to joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He received his PhD in electrical engineering, with a minor in computer science, from the University of Minnesota under the supervision of Professor Tom Luo, and his MS degree in mathematics under the supervision of Professor Gennady Lyubeznik. Meisam Razaviyayn is the recipient of the 2014 Signal Processing Society Young Author Best Paper Award and was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016. His research interests include the design and analysis of large-scale optimization algorithms arising in the modern data science era.