Research Scientist, Market Algorithms Group
Online Matching and Allocation in Advertising Markets
The Search and Display Advertising business model supports many incredible Internet services and generates large economic value by connecting buyers and sellers. The technical problems involved have attracted researchers from many disciplines, including algorithms, machine learning, and economics. One core problem in this area is Ad allocation, an inherently algorithmic and incentive-driven question: matching ad slots to advertisers, online, under demand constraints and uncertain supply. I will present an overview of the algorithmic and modeling techniques for this problem, which derive from a very basic algorithmic problem called Online Matching. I will also discuss how these algorithmic ideas are applied in the practice of Ad allocation.
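The Online Matching problem named in the abstract can be illustrated with the classic RANKING algorithm of Karp, Vazirani, and Vazirani: fix a random priority order over the offline side (advertisers) once, then match each arriving ad slot to its highest-priority unmatched neighbor. The sketch below is illustrative only; the data representation and names are assumptions, not material from the talk.

```python
import random

def ranking_match(advertisers, arrivals):
    """Online bipartite matching via the RANKING algorithm.

    advertisers: list of advertiser ids, known offline.
    arrivals: iterable of (slot_id, eligible_advertisers) pairs,
              revealed one at a time in arrival order.
    Returns a dict mapping slot_id -> matched advertiser.
    """
    # Draw one uniformly random priority order over advertisers up front.
    order = random.sample(advertisers, len(advertisers))
    rank = {a: r for r, a in enumerate(order)}
    matched = set()
    matching = {}
    for slot, eligible in arrivals:
        # Match to the highest-priority (lowest rank index) free neighbor.
        candidates = [a for a in eligible if a not in matched]
        if candidates:
            best = min(candidates, key=rank.__getitem__)
            matching[slot] = best
            matched.add(best)
    return matching
```

RANKING is greedy with respect to a single random permutation, and in expectation achieves a 1 − 1/e ≈ 0.632 competitive ratio against an adversarial arrival order, which a deterministic greedy rule cannot beat.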
Aranyak Mehta is a Research Scientist in the Market Algorithms group at Google Research in Mountain View, CA. His interests lie in the areas of algorithms, online optimization, auction and mechanism design, and microeconomics. At Google he works at the intersection of algorithms and economics, on projects in Google’s Advertising systems and other market products. He obtained his Ph.D. in Computer Science in the ACO program at Georgia Tech in 2005, and a B.Tech. from IIT Bombay. He was also a Visiting Scientist (postdoc) at IBM’s Almaden Research Center prior to joining Google in 2007.
Potential and Pitfalls of Machine Learning in Programmatic Advertising
Digital advertising is one of the largest and most open playgrounds for machine learning, data mining, and related analytic approaches. This development is fueled by unparalleled access to consumer activity across all digital devices and the rise of programmatic advertising, in which about 100 billion advertising opportunities are sold daily in real-time auctions. This talk will outline the promise of infusing a historically intuition-based industry with rigorous machine learning for both targeting and measurement. We will also touch on a number of challenges that arise in this environment: 1) high-volume data streams of around 80 billion daily consumer touch points, 2) low-latency requirements on scoring and automated bidding decisions within 100 ms, and 3) adversarial modeling in the light of advertising fraud and bots. The most challenging aspects, however, remain the incentive misalignments and measurement issues in the advertising industry, and the paradox of big data and predictive modeling: you never have the data you really need, and sometimes having more data hurts!
Claudia Perlich leads the machine learning efforts that power Dstillery’s digital intelligence for marketers and media companies. With more than 50 published scientific articles, she is a widely acclaimed expert on big data and machine learning applications, and an active speaker at data science and marketing conferences around the world. Claudia is a past winner of the Advertising Research Foundation’s (ARF) Grand Innovation Award and has been selected for Crain’s New York’s 40 Under 40 list, Wired Magazine’s Smart List, and Fast Company’s 100 Most Creative People. Claudia holds multiple patents in machine learning. She has won many data mining competitions and awards at Knowledge Discovery and Data Mining (KDD) conferences, and served as the conference’s General Chair in 2014. Prior to joining Dstillery in 2010, Claudia worked at IBM’s Watson Research Center, focusing on data analytics and machine learning. She holds a PhD in Information Systems from New York University (where she continues to teach at the Stern School of Business) and an MA in Computer Science from the University of Colorado.
John T. Stuart III Centennial Chair in Business Administration
The University of Texas at Austin
Four Common Research Mistakes
According to a popular dictionary, a “mistake” is an action or judgment that is misguided or wrong. This presentation will focus on four common mistakes in applied and academic research. The first mistake relates to the use of a single measure to quantify “total survey error” in surveys and polls. It is demonstrated that this single measure is highly misleading; a 2017 political poll is used as an example. The second mistake relates to the population being investigated in a study. The use of college students by behavioral researchers is shown to be problematic. The third mistake relates to metrics employed to assess the “reasonableness” of structural equation modeling results. Reliance on certain metrics is revealed to be unnecessary. The final mistake relates to the inattention given to “nonsignificant results” in experiments involving human subjects. Ignoring “significantly insignificant” experimental results leads to questionable inferences.
Dr. Robert A. Peterson holds the John T. Stuart III Centennial Chair in Business Administration at The University of Texas at Austin. He has served as chairman of the Department of Marketing and associate dean for research in the McCombs School of Business, director of the IC2 Institute, interim director of the Office of Technology Commercialization, associate vice president for research, and research integrity officer for the University. He is a former editor of the Journal of Marketing Research and the Journal of the Academy of Marketing Science. He received his doctorate from the University of Minnesota. Dr. Peterson has authored or co-authored in excess of 180 articles and books and has received numerous honors and awards for his research. In addition to co-founding three firms, he frequently serves as an expert witness in litigation matters involving intellectual property.
Optimizing Mailing Lists and Content in Direct Marketing
This talk presents a case study on how Alteryx is used by Southern States Cooperative to optimize its direct mail catalog marketing, allowing it to measure the return on its marketing investments. The application involves the development of two predictive models: a catalog use model and a model that measures the incremental revenue from a customer’s catalog use.
The first of these models indicates that a key driver of catalog use is whether a customer has recently purchased (prior to the catalog mailing) an item that is included in the catalog. As a result, the optimal set of customers that should receive the catalog is conditional on the items included in the catalog.
This leads to a joint optimization problem of selecting both the set of items to include in the catalog and the set of customers who are sent the catalog. This optimization problem is highly non-linear and is addressed using a genetic algorithm.
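A genetic algorithm suits this joint selection because a candidate solution can be encoded as a single bit vector whose first bits choose catalog items and remaining bits choose mail recipients. The sketch below is a minimal, generic genetic algorithm under that encoding; the parameter values and the idea of scoring a chromosome with an expected-profit function are illustrative assumptions, not Southern States Cooperative’s actual models.

```python
import random

def genetic_optimize(n_items, n_customers, fitness, generations=200,
                     pop_size=30, mutation_rate=0.02):
    """Jointly select catalog items and mailed customers with a simple GA.

    A chromosome is one bit list: the first n_items bits choose items,
    the remaining n_customers bits choose mail recipients.
    `fitness` maps a chromosome to a score to be maximized, e.g. an
    expected-profit estimate from the two predictive models.
    """
    length = n_items + n_customers
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):                 # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children                    # elitist replacement
    best = max(pop, key=fitness)
    return best[:n_items], best[n_items:]
```

Because the top half of each generation is carried over unchanged, the best fitness found never decreases, at the cost of some diversity; the crossover operator lets item choices and customer choices recombine, which matters here since the value of mailing a customer depends on which items made it into the catalog.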
Dr. Dan Putler is the Chief Scientist at Alteryx, where he is responsible for developing and implementing the product road map for predictive analytics. He has over 30 years of experience in developing predictive analytics models for companies and organizations that cover a large number of industry verticals, ranging from the performing arts to B2B financial services. He is co-author of the book, “Customer and Business Analytics: Applied Data Mining for Business Decision Making Using R”, which is published by Chapman and Hall/CRC Press. Prior to joining Alteryx, Dan was a professor of marketing and marketing research at the University of British Columbia’s Sauder School of Business and Purdue University’s Krannert School of Management.
Founder and Principal
Voice-of-customer Analytics: Evolving From Descriptive To Prescriptive
In the last decade, global consumer brands have invested heavily in technologies that utilize NLP to deliver value. The majority of these applications fall under marketing organizations and focus on “brand watching,” “social listening,” or the like. The quantitative results of these models often become critical executive-level performance metrics akin to net promoter score (NPS), but are merely descriptive in nature. The real opportunity comes in making voice-of-customer analytics prescriptive. Borrowing from real-world examples at Lenovo and other global brands, this talk will focus on applying NLP to impact everyday operations and, consequently, improve, not just report on, customer sentiment.
Anthony Volpe established Quantworks, Inc. in late 2015 to provide clients across a wide range of industries with competitively differentiated analytic products and capabilities. By partnering at every stage of the innovation cycle, from ideation through proof-of-concept and launch, his boutique development firm is delivering radically differentiated solutions and faster time-to-value. Prior to launching Quantworks, Anthony was Chief Corporate Analytic Officer at Lenovo from 2013 to 2015. There he was responsible for building strategy and driving initiatives to advance the world’s largest PC manufacturer as a leading consumer brand. Under his leadership, Lenovo and its customers recognized significant gains from analytics in a wide range of areas, including product development, quality, pricing, marketing, customer engagement, and supply chain. Prior to Lenovo, Anthony held positions in Product Management and Analytics Advisory at SAS Institute. He was previously Senior Research Fellow at the Harvard Center for Textile and Apparel Research, and spent time as an analytics practitioner with Ford Motor Company. Anthony received his B.S. in Mathematics from Duke University and his Ph.D. in Applied Mathematics from Harvard.