Powell, Warren (Princeton University)


Warren Powell
328 Christopher Drive
Princeton, NJ 08540 USA

Phone: 609-273-0218
Fax: 609-258-3796
E-mail: powell@princeton.edu
Website: http://www.castlelab.princeton.edu/

Topics:

A brain for an old machine
Railroads have long captured the country's imagination with their massive equipment and their large, complex networks. In recent years, railroads, trucking companies, and airlines have dramatically improved the intelligence they use to plan and run their operations through the use of advanced computer models (the brain). In the process of trying to computerize intelligence, we have learned a number of lessons that apply generally to complex organizations. I will draw on the experience of getting computer models to simulate (or replace) people making decisions to uncover lessons on making good decisions that apply to any large organization. (Elementary)

Approximate dynamic programming
Approximate dynamic programming (or ADP as it is often called) is a modeling and algorithmic framework for solving a wide range of stochastic optimization problems.

Classical dynamic programming has been limited in its use because of the well-known "curse of dimensionality." For my applications, I have found that there are three curses of dimensionality, and I will show, with limited mathematics, how to overcome them by combining the concept of the post-decision state variable with statistical techniques from machine learning. I will illustrate the ideas using real-world projects in transportation, energy, health, and finance. I will show how we can combine classical math programming with statistics and simulation to create a powerful algorithmic strategy with a wide range of applications. (Advanced)
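To give a concrete, if simplified, picture of the post-decision state idea, the sketch below uses a toy single-product inventory problem of my own (the problem, parameters, and code are illustrative, not one of the projects mentioned above). The value of the post-decision state, the inventory after ordering but before demand is observed, is approximated with a lookup table that is smoothed after each simulated step.

    import random

    random.seed(1)

    MAX_INV = 20                        # warehouse capacity (illustrative)
    D_MAX   = 10                        # demand is Uniform{0, ..., D_MAX}
    PRICE, COST, HOLD = 6.0, 2.0, 0.1   # sale price, order cost, holding cost
    GAMMA   = 0.95                      # discount factor
    N_ITER, HORIZON = 500, 50

    # Lookup-table approximation of the value of a post-decision state:
    # the inventory on hand after ordering, before demand is observed.
    V = [0.0] * (MAX_INV + 1)

    def expected_contribution(post):
        # Expected one-period profit when the post-decision inventory is
        # `post` and demand is Uniform{0, ..., D_MAX}.
        exp_sales = sum(min(post, d) for d in range(D_MAX + 1)) / (D_MAX + 1)
        return PRICE * exp_sales - HOLD * (post - exp_sales)

    for n in range(1, N_ITER + 1):
        alpha = 1.0 / n      # declining stepsize for smoothing
        inv = 0              # pre-decision state at the start of the pass
        prev_post = None     # post-decision state visited at the previous step
        for t in range(HORIZON):
            # Decision step: pick the order quantity that looks best under
            # the current value function approximation.
            def score(x):
                post = inv + x
                return expected_contribution(post) - COST * x + GAMMA * V[post]
            x_best = max(range(MAX_INV - inv + 1), key=score)
            v_hat  = score(x_best)

            # Learning step: smooth the observed value estimate into the
            # approximation at the PREVIOUS post-decision state.
            if prev_post is not None:
                V[prev_post] = (1 - alpha) * V[prev_post] + alpha * v_hat

            # Simulation step: sample the exogenous demand and move to the
            # next pre-decision state.
            prev_post = inv + x_best
            demand = random.randint(0, D_MAX)
            inv = max(0, prev_post - demand)

The key design choice is that the value function is indexed by the post-decision state, so the maximization over decisions never requires an expectation inside the max; the expectation is handled by simple smoothing of sampled estimates.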

Optimal learning
There are many problems in which we need to make a decision in the presence of different forms of uncertainty. The challenge is that not only are we uncertain about, say, the value of a particular decision; we may not even know its distribution. Furthermore, as we collect information, we change our beliefs about these uncertain quantities. Given this, we would like to make choices that also help us learn as much as possible.

Optimal learning arises in virtually any activity in which we make decisions and depend on observational data to estimate the value of a decision. For example:

  • We want to find the best path to a new job, but the only way to learn about a path is to try it.
  • We want to cure cancer, and we need to sequence experiments to learn about the behavior of different molecules as quickly as possible.
  • We would like to learn about the presence of a pathogen (such as MRSA) in the population. Measurements taken in one segment of the population might teach us about the prevalence of the pathogen in neighboring populations.
  • We need to assess websites to determine their threat potential. We do not have the time to have a domain expert evaluate every website. How do we choose which websites to show our expert?
  • We are trying to find the best price for a product we are selling on the Internet. The website can detect sales and adjust the price, but how should we do this?

This is just a sample of the many applications of optimal learning. In this speech, I will present an overview of the dimensions of an optimal learning problem and introduce some very simple methods (some optimal, some heuristic) for deciding how to collect information.
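To make the setting concrete, here is a minimal sketch of the basic learning loop (my own illustrative setup, not material from the talk): we hold an independent normal belief about the value of each choice, pick a choice to measure with a simple heuristic policy, observe a noisy outcome, and update the belief.

    import random

    random.seed(0)

    TRUE_VALUES = [1.0, 1.4, 0.8, 1.2, 1.6]   # unknown to the decision maker
    NOISE_STD   = 1.0
    N_CHOICES   = len(TRUE_VALUES)

    # Independent normal beliefs: mean and precision (1/variance) per choice.
    mu     = [0.0] * N_CHOICES
    beta   = [1.0 / 4.0] * N_CHOICES          # prior precision
    beta_W = 1.0 / NOISE_STD ** 2             # measurement precision

    def choose_epsilon_greedy(eps=0.2):
        # A simple heuristic: mostly exploit the current best estimate,
        # occasionally explore a choice at random.
        if random.random() < eps:
            return random.randrange(N_CHOICES)
        return max(range(N_CHOICES), key=lambda x: mu[x])

    for n in range(100):
        x = choose_epsilon_greedy()
        w = random.gauss(TRUE_VALUES[x], NOISE_STD)   # noisy observation
        # Bayesian update of the belief about choice x.
        mu[x]   = (beta[x] * mu[x] + beta_W * w) / (beta[x] + beta_W)
        beta[x] = beta[x] + beta_W

    print("estimated means:", [round(m, 2) for m in mu])

The epsilon-greedy rule here is only one of the simple heuristics one might use; the point of the sketch is the structure of the loop itself: choose, observe, update belief, repeat.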

Some communities address this challenge as the "exploration vs. exploitation" problem; the machine learning community calls it "active learning." The same issues arise in the design of simulations ("simulation optimization") and, in engineering, in the context of optimizing expensive functions.

The lecture will describe a number of practical methods for guiding the process of collecting information. Special attention will be given to a new method called the "knowledge gradient," which is proving to be a powerful way of deciding what information to collect next. (Intermediate)
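For readers who want a concrete picture, the sketch below computes the knowledge gradient in the special case of independent normal beliefs with known measurement noise, following the standard formula for that case; the alternatives and numbers are purely illustrative.

    import math

    def knowledge_gradient(mu, sigma, sigma_W):
        """Knowledge-gradient value of measuring each alternative once.

        mu[x]    : current estimate of the value of alternative x
        sigma[x] : current standard deviation of the belief about x
        sigma_W  : standard deviation of the measurement noise
        """
        def phi(z):  # standard normal density
            return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        def Phi(z):  # standard normal cumulative distribution
            return 0.5 * (1 + math.erf(z / math.sqrt(2)))

        n, kg = len(mu), []
        for x in range(n):
            # Std. dev. of the change in our estimate of x after one more
            # measurement of x.
            sigma_tilde = sigma[x] ** 2 / math.sqrt(sigma[x] ** 2 + sigma_W ** 2)
            # Best of the other alternatives.
            best_other = max(mu[i] for i in range(n) if i != x)
            zeta = -abs(mu[x] - best_other) / sigma_tilde
            kg.append(sigma_tilde * (zeta * Phi(zeta) + phi(zeta)))
        return kg

    mu    = [1.0, 1.4, 0.8, 1.2, 1.6]
    sigma = [2.0, 0.5, 2.0, 1.0, 0.3]
    kg = knowledge_gradient(mu, sigma, sigma_W=1.0)
    print("measure alternative", max(range(len(mu)), key=lambda x: kg[x]))

The knowledge-gradient policy simply measures the alternative with the largest value of this quantity, which weighs both how uncertain we are about an alternative and how likely a measurement is to change which alternative looks best.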

Background:

  • BSE Princeton University
  • MSE MIT
  • Ph.D. MIT

Professor at Princeton University since 1981; Professor of Operations Research and Financial Engineering since 1990; director of CASTLE Laboratory since 1990. CASTLE Lab specializes in the development and implementation of computer models that help management make a variety of strategic, tactical, and real-time decisions. The process of developing these models has provided invaluable insights into the organization and flow of information and decisions within large organizations. Prof. Powell is the author or coauthor of over 140 refereed publications and has received numerous awards for his work with industry and his contributions to research. He is the author of "Approximate Dynamic Programming: Solving the Curses of Dimensionality." He has models running at a number of the largest transportation companies, spanning rail, truckload trucking, less-than-truckload trucking, small-package, and business jets.