
Introduction

Many contributions to this book are about route selection of humans in transportation networks. There are experimental contributions, where humans are observed while solving this problem, and modeling contributions, where computer simulations attempt to generate plausible routes.

One area where such modeling knowledge is needed is that of transportation simulations. Such simulations are built for many purposes, from signal coordination for traffic management to long-term regional planning. In particular for long-term planning, it is clear that a simulation of traffic and cars alone is not enough; the strategic behavior of the agents, including activity and route choice, needs to be included. There is an emerging consensus that such transportation simulation packages should consist of at least the following modules:

The above list is not complete; it reflects only the most prominent modules. For example, the important issue of freight traffic is left out entirely. Also, at the land use/housing level, there will probably be many modules specializing in different aspects.

The modules interact, and the interaction goes in both directions: for example, activities and routes generate congestion, yet (the expectation of) congestion influences activities and routes. This is typically solved via a relaxation method, i.e., the modules are run sequentially, each assuming that the others remain fixed, until the results are consistent.
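The relaxation scheme described above can be sketched as a simple fixed-point iteration between two modules. The function and argument names below are illustrative assumptions, not the interface of any particular simulation package:

```python
# Minimal sketch of the relaxation mechanism: a traffic module and a
# route-choice module are run alternately until the route plans no
# longer change, i.e. the two modules are mutually consistent.

def run_relaxation(initial_routes, assign_traffic, choose_routes,
                   max_iterations=100):
    """Alternate between the traffic module and the route-choice module
    until a fixed point (consistent routes) is reached."""
    routes = initial_routes
    for _ in range(max_iterations):
        congestion = assign_traffic(routes)      # routes -> congestion
        new_routes = choose_routes(congestion)   # congestion -> routes
        if new_routes == routes:                 # consistent: done
            return routes
        routes = new_routes
    return routes                                # give up after a budget
```

In a real package, `assign_traffic` would be a (micro-)simulation of traffic flow and `choose_routes` a route-choice model; the convergence test would typically compare aggregate quantities such as link travel times rather than demand exact equality of plans.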

There are two ways to see this relaxation mechanism: as a solution method for a nonlinear optimization problem, or as a model of human learning. In the first interpretation, the assumption is that the relaxed state, which typically is a Nash Equilibrium, is how the real system behaves. As we will see later, this interpretation normally goes along with a mathematical formulation for which one can prove uniqueness of the solution, and the computation should get there as quickly as possible. In this approach, the agents do not learn explicitly. All learning happens outside the model; the model is only interested in the final state of the system. This is fundamentally different from the second approach, which models human learning directly. Here, the computation specifies learning rules for each individual agent, and the simulation is run repeatedly to allow for day-to-day learning.

There is, however, no clear dividing line between the two interpretations. For example, for some systems and some methods of explicit learning one can show that they converge to the same Nash Equilibrium as the solution of the nonlinear optimization problem. Conversely, some computational methods for solving the nonlinear optimization problem in fact resemble human learning. Often, a method which models human learning is used, but in the computation learning is made much faster than in reality in order to save computing time.
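The connection between explicit day-to-day learning and the Nash Equilibrium can be illustrated with a toy two-route scenario. The setup below is an assumption for illustration only (it is not a model from this paper): each agent occasionally reconsiders and switches to whichever route was faster on the previous day, and the population drifts toward the equilibrium split where both routes take equally long:

```python
import random

# Toy day-to-day learning: n_agents choose daily between two routes whose
# travel time grows with the number of users (here simply time = load).
# Each day, every agent reconsiders with a small probability and switches
# to yesterday's faster route. The dynamics drift toward the Nash
# Equilibrium split, where both routes are equally fast.

def simulate_days(n_agents=100, days=500, p_reconsider=0.1, seed=0):
    rng = random.Random(seed)
    choices = [0] * n_agents                     # everyone starts on route 0
    for _ in range(days):
        load0 = choices.count(0)
        time0, time1 = load0, n_agents - load0   # toy cost: time = load
        faster = 0 if time0 < time1 else 1       # yesterday's faster route
        for i in range(n_agents):
            if rng.random() < p_reconsider:
                choices[i] = faster              # individual learning step
    return choices.count(0)                      # final load on route 0
```

With symmetric costs the equilibrium split is 50/50, and the simulated population ends up fluctuating around that value; because only a fraction of agents reconsiders each day, the system approaches the equilibrium gradually rather than jumping there, which is the "learning" reading of the relaxation described above.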

In this contribution, we will focus on simulations of such learning behavior, in particular with respect to route choice, although many of the arguments should also apply to other aspects. Specifically, we will look at the following:

The paper concludes with a summary.


Kai Nagel 2002-05-20