
Introduction

It is by now widely accepted that it is worth investigating whether the microscopic simulation of large transportation systems [6, 42] is a useful addition to the existing set of tools. By "microscopic" we mean that all entities of the system - travelers, vehicles, traffic lights, intersections, etc. - are represented as individual objects in the simulation [14, 32, 15, 31, 12, 20, 44].

The conceptual advantage of a micro-simulation is that, in principle, it can be made arbitrarily realistic. Indeed, microscopic simulations have been used for many decades for problems of relatively small scale, such as intersection design or signal phasing. What is new is that it is now possible to apply microscopic simulation to truly large systems, such as whole regions with several million travelers. At the heart of this are several converging developments:

  1. The advent of fast desktop workstations.
  2. The possibility to connect many of these workstations to parallel supercomputers, thus multiplying the available computing power. This is particularly attractive for agent-based transportation simulations since they do not benefit from traditional vector supercomputers.
  3. A third observation, which in our view is paramount to making these approaches work: many aspects of "correct" macroscopic behavior can be obtained with rather simple microscopic rules.

The third point can actually be proven rigorously for some cases. For example, in physics the ideal gas equation, p V = m R T, can be derived for particles without any interaction, i.e., particles that simply move through each other. For traffic, one can show that rather simple microscopic models generate certain fluid-dynamical equations for traffic flow [25].
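
To make this concrete, the following is a minimal sketch of a single-lane stochastic cellular automaton of the well-known Nagel-Schreckenberg type. It is illustrative only, not the actual TRANSIMS driving logic (which is described in Sec. 3); all names and parameter values (VMAX, P_NOISE, the vehicle density) are assumptions chosen for the example.

  #include <algorithm>
  #include <cstdio>
  #include <random>
  #include <vector>

  // Single-lane stochastic cellular automaton (illustrative sketch only).
  // Each cell holds -1 (empty) or the current speed of a vehicle, 0..VMAX.
  constexpr int VMAX = 5;          // maximum speed, in cells per time step
  constexpr double P_NOISE = 0.2;  // random deceleration probability

  void update(std::vector<int>& road, std::mt19937& rng) {
      std::uniform_real_distribution<double> uni(0.0, 1.0);
      const int n = static_cast<int>(road.size());
      std::vector<int> next(n, -1);
      for (int i = 0; i < n; ++i) {
          if (road[i] < 0) continue;            // empty cell
          int v = std::min(road[i] + 1, VMAX);  // 1. accelerate
          int gap = 0;                          // 2. count empty cells ahead
          while (gap < v && road[(i + gap + 1) % n] < 0) ++gap;
          v = std::min(v, gap);                 //    brake to avoid collision
          if (v > 0 && uni(rng) < P_NOISE) --v; // 3. random deceleration
          next[(i + v) % n] = v;                // 4. move (periodic road)
      }
      road.swap(next);
  }

  int main() {
      std::mt19937 rng(42);
      std::vector<int> road(1000, -1);
      for (int i = 0; i < 1000; i += 10) road[i] = 0;   // 10% density
      for (int t = 0; t < 1000; ++t) update(road, rng); // let the system relax
      int vehicles = 0, vsum = 0;
      for (int v : road) if (v >= 0) { ++vehicles; vsum += v; }
      std::printf("mean speed: %.2f cells/step\n", double(vsum) / vehicles);
  }

Four rules of this kind are already sufficient to reproduce realistic flow-density relations and start-stop waves; this is the sense in which simple microscopic rules generate correct macroscopic behavior.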

In consequence, for situations where one expects the fluid-dynamical representation of traffic to be realistic enough for the dynamics, but where one also wants access to individual vehicles, drivers, etc., a simple microscopic simulation may be the solution. In addition, with the microscopic approach it is always possible to make the simulation more realistic at some later point. This is much harder, and sometimes impossible, with macroscopic models.

The TRANSIMS (TRansportation ANalysis and SIMulation System) project at Los Alamos National Laboratory [42] is such a micro-simulation project, with the goal of using micro-simulation for transportation planning. Transportation planning is typically done for large regional areas with several million travelers, and with 20-year time horizons. The first point means that a micro-simulation approach needs to be able to simulate sufficiently large areas sufficiently fast. The second means that the methodology needs to pick up effects such as induced travel, where people change their activities and maybe their home locations because of changed impedances of the transportation system. As an answer, TRANSIMS consists of the following modules:

  1. Population generation: demographic data is disaggregated into synthetic households and travelers.
  2. Activity generation: each synthetic traveler is assigned a set of activities (home, work, shopping, etc.) and their locations.
  3. Modal and route choice: for each traveler, a plan is computed that connects the activities across the transportation network.
  4. Traffic micro-simulation: all travelers simultaneously execute their plans on the network.

As is well known, such an approach needs to make the modules consistent with each other: for example, plans depend on congestion, but congestion depends on plans. A widely accepted method to resolve this is systematic relaxation [12]: make preliminary plans, run the traffic micro-simulation, adapt the plans, run the traffic micro-simulation again, and so on, until consistency between the modules is reached. The method is somewhat similar to the Frank-Wolfe algorithm in static assignment.
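
As an illustration, a minimal sketch of such a relaxation loop is given below. The types and functions (Plan, plan_trip, run_microsimulation) are hypothetical stand-ins for the respective TRANSIMS modules, and the replanning fraction is an illustrative choice, not a value prescribed by TRANSIMS.

  #include <cstddef>
  #include <random>
  #include <vector>

  // Hypothetical stand-ins for the router and the traffic micro-simulation.
  struct Plan { /* route, departure times, ... */ };
  struct LinkTravelTimes { /* congestion feedback from the micro-simulation */ };

  Plan plan_trip(std::size_t, const LinkTravelTimes&) { return Plan{}; }
  LinkTravelTimes run_microsimulation(const std::vector<Plan>&) {
      return LinkTravelTimes{};
  }

  // Systematic relaxation: iterate planning and micro-simulation until they
  // are consistent; a fixed fraction of travelers replans each iteration.
  std::vector<Plan> relax(std::size_t numTravelers, int iterations,
                          double replanFraction) {
      std::mt19937 rng(4711);
      std::uniform_real_distribution<double> uni(0.0, 1.0);

      LinkTravelTimes tt{};                      // start from free-flow times
      std::vector<Plan> plans(numTravelers);
      for (std::size_t i = 0; i < numTravelers; ++i)
          plans[i] = plan_trip(i, tt);           // preliminary plans

      for (int it = 0; it < iterations; ++it) {  // ~50 iterations from scratch
          tt = run_microsimulation(plans);       // congestion caused by plans
          for (std::size_t i = 0; i < numTravelers; ++i)
              if (uni(rng) < replanFraction)     // e.g. 10% of the travelers
                  plans[i] = plan_trip(i, tt);   // adapt plans to congestion
      }
      return plans;
  }

  int main() {
      std::vector<Plan> finalPlans = relax(100000, 50, 0.10);
      return finalPlans.empty() ? 1 : 0;
  }

Note that the micro-simulation call inside the loop dominates the computing time, which is why its performance is the focus of this paper.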

This is important in the context of this paper because it means that the micro-simulation needs to be run more than once - in our experience, about fifty times for a relaxation from scratch [34, 35]. In consequence, a computing time that may be acceptable for a single run is no longer acceptable for such a relaxation series, which puts an even higher demand on the technology.
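
For a sense of scale (with hypothetical numbers): if a single 24-hour micro-simulation of a large region takes 8 hours of computing time, a relaxation series of fifty runs takes 50 x 8 = 400 hours, i.e. more than two weeks - clearly impractical for routine planning studies. Cutting the per-run time to, say, 15 minutes brings the same series down to about half a day.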

This can be made more concrete by the following arguments:

We will show in this paper that the TRANSIMS microsimulation can indeed be run at the required computational speed, and that, for certain situations, this can even be done on relatively modest hardware. By "modest" we mean a cluster of 10-20 standard PCs connected via standard LAN technology (a Beowulf cluster). We find that such a machine is affordable for most university engineering departments, and people working in the commercial sector (mostly outside transportation) tell us that acquiring such a cluster is not a problem there either. In consequence, TRANSIMS can be used without access to a supercomputer. As mentioned before, it is beyond the scope of this paper to discuss for which problems a simulation as detailed as TRANSIMS is really necessary and for which problems a simpler approach might be sufficient.

This paper concentrates on the microsimulation of TRANSIMS. The other modules are important, but they are less critical in terms of computing (see also Sec. 10). We start with a description of the most important aspects of the TRANSIMS driving logic (Sec. 3). The driving logic is designed so that it allows domain decomposition as a parallelization strategy, which is explained in Sec. 4. We then demonstrate that the implemented driving logic generates realistic macroscopic traffic flow (Sec. 5). Once one knows that the microsimulation can be partitioned, the question becomes how to partition the street network graph (Sec. 6). Sec. 7 discusses how we adapt the graph partitioning to the different computational loads caused by different traffic on different streets. These and additional arguments are then used to develop a methodology for the prediction of computing speeds (Sec. 8). This is important, since it allows one to predict whether a given investment in one's computer system will make it possible to run a given problem. We then briefly discuss what all this means for complete studies (Sec. 10). This is followed by a summary.

