It is by now widely accepted that it is worth investigating whether the
microscopic simulation of large transportation
systems [6, 42] is a useful addition to the existing
set of tools. By ``microscopic'' we mean that all entities of the
system - travelers, vehicles, traffic lights, intersections, etc. -
are represented as individual objects in the
simulation [14, 32, 15, 31, 12, 20, 44].
The conceptual advantage of a micro-simulation is that in principle it
can be made arbitrarily realistic. Indeed, microscopic simulations
have been used for many decades for problems of relatively small
scale, such as intersection design or signal phasing. What is new is
that it is now possible to use microscopic simulations even for very
large systems, such as whole regions with several million
travelers. At the heart of this are several converging
developments:
- The advent of fast desktop workstations.
- The possibility of connecting many of these workstations into
parallel supercomputers, thus multiplying the available computing
power. This is particularly attractive for agent-based
transportation simulations since they do not benefit from
traditional vector supercomputers.
- In our view, there is a third observation that is paramount for
making these approaches work: many aspects of ``correct''
macroscopic behavior can be obtained with rather simple microscopic
rules.
The third point can actually be proven rigorously for some cases. For
example, in physics the ideal gas equation, $p\,V = N\,k_B\,T$, can be
derived from particles without any interaction, i.e. particles that
simply move through each other. For traffic, one can show that rather
simple microscopic models generate certain fluid-dynamical equations
for traffic flow [25].
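To give a flavor of what such a fluid-dynamical limit looks like (the
actual derivation in [25] is more involved; the following is only an
illustrative sketch), the simplest macroscopic description of traffic
is a continuity equation for the vehicle density $\rho(x,t)$,

\[
 \frac{\partial \rho}{\partial t}
 + \frac{\partial}{\partial x}\bigl[\rho \, v(\rho)\bigr] = 0 ,
\]

where $v(\rho)$ is a density-dependent velocity, obtained for example
by averaging the microscopic driving rule; conservation of the number
of vehicles alone is already enough to obtain an equation of this form.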
In consequence, for situations where one expects that the
fluid-dynamical representation of traffic is realistic enough for the
dynamics but one wants access to individual vehicles/drivers/..., a
simple microscopic simulation may be the solution. In addition, a
microscopic simulation can always be made more realistic at some later
point; this is much harder and sometimes impossible with macroscopic
models.
The TRANSIMS (TRansportation ANalysis and SIMulation System) project
at Los Alamos National Laboratory [42] is such a
micro-simulation project, with the goal of using micro-simulation for
transportation planning. Transportation planning is typically done
for large regional areas with several million travelers, and it
is done with 20-year time horizons. The first means that, if we want
to use a micro-simulation approach, we need to be able to simulate
large enough areas fast enough. The second means that the methodology
needs to be able to pick up aspects like induced travel, where people
change their activities and maybe their home locations because of
changed impedances of the transportation system. In response,
TRANSIMS consists of the following modules (a schematic sketch of how
they are chained together follows the list):
- Population generation. Demographic data is
disaggregated so that we obtain individual households and individual
household members, with certain characteristics, such as a street
address, car ownership, or household income [3].
- Activities generation. For each individual, a set of
activities and activity locations for a day is
generated [43, 5].
- Modal and route choice. For each individual, modes and
routes are generated that connect activities at different
locations [18].
- Traffic micro-simulation. Up to here, all individuals
have made plans about their behavior. The traffic
micro-simulation executes all those plans simultaneously. In
particular, we now obtain the result of interactions between
the plans - for example congestion.
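To illustrate how these modules feed into each other, the following is
a minimal sketch in Python; all function names and data structures are
invented placeholders for illustration, not the TRANSIMS code or its
interfaces:

```python
# Illustrative outline of a single pass through the module chain.
# All names and data structures are hypothetical placeholders,
# not the TRANSIMS implementation.

def generate_population(demographics):
    # Disaggregate demographic data into synthetic travelers
    # (street address, car ownership, household income, ...).
    return [{"id": i} for i in range(demographics["n_persons"])]

def generate_activities(person):
    # A daily activity chain with locations for one traveler.
    return [("home", 0), ("work", 5), ("home", 0)]

def choose_mode_and_route(activity_chain, network):
    # Connect consecutive activity locations by a mode and a route.
    return [{"mode": "car", "route": network.get((a[1], b[1]), [])}
            for a, b in zip(activity_chain, activity_chain[1:])]

def run_microsimulation(plans, network):
    # Execute all plans simultaneously; interactions between plans
    # (e.g. congestion) emerge here.  Returns link travel times.
    return {link: 1.0 for link in network}

if __name__ == "__main__":
    network = {(0, 5): ["link_a", "link_b"], (5, 0): ["link_b", "link_a"]}
    population = generate_population({"n_persons": 3})
    plans = {p["id"]: choose_mode_and_route(generate_activities(p), network)
             for p in population}
    travel_times = run_microsimulation(plans, network)
```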
As is well known, such an approach needs to make the modules
consistent with each other: For example, plans depend on congestion,
but congestion depends on plans. A widely accepted method to resolve
this is systematic relaxation [12] - that is, make
preliminary plans, run the traffic micro-simulation, adapt the plans,
run the traffic micro-simulation again, etc., until consistency
between modules is reached. The method is somewhat similar to the
Frank-Wolfe algorithm in static assignment.
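In code terms, such a relaxation series might look like the following
sketch; the placeholder functions are again invented for illustration,
and a fixed iteration count stands in for a real convergence test:

```python
import random

# Hypothetical placeholders for the modules described above.
def initial_plans(population):
    return {p: "route_on_empty_network" for p in population}

def run_microsimulation(plans):
    # Returns a congestion measure per traveler (dummy values here).
    return {p: random.random() for p in plans}

def replan(old_plan, congestion):
    # React to the congestion experienced in the last iteration.
    return "best_reply_to_congestion"

def relax(population, replanning_fraction=0.10, iterations=50):
    plans = initial_plans(population)
    congestion = run_microsimulation(plans)
    for _ in range(iterations):
        # Only a small fraction of travelers adapts in each iteration;
        # larger fractions tend to cause strong fluctuations.
        for p in random.sample(population,
                               int(replanning_fraction * len(population))):
            plans[p] = replan(plans[p], congestion)
        congestion = run_microsimulation(plans)
    return plans, congestion

plans, congestion = relax(population=list(range(1000)))
```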
The reason why this is important in the context of this paper is that
it means that the micro-simulation needs to be run more than once -
in our experience about fifty times for a relaxation from
scratch [34, 35]. In consequence, a
computing time that may be acceptable for a single run is no longer
acceptable for such a relaxation series - thus putting an even higher
demand on the technology.
This can be made more concrete by the following
arguments:
- The number of ``about fifty'' iterations was obtained from
systematic computational experiments using a scenario in Dallas/Fort
Worth. In fact, for route assignment alone, about twenty iterations
are probably sufficient [34, 35],
but if one also allows for other behavioral changes, more iterations
are needed [13]. The numbers become plausible via
the following argument: Since relaxation methods rely on the fact
that the situation does not change too much from one iteration to
the next, changes have to be small. Empirically, changing more than
10% of the travelers sometimes leads to strong fluctuations away
from relaxation [34, 35]. A
replanning fraction of 10% means that we need 10 iterations in
order to replan each traveler exactly once; and since during the
first couple of iterations travelers react to non-relaxed traffic
patterns, we will have to replan those a second time, resulting in
15-20 iterations. Nevertheless, future research will probably find
methods to decrease the number of iterations.
- We assume that results of a scenario run should be available
within a few days, say two. Otherwise research becomes frustratingly
slow, and we would assume that the same is true in practical
applications. Assuming further that we are interested in 24 hour
scenarios, and disregarding computing time for other modules besides
the microsimulation, this means that the simulation needs to run
25 times faster than real time.
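The factor of 25 is simple arithmetic from the numbers above: roughly
fifty 24-hour runs have to fit into about 48 hours of wall-clock time,

\[
 \frac{50 \times 24\,\mbox{h of simulated traffic}}
      {48\,\mbox{h of computing time}} \approx 25 .
\]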
We will show in this paper that the TRANSIMS microsimulation indeed
can be run with this computational speed, and that, for certain
situations, this can even be done on relatively modest hardware. By
``modest'' we mean a cluster of 10-20 standard PCs connected via
standard LAN technology (Beowulf cluster). We find that such a
machine is affordable for most university engineering departments, and
feedback from people working in the commercial sector (mostly outside
transportation) indicates that the cost is not a problem there either.
In consequence,
TRANSIMS can be used without access to a supercomputer. As mentioned
before, it is beyond the scope of this paper to discuss for which
problems a simulation as detailed as TRANSIMS is really necessary and
for which problems a simpler approach might be sufficient.
This paper will concentrate on the microsimulation of TRANSIMS. The
other modules are important, but they are less critical for computing
(see also Sec. 10). We start with a description of the
most important aspects of the TRANSIMS driving logic
(Sec. 3). The driving logic is designed so that it
allows domain decomposition as a parallelization strategy, which is
explained in Sec. 4.
We then demonstrate that the implemented driving logic
generates realistic macroscopic traffic flow. Once one knows that the
microsimulation can be partitioned, the question becomes how to
partition the street network graph. This is described in
Sec. 6. Sec. 7 discusses how
we adapt the graph partitioning to the different computational loads
caused by different traffic on different streets. These and
additional arguments are then used to develop a methodology for the
prediction of computing speeds (Sec. 8). This is
rather important, since it allows one to predict whether certain
investments in one's computer system will make it possible to run
certain problems. We then briefly discuss what all this means
for complete studies (Sec. 10). This is followed by a
summary.