

Introduction

The real world, we assume, is an example of distributed intelligence. In the archetypical example, the anthill, many agents with limited intelligence (the ants) interact, and through this interaction the whole system (the anthill) functions. Similarly, we assume that humans interact to make the whole system, our society, function, and that this is achieved by many people making autonomous decisions, i.e. without central control.

The transportation system is a sub-system of this global socio-econo-political system. As we will see, in this system agents not only drive or walk through networks of roads or walkways, but also make tactical and strategic decisions, from skipping lunch to relocating their household. That is, the actions of individuals in the transportation system are strongly coupled to how these individuals live their daily lives, and to how they adjust that daily life in reaction to obstacles. As a practical example, a new highway to the suburbs will often trigger the following reactions: (1) congestion relief; (2) people making additional trips (so-called induced traffic) and/or more people relocating to the suburbs; and, in consequence, (3) congestion coming back.
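This feedback between capacity, travel time, and demand can be made concrete with a toy fixed-point iteration. The following sketch is purely illustrative: the function names, the linear demand response, and all numbers are assumptions for exposition, not part of any operational model.

```python
def equilibrate(capacity_gain, n_iter=50):
    """Toy illustration of induced traffic (made-up numbers).

    travel_time = demand / capacity, and demand relaxes toward a
    target that grows as travel gets faster.  Adding capacity then
    lowers travel time by LESS than the naive expectation, because
    part of the gain is eaten up by new trips.
    """
    capacity = 1.0 + capacity_gain
    demand = 1.0
    for _ in range(n_iter):
        travel_time = demand / capacity
        target_demand = 2.0 - travel_time  # more trips when travel is faster
        demand += 0.5 * (target_demand - demand)  # gradual adaptation
    return demand / capacity  # equilibrium travel time

# Doubling capacity cuts travel time from 1.0 only to 2/3, not to 1/2:
t_before = equilibrate(0.0)
t_after = equilibrate(1.0)
```

At the fixed point, travel time is 2/(capacity + 1), so doubling capacity yields a travel time of 2/3 rather than the naive 1/2: congestion partially "comes back" through induced demand.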

There is an emerging consensus that transportation simulations for planning purposes should consist of the following modules (Fig. 1):

In addition, there need to be initialization modules, such as the synthetic population generation module, which takes census data and generates a disaggregated population of individual people and households. Similarly, it is necessary to generate good default layouts for intersections etc. without knowing the exact details in every case.
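The idea behind population synthesis can be sketched as follows. This is a minimal, hypothetical example: the function name, the input format (household counts per zone plus a household-size distribution), and the sampling scheme are assumptions made here for illustration; real synthesizers fit against multiple census marginals simultaneously, e.g. via iterative proportional fitting.

```python
import random

def synthesize_population(zone_counts, household_size_dist, seed=0):
    """Draw a disaggregated population from aggregate marginals.

    zone_counts: {zone: number of households in that zone}
    household_size_dist: {household size: probability}
    Returns one record per synthetic person.
    """
    rng = random.Random(seed)
    sizes, weights = zip(*household_size_dist.items())
    population = []
    for zone, n_households in zone_counts.items():
        for hh_id in range(n_households):
            # Sample a household size from the aggregate distribution,
            # then instantiate that many individual persons.
            size = rng.choices(sizes, weights=weights)[0]
            for person in range(size):
                population.append({"zone": zone,
                                   "household": (zone, hh_id),
                                   "person": person})
    return population

pop = synthesize_population({"zone_a": 2, "zone_b": 1},
                            {1: 0.3, 2: 0.5, 3: 0.2})
```

The output is a list of individual agents, each attached to a household and a zone, which downstream modules can then equip with activities and plans.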

Figure 1: TRANSIMS modules

Real-world scenarios often consist of many millions of travelers, and it also seems (without hard evidence) that our multi-agent methods work best on large problems and the corresponding macroscopic questions. For such large problems, parallel computing is an absolute necessity. The first module to compute in parallel is the traffic micro-simulation, and this is achieved via ``standard'' domain decomposition, i.e. the geographical region is cut into pieces, and each CPU is responsible for one such piece. Running the other modules in parallel is straightforward as long as the agents do not interact at those levels, as is currently the case for all operational implementations. However, this relaxation method does not reflect reality: agents do in fact make decisions and change plans during travel, not just before they start. Yet, in a parallel traffic micro-simulation, one cannot have agents go through the cognitive motions of replanning on the same CPU on which the traffic simulation is running, since this would destroy the load balancing. Thus, the method of choice is to make the intelligence of the travelers external to the micro-simulation: in some sense, to have the traffic micro-simulation represent the ``real world'' and to have one additional, external computer for each brain in the simulation.
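The geographical domain decomposition mentioned above can be sketched with a deliberately simple partitioning rule. The function below is a hypothetical illustration, not the method of any particular system: it just cuts the region into vertical strips with roughly equal node counts, whereas production codes use proper graph partitioners (e.g. orthogonal recursive bisection or METIS-style tools) that also balance traffic load and minimize boundary traffic.

```python
def decompose_domain(node_coords, n_cpus):
    """Toy geographical domain decomposition.

    node_coords: {node id: (x, y) coordinate}
    Sorts nodes by x-coordinate and assigns contiguous strips of
    roughly equal size to each CPU.  Each CPU then simulates the
    traffic on its strip, exchanging vehicles at strip boundaries.
    """
    ordered = sorted(node_coords, key=lambda n: node_coords[n][0])
    strip = max(1, -(-len(ordered) // n_cpus))  # ceil(len / n_cpus)
    return {node: min(i // strip, n_cpus - 1)
            for i, node in enumerate(ordered)}

# Four nodes on a line, two CPUs: the western half goes to CPU 0,
# the eastern half to CPU 1.
part = decompose_domain({"a": (0, 0), "b": (1, 0),
                         "c": (2, 0), "d": (3, 0)}, 2)
```

In an actual parallel run, each CPU would own the links inside its piece and communicate with its neighbors once per time step to hand over vehicles crossing the cut.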

It should be noted again that this view of distributed intelligence is not oriented towards the solution of any well-defined problem. There is not even a definition of what is meant by intelligence, or of whether it is each individual agent who is intelligent, the system as a whole, or both. The only assumption is that the system manages to ``function'', in the sense that each individual manages to end up with a list of activities which enables him or her to survive, and hopefully to live a good life (in the sense that he or she does not relocate to a different city). This view of distributed intelligence is rather different from a computer science view of distributed intelligence, which takes on the task of achieving a computational speed-up in solving a well-defined problem (e.g. [1]) - although there is certainly overlap, especially in the methods.

We will start with a short discussion of the traditional method (Sec. 2). Next, we present agent-based traffic simulation as an alternative (Sec. 3), and describe the modules mentioned above in more detail. In Sec. 4 we describe how these modules are coupled in order to make the agents adapt and learn. As pointed out above, for large scenarios, parallel computing is a necessity, and it has interesting consequences for the distribution of intelligence in the simulation (Sec. 5). Finally, the state of the art is discussed (Sec. 6), followed by a conclusion.


Kai Nagel 2002-08-14