
Introduction

Human society is quite generally interested in forecasts of all kinds. This concerns both the short term, e.g. the weather, and the long term, e.g. the climate. Reasons for such an interest are, for example, economic interests or simple curiosity. In the area of traffic, too, there is interest in forecasts, both in the short term, e.g. for traffic operations, and in the long term, e.g. for regional and urban planning.

Forecasts quite generally need to be based on models. A model can, for example, be human intuition or a linear extrapolation. For complex problems, such as transportation, there is now some agreement that at least some of the forecasts should be based on solid scientific and engineering methods.

Regarding these scientific and engineering methods for forecasting, two broad trends can be distinguished: ``black box'' models and ``models from first principles''. Black box models refer to methods which are not geared to a particular problem but rather general in their applicability. Examples are linear extrapolation or neural networks. Common to these models is that they do not look at the microscopic mechanics going on in a system. In contrast, models from first principles look at those microscopic processes and attempt to exploit them. An example is the theory of fluid dynamics, which can be justified by an elaborate derivation from kinetic equations. Models from first principles are, at least in theory, more powerful than black box methods, since they are geared to the specific application. As a side effect, they normally allow more insight into, and understanding of, the system under consideration. The disadvantage of models from first principles is that they are more costly and more difficult to develop.
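
As a minimal illustration of the black box idea, the following sketch (in Python; the data values and variable names are hypothetical, not taken from this paper) fits a straight line to past observations and extrapolates it forward, without using any knowledge of the system that generated the data:

    import numpy as np

    # Hypothetical past observations: time steps and an observed quantity
    # (e.g. a traffic count); the method does not care what the numbers mean.
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    x = np.array([10.0, 12.1, 13.9, 16.2, 18.0])

    # Least-squares line fit; for degree 1, np.polyfit returns
    # the coefficients (slope, intercept).
    slope, intercept = np.polyfit(t, x, 1)

    # Extrapolate to a future time step.
    t_future = 6.0
    print("forecast at t = %.1f: %.1f" % (t_future, slope * t_future + intercept))

A neural network trained on the same pairs (t, x) would be a black box method in exactly the same sense: only the input-output relation matters, not the mechanics behind it.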

Clearly, there are all kinds of intermediate approaches. For example, sometimes a linear law is based on first principles (e.g. the ideal gas equation); or a method relies on some understanding of the system, without looking at all the microscopic details.
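
For reference, the ideal gas equation mentioned above is such a first-principles linear law; in standard notation,

    pV = nRT,

where p is the pressure, V the volume, n the amount of substance, R the universal gas constant, and T the absolute temperature. For fixed n and V, the pressure grows linearly with the temperature, yet the relation can be derived from the microscopic kinetics of the gas.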

This contribution will focus on microscopic models for traffic forecasting. The term microscopic means that the individual traveler is the principal unit of modeling. This is in contrast to some other approaches, most prominently the four-step process, which looks at aggregated streams of travelers rather than individual travelers. In agreement with other areas of computational science, the microscopic approach will be called a multi-agent simulation (MAS). The difference between an agent and a (physical) particle is that the agent can have an internal state, internal processing, etc. This means that two outwardly identical agents, when subjected to exactly the same situation, can make fundamentally different decisions. Clearly, there are again all kinds of intermediate models between pure physics particles and truly autonomous, intelligent agents.
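
To make the agent/particle distinction concrete, the following sketch (hypothetical class and attribute names, not from this paper) contrasts a particle, whose update is fully determined by its observable state, with an agent whose hidden internal state lets two outwardly identical individuals decide differently in the same situation:

    from dataclasses import dataclass

    @dataclass
    class Particle:
        position: float
        velocity: float

        def step(self, dt: float) -> None:
            # Purely mechanical update; identical particles behave identically.
            self.position += self.velocity * dt

    @dataclass
    class TravelerAgent:
        position: float
        velocity: float
        patience: float  # hidden internal state, not outwardly visible

        def react_to_jam(self) -> str:
            # Internal processing: the decision depends on the hidden state.
            return "reroute" if self.patience < 0.5 else "stay on route"

    # Outwardly identical agents (same position and velocity) in the same
    # situation, yet they make different decisions.
    a = TravelerAgent(position=0.0, velocity=30.0, patience=0.2)
    b = TravelerAgent(position=0.0, velocity=30.0, patience=0.9)
    print(a.react_to_jam(), "/", b.react_to_jam())  # reroute / stay on route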

The paper consists of two main parts, one on modeling issues and one on computational issues. The part on modeling issues (Sec. 2) is rather short, concentrating on the conceptual differences between initial/boundary conditions (Sec. 2.1), simulation of the physical reality (Sec. 2.2), strategy generation (Sec. 2.3), and learning/feedback (Sec. 2.4). This is complemented by a section on backward compatibility with the four-step process (Sec. 2.5), which discusses how modules from agent-based approaches can be coupled to modules from the four-step process.

The part on computational techniques (Sec. 3) concentrates on how such simulations can be implemented. It starts with the issues of coding (Sec. 3.2), followed by a very short look at computer science and computational science search algorithms (Sec. 3.3) and a discussion of database applications in the maintenance of initial and boundary conditions (Sec. 3.4). Arguably, the most important and least resolved computational issue is the interoperability between modules (Sec. 3.5). In contrast, parallel computing, which can be used to speed up individual modules, can at this point be considered a mature technology, even if practical applications are still somewhat rare (Sec. 3.6). Sec. 3.7 then discusses how the issues of interoperability and parallel computing interact, leading to the very active fields of Distributed Artificial Intelligence, Software Agents, and Peer-to-Peer Systems, but also to a large number of open issues and few established, mature solutions. The last section in the computational part (Sec. 3.8) covers issues such as the emerging GRID technology and the use of visualization and virtual reality.

The focus on computational techniques stems from our belief that modeling and implementation are not independent. Rather, our experience is that certain models are easier to implement than others, and certain implementations favor certain models over others. In other words, there will be a tendency to select models which are easy to implement over models which may be more realistic, and such a pragmatic approach may even be justified by the results. Also, once a particular implementation has been selected, there will be a certain amount of ``lock-in'' toward certain models until someone else takes it upon themselves to start a competing, different implementation. The paper concludes with a short discussion and a summary.

