

Related work

The topic of this paper is a computational procedure for systematically feeding back the results of a dynamic traffic assignment (DTA) to demand generation. In principle, any route assignment could be used instead of ours. However, since our work is a step towards a completely microscopic simulation approach, we are primarily interested in simulation-based route assignment and network loading. For this, one needs traffic flow simulations in which each vehicle can be followed individually. Besides the queue simulation used in this paper, simulations that fulfill this requirement include: PAMINA (Rickert, 1998); the Transims main micro-simulation (Transims, 1992); LEGO (Gawron, 1996); INTEGRATION (Rakha and Van Aerde, 1996); DYNASMART (Mahmassani et al., 1995); PARAMICS (1996); MITSIM (Yang, 1997); DYNAMIT (2000); DYNEMO (Schwertfeger, 1987); and VISSIM (2000). Of these, probably only LEGO, DYNASMART, DYNEMO, and DYNAMIT are fast enough to run iteration series such as ours on a single CPU. Among these four, LEGO is based on a queue model very similar to ours, while the other three use macroscopic equations for the movement of the vehicles.

In terms of re-routing during the route iterations, we use a standard time-dependent fastest-path Dijkstra algorithm (see, e.g., Jacob et al., in press) based on 15-min link trip time averages. However, for this paper only a fraction of the population is re-planned in each iteration. A widely used alternative is to re-plan 100% of the population in each iteration but to use a discrete choice approach to spread travelers across different routes (Cascetta and Papola, 1998; Bottom, 2000). Besides different theoretical properties, these approaches also have different computational complexities. The time complexity of our approach for the routing is $O(f \, N \, E \log K)$, where $N$ is the number of travelers, $f$ is the re-planning fraction (usually 10% in this paper), and $E \log K$ is the complexity of the Dijkstra algorithm, with $E$ the number of edges and $K$ the number of nodes. Note that this is independent of the time resolution. The approaches that re-plan everybody usually exploit the fact that, for any given starting location, one obtains the complete shortest-path calculation for all destinations with the same worst-case complexity as the calculation for just one destination. One thus obtains $O(F(\Delta T) \, M \, E \log K)$, where $M$ is the number of possible starting points (traditionally zones) and $F(\Delta T)$ is some function that increases with increasing time resolution (decreasing $\Delta T$) (Chabini, 1998). Since in our work each link is a potential starting point, this translates into $O(F(\Delta T) \, E^2 \log K)$. In this paper, where $E \approx 20k$, $N \approx 500\,000$, and $f = 0.1$, the two approaches are about equivalent. For street networks with higher resolution, $E$ grows while $N$ remains constant, making our approach grow more slowly in time complexity.
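The time-dependent fastest-path search described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the network representation, function names, and the convention of indexing link travel times by 15-min (900 s) departure-time bins are assumptions for the example, and label-setting correctness presumes the usual FIFO (no-overtaking) property of the link travel times.

```python
import heapq

def time_dependent_dijkstra(adj, link_tt, origin, dep_time, bin_size=900):
    """Time-dependent fastest-path Dijkstra (label-setting).

    adj:      dict node -> list of (neighbor, link_id)
    link_tt:  dict link_id -> list of average travel times in seconds,
              one entry per bin_size-second departure-time bin
              (bin_size=900 mirrors the 15-min averages in the text)
    Returns a dict node -> earliest arrival time from origin.
    """
    arrival = {origin: dep_time}
    pq = [(dep_time, origin)]           # priority queue of (time, node)
    while pq:
        t, u = heapq.heappop(pq)
        if t > arrival.get(u, float('inf')):
            continue                    # stale queue entry, skip
        for v, link in adj.get(u, []):
            tt_bins = link_tt[link]
            # travel time depends on the departure-time bin on this link
            b = min(int(t // bin_size), len(tt_bins) - 1)
            t_new = t + tt_bins[b]
            if t_new < arrival.get(v, float('inf')):
                arrival[v] = t_new
                heapq.heappush(pq, (t_new, v))
    return arrival
```

Each of the $E$ edges is relaxed at most once per settled label, and each heap operation costs $O(\log K)$, which gives the per-traveler $O(E \log K)$ term used in the complexity comparison above; running it for the re-planned fraction of the population yields the stated $O(f \, N \, E \log K)$.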

Workplace assignment is also an old problem. An example of such a matching is the classic ``Hitchcock'' solution (Sheffi, 1985), in which workplaces are assigned so that the overall sum of all trip times is minimized. This clearly results in much shorter trips than in reality. Axhausen (1990) suggests coupling demand generation, route assignment, and traffic simulation, although he puts more emphasis on on-trip learning than the implementation presented here does. Several groups, such as those of Ben-Akiva and of Mahmassani, are actively working on this as extensions of their ITS projects; we are not aware of any results of these attempts yet. There are also earlier versions of the work presented in this paper (Wagner and Nagel, 1999; Esser and Nagel, 1999).

