This paper describes one possible implementation of a large-scale agent-based simulation package for regional planning. As repeatedly pointed out, the approach is modular and extensible. In order to test this modularity, it is desirable to replace one or more modules by alternative implementations. In the following, this is discussed on a module-by-module basis.
The queue simulation has its limitations, for example with respect to complicated intersections, inhomogeneous vehicle fleets, queue dissolution, interaction between different modes of transportation, etc. These limitations will be difficult or impossible to remove within the queue simulation approach. It therefore seems desirable to move beyond the queue simulation to a more realistic traffic simulation. Besides being more realistic, such a simulation should fulfill the following criteria in order to be consistent with our approach: it should be able to process travelers with individual plans, and it should be computationally fast. There are currently few traffic simulations which fulfill both criteria simultaneously. The TRANSIMS microsimulation is one of them. As discussed above, with the emergence of useful network conversion tools, this may become a viable option. Note that including that microsimulation into our set-up would still be different from using the full TRANSIMS suite.
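To make the requirement of ``processing travelers with individual plans'' concrete, the following is a minimal sketch of what the interface of an exchangeable mobility-simulation module could look like; all names (Plan, Leg, MobilitySimulation) are hypothetical illustrations and not part of any of the packages discussed here.
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict, List, Protocol

@dataclass
class Leg:
    """One trip of a plan: a departure time and a route as a list of link ids."""
    departure_time: float
    route: List[int]

@dataclass
class Plan:
    """An individual daily plan, as produced by demand generation and routing."""
    agent_id: int
    legs: List[Leg]

class MobilitySimulation(Protocol):
    """Any traffic simulation (queue model, TRANSIMS microsimulation, ...) that
    is plugged into the feedback loop needs to execute individual plans and
    report link travel times for the next replanning iteration."""
    def run(self, plans: List[Plan]) -> Dict[int, float]: ...
\end{verbatim}
Any module honoring such an interface, whether a simple queue model or a more detailed microsimulation, could then be exchanged without touching the rest of the feedback machinery.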
Our current router computes car-only fastest paths, without regard for alternative cost functions (such as monetary cost, familiarity, or scenic beauty) and without regard for alternative modes. Again, an option would be to use the multi-modal TRANSIMS router as a single module within our set-up. Whether this is viable will, as discussed above, depend on functionality.
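As an illustration of a fastest-path router with a pluggable cost function, the following is a minimal time-dependent Dijkstra sketch; the graph representation and the link_cost callback are illustrative assumptions, not the actual router code. With travel time as the cost it yields the fastest path; other generalized costs could be folded into the same callback.
\begin{verbatim}
import heapq
from typing import Callable, Dict, List, Tuple

# graph[node] -> list of (next_node, link_id)
Graph = Dict[int, List[Tuple[int, int]]]

def least_cost_path(graph: Graph, origin: int, destination: int,
                    departure_time: float,
                    link_cost: Callable[[int, float], float]) -> List[int]:
    """Time-dependent Dijkstra; link_cost(link_id, enter_time) returns the
    cost (e.g. travel time) of traversing the link when entered at that time."""
    best = {origin: departure_time}
    parent = {}  # node -> (previous node, link id)
    heap = [(departure_time, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == destination:
            break
        if cost > best.get(node, float("inf")):
            continue
        for nxt, link in graph.get(node, []):
            new_cost = cost + link_cost(link, cost)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                parent[nxt] = (node, link)
                heapq.heappush(heap, (new_cost, nxt))
    if destination not in parent and destination != origin:
        return []  # destination not reachable
    route, node = [], destination
    while node != origin:
        node, link = parent[node]
        route.append(link)
    return list(reversed(route))
\end{verbatim}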
Yet, having the fastest path, even if multi-modal, does not solve all problems. In practice, people often do not use the fastest path, there are stochastic influences, or the path depends on which part of the network they know (their ``mental map''). Maybe somewhat unexpectedly, it is rather difficult to construct non-optimal solutions to the routing problem (e.g. 28).
The above results use traditional origin-destination tables for demand generation. We intend to move our investigations to activity-based demand generation. One method will be based on discrete choice theory, the other on genetic algorithms.
A fair amount of Swiss traffic is cross-border traffic, either with origin or destination in Switzerland, or completely traversing the country. Also, freight traffic would not be included in a first version of activity-based demand generation, which would concentrate on people. It is planned to include all these effects via conventional origin-destination matrices, i.e., some ``background'' traffic that will be able to adjust routes (and maybe starting times) but will not be elastic in terms of number of trips.
The use of the agent database in the feedback mechanism works well, but needs tuning. Both computational speed and the learning behavior of the system are issues. The computational speed issues are addressed via a combination of database performance tuning and consolidating the current script-based approach into one program. The methodological questions will be addressed via an examination of established learning methods (such as best reply or reinforcement learning).
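As a rough illustration of the kind of learning dynamics meant here, the following sketch keeps several plans per agent, blends newly observed performance into a remembered score (a simple reinforcement-style rule), and selects the plan with the best score (a best-reply rule); the data layout and the blending parameter are illustrative assumptions, not the actual agent database.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class StoredPlan:
    plan_id: int
    score: float = 0.0   # remembered performance, e.g. negative travel time

@dataclass
class Agent:
    agent_id: int
    plans: List[StoredPlan] = field(default_factory=list)

def update_score(plan: StoredPlan, observed: float, blend: float = 0.1) -> None:
    """Reinforcement-style update: blend the performance observed when the
    plan was executed into its remembered score."""
    plan.score = (1.0 - blend) * plan.score + blend * observed

def best_reply(agent: Agent) -> StoredPlan:
    """Best-reply selection: always pick the plan with the highest score."""
    return max(agent.plans, key=lambda p: p.score)
\end{verbatim}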
Another shortcoming of the current method is that replanning can happen only overnight. Work is under way to improve this situation via an online coupling between modules, which will allow within-day replanning (21). We explicitly want to avoid coupling the modules via standard subroutine/library calls, since such a coupling would both run against the modular approach and conflict with efficiency considerations for parallel computing.
Even with day-to-day replanning only, many problems remain. It was pointed out in this paper that the use of an agent database, i.e., memorization of more than one strategy for each agent, solves some conceptual problems. However, even if one assumes that one is capable of generating a set of plausible strategies, the question becomes which of those to select. The standard logit approach of $p_i = e^{\beta U_i} / \sum_j e^{\beta U_j}$, where $U_i$ is the utility of option $i$, has, as is well known, the so-called IIA property (``independence from irrelevant alternatives''). IIA essentially assumes that the strategies are unrelated. As an extreme example, assume that the agent database contains three strategies for an agent, two of which are nearly the same. Under IIA, each strategy will be selected with a probability of 1/3, while it would be plausible that the nearly identical strategies are selected with a probability of 1/4 each, and the third, truly different strategy with a probability of 1/2. Alternatives to standard multinomial logit are C-logit or path-size logit, which remove some of these problems (29).
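The following is a minimal numerical illustration of this overlap problem and of a path-size correction; the equal utilities and the path-size values are made-up illustration numbers, and the formula follows the usual path-size logit form rather than any particular variant of (29).
\begin{verbatim}
import math

def multinomial_logit(utilities, beta=1.0):
    """Plain multinomial logit choice probabilities."""
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def path_size_logit(utilities, path_sizes, beta=1.0, beta_ps=1.0):
    """Path-size logit: each alternative's utility is corrected by
    beta_ps * ln(PS_i), where PS_i in (0,1] shrinks for overlapping paths."""
    weights = [math.exp(beta * u + beta_ps * math.log(ps))
               for u, ps in zip(utilities, path_sizes)]
    total = sum(weights)
    return [w / total for w in weights]

# Three strategies with equal utility; strategies 1 and 2 are nearly identical
# (path size 1/2 each), strategy 3 is truly different (path size 1).
utilities = [0.0, 0.0, 0.0]
print(multinomial_logit(utilities))                 # -> [1/3, 1/3, 1/3]
print(path_size_logit(utilities, [0.5, 0.5, 1.0]))  # -> [1/4, 1/4, 1/2]
\end{verbatim}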
It was mentioned above that there was a serious gridlock problem within the city of Zurich, which was attributed to generally too low network capacities. Unfortunately, this intuition is difficult to check. It is clear that, with the input data at our disposal, there was a mismatch between demand and network capacity; at the same time, the same method worked everywhere else in Switzerland. We can think of only three reasons: (i) the demand in the OD cells for Zurich was overestimated; (ii) the capacity in the network data was underestimated; (iii) our queue simulation is overly sensitive to gridlock, and this problem shows up only for large congested networks. Unfortunately, there is no other similarly large metropolitan region inside Switzerland; the metropolitan regions of Lugano, Geneva, and Basel extend across the border and therefore cannot be simulated realistically with our available demand data.
It should be noted that simulations with hard capacity and storage constraints are generically much more sensitive to capacity mismatches than static assignment. In static assignment, an overloaded link (with volume higher than capacity) will just be unattractive for the routing, but it will forward the requested steady state flow nevertheless. In a simulation with hard constraints, a queue will form upstream of such a bottleneck, and it will spill back into the rest of the system.
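To make this difference concrete, the following is a minimal sketch of a queue link with a hard flow capacity and a hard storage constraint; the class layout and the per-step mechanics are illustrative assumptions rather than the actual queue simulation code.
\begin{verbatim}
from collections import deque
from dataclasses import dataclass, field

@dataclass
class QueueLink:
    """A link with a hard flow capacity (vehicles that may leave per time step)
    and a hard storage constraint (vehicles that fit on the link at all)."""
    flow_capacity: int
    storage_capacity: int
    vehicles: deque = field(default_factory=deque)

    def has_space(self) -> bool:
        return len(self.vehicles) < self.storage_capacity

    def enter(self, vehicle) -> None:
        self.vehicles.append(vehicle)

    def step(self, downstream: "QueueLink") -> None:
        """Move at most flow_capacity vehicles to the downstream link. If the
        downstream link is full, vehicles stay here, the queue grows, and the
        congestion eventually spills back into links further upstream; in
        static assignment, by contrast, an overloaded link still forwards
        the requested flow."""
        moved = 0
        while (moved < self.flow_capacity and self.vehicles
               and downstream.has_space()):
            downstream.enter(self.vehicles.popleft())
            moved += 1
\end{verbatim}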
Our plan to solve this problem, and to advance towards a more microscopic representation, is to include a higher-resolution network for the region around Zurich. This network will have considerably more links, possibly leading to a higher network capacity because of the added secondary capacity. That network should also be considerably more realistic and thus eliminate one of the possible sources of error. In addition, adding other choice dimensions to the model (mode, destination, activity pattern) should dampen the adverse effects of a demand-capacity mismatch.
Finally, it is necessary to point out the importance of regression testing and ``trusted components''. The bug in the TRANSIMS feedback set-up was found only after a considerable amount of manual work, and only because of the specific testing set-up. In many ``normal'' scenarios, such as our 6-9 scenario, there is a good chance that the problem would have gone unnoticed for a much longer time. The major concern, however, is that a problem may get fixed, but then, with further changes, some new problem may appear. It is therefore desirable, albeit awkward, to consider systematic regression testing in the community of large-scale microscopic simulation. Regression testing means systematic test suites which are run every time the software is changed, and which ensure that previously working functionality is not degraded by later changes in the code. Trusted components means that certain pieces of the software, possibly after a formal proof of their correctness, should be completely removed from further changes; all improvements then need to be made via transparent object-oriented interfaces. It is unclear if such an approach can be reconciled with the desire for flexibility in a research environment.
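As a concrete example of what such a regression test could look like, the following sketch re-runs a small, fixed test scenario and compares a previously validated summary statistic against a stored reference; the file name, the statistic, and the run_small_scenario placeholder are purely hypothetical.
\begin{verbatim}
import json

def run_small_scenario() -> dict:
    """Placeholder for running the simulation on a small, fixed test scenario;
    a real test suite would invoke the actual simulation package here."""
    raise NotImplementedError

def test_small_scenario_regression():
    """Fails if a previously validated summary statistic (here: total vehicle
    hours) drifts beyond a small tolerance after a code change."""
    with open("reference/small_scenario_summary.json") as f:
        reference = json.load(f)
    result = run_small_scenario()
    expected = reference["total_vehicle_hours"]
    assert abs(result["total_vehicle_hours"] - expected) <= 1e-6 * expected
\end{verbatim}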