The true potential of multi-agent simulations in the area of
transportation science has not yet been fully tapped. An important
element of a true complex adaptive systems (CAS) approach is that each
agent maintains several different individual strategies, and that
learning methods are applied to generate new strategies, either via
recombination of existing strategies or via innovation. However,
virtually no existing large-scale implementation allows for multiple
strategies per agent. Even TRANSIMS, which is built so strongly around
individual intelligent agents, does not exploit the potential of
multiple strategies per agent in its default configuration, although
the design would allow it.
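To make this concrete, the following is a minimal sketch of such an
agent. Everything here (class and method names, the scoring scheme,
the memory bound) is hypothetical and not taken from TRANSIMS or any
other system. The agent keeps a small pool of strategies with
remembered scores, mostly executes the best one, sometimes explores,
and can absorb a newly generated strategy while forgetting the worst:

    import random

    class Agent:
        """Sketch of a CAS-style agent that maintains several strategies.

        A 'strategy' is an opaque plan, e.g. a route or a daily activity
        schedule; 'score' is the remembered performance of that plan
        from earlier simulation iterations.
        """

        def __init__(self, initial_strategies, max_strategies=5):
            # Each entry is a [strategy, score] pair.
            self.strategies = [[s, 0.0] for s in initial_strategies]
            self.max_strategies = max_strategies

        def choose(self, exploration_rate=0.1):
            """Mostly exploit the best-scoring strategy; sometimes explore."""
            if random.random() < exploration_rate:
                return random.choice(self.strategies)
            return max(self.strategies, key=lambda entry: entry[1])

        def learn(self, entry, observed_score, new_strategy=None):
            """Remember the observed score of the executed strategy and,
            optionally, absorb a new strategy generated elsewhere (by
            recombination of existing strategies or by innovation)."""
            entry[1] = observed_score
            if new_strategy is not None:
                self.strategies.append([new_strategy, 0.0])
                if len(self.strategies) > self.max_strategies:
                    # Bounded memory: forget the worst-scoring strategy.
                    self.strategies.remove(
                        min(self.strategies, key=lambda e: e[1]))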
An open question concerns the calibration and validation of
agent-based techniques. There are several related but separate issues:
- Verification that the code corresponds to its specifications. Our
experience, also in other projects such as climate simulations, is
that this goal is difficult to achieve in practice. Also, as seen in
this paper, even code that fully corresponds to its specifications can
give implausible results. Formal proofs of correctness have now become
possible for medium-sized projects (B. Meyer, personal communication),
but they are relatively expensive and possibly incompatible with a
research environment. Still, some process of verification and ``code
freezing'' should eventually be implemented.
- Calibration means that the parameters of the model are adjusted so
that the model matches some given set of data as well as possible.
In our case, the micro-simulation is (by definition and via some
testing) fully calibrated against the input data that it uses. The
routing model uses the normatively declared time-dependent fastest
path. The feedback mechanism uses a heuristic 10% learning rate
(illustrated in the sketch after this list), which yields fast
relaxation but has no additional justification.
- Validation means that the calibrated model is applied to some real
world problem and compared to field data, preferably field data that
has not been used for the calibration.
We have in fact done such a study for traffic in Switzerland, where
realistic OD matrices were fed into our system and the resulting
volumes for the morning rush hour were compared against reality. The
details of this go beyond the scope of this paper and can be found in
Ref. (45). The overall result was an average relative error (a
possible definition is sketched after this list) of less than 26%.
This was better than the results of an assignment model that was used
for comparison, and interestingly this result was obtained even though
the OD matrices had been calibrated to optimize the assignment result
against the counts.
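As a point of reference for the 10% learning rate mentioned above, a
minimal sketch of the feedback iteration might look as follows; the
function names and interfaces are assumptions, not our actual
implementation. In each iteration, a randomly chosen 10% of the agents
recompute their time-dependent fastest path under the previous
iteration's travel times, while all other agents re-use their plans:

    import random

    def relax(agents, run_microsimulation, compute_fastest_path,
              learning_rate=0.10, iterations=50):
        """Iterate micro-simulation and replanning until relaxation.

        In each iteration, a fraction 'learning_rate' of the agents
        replaces its plan by the time-dependent fastest path under the
        previous iteration's travel times; everybody else re-uses the
        previous plan.
        """
        travel_times = None  # first iteration: assume free-flow conditions
        for _ in range(iterations):
            sample = random.sample(agents, int(learning_rate * len(agents)))
            for agent in sample:
                agent.plan = compute_fastest_path(agent, travel_times)
            # Execute all plans together; returns updated link travel times.
            travel_times = run_microsimulation(agents)
        return travel_times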
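For the validation figure, a common definition of the average relative
error, averaged over all counting stations, would be the following;
whether Ref. (45) uses exactly this definition is an assumption:

    def mean_relative_error(simulated, counted):
        """Average of |simulated - counted| / counted over all stations."""
        errors = [abs(sim - obs) / obs
                  for sim, obs in zip(simulated, counted)]
        return sum(errors) / len(errors)

    # For example: mean_relative_error([950, 1200], [1000, 1000]) == 0.125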
In general, it is our belief that the validation of agent-based models
should be done in the field, not on synthetic or reduced scenarios. A
good way, in our view, would be to hold international competitions, as
are common in other fields of science. Such a competition would be
organized around a major infrastructure change: participants would be
given access to all available input data for the scenario, predictions
would be submitted before the infrastructure change is executed, and
after the change the predictions would be checked against reality.
Although each individual competition would have a strong random
component, one would expect that in the long run the better methods
would produce the better results.