With the exception of Sec. 22.4, we have concentrated on day-to-day learning. Our typical approach is:
1. Generate some initial option for each traveler.
2. Execute those options in the micro-simulation.
3. Allow a certain fraction of the travelers to replace their option with another one, generated by an external module.
4. Go to 2.
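A minimal sketch of this loop, assuming hypothetical stand-ins `generate_initial_plan`, `run_microsimulation`, and `replan` for the actual simulation components (none of these names come from the original system):

```python
import random

# Hypothetical stand-ins for the real components; names are illustrative.
def generate_initial_plan(traveler):      # e.g. a shortest-path route
    return f"initial-route-{traveler}"

def run_microsimulation(plans):           # returns per-agent travel times
    return {t: random.uniform(5, 60) for t in plans}

def replan(traveler, travel_times):       # external replanning module
    return f"new-route-{traveler}"

def relax(travelers, n_iterations=50, replanning_fraction=0.10):
    plans = {t: generate_initial_plan(t) for t in travelers}   # step 1
    for _ in range(n_iterations):
        travel_times = run_microsimulation(plans)              # step 2
        k = int(replanning_fraction * len(travelers))          # step 3:
        for t in random.sample(travelers, k):                  # random sample
            plans[t] = replan(t, travel_times)
        # step 4: loop back to step 2
    return plans
```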
In all our implementations, we have suggested using a randomly selected 10% sample of the population for replanning. Fig. 31.1 shows the effect of different replanning schedules on the sum of all travel times. The figure suggests that all relaxation series relax to the same final result; looking at the traffic patterns provides additional support for this statement. There are, however, important differences in relaxation speed. In particular, runs 4 and 5 were done with a replanning fraction of one percent. In this case, the probability that a traveler has never undergone replanning after 100 iterations is $0.99^{100} \approx 0.37$, i.e. more than one third of the population. This is an unacceptably high number, and it explains why, even after so many iterations, the sum of the travel times is not at the same level as for the other runs.
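The number quoted above follows from independent sampling: if a fraction $f$ of the population is drawn at random in each of $n$ iterations, a given traveler escapes replanning with probability $(1-f)^n$. A quick check (the 5% and 10% rows are added here purely for comparison):

```python
# Fraction of travelers never selected after n iterations: (1 - f) ** n
for f, n in [(0.01, 100), (0.05, 30), (0.10, 30)]:
    print(f"f = {f:.2f}, n = {n:3d}: never replanned = {(1 - f) ** n:.3f}")
# f = 0.01, n = 100: never replanned = 0.366   (more than one third)
# f = 0.05, n =  30: never replanned = 0.215
# f = 0.10, n =  30: never replanned = 0.042
```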
All other runs use higher replanning fractions. Run 1 uses a staged schedule: 20% replanning in iterations 1-3, 10% in iterations 4-6, 5% in iterations 7-9, and 2% afterwards. Runs 7, 8, and 11 use 5% replanning throughout, but with a bias towards agents that have not been replanned for a long time. Run 7 additionally loads the network incrementally: in the zeroth iteration only 20% of the traffic is put on the network, another 20% is added in the first iteration, and so on. Run 10 uses deterministic rather than random selection of the travelers for replanning; the advantage is that, with 5% replanning, after 20 iterations one is certain that each traveler was picked exactly once. In comparison, run 12 uses a plain 5% random sample of the population.
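The two non-uniform selection schemes can be sketched as follows; the linear staleness weighting and the round-robin layout are our assumptions for illustration, not the documented implementations:

```python
import random

def select_biased(travelers, last_replanned, iteration, fraction=0.05):
    """Random selection biased towards agents that have not been replanned
    for a long time.  Assumed weight: iterations since last replanning.
    Weighted sampling without replacement via exponential race keys
    (Efraimidis-Spirakis): smaller key = more likely to be drawn."""
    k = int(fraction * len(travelers))
    key = lambda t: random.expovariate(iteration - last_replanned[t] + 1)
    return sorted(travelers, key=key)[:k]

def select_deterministic(travelers, iteration, fraction=0.05):
    """Deterministic round-robin as in run 10: with 5% per iteration,
    every traveler is picked exactly once within 20 iterations
    (assuming the population size is a multiple of the sample size)."""
    k = int(fraction * len(travelers))
    start = (iteration * k) % len(travelers)
    return [travelers[(start + i) % len(travelers)] for i in range(k)]
```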
The overall result seems to be that, when done right, about 30 iterations are enough to reach relaxation. Moreover, more complicated agent selection schemes offer no significant advantage over plain random selection. All simulations refer to the replanning of routes only.