

Individualization of knowledge

The knowledge of agents should be private, i.e. each agent should have its own set of knowledge items. For example, real people only know a relatively small subset of the street network (their ``mental map''), and they differ in their knowledge and perception of congestion (e.g. [14]).

This opens the door to the use of Complex Adaptive Systems methods (e.g. [15]). Each agent has a set of strategies from which to choose, together with indicators of past performance for these strategies. The agent normally chooses a well-performing strategy. From time to time, it chooses one of the other strategies, to check if its performance is still bad, or replaces a bad strategy by a new one.

This approach divides the problem into two parts (see also [16]): the generation of new strategies (here: routes proposed by the router), and the selection between the stored strategies on the basis of their past performance.
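To make this concrete, the following is a minimal sketch (in Python) of such a strategy data base. It is an illustration only, not the implementation used here; the exploration probability, the score update rule, and all names are assumptions of this sketch.

\begin{verbatim}
import random

class Agent:
    """Sketch of an agent with a small set of strategies and performance scores."""

    def __init__(self, strategies, explore_prob=0.1, max_strategies=5):
        # past performance indicator per strategy (higher = better)
        self.scores = {s: 0.0 for s in strategies}
        self.explore_prob = explore_prob
        self.max_strategies = max_strategies

    def choose(self):
        """Normally pick a well-performing strategy; occasionally re-try another one."""
        if random.random() < self.explore_prob:
            return random.choice(list(self.scores))      # re-check a possibly bad option
        return max(self.scores, key=self.scores.get)     # exploit the best option so far

    def report(self, strategy, performance, blend=0.5):
        """Update the performance indicator after the strategy has been executed."""
        self.scores[strategy] = (1 - blend) * self.scores[strategy] + blend * performance

    def replace_worst(self, new_strategy):
        """Replace the worst-scoring strategy by a newly generated one."""
        if len(self.scores) >= self.max_strategies:
            del self.scores[min(self.scores, key=self.scores.get)]
        self.scores[new_strategy] = 0.0
\end{verbatim}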

A major advantage of this approach is that it becomes more robust against artifacts of the router: if an implausible route is generated, the agent will fall back on a more plausible route generated earlier. Fig. 1.5 shows an example. The scenario is the same as in Fig. 1.2; the location is slightly north of the final destination of all trips. We see snapshots of two relaxed scenarios. The left plot was generated with a standard relaxation method as described in the previous section, i.e. where individual travelers have no memory of previous routes and their performance. The right plot in contrast was obtained from a relaxation method which uses exactly the same router but which uses an agent data base, i.e. it retains memory of old options. In the left plot, we see that many vehicles are jammed up on the side roads while the freeway is nearly empty, which is clearly implausible; in the right plot, we see that at the same point in time, the side roads are empty while the freeway is just emptying out - as it should be.

The reason for this behavior is that the router miscalculates the times at which travelers will be at certain locations - specifically, it expects travelers to reach the location shown in the plot much earlier than they actually do. In consequence, the router ``thinks'' that the freeway is heavily congested at that time and therefore suggests the side road as an alternative. Without an agent data base, the method forces the travelers onto this route; with an agent data base, agents discover that it is faster to use the freeway.

This means that the true challenge is not to generate exactly the correct routes, but to generate a set of routes which is a superset of the correct ones [16]. Bad routes will be weeded out via the performance evaluation method. For more details see [19]. Other implementations of partial aspects of this approach can be found in [20,21,22,14].
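The following sketch shows how such an agent data base could be embedded in the iteration cycle, reusing the Agent class from the sketch above. The interfaces router(agent) and traffic_sim(plans), as well as the replanning fraction, are hypothetical; the point is only that routing results enter as additional options that may later be discarded.

\begin{verbatim}
import random

def iterate(agents, router, traffic_sim, n_iterations=50, replan_fraction=0.1):
    """Iteration cycle with an agent data base (illustrative interfaces)."""
    for _ in range(n_iterations):
        # 1. Generation: a fraction of the agents obtains a new route from the router;
        #    the new route only *adds* an option (replacing the worst stored one).
        for agent in random.sample(agents, int(replan_fraction * len(agents))):
            agent.replace_worst(router(agent))
        # 2. Selection: every agent picks one of its stored options.
        plans = {agent: agent.choose() for agent in agents}
        # 3. Execution and feedback: experienced travel times update the scores,
        #    so that implausible router output is eventually weeded out.
        travel_times = traffic_sim(plans)
        for agent, route in plans.items():
            agent.report(route, -travel_times[agent])   # shorter travel time = better
\end{verbatim}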

The way we have explained it, one will probably assume that each individual has computational memory to store his/her plan or plans. The memory requirements for this are of the order of $O(N_{people} \times N_{trips} \times N_{links} \times N_{options})$, where $N_{people}$ is the number of people in the simulation, $N_{trips}$ is the number of trips a person takes per day, $N_{links}$ is the average number of links between starting point and destination, and $N_{options}$ is the number of options remembered per agent. For example, for our Switzerland simulations with a network of $28\,622$ links, we have $N_{people} \sim 7.5~\hbox{million}$, $N_{trips} \sim 3$, $N_{links} \sim 50$, and $N_{options} \sim 5$, which results in

\begin{displaymath}
7.5 \cdot 10^6~\hbox{persons} ~\times~ 3~\hbox{trips per person}
~\times~ 50~\hbox{links per trip}
\end{displaymath}


\begin{displaymath}
\times~ 5~\hbox{options} ~\times~ 4~\hbox{bytes per link} ~=~ 22.5~\hbox{GByte}
\end{displaymath}

of storage if integer numbers are stored in 4-byte words. Let us call this agent-oriented plans storage.
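For illustration, agent-oriented plans storage could be laid out as follows; the estimate above simply counts the link ids held in such per-agent route lists. The class and field names are assumptions of this sketch, not a prescribed data model.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class RouteOption:
    links: List[int]        # link ids from origin to destination (~50 on average)
    score: float = 0.0      # past performance of this option

@dataclass
class Trip:
    options: List[RouteOption] = field(default_factory=list)   # ~5 options remembered

@dataclass
class AgentPlans:
    trips: List[Trip] = field(default_factory=list)            # ~3 trips per person and day
\end{verbatim}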

Since this is a large storage requirement, many approaches do not store plans in this way. Instead, they store the shortest path for each origin-destination combination. This becomes affordable because one can organize this information in trees anchored at each possible destination: each node in the network has ``signposts'' indicating which way to go for any possible destination, and a plan is thus given by knowing the destination and following the ``signs'' at each intersection. The memory requirements for this are of the order of $O(N_{nodes} \times N_{destinations} \times N_{options})$, where $N_{nodes}$ is the number of nodes of the network and $N_{destinations}$ is the number of possible destinations. $N_{options}$ is again the number of options, but note that these options are now attached to destinations rather than to agents, so different agents traveling to the same destination cannot have more than $N_{options}$ different options between them.
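A sketch of how such ``signposts'' can be computed and used is given below. It runs a standard shortest-path (Dijkstra) search backwards from one destination and is meant only to illustrate the storage scheme; the graph representation (a dictionary mapping each node to its outgoing (neighbor, cost) pairs) is an assumption of this sketch.

\begin{verbatim}
import heapq

def next_hop_table(graph, destination):
    """For every node, the next node on a shortest path towards `destination`.

    `graph[u]` is a list of (v, cost) pairs for the links leaving node u.
    """
    reverse = {u: [] for u in graph}            # reversed network
    for u, edges in graph.items():
        for v, cost in edges:
            reverse[v].append((u, cost))

    dist = {destination: 0.0}
    signpost = {}                               # node -> next node towards destination
    heap = [(0.0, destination)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for u, cost in reverse[v]:              # link u -> v in the original network
            if d + cost < dist.get(u, float("inf")):
                dist[u] = d + cost
                signpost[u] = v                 # from u, proceed to v
                heapq.heappush(heap, (d + cost, u))
    return signpost

def follow_signs(signpost, origin, destination):
    """Recover a complete route by following the signposts node by node."""
    route = [origin]
    while route[-1] != destination:
        route.append(signpost[route[-1]])       # assumes the destination is reachable
    return route
\end{verbatim}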

Traditionally, transportation simulations use of the order of 1000 destination zones, and networks with of the order of 10000 nodes, which results in a memory requirement of

\begin{displaymath}
1\,000 \times 10\,000 \times 5 \times 4 = 200~\hbox{MByte} \ ,
\end{displaymath}

considerably smaller than above. Let us call this network-oriented plans storage.

The problem with this second approach is that it explodes with more realistic representations. For example, for our simulations we usually replace the traditional destination zones by the links, i.e. each of the $28\,622$ links is a possible destination. In addition, we need this information to be time-dependent. If we assume 15-min time slices, this results in a little less than 100 time slices for a full day. The memory requirements for the network-oriented plans storage now become

\begin{displaymath}
28\,622\times 10\,000 \times 100 \times 5 \times 4 \approx
600~\hbox{GByte} \ ,
\end{displaymath}

already more than for the agent-oriented approach. In contrast, time resolution has no effect on agent-oriented plans storage. The situation becomes worse with high-resolution networks (orders of magnitude more links and nodes), which leave the agent-oriented approach nearly unaffected while making the network-oriented approach impossible. As a side remark, we note that in both cases it is possible to compress the information by a factor of at least 30 [23].
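The back-of-the-envelope estimates above can be reproduced with a few lines; the function names and the $10^9$-byte convention for a GByte are assumptions of this sketch.

\begin{verbatim}
BYTES_PER_ENTRY = 4       # one integer link/node id per 4-byte word
GBYTE = 1e9

def agent_oriented(n_people, n_trips, n_links, n_options):
    return n_people * n_trips * n_links * n_options * BYTES_PER_ENTRY

def network_oriented(n_nodes, n_destinations, n_options, n_time_slices=1):
    return n_nodes * n_destinations * n_options * n_time_slices * BYTES_PER_ENTRY

print(agent_oriented(7_500_000, 3, 50, 5) / GBYTE)                     # 22.5 GByte
print(network_oriented(10_000, 1_000, 5) / GBYTE)                      # 0.2 GByte (200 MByte)
print(network_oriented(10_000, 28_622, 5, n_time_slices=100) / GBYTE)  # ~572 GByte, i.e. ~600
\end{verbatim}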

Figure 1.5: Individualization of plans and interaction with router artifacts. LEFT: All vehicles are re-planned according to the same information; vehicles do not use the freeway (arrows) although the freeway is empty. As explained in the text, this happens because the router makes erroneous predictions about where a vehicle will be at what time. RIGHT: Vehicles treat routing results as additional options, that is, they can revert to other (previously used) options. The arrows point to the freeway; the time is 7 pm.
\includegraphics[width=0.7\hsize]{individualization-gz.eps}

