When considering the modules of Sec. 3, it quickly becomes clear that there is no unique direction of causality. The mode choice follows from the activities, but the availability of certain modes influences activity selection. Similarly, congestion information is needed when calculating activity schedules, which in turn influence activity patterns and routes, and thus congestion.
A possible way to solve this problem is systematic relaxation: all agents make some choices (plans, or strategies); these plans are executed in a traffic simulation; some agents revise their plans; the plans are executed again; and so on, until some stopping criterion is fulfilled. This approach is widely accepted for the route assignment part, where travelers switch to new routes until they can no longer find a route that makes them better off than the one they already have. This is precisely the definition of a game-theoretic Nash Equilibrium.
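The control flow of such a relaxation can be summarized by the following Python-style sketch. It is meant only to illustrate the iteration structure; the function arguments (a traffic simulation, a plan revision procedure, a convergence test) and the replanning fraction are illustrative assumptions, not a description of any particular implementation.
\begin{verbatim}
import random

def relax(agents, traffic_simulation, revise_plan, has_converged,
          max_iterations=100, replanning_fraction=0.1):
    # Systematic relaxation: execute all current plans, let a fraction of
    # agents revise theirs, and repeat until a stopping criterion holds.
    for _ in range(max_iterations):
        # The traffic (mobility) simulation executes the selected plans and
        # returns congestion information, e.g. time-dependent link travel times.
        congestion = traffic_simulation([a.selected_plan for a in agents])

        if has_converged(congestion):
            break

        # A randomly chosen fraction of the agents revises its plan
        # (new route, departure time, mode, ...), given the congestion.
        for agent in agents:
            if random.random() < replanning_fraction:
                agent.selected_plan = revise_plan(agent, congestion)
    return agents
\end{verbatim}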
For activities, it is less clear where such a relaxation algorithm will lead, and whether the result makes sense. In fact, there are several issues with such iterations. The first issue is that the Nash Equilibrium (NE) is a normative state; that is, it is claimed that the system somehow reaches that state and, once there, stays there. This means that it does not matter how the computation reaches this state; any measure to speed up the computation is allowed, and in consequence the feedback iterations do not need to have any relation to real-world human learning. Second, there is no guarantee that the NE is unique. If it is not unique, then any of the attractive NEs can be reached by the iterative procedure, and the result then depends on the initial conditions. Finally, if real people typically do not operate at a Nash Equilibrium, then the method is no longer correct. In that case, it becomes necessary to model human learning directly, i.e. the human adaptation from one day or week or year to the next. In this case, it also becomes necessary to define the speed of human learning. As with multiple Nash Equilibria, initial conditions matter here.
It is possible to write agent-based simulations such that they allow the modeling of both approaches: fast relaxation toward the NE, or realistic human learning. An important aspect of such a flexible method is to make the implementation truly agent-based. By this we mean that the simulation contains true individuals, with home address, demographic characteristics, etc. Each individual has plans for activities, modes, and routes. This is already different from many implementations, where the routes are given only implicitly via the destination and are thus not explicitly part of the agents' strategies.
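The following sketch shows what such an explicit, truly agent-based representation could look like. The field names (home link, demographic attributes, plans made up of alternating activities and legs with explicit routes) are illustrative choices for the purpose of exposition, not a prescription of a specific data model.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Activity:
    kind: str              # e.g. "home", "work", "shop"
    location: str          # facility or link identifier
    end_time: float        # planned end time, seconds since midnight

@dataclass
class Leg:
    mode: str              # e.g. "car", "pt", "walk"
    route: List[str]       # explicit sequence of link identifiers

@dataclass
class Plan:
    elements: List[Union[Activity, Leg]]   # alternating activities and legs
    score: Optional[float] = None          # utility from the last execution

@dataclass
class Agent:
    agent_id: str
    home_link: str
    demographics: dict                     # age, car availability, ...
    plans: List[Plan] = field(default_factory=list)
    selected_plan: Optional[Plan] = None
\end{verbatim}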
From here, one can make progress by using methods from Evolutionary Game Theory (e.g. Hofbauer and Sigmund, 1998), Complex Adaptive Systems (e.g. Stein and others, since 1988), Distributed Artificial Intelligence (e.g. Ferber, 1999), and Machine Learning (e.g. Russell and Norvig, 1995). The agent is considered an intelligent entity that is capable of collecting information about its environment and of using that information to come up with better and better solutions. Initially, the agent will just attempt to come up with a good plan for itself, but it will have to react to other agents' behavior, especially to congestion. So far, this is a non-cooperative co-evolution problem. Eventually, however, one will have to make the agents negotiate with each other; for example, there will be household tasks which only one member of the household needs to do.
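As a sketch of such a co-evolutionary learning step for a single, non-cooperating agent: the agent keeps a small memory of plans together with the scores (utilities) obtained when they were last executed, occasionally generates a new plan, and otherwise re-selects among remembered plans with a preference for those that performed well. The innovation probability, the logit-style selection, and the memory size below are illustrative assumptions rather than a specific method from the literature.
\begin{verbatim}
import math
import random

def learning_step(agent, new_plan_generator, innovation_prob=0.1,
                  beta=1.0, max_memory=5):
    # With small probability, innovate: generate a fresh plan (e.g. a new
    # route or departure time) and select it for the next execution.
    if random.random() < innovation_prob:
        plan = new_plan_generator(agent)
        agent.plans.append(plan)
        agent.selected_plan = plan
    else:
        # Otherwise choose among already-scored plans; better-scoring plans
        # are chosen more often (multinomial-logit-like selection).
        scored = [p for p in agent.plans if p.score is not None]
        if scored:
            weights = [math.exp(beta * p.score) for p in scored]
            agent.selected_plan = random.choices(scored, weights=weights)[0]

    # Keep the plan memory bounded: forget the worst-scoring plan
    # (never the currently selected one, and never an unexecuted one).
    if len(agent.plans) > max_memory:
        candidates = [p for p in agent.plans if p is not agent.selected_plan]
        worst = min(candidates,
                    key=lambda p: p.score if p.score is not None
                    else float("inf"))
        agent.plans.remove(worst)
    return agent.selected_plan
\end{verbatim}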
Finally, there is a difference between so-called day-to-day learning and within-day learning. In the former, agents execute their daily plan without modification and can only consider switching to a different plan ``over night''. Within-day learning implies that agents can modify their plans during the day. The latter is more realistic, since some of our decisions are made on time scales much shorter than a day (Doherty and Axhausen, 1998). It is, however, also conceptually less clear, whereas day-to-day learning has direct relations to evolutionary game theory. In our current implementation, we use day-to-day learning.
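The distinction can be made concrete with the following sketch of one simulated day; the step-based mobility simulation interface and the replanner hook are hypothetical, for illustration only. With day-to-day learning, plans stay fixed while the day is executed and all replanning happens between iterations, as in the relaxation loop above; within-day learning additionally allows a replanner to modify the remaining part of an agent's plan during execution.
\begin{verbatim}
def run_day(agents, mobsim_step, within_day_replanner=None,
            steps_per_day=24 * 3600):
    # Day-to-day learning: within_day_replanner is None, plans are fixed for
    # the whole day, and replanning happens only between days ("over night").
    # Within-day learning: the replanner may adjust the remainder of an
    # agent's plan at any time step, e.g. rerouting around congestion.
    for t in range(steps_per_day):
        mobsim_step(agents, t)    # advance the traffic simulation by one step
        if within_day_replanner is not None:
            for agent in agents:
                within_day_replanner(agent, t)
\end{verbatim}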