
Conclusion

This book's approach to agent learning is to first describe the learning method as a computer algorithm, and to analyze the behavior of that algorithm afterwards. The first level of analysis concerns the resulting dynamics, without any normative statements. Day-to-day dynamics is discrete in time and can be analyzed like any time-discrete deterministic or stochastic system. In full generality this does not help much, since the possible outcomes range from fixed points to chaotic attractors; it does, however, provide a language to describe the resulting behavior and to classify what to expect.
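
As a toy illustration of this classification (my sketch, not an algorithm from the book), one can iterate a one-dimensional day-to-day map and inspect the attractor it settles on. The logistic map below is only a stand-in for whatever update rule a concrete learning algorithm induces; the parameter r plays the role of the agents' reaction strength.

    # Classify the long-run behavior of a day-to-day map x_{t+1} = f(x_t).
    # The logistic map is a stand-in for the dynamics a learning algorithm
    # would induce; it is not taken from the book.

    def iterate(f, x0, n_transient=1000, n_keep=64):
        x = x0
        for _ in range(n_transient):          # discard the transient
            x = f(x)
        orbit = []
        for _ in range(n_keep):               # record points on the attractor
            x = f(x)
            orbit.append(round(x, 9))         # round so cycle points compare equal
        return orbit

    def classify(orbit):
        distinct = len(set(orbit))
        if distinct == 1:
            return "fixed point"
        if distinct <= 8:
            return f"cycle of period {distinct}"
        return "aperiodic (possibly chaotic)"

    for r in (2.8, 3.2, 3.5, 3.9):            # increasing reaction strength
        f = lambda x, r=r: r * x * (1.0 - x)
        print(f"r = {r}: {classify(iterate(f, 0.4))}")

Running this prints a fixed point for r = 2.8, cycles of period 2 and 4 for r = 3.2 and 3.5, and aperiodic behavior for r = 3.9, which is exactly the range of outcomes named above.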

Second, game theory enters as a normative theory. Our system can be interpreted as all agents attempting to find their best solution given the behavior of all other agents, which is precisely the notion of a Nash Equilibrium. With appropriate care, some versions of the learning dynamics contain Nash Equilibria as fixed points. Mapping our learning dynamics into game theory does, however, move the simulations away from what seems behaviorally plausible.
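
A minimal sketch of this connection, with costs invented for illustration: in a two-route congestion game with costs c_A(x) = x and c_B(x) = 2(1 - x), where x is the fraction of agents on route A, both routes cost the same at x* = 2/3, which is the Nash (user) equilibrium of the toy game. Best-response dynamics with inertia settles into a small band around that point.

    # Sketch: a Nash Equilibrium as a fixed point of a learning dynamics.
    # The linear congestion costs are assumptions of this toy example.

    def best_response(x):
        """Route the agents would pick if they best-responded to split x."""
        c_a, c_b = x, 2.0 * (1.0 - x)
        return 1.0 if c_a < c_b else 0.0      # 1.0 = everyone would pick A

    def day_to_day(x0, mu=0.1, days=200):
        """Each day a fraction mu of the agents revises toward the best response."""
        x = x0
        for _ in range(days):
            x = (1.0 - mu) * x + mu * best_response(x)
        return x

    print(f"final split {day_to_day(0.2):.3f}, equilibrium 2/3 = {2/3:.3f}")

The trajectory ends within O(mu) of x* = 2/3: at that split neither route is strictly better, so the equilibrium is, up to the hard tie-breaking, a fixed point of the dynamics. A smoothed best response would make the fixed point exact, which hints at the kind of care needed.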

Third, there are relations to machine learning; in particular, each agent can be seen as a learning machine. The two most important differences from standard machine learning are that we have many more agents, and that there is no common goal.
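
To make the contrast concrete, here is a hypothetical sketch (all parameters and the payoff rule are invented for illustration): N agents each run their own epsilon-greedy learner over two routes, each maximizing only its own payoff; nothing resembling a shared loss function appears anywhere in the loop.

    import random

    # Sketch: many agents, each its own learning machine, and no common goal.
    # An agent's payoff is minus the congestion it experiences on its route.

    N, DAYS, EPS, ALPHA = 1000, 500, 0.1, 0.1
    # Per-agent payoff estimates; tiny random values break the initial ties.
    scores = [[random.uniform(-0.01, 0.0) for _ in range(2)] for _ in range(N)]

    for day in range(DAYS):
        choices = [
            random.randrange(2) if random.random() < EPS     # explore
            else max((0, 1), key=lambda a: s[a])             # exploit own estimate
            for s in scores
        ]
        load = [choices.count(a) / N for a in (0, 1)]
        for s, a in zip(scores, choices):
            s[a] += ALPHA * (-load[a] - s[a])   # smooth one's own experience only

    print(f"final split: {load[0]:.2f} / {load[1]:.2f}")

Whatever split emerges does so from self-interested updates alone; there is no global objective against which one could measure it.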

Finally, the chapter has described some examples in which smarter agents lead to larger instabilities. Such examples seem to be generic, also outside the area of transportation. Care therefore needs to be taken not to make simulations, and reality, more unstable by adding more information.
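
The flavor of such instabilities fits into a few lines (again a toy model, not the chapter's simulation). Reusing the two-route costs from the game-theory sketch above, let mu be the share of agents that reacts to yesterday's information each day; sweeping mu shows the fluctuations growing until, at mu = 1, the population flip-flops between the two routes every day.

    # Toy model of "more reactive agents lead to larger instability".
    # Costs as in the earlier sketch: c_A(x) = x, c_B(x) = 2*(1 - x).

    def amplitude(mu, days=400, keep=50):
        """Size of the late-time fluctuations of the route split."""
        x, tail = 0.2, []
        for t in range(days):
            br = 1.0 if x < 2.0 * (1.0 - x) else 0.0   # yesterday's better route
            x = (1.0 - mu) * x + mu * br               # share mu reacts to it
            if t >= days - keep:
                tail.append(x)
        return max(tail) - min(tail)

    for mu in (0.05, 0.2, 0.5, 1.0):
        print(f"reacting share {mu:.2f}: fluctuation amplitude {amplitude(mu):.3f}")

The amplitude grows monotonically with mu: the better informed and more reactive the agents, the larger the swings, up to the extreme case where everybody switches to yesterday's best route at once.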

