
Mental Layer and Learning


How learning works (Mental modules and their interaction)

Every agent is created individually and treated microscopically. This means that we can assign demographic data to every agent: an agent knows, for example, its age, its physical fitness, etc. In addition, it has an expectation of what it wants to experience and what it likes most. This expectation can be static, derived from the demographic data, or based on previous visits to this (or even another) hiking area.
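A minimal sketch of how such per-agent data might be represented is given below. It is written in Python for illustration only, and the attribute names are assumptions rather than the actual implementation.

    from dataclasses import dataclass, field

    # Illustrative per-agent record; attribute names are hypothetical.
    @dataclass
    class Agent:
        agent_id: int
        age: int                                          # demographic data
        fitness: float                                    # physical fitness, e.g. 0.0 (low) to 1.0 (high)
        expectations: dict = field(default_factory=dict)  # e.g. {"variety": 0.8, "quietness": 0.5}
        memory: list = field(default_factory=list)        # experiences from previous iterations or visits
        plan: list = field(default_factory=list)          # current activity chain (see below)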

Initially, every agent starts with a plan that, in its opinion, fulfills its expectations. For example, if the period of interest is a day, then such an initial plan might correspond to a specific hike. To construct it, the agent chooses activity locations it wants to visit, such as a hotel, a mountain peak, a restaurant, etc.
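In code, such a plan could simply be an ordered chain of activity locations. The example below is illustrative only and reuses the hypothetical Agent class from above.

    # A plan as an ordered chain of activity locations (illustrative).
    agent = Agent(agent_id=1, age=34, fitness=0.7,
                  expectations={"variety": 0.8, "quietness": 0.5})
    agent.plan = [
        {"activity": "start",  "location": "hotel"},
        {"activity": "hike",   "location": "mountain_peak"},
        {"activity": "lunch",  "location": "restaurant"},
        {"activity": "return", "location": "hotel"},
    ]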

This chain of activity locations is then handed to the routing module, which calculates the routes between activities from the information available. This information can be static and global, such as shortest-path information based on the street network graph, but information that is local to the agent's memory, and possibly uncertain, can be used as well.
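As a concrete illustration of the static, global case, the sketch below computes a shortest path on a small street-network graph with Dijkstra's algorithm. The graph and the travel times are invented for the example; the project's actual routing module is not shown.

    import heapq

    def shortest_path(graph, start, goal):
        """Dijkstra's algorithm on a dict-of-dicts graph {node: {neighbor: cost}}."""
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost in graph.get(node, {}).items():
                if neighbor not in visited:
                    heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
        return float("inf"), []

    # Toy street network: travel times in minutes between activity locations.
    network = {
        "hotel": {"junction": 10},
        "junction": {"mountain_peak": 45, "restaurant": 20},
        "restaurant": {"mountain_peak": 30},
        "mountain_peak": {},
    }
    print(shortest_path(network, "hotel", "mountain_peak"))
    # -> (55.0, ['hotel', 'junction', 'mountain_peak'])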

The mobility simulation then executes the routes. The agent experiences the environment and sends its perceptions as events (see below) to the other modules.
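Such perceptions can be thought of as small, self-describing event messages. The example below is hypothetical; the field names are assumptions rather than the project's actual event format.

    # Hypothetical perception event sent from the mobility simulation
    # to the other modules; field names are illustrative only.
    event = {
        "type": "link_entered",
        "agent_id": 1,
        "link_id": "trail_17",
        "time": 9 * 3600 + 15 * 60,   # 09:15, in seconds since midnight
        "crowding": 0.3,              # perceived density on the link
        "view_score": 0.7,            # score from the view module (see below)
    }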

From here on, the system enters the replanning, or learning, loop. The idea, as mentioned before, is that the agents go through the same period (e.g. a day) over and over again. During these iterations, they try to accumulate additional information and to improve their plans.
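A highly simplified sketch of this loop is shown below. The module functions are passed in as stand-ins for the routing module, the mobility simulation, and the replanning machinery described in this paper; their interfaces are assumptions made for the example.

    def run_iterations(agents, route_plan, simulate, evaluate, replan, num_iterations=100):
        """Replanning/learning loop: route, simulate, score, and replan repeatedly."""
        for _ in range(num_iterations):
            for agent in agents:
                agent.route = route_plan(agent.plan)       # routing module
            events = simulate(agents)                      # mobility simulation executes the routes
            for agent in agents:
                score = evaluate(agent, events)            # how well were the expectations met?
                agent.memory.append((agent.plan, score))   # remember the plan together with its score
                agent.plan = replan(agent)                 # keep, modify, or generate a new plan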

The two critical questions are (1) how to accumulate, store, and classify that information, and (2) how to come up with new plans. Both questions are related to (artificial) intelligence, and we are certainly far from answering them in their entirety. Nevertheless, our system contains the following elements which make it able to learn:

It should be noted that the distinctions between these modules are not sharp. For example, an agent database may run out of memory if it memorizes, as separate entities, plans that differ only in small details; the agent database might then have to start building a mental map of the world, at which point it becomes similar to the activity generation module described above.
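As one deliberately simplified illustration of the agent database idea, the sketch below stores a bounded number of plans per agent together with their scores and picks one of the better ones for the next iteration. The memory limit and the selection rule are assumptions made for the example, not the project's actual implementation.

    import random

    class AgentDatabase:
        """Keeps a small set of (plan, score) pairs per agent (illustrative)."""
        def __init__(self, max_plans=5):
            self.max_plans = max_plans
            self.plans = {}   # agent_id -> list of (plan, score)

        def remember(self, agent_id, plan, score):
            entries = self.plans.setdefault(agent_id, [])
            entries.append((plan, score))
            # Forget the worst-scored plan once the memory limit is exceeded.
            if len(entries) > self.max_plans:
                entries.remove(min(entries, key=lambda e: e[1]))

        def choose(self, agent_id):
            entries = self.plans[agent_id]
            # Mostly exploit the best-scored plan, sometimes explore another one.
            if random.random() < 0.9:
                return max(entries, key=lambda e: e[1])[0]
            return random.choice(entries)[0]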

As mentioned before, these aspects of the simulation concern the modeling of human intelligence, which is an unsolved (and maybe unsolvable) problem. Yet one should recognize that, while it is not possible to model individual people correctly, for our simulations it is sufficient to obtain correct distributions of behavior. Our approach should be considered a first step in that direction.


Modeling the Visual Landscape

As modeling the agents' reactions to the visual qualities of the landscape is a key part of our project, it is necessary to model what the individual agent ``sees'' and to interpret how what the agent sees matches its expectations. This concerns the ``scores'' mentioned above, which are needed both for the agent database and for the mental maps.

There have been many attempts to model visual quality using GIS-based approaches. These approaches distill the overall `attractiveness' of a particular place (usually modeled as a raster cell) into a single numerical factor, based on available GIS data. Such analyses tend to be highly specific to a particular question (such as suitability for camping; Meitner and Daniel, 1997), and while they are useful for classifying huge areas of seldom-visited land, their coarseness makes them less appropriate for modeling smaller-scale landscapes such as our test case. In particular, the fact that the existing models assign values to specific places, rather than to sequential experiences, means that they cannot easily model concepts such as ``landscape variety'', which, as mentioned above, was identified as one of the key points in attracting tourists to the Swiss Alps.

Rather than using a single visual quality model, our approach is to give the agents the ability to ``see'' the landscape and to integrate their visual experience into the factors evaluated by the agent database modules. This allows us to model sequences of views and provides considerable flexibility in exploring the importance of various visual parameters.
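As an illustration of scoring sequences of views rather than individual places, the sketch below computes a crude ``variety'' score from the views an agent collects along its route; the scoring rule is an assumption made for the example.

    def variety_score(views):
        """Score a sequence of views by how much the set of visible
        object classes changes from one view to the next."""
        if len(views) < 2:
            return 0.0
        changes = 0
        for previous, current in zip(views, views[1:]):
            # Symmetric difference: objects that appeared or disappeared.
            changes += len(set(previous) ^ set(current))
        return changes / (len(views) - 1)

    # Each view is the set of visible object classes at one time step.
    route_views = [{"forest", "trail"}, {"forest", "lake"}, {"peak", "lake", "meadow"}]
    print(variety_score(route_views))   # -> 2.5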

The view module exploits the capabilities of modern 3D graphics hardware to perform visibility calculations quickly. Using a technique similar to that described by Bishop et al. (2001), objects are rendered in perspective using false colors. These colors are assigned either to unique objects or to logical groupings (such as stands of trees). As the agents move through the landscape, the scene is rendered from the viewpoint of each individual agent. The rendering process produces a color image and a depth buffer; the latter is a natural byproduct of the rendering process used in current graphics hardware and describes how far away each object is from the viewpoint. As these visibility calculations are performed on specialized hardware, the process scales very well to very complex scenes with little effort from the user's perspective.
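To make this concrete, the sketch below analyzes the output of such a false-color rendering pass. The rendering itself (done on the graphics hardware) is not shown; the small arrays stand in for the color and depth buffers that would be read back from it.

    import numpy as np

    def visible_objects(false_color_image, depth_buffer):
        """Given a false-color image (H x W, one object id per pixel) and the
        corresponding depth buffer (H x W, distance per pixel), return for each
        visible object its pixel count and its nearest distance."""
        result = {}
        for object_id in np.unique(false_color_image):
            mask = false_color_image == object_id
            result[int(object_id)] = {
                "pixels": int(mask.sum()),                   # apparent size in the view
                "distance": float(depth_buffer[mask].min()),  # closest visible point
            }
        return result

    # Tiny stand-in buffers: object ids 0 (sky), 1 (tree stand), 2 (peak).
    colors = np.array([[0, 0, 2], [1, 1, 2], [1, 1, 2]])
    depths = np.array([[1e6, 1e6, 900.0], [40.0, 42.0, 880.0], [38.0, 41.0, 860.0]])
    print(visible_objects(colors, depths))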

While this process is considerably faster than other visibility approaches, it is still not quick enough for our purposes. Even at a frame rate of 15 or more frames per second, it quickly becomes the bottleneck of the entire simulation system.

We are exploring two different approaches to eliminate this bottleneck:

Both of these approaches offer considerable potential for speed improvements, and both are facilitated by the modular structure and the communication strategies of the entire simulation system. For example, it is very easy to swap the module that calculates the views for every agent at every time step with a pre-rendered implementation.
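At the code level, such a swap might look like the sketch below, in which both view modules expose the same interface so that the rest of the system does not need to change. The interface and the grid-based lookup are assumptions made for illustration; the paper does not specify them.

    class OnlineViewModule:
        """Renders the view for an agent's position at every time step (slow path)."""
        def __init__(self, renderer):
            self.renderer = renderer
        def view(self, position):
            return self.renderer.render(position)

    class PrerenderedViewModule:
        """Looks up views rendered ahead of time for a grid of locations (fast path)."""
        def __init__(self, precomputed_views, cell_size=10.0):
            self.precomputed_views = precomputed_views   # (cell_x, cell_y) -> view description
            self.cell_size = cell_size
        def view(self, position):
            x, y = position
            cell = (round(x / self.cell_size), round(y / self.cell_size))
            return self.precomputed_views[cell]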

