
Uniqueness

[[Need to check this again, since I think this is actually even stronger: there is only ONE local minimum, which is therefore also the global one. --???]]

One of the major advantages of static assignment is that, under certain conditions, it has one unique solution. This means that regardless of the solution method, all solutions are the same. This is vastly different from our simulation approach; non-uniqueness is certainly one of the big drawbacks of simulation that we have to consider in our work.

Sufficient conditions for uniqueness of Static Assignment are:

strict convexity of $z(\vec q)$,

together with

convexity of the feasible region.

These conditions are not minimal, but they are normally used in practice. They will be described in more detail in the following.

Convexity of $z(\vec q)$

Strict convexity of $z(\vec q)$ means, intuitively, that it is ``bent'' (curved) upwards everywhere. In one dimension, this would be ensured by having a second derivative that is $> 0$ everywhere. In higher dimensions, it is ensured by having a Hessian ($=$ matrix of second derivatives) that is positive definite. A matrix $H$ is positive definite if $\vec v \cdot H \vec v > 0$ for all $\vec v \ne \vec 0$ - this is just the higher dimensional version of ``second derivative $> 0$ everywhere''.
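The definiteness test can be checked numerically. The following sketch (the matrix $H$ is an illustrative example, not from the text) verifies positive definiteness both via the eigenvalues and via the $\vec v \cdot H \vec v > 0$ criterion:

```python
import numpy as np

# Illustrative 2x2 Hessian, e.g. of z(q1, q2) = q1^2 + q1*q2 + q2^2:
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A symmetric matrix is positive definite iff all eigenvalues are > 0.
print(np.linalg.eigvalsh(H))  # -> [1. 3.], both positive

# Equivalent check: v . H v > 0 for (randomly sampled) nonzero vectors v.
rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.normal(size=2)
    assert v @ H @ v > 0
```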

For an unconstrained problem, the intuitive interpretation is as follows: Assume there is one location $\vec q^*$ where $\nabla z \,
(\vec q^*) \equiv \vec 0$, which is therefore a candidate for an optimum. Now if $z(\vec q)$ is curved upwards everywhere, then this candidate is a local minimum, and there cannot be a second place where $\nabla z(\vec q) \equiv \vec 0$.

For constrained optimization, one has in addition to make sure that the boundaries cooperate. This is indeed achieved by the convexity of the feasible region, see Sec. 28.5.2.

For Static Assignment, it is possible to simplify the condition of a positive definite Hessian. The calculus for this is a bit tricky, but workable. The result is that the statement

\begin{displaymath}
\hbox{$H$\ positive definite $\df$\ $z(\vec q)$\ strictly convex}
\end{displaymath}

can be replaced by

\begin{displaymath}
\hbox{$\forall a$: $\displaystyle \frac{\partial t_a(q_a)}{\partial q_a} > 0$\ $\df$\ $z(\vec q)$\ strictly
convex.}
\end{displaymath}

So what we need is that link travel time increases strictly monotonically with link flow. Given the assumptions that we have already made, this condition is easy to accept.
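This can be illustrated numerically with a BPR-type link travel time function (the free-flow time and capacity below are illustrative assumptions, not from the text). Since the objective contains, per link, a term $z_a(q_a) = \int_0^{q_a} t_a(s)\,ds$, we have $z_a''(q_a) = t_a'(q_a)$, so a strictly increasing $t_a$ yields a strictly convex $z_a$:

```python
import numpy as np

# Illustrative BPR-type link travel time (free-flow time 1.0, capacity 100):
def t(q):
    return 1.0 * (1.0 + 0.15 * (q / 100.0) ** 4)

q = np.linspace(0.0, 200.0, 401)
assert np.all(np.diff(t(q)) > 0)   # travel time strictly increasing on the grid

# z_a(q_a) = int_0^{q_a} t(s) ds, approximated by the trapezoid rule:
z = np.concatenate(([0.0],
                    np.cumsum(0.5 * (t(q)[1:] + t(q)[:-1]) * np.diff(q))))

# Discrete second derivative of z is positive -> z is (strictly) convex:
assert np.all(np.diff(z, 2) > 0)
```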

One has to note that the above will prove convexity of $z(\vec q)$ with respect to the link flows $q_a$, not with respect to the path flows $f^{rs,p}$. And indeed, the solution is unique with respect to the link flows, but not with respect to the path flows.
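The non-uniqueness of the path flows can be seen in a toy example (the network, paths, and demand below are illustrative assumptions, not from the text). Take one OD pair with demand $Q = 10$, an origin connected to an intermediate node by links 1 and 2, and the intermediate node connected to the destination by links 3 and 4, giving four paths $\{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}$. Two different path-flow vectors then produce identical link flows:

```python
import numpy as np

# Link-path incidence matrix (rows = links 1..4, columns = paths):
#   p1 = {1,3}, p2 = {1,4}, p3 = {2,3}, p4 = {2,4}
A = np.array([[1, 1, 0, 0],   # link 1
              [0, 0, 1, 1],   # link 2
              [1, 0, 1, 0],   # link 3
              [0, 1, 0, 1]])  # link 4

f1 = np.array([5.0, 0.0, 0.0, 5.0])   # one feasible path-flow vector
f2 = np.array([0.0, 5.0, 5.0, 0.0])   # a different feasible path-flow vector

assert f1.sum() == f2.sum() == 10.0    # both satisfy the OD demand
assert not np.array_equal(f1, f2)      # the path flows differ ...
assert np.array_equal(A @ f1, A @ f2)  # ... but the link flows coincide
print(A @ f1)   # -> [5. 5. 5. 5.]
```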


Convexity of the feasible region

The feasible region is the set of all solutions which fulfill the constraints. That is, all path flows which fulfill the OD matrix.

Convexity of the feasible region means that any convex combination of feasible solutions is again feasible. A convex combination is a normalized linear combination: If $X_1$ and $X_2$ are both feasible, then

\begin{displaymath}
X_3 := \alpha \, X_1 + (1-\alpha) \, X_2
\end{displaymath} (28.9)

should also be feasible (for any $0 \le \alpha \le 1$).

[[Intuitively, convexity would also avoid ``local optima at the boundaries'' (see Fig.). This would guarantee convergence of any hill-climbing method. --???]]

$f^{rs,p} \ge 0$ together with $\sum_p f^{rs,p} = Q^{rs}$ will always result in a convex feasible region, since both the non-negativity constraints and the linear equality constraints are preserved under convex combinations.
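This is easy to verify numerically (the demand and path-flow vectors below are illustrative assumptions): non-negativity survives because $\alpha$ and $1-\alpha$ are both non-negative, and the OD constraint survives because $\alpha\,Q^{rs} + (1-\alpha)\,Q^{rs} = Q^{rs}$.

```python
import numpy as np

# Illustrative example: one OD pair with demand Q = 10 and three paths.
Q = 10.0
f1 = np.array([6.0, 4.0, 0.0])
f2 = np.array([0.0, 5.0, 5.0])
assert f1.min() >= 0 and np.isclose(f1.sum(), Q)   # f1 is feasible
assert f2.min() >= 0 and np.isclose(f2.sum(), Q)   # f2 is feasible

# Every convex combination f3 = alpha*f1 + (1-alpha)*f2 is again feasible:
for alpha in np.linspace(0.0, 1.0, 11):
    f3 = alpha * f1 + (1.0 - alpha) * f2
    assert f3.min() >= 0.0            # non-negativity preserved
    assert np.isclose(f3.sum(), Q)    # OD constraint preserved
```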


2004-02-02