
UNCERTAINTY IN CONTINUOUS SYSTEM SIMULATION

GENERAL REMARKS
The lack of reliable data is an important obstacle in many computer simulation projects. Models that are sound and valid from the academic point of view often turn out to be useless in practical applications when the user cannot provide reliable data. In the past, a common way to deal with this lack of exact data was to treat some model parameters or input variables as random. This results in a stochastic model, where every realization of the system trajectory is different, and the problem is rather to determine the probability density function in the system space at certain time sections, the variance, confidence intervals, etc.
Such stochastic analysis is interesting but not always possible. The common problem is the lack of data: some parameters of the model have "uncertain" values, and the model user has no idea about their probabilistic behavior. More often we are given an interval the uncertain parameter belongs to, instead of its probability distribution or sampled real data. Some external disturbances can fluctuate within certain intervals, and what we are asked for is the interval for some output variables. Stochastic simulations with randomized variables do not give such intervals. Moreover, the user frequently wants to know the possible extreme values rather than the probability of reaching them (recall Murphy's law!). The treatment of uncertainty has nothing, or very little, to do with "Monte Carlo" or stochastic simulation. The intervals we are looking for are not confidence intervals or any other statistics.
There is no room here to mention the huge number of publications on uncertainty problems. See Bargiela and Hainsworth (1989) or Bargiela (1993) for examples of uncertainty management in water distribution systems.
MATHEMATICAL MODELING TOOL
Most simulationists who deal with continuous systems use ordinary or partial differential equations as their basic tool. My point is that ordinary differential equations (ODEs) and partial differential equations are too primitive a tool for many system models except, perhaps, simple mechanisms, circuits and very academic examples. The fact that they have been used for decades does not mean that everything can be modeled with them. Let us consider a simple example of a second-order system
(1) 
d^{2}y/dt^{2} + a dy/dt + y = b 
This is a simple ODE model. Introducing the notation x_{1} = y, x_{2} = dy/dt, we obtain
(2)
dx_{1}/dt = x_{2}
dx_{2}/dt = b - ax_{2} - x_{1}
In more general notation
the state equation is
(3)
dx/dt = f(a,b,x)
where x is a two-dimensional vector, t is the time and f is a vector-valued function.
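As a concrete illustration, the state equations (2) can be integrated numerically. The sketch below uses a hand-rolled fourth-order Runge-Kutta step; the parameter values a = 0.5, b = 1.0, the initial state and the horizon are assumptions chosen for illustration, not values taken from the text.

```python
# Integrate the state-space form (2): x1' = x2, x2' = b - a*x2 - x1,
# using a fixed-step classical RK4 scheme (parameter values are assumed).

def f(x, a, b):
    x1, x2 = x
    return (x2, b - a * x2 - x1)

def rk4_step(x, h, a, b):
    k1 = f(x, a, b)
    k2 = f((x[0] + h/2*k1[0], x[1] + h/2*k1[1]), a, b)
    k3 = f((x[0] + h/2*k2[0], x[1] + h/2*k2[1]), a, b)
    k4 = f((x[0] + h*k3[0], x[1] + h*k3[1]), a, b)
    return (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def simulate(a=0.5, b=1.0, x0=(0.0, 0.0), t_end=20.0, h=0.01):
    x = x0
    for _ in range(int(t_end / h)):
        x = rk4_step(x, h, a, b)
    return x

x_end = simulate()  # the damped system settles at the equilibrium (b, 0)
```

For a positive damping coefficient a the trajectory converges to the equilibrium y = b, dy/dt = 0, which is the single, exact answer an ODE model gives once a and b are fixed.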
Now suppose that the parameters a and b are uncertain, and that the only information we have is the corresponding intervals their values may belong to, or a permissible irregular set in the plane to which the point (a,b) must belong. Note that we know nothing about any probability distribution of these parameters and we do not treat them as random variables. Thus, equation (3) takes the following form.
(4)
dx/dt belongs to F(x,t)
where F is a set. (I type "belongs to" instead of the corresponding symbol because not all browsers can show it.) What we have obtained is a differential inclusion (DI). The right-hand side of equation (2) determines the set F. However, this is merely one of several possible ways to represent the set; in this case it is parameterized by a and b.
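To make the set-valued right-hand side concrete: for system (2) with interval-bounded a and b, each fixed state x yields a whole interval of possible values of dx_{2}/dt, i.e. one slice of the set F in (4). Because the right-hand side is affine in a and b, its extremes over the parameter box occur at the box corners. The interval endpoints and the sample state below are illustrative assumptions.

```python
# One slice of the set F in inclusion (4): the range of
# dx2/dt = b - a*x2 - x1 over a in [a_lo, a_hi], b in [b_lo, b_hi].

def f2_interval(x1, x2, a_lo, a_hi, b_lo, b_hi):
    # Affine in (a, b), so the extremes are attained at the box corners.
    candidates = [b - a * x2 - x1
                  for a in (a_lo, a_hi)
                  for b in (b_lo, b_hi)]
    return min(candidates), max(candidates)

lo, hi = f2_interval(x1=1.0, x2=2.0, a_lo=0.2, a_hi=0.6, b_lo=0.5, b_hi=1.5)
```

At the state (1, 2) with these assumed intervals the admissible velocities dx_{2}/dt fill the interval [-1.7, 0.1]; the inclusion (4) constrains the derivative to such a set at every point.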
The solution to a DI is the reachable set for the possible system trajectories, which is exactly the solution to our uncertainty problem. In this very natural way we see that uncertainty in dynamic system modeling leads to differential inclusions as the corresponding mathematical tool. It is strange that, with so much being done in uncertainty modeling, very few simulationists use DIs as a mathematical tool. Note that this tool has been known for about 70 years and that there is a wide literature available on DI theory and applications. The first works were published in 1931-32 by Marchaud and Zaremba, who used the terms "contingent" or "paratingent" equations. Later, in 1960-70, T. Wazewski and his collaborators published a series of works referring to DIs as orientor conditions and orientor fields. As always occurs with new theories, their works received severe criticism, mainly from some physicists who claimed that it was a stupid waste of time to deal with such abstract and useless theories. Fortunately, the authors did not abandon the idea and developed the elementary theory of differential inclusions. In the decade 1930-40 such problems as the existence and properties of solutions to DIs were resolved in finite-dimensional space. After this, many works appeared on DIs in more abstract, infinite-dimensional spaces. Within a few years of the first publications, DIs proved to be a basic tool in optimal control theory. Recall that the optimal trajectories of a dynamic system are those that lie on the boundary of the system's reachable set. In the works of Pontryagin, Markus and Lee, Bellman and many others, one of the fundamental problems is the properties of reachable sets. Using the theory of Marchaud and Zaremba, T. Wazewski pointed out that in many optimal control problems the resulting control strategy is the so-called bang-bang control, generated by switching controllers.
A differential inclusion is a generalization of an ordinary differential equation. In fact, an ODE is a special case of a DI, where the right-hand side F is a one-point set. One could expect that a solution algorithm for a DI might be obtained as an extension of known algorithms for ODEs. Unfortunately, this is not the case. First of all, the solution to a DI is a set. Namely, it is a set in the time-state space in which the graphs of all possible trajectories of the DI are included. Finding the boundary of this set (called the reachable set, or the emission zone in the works of Zaremba and Wazewski) is not an easy task.
Observe that the reachable set (RS) Z is not the set of all model trajectories. It is a set of points in the time-state space such that the graph of any model trajectory x_{k}(t) belongs to Z. An important concept introduced by Wazewski is the quasitrajectory of a DI. A quasitrajectory is a weak solution to a DI; roughly speaking, it is a function that is the limit of a sequence of trajectories. A quasitrajectory may or may not itself be a solution of the DI. An important fact is that the set Z becomes closed when quasitrajectories are considered instead of trajectories. Under some regularity assumptions about the modeled system, the sets of trajectories and quasitrajectories coincide. I will not discuss more theoretical details about DIs here. A more extended survey can be found in one of my articles on DIs in simulation.
One of the properties of reachable sets is the fact that if a trajectory reaches a point on the boundary of the RS at the final time, then its entire graph must belong to the boundary of the RS. This fact is well known and used in optimal control theory. Observe that any trajectory that reaches a point on the boundary of the RS is optimal in some sense. Such trajectories can be calculated using several methods, the main one being the Maximum Principle of Pontryagin. This can be used to construct an algorithm for RS determination. If we can calculate a sufficient number of trajectories that scan the RS boundary, then we can see its shape. The trajectories should be uniformly distributed over the RS boundary. This can be done by some kind of random shooting over the RS boundary. Such shooting has nothing to do with simple random shooting, where the trajectories are generated randomly inside the RS. It will be shown later that such simple shooting, even with a large number of trajectories, is useless for determining the RS boundary.
My first attempt to develop a DI solver was presented at the IFAC Symposium on Optimization Methods, Varna, 1974. This was a random shooting method, but not simple shooting: the algorithm generated trajectories inside the RS, but the control variable was modified to obtain the most uniform possible distribution of points inside the RS at the end of the simulated time interval. The method was applied to some uncertainty problems, for example to simulate the growth of the number of computer science specialists in the former "Socialist Countries" block (a project sponsored by COMECON, Moscow). However, this algorithm turned out to be highly inefficient in other practical applications. The DI solver presented in the Transactions of the SCS in 1996 is much more effective. Here you can find a summary of the algorithm, and a summary of the new version of the solver for Windows, with improved graphics.
In a few words, the DI solver works as follows. The user provides the DI in the form of an equivalent control system. To do this he/she must parametrize the right-hand side (the set F) using an m-dimensional auxiliary variable u. For example, if the set F is a rectangle in 2D space whose width is proportional to x1, whose height is proportional to x2 and whose elevation (in the x2 direction) is proportional to the time, then the parametrization can be as follows.
F(x,u,t) = {y: y_{1}=au_{1}x_{1} , y_{2}=bu_{2}x_{2}+ct , u_{1} and u_{2} belong to [0,1] }
where a, b and c are constant parameters. The above function F maps the unit square onto the required right-hand set.
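This mapping can be written down directly as code. The sketch below is only an illustration of the parametrization idea; the constants a, b, c and the sample state are assumed values, not taken from the text.

```python
# Parametrization of the rectangular set F from the text:
# y1 = a*u1*x1, y2 = b*u2*x2 + c*t, with (u1, u2) in the unit square.
# Constants and the sample state are illustrative assumptions.

def F_point(x1, x2, u1, u2, t, a=1.0, b=1.0, c=0.5):
    """Image of the parameter point (u1, u2) in the set F(x, t)."""
    return (a * u1 * x1, b * u2 * x2 + c * t)

# The four corners of the unit square map to the corners of the rectangle:
corners = [F_point(2.0, 3.0, u1, u2, t=1.0)
           for u1 in (0.0, 1.0) for u2 in (0.0, 1.0)]
```

At the state (2, 3) and t = 1 the image rectangle has corners (0, 0.5), (0, 3.5), (2, 0.5) and (2, 3.5): sweeping u over the unit square sweeps y over the whole right-hand set.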
Now, the DI takes the form
dx/dt = f(x,u,t)
where f_{1}=au_{1}x_{1} , f_{2}=bu_{2}x_{2}+ct. The DI is defined by the right-hand sides of this control system. The rest is done automatically by the solver. It determines the equations of the so-called conjugate vector p and integrates a set of trajectories, each of them belonging to the boundary of the RS. Along each trajectory the Hamiltonian H(x,p,u,t) is maximized. This procedure is similar to that of dynamic optimization. In the optimal control problem the main difficulty lies in the boundary conditions for the state and conjugate vectors: for the state vector the initial condition is defined, while for the conjugate vector only the final conditions (at the end of the trajectory) are known, given by the transversality conditions. This means that the optimal control algorithm must solve the corresponding two-point boundary-value problem, which is more difficult than a simple ODE integration and is normally done using an iterative process. In the case of a DI we are in a better situation. There is no objective function and no transversality conditions. As a consequence, for the vector p we can define the initial as well as the final conditions; either way we obtain a trajectory whose graph belongs to the RS boundary. Defining initial conditions for p, we integrate each trajectory only once, going forward. The only problem is how to generate these initial conditions in order to scan the RS boundary with uniform density. The algorithm is quite simple: the initial conditions for p are generated randomly, according to a density function that changes so as to place more points where the corresponding trajectories fall into low-density regions of the RS boundary. Trajectories that are very close to each other are not stored (only one from each eventual cluster is kept). As a result we obtain a set of trajectories covering the RS boundary, which can be observed in graphical form and processed.
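The loop just described can be sketched for this particular control system. The code below is a simplified illustration, not the solver itself: it spreads the initial conjugate vectors uniformly over the unit circle rather than adapting their density, integrates x and p forward with plain Euler steps, and at each step chooses u to maximize the Hamiltonian H = p_{1}au_{1}x_{1} + p_{2}(bu_{2}x_{2}+ct), which is linear in u and therefore maximized at 0 or 1 (bang-bang). The constants, initial state and horizon are illustrative assumptions.

```python
import math

# Conjugate equations dp/dt = -dH/dx for f1 = a*u1*x1, f2 = b*u2*x2 + c*t:
#   dp1/dt = -p1*a*u1,   dp2/dt = -p2*b*u2.
# H is linear in u1, u2, so the maximizing control is 0 or 1 (bang-bang).

def boundary_trajectory(theta, a=1.0, b=1.0, c=0.5,
                        x0=(1.0, 1.0), t_end=1.0, h=0.001):
    x1, x2 = x0
    p1, p2 = math.cos(theta), math.sin(theta)  # initial conjugate vector
    t = 0.0
    for _ in range(int(t_end / h)):
        u1 = 1.0 if p1 * a * x1 > 0 else 0.0   # maximize H over u1
        u2 = 1.0 if p2 * b * x2 > 0 else 0.0   # maximize H over u2
        x1, x2 = x1 + h * a * u1 * x1, x2 + h * (b * u2 * x2 + c * t)
        p1, p2 = p1 - h * p1 * a * u1, p2 - h * p2 * b * u2
        t += h
    return (x1, x2)

# Scan the RS boundary with initial p-vectors spread over the unit circle:
ends = [boundary_trajectory(2 * math.pi * k / 8) for k in range(8)]
```

In this particular system the sign of each p-component never changes, so every initial p selects one constant bang-bang control and the eight trajectories collapse onto the four corner trajectories of the reachable set; in less degenerate systems the maximizing control switches along the trajectory, which is exactly what the solver computes.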
EXAMPLE
Consider a second-order dynamic system in which the acceleration as well as the damping coefficient is uncertain. The corresponding DI in parametrized form can be written as follows.
dx_{1}/dt = x_{2}
dx_{2}/dt = u_{1} - u_{2}x_{2} - x_{1}
where -1 < u_{1} < 1 and 0.5 < u_{2} < 1.5. The following figure shows the 3D image of the RS in coordinates x_{1} (upwards), x_{2} (to the right) and t.
Below you can see the time section of the RS for time equal to 6. Note the small blue cluster of trajectories at the origin of the coordinate system. These are trajectories (10000 in total) obtained by simple shooting, where both controls were generated randomly within the limits defined above. The white pixels are the end points (400 in total) of the trajectories generated by the DI solver. This demonstrates how useless the simple shooting method is for solving DIs.
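The contrast described above can be reproduced in a few lines. The sketch below applies naive random shooting to the example system, redrawing both controls at random on every Euler step; the endpoints then cluster near the origin instead of reaching the boundary of the reachable set. The step size, horizon and trajectory count are illustrative assumptions.

```python
import math, random

# Simple random shooting for dx1/dt = x2, dx2/dt = u1 - u2*x2 - x1
# with u1 in [-1, 1], u2 in [0.5, 1.5]: controls redrawn on each step.

def random_trajectory(rng, t_end=6.0, h=0.01):
    x1, x2 = 0.0, 0.0
    for _ in range(int(t_end / h)):
        u1 = rng.uniform(-1.0, 1.0)
        u2 = rng.uniform(0.5, 1.5)
        x1, x2 = x1 + h * x2, x2 + h * (u1 - u2 * x2 - x1)
    return (x1, x2)

rng = random.Random(0)
ends = [random_trajectory(rng) for _ in range(500)]
mean_radius = sum(math.hypot(*e) for e in ends) / len(ends)
# The rapidly switching random forcing averages out to nearly zero, so
# the endpoints stay close to the origin and never scan the RS boundary.
```

A boundary trajectory of the same system (for example, the constant extreme control u1 = 1, u2 = 0.5) ends far outside this cluster, which is why simple shooting, even with many trajectories, cannot reveal the shape of the RS.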
Stanislaw Raczynski
P.O.Box 22783
14000 Mexico D.F.Mexico
http://www.raczynski.com
Email : stanracz@netservice.com.mx
REFERENCES
Raczynski S., Differential Inclusions in System Simulation, Transactions of the SCS, March 1996, vol. 13 (1), pp. 47-54.
Zaremba S.K., Sur les équations au paratingent, Bull. Sci. Math., 60, Warszawa 1936.
Wazewski T., Sur un système de commande dont les trajectoires coïncident avec les quasitrajectoires du système de commande donné, Bulletin de l'Académie Polonaise des Sciences, Sér. Sci. Math., Astr., Phys., vol. 11, no. 3, 1963.
Bargiela A., Hainsworth G., Pressure and flow uncertainty in water systems, ASCE Journal of Water Resources Planning and Management, vol. 115, no. 2, March 1989, pp. 212-229.
Bargiela A., Managing uncertainty in operational control of water distribution systems, in: Integrated Computer Applications, vol. 2, B. Coulbeck (Ed.), J. Wiley, 1993, ISBN 0-471-94232-2, pp. 353-363.
Pontryagin L.S., Boltyanskii V.G., Gamkrelidze R.V., Mishchenko E.F., Mathematical Theory of Optimal Processes, John Wiley and Sons, New York, London, Sydney, 1967.
Raczynski S., On the determination of reachable sets and optimal control by the random method, IFAC Symposium "Optimization Methods", Varna, Bulgaria, 1974.