The Dark Side of the Force: When computer simulations lead us astray and "model think" narrows our imagination
- Preconference draft, Models and Simulations, Paris, June 12-14 -

Eckhart Arnold

1 Introduction
2 Different aims of computer simulations in science
3 Criteria for “explanatory” simulations
4 Simulations that fail to explain
5 Conclusions
Bibliography

3 Criteria for “explanatory” simulations

But in what sense can a computer simulation be explanatory? And what are the criteria a computer simulation must meet in order to be explanatory?

A computer simulation can be called explanatory if it adequately models some empirical situation and if the results of the computer simulation (the simulation results) coincide with the outcome of the modeled empirical process (the empirical results). If this is the case, we can conclude that the empirical results have been caused by the very factors (or, more precisely, by the empirical correspondents of those factors) that have brought about the simulation results in the computer simulation.

To take an example, let us say we have a game theoretic computer simulation of the repeated prisoner's dilemma in which, under certain specified conditions, the strategy “tit for tat” emerges as the clear winner. Assume further that we know of an empirical situation that closely resembles the repeated prisoner's dilemma with exactly the same conditions as in our simulation. (Probably the only way to bring this about would be to conduct a game theoretic experiment, where the conditions can be closely monitored.) And let us finally assume that in the empirical situation, too, the “tit for tat” strategy emerges as the most successful strategy. Then we are entitled to conclude that “tit for tat” was successful in the empirical case, because the situation was a prisoner's dilemma with such and such boundary conditions and because - as the computer simulation shows - “tit for tat” is a winning strategy in repeated prisoner's dilemma situations under the respective conditions.
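To make the setting concrete, here is a minimal sketch of such a repeated-game tournament in Python. The strategy pool and the payoff values (the standard 5/3/1/0 scheme) are illustrative assumptions, not part of the argument:

    # Standard prisoner's dilemma payoffs (an illustrative assumption):
    # mutual cooperation 3/3, mutual defection 1/1, lone defector 5/0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_moves):
        # Cooperate first, then copy the opponent's previous move.
        return opponent_moves[-1] if opponent_moves else 'C'

    def grim_trigger(opponent_moves):
        # Cooperate until the opponent defects once, then defect forever.
        return 'D' if 'D' in opponent_moves else 'C'

    def always_defect(opponent_moves):
        return 'D'

    def always_cooperate(opponent_moves):
        return 'C'

    def play_match(strat_a, strat_b, rounds=200):
        """Play one repeated game; return both players' total payoffs."""
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(moves_b), strat_b(moves_a)
            pay_a, pay_b = PAYOFF[(a, b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            moves_a.append(a)
            moves_b.append(b)
        return score_a, score_b

    def tournament(strategies, rounds=200):
        """Round-robin: every strategy meets every other strategy once."""
        totals = {s.__name__: 0 for s in strategies}
        for i, a in enumerate(strategies):
            for b in strategies[i + 1:]:
                score_a, score_b = play_match(a, b, rounds)
                totals[a.__name__] += score_a
                totals[b.__name__] += score_b
        return totals

    pool = [tit_for_tat, grim_trigger, always_defect, always_cooperate]
    for name, score in sorted(tournament(pool).items(), key=lambda kv: -kv[1]):
        print(f'{name:16s} {score}')

Notably, which strategy tops the table depends sensitively on the pool and the number of rounds - in this tiny pool the unconditional defector can even edge out “tit for tat” by exploiting the unconditional cooperator - which is precisely why the clause “exactly the same conditions” above carries so much weight.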

Now that we have seen how explanations by computer simulations work in principle, let us ask what criteria a computer simulation must fulfill in order to deserve the title of an explanatory simulation. The criteria should allow us to check whether the explanation is valid, that is, whether the coincidence of the results is due to the congruence of the operating factors (in the empirical situation and in the computer simulation) or whether it is merely accidental.

As criteria that a computer simulation must meet in order to be an explanatory model of an empirical process, I propose the following:

  1. Adequacy Requirement: All causally relevant factors of the modeled empirical process must be represented in the computer simulation.
     
  2. Stability Requirement: The input parameters of the simulation must be measurable with such accuracy that the simulation results are stable within the range of inaccuracy of measurement.

If both criteria are met, we can say that there exists a close fit between model and modeled reality. The claim I wish to defend is that only if there is a close fit between model and reality are we entitled to say that the model explains anything. Even though these criteria are very straightforward, a little discussion will be helpful for a better understanding.

Regarding the first criterion, it should be obvious that if not all causally relevant factors are included, then any congruence of simulation results and empirical results can at best be accidental. Two objections might be raised at this point: 1) If there really is a congruence of simulation results and empirical results, should that not allow us to conclude that the factors implemented in the computer simulation are indeed all the factors that are causally relevant? 2) If we use computer simulations as a research tool to find out what causes a certain empirical phenomenon, how are we to know beforehand what the causally relevant factors are, and how are we ever to find out, if reverse conclusions from the agreement of the results to the relevant causes are not allowed?

As to the first objection: If the simulation is used to generate empirical predictions and if the predictions come true, then this can - with some hesitation - indeed be taken as a hint that the simulation captures all relevant causes of the empirical process in question. The hesitation concerns the problem that even a predictively successful simulation can still be based on unrealistic assumptions. Sometimes the predictive success of a simulation can even be increased by sacrificing realism. Therefore, in order to find out whether the factors incorporated in the computer simulation are the causally relevant factors, we cannot rely on predictive success alone, but have to consult other sources as well, such as our scientific background knowledge about the process in question.

As to the second objection: If we have a simulation that predicts correctly, then we are - with the hesitation mentioned above - entitled to draw the reverse conclusion from the agreement of the results to the exclusive causal relevance of the incorporated factors or mechanisms. However, this is impermissible if the simulation does not generate predictions but is just meant to give an ex-post explanation. For, if we only try long enough, we are almost sure to find some computer simulation and some set of input parameters that matches a previously fixed set of output data. The task of finding such a simulation amounts to nothing more than finding an arbitrary algorithm that produces a given pattern. But then we will only accidentally have hit upon the true causes that were responsible for the results of the empirical process.[1]

Therefore, only if we make sure that all causally relevant factors are included in the simulation can we take it as an explanation. And usually we cannot assure this by relying on the agreement of simulation results and empirical results alone, without any further considerations. In sum: if the first criterion is not fulfilled, then the computer simulation does not explain anything.

The second criterion is even more straightforward. If the model is unstable, then we will not be able to check whether it is adequate. For, if it is not stable within the inevitable inaccuracies of measurement, it does not deliver one result but a range of different results. But then we cannot say for sure whether the empirical results are due to the factors the model captures. Imagine, for example, that we had a game theoretic model that tells us whether some actors will cooperate or not. Now assume we had some empirical process at hand in which we know that the actors cooperate, and we would like to know whether they do so for the very reasons the model suggests. In other words: We would like to know whether our model can explain why they cooperate. If the model is unstable, then - due to measurement inaccuracy - we do not know whether the empirical process falls within the range of input parameters for which the model predicts cooperation. Then there is no way to tell whether the actors in the empirical process cooperated for the reasons the model suggests or, quite the contrary, in spite of what the model would predict.
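One way to operationalize the stability requirement is a simple sensitivity sweep: run the simulation at every value the measured parameter could have, given the error bars, and check whether the qualitative verdict stays the same. The following sketch assumes a made-up threshold model standing in for whatever simulation is actually at issue:

    def simulate(payoff_ratio):
        """Hypothetical stand-in for the simulation under discussion:
        predicts cooperation whenever a (made-up) payoff ratio exceeds 2.0."""
        return 'cooperate' if payoff_ratio > 2.0 else 'defect'

    def stable_within_error(model, measured, error, steps=100):
        """Sweep the interval [measured - error, measured + error] and
        report whether the model's verdict is the same throughout."""
        lo, hi = measured - error, measured + error
        verdicts = {model(lo + (hi - lo) * i / steps) for i in range(steps + 1)}
        return len(verdicts) == 1, verdicts

    # A parameter measured as 2.1 +/- 0.2 straddles the model's tipping
    # point, so the verdict flips within the error bars: we cannot tell
    # whether the observed cooperation confirms or contradicts the model.
    print(stable_within_error(simulate, measured=2.1, error=0.2))
    # -> (False, {'cooperate', 'defect'})   (set order may vary)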

A special case of this problem of model stability and measurement inaccuracy occurs when we can only determine the ordinal relations (greater or smaller) of some empirical quantity but not its cardinal value - perhaps because, like the quantity of utility in economics, it does not by its very nature have a cardinal value - even though the simulation crucially depends on the cardinal value of the respective input parameter.[2] Briefly put, the moral of the second criterion is: If condition two is not met, we cannot know whether the computer simulation explains.
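A toy illustration of the ordinal problem, with entirely made-up numbers: consider a simulated agent choosing between a sure middle outcome and a 50/50 gamble on the extremes. Any order-preserving rescaling of the utilities is compatible with the same ordinal measurements, yet it can flip the simulated decision:

    def choice(u_low, u_mid, u_high):
        """Pick the sure middle outcome or a 50/50 gamble on the extremes,
        whichever has the higher expected utility."""
        sure, gamble = u_mid, 0.5 * (u_low + u_high)
        return 'sure thing' if sure > gamble else 'gamble'

    # Two utility assignments with the SAME ordinal ranking (low < mid < high);
    # cubing is strictly order-preserving, so ordinal measurement cannot
    # distinguish them - but the simulated decision flips.
    print(choice(1, 3, 4))                # -> 'sure thing'  (3 > 2.5)
    print(choice(1**3, 3**3, 4**3))       # -> 'gamble'      (27 < 32.5)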

In connection with the first criterion, the requirement of model stability (in relation to measurement inaccuracy) gives rise to a kind of dilemma. An obvious way to make a model more adequate is to include further parameters. Unfortunately, the more parameters a model includes, the harder it becomes to handle. Often, though not necessarily, a model loses stability as additional parameters are included. Therefore, in order to assure that the model is adequate (first criterion), we may have to lower the degree of abstraction by including more and more parameters. But then the danger increases that our model will no longer be sufficiently stable (second criterion).
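The dilemma can be made vivid with a toy Monte Carlo experiment (all numbers are made up): let the simulation output depend multiplicatively on k parameters, each measured with the same small relative error. The spread of possible outputs then grows with every parameter added, so the “more adequate” model is also the less stable one:

    import random

    def output(params):
        # Toy simulation: the result is simply the product of its parameters.
        result = 1.0
        for p in params:
            result *= p
        return result

    def spread(n_params, rel_error=0.05, trials=10_000):
        """Monte Carlo estimate of the output range when every parameter
        (true value 1.0) is measured with +/- rel_error relative error."""
        results = []
        for _ in range(trials):
            params = [1.0 + random.uniform(-rel_error, rel_error)
                      for _ in range(n_params)]
            results.append(output(params))
        return max(results) - min(results)

    for k in (1, 2, 5, 10, 20):
        print(f'{k:2d} parameters -> output spread ~ {spread(k):.3f}')
    # The spread grows with k: each added parameter may make the model
    # more adequate, but it also makes it measurably less stable.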

There exists no general strategy to avoid this dilemma. In many cases it may not be possible to get around it at all. But this should not come as a surprise. It merely reflects the fact that the usefulness of computer simulations is, of course, limited. With computer simulations, many scientific problems come within the reach of formal modelling that would be hard to handle with pure mathematics alone. Still, many scientific problems remain outside the realm of what can be handled with formal methods, either because of their complexity or because of the nature of the problem. This remains especially true for many areas of the social sciences.

Apart from the two criteria listed above, it is important that the output of the computer simulation reflect the empirical results with all the details that are regarded as scientifically important, and not just - as sometimes happens - a much sparser substructure of them.[3] For example, we may want to use a game theoretic model like the prisoner's dilemma to study the strategic interaction of states in politics. The game theoretic model will tell us whether the states will cooperate or not, but most probably it will say nothing about the concrete form of cooperation (diplomatic contacts, trade agreements, international treaties etc.) or non-cooperation (embargos, military action, war etc.). Therefore, even if the model or simulation really was predictively accurate, it provides us at best with a partial explanation, because it does not explain all aspects of the empirical outcome that interest us. In the worst case its explanatory - or, as the case may be, its predictive - power is almost as poor as that of a horoscope. A horoscope's prediction that, for example, “something important” will happen tomorrow easily comes true because of its vagueness. Similarly, if a game theoretic simulation predicts that the parties to a political conflict will stop cooperating at some stage, but does not tell us whether this implies, say, the outbreak of war or just the breakup of diplomatic relations, then it only offers us comparatively unimportant information. We could also say that if the simulation results fail to capture all important features of the empirical outcome, then the computer simulation “misses the point”.

Summing up: Only if a computer simulation closely fits the simulated reality - that is, if it adequately models the causal factors involved, if it is stable, and if it is descriptively rich enough to “hit the point” - can it claim to be explanatory.

[1] The problem here is in some respects similar to the problem of curve fitting, where one has to deal with the danger of overfitting. One could try to apply tricks similar to those often used in curve fitting. For example, one could turn an ex-post explanation into a quasi-prediction by dividing the data set (that describes the empirical results) and then designing and calibrating the simulation on only one part of the divided data set. The simulation calibrated in this way is then used to “predict”, or rather “quasi-predict”, the other part of the data set. If the “quasi-predictions” prove to be true, we have some reason to assume that we have hit upon the real causes. But even if we use such methods to create quasi-predictions, the above-mentioned caveats apply.
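The data-splitting procedure described in this footnote is analogous to a train/test split in statistics. A schematic sketch, with a made-up one-parameter simulation calibrated on the first half of an invented data series and judged on the held-out second half:

    def simulate(x, rate):
        # Hypothetical one-parameter simulation: plain exponential growth.
        return (1.0 + rate) ** x

    def calibrate(xs, ys, candidates):
        """Pick the rate that best fits the calibration half (least squares)."""
        return min(candidates,
                   key=lambda r: sum((simulate(x, r) - y) ** 2
                                     for x, y in zip(xs, ys)))

    # Made-up "empirical" series, split into two halves.
    xs = list(range(10))
    ys = [1.03 ** x for x in xs]
    half = len(xs) // 2

    # Design and calibrate on the first half only ...
    rate = calibrate(xs[:half], ys[:half],
                     candidates=[i / 100 for i in range(1, 10)])

    # ... then quasi-predict the held-out half.
    errors = [abs(simulate(x, rate) - y) for x, y in zip(xs[half:], ys[half:])]
    print(rate, max(errors))  # a small held-out error lends defeasible support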

[2] This is a well-known restriction on modelling in economics, but it seems to have fallen into oblivion since computer simulations hit the scene.

[3] This requirement could also be regarded as a second adequacy criterion, but to keep things simple it has been left out of the list of criteria above.
