The Dark Side of the Force: When computer simulations lead us astray and "model think" narrows our imagination. (revised version, October 2006)

Eckhart Arnold

1 Introduction
2 Different aims of computer simulations in science
3 Criteria for explanatory simulations
4 Examples of Failure: Axelrod style simulations of the “evolution of cooperation”
5 Conclusion
Bibliography

3 Criteria for explanatory simulations

But in what sense can a computer simulation be explanatory? And what are the criteria a computer simulation must meet in order to be explanatory?

A computer simulation can be called explanatory if it adequately models some empirical situation and if the results of the computer simulation (the simulation results) coincide with the outcome of the modeled empirical process (the empirical results). If this is the case, we can conclude that the empirical results have been caused by the very factors (or, more precisely, by the empirical correspondents of those factors) that have brought about the simulation results in the computer simulation.

To take an example, let us say we have a game theoretic computer simulation of the repeated prisoner's dilemma where under certain specified conditions the strategy Tit For Tat emerges as the clear winner. Now, assume further that we know of an empirical situation that closely resembles the repeated prisoner's dilemma with exactly the same conditions as in our simulations.

Let us finally assume that the Tit For Tat strategy also emerges as the most successful strategy in the empirical situation. Then we are entitled to conclude that Tit For Tat was successful in the empirical case, because the situation was a repeated prisoner's dilemma with such and such boundary conditions and because - as the computer simulation shows - Tit For Tat is a winning strategy in repeated prisoner's dilemma situations under the respective conditions.
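
To make the setup concrete, the following is a minimal sketch (in Python) of such a round-robin tournament. The payoff values, the strategy set, and the number of rounds are illustrative assumptions, not a reconstruction of any particular published tournament; in this small field Tit For Tat ends up tied for first place, and the ranking shifts with the strategy set and the boundary conditions - which is precisely why those conditions have to be specified.

    # Minimal sketch of a round-robin repeated prisoner's dilemma tournament.
    # Payoffs, strategy set, and number of rounds are illustrative assumptions.

    ROUNDS = 200
    # Row player's payoff: T=5 > R=3 > P=1 > S=0 (standard PD ordering).
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def always_defect(own, other):
        return 'D'

    def always_cooperate(own, other):
        return 'C'

    def tit_for_tat(own, other):
        # Cooperate first, then copy the opponent's previous move.
        return other[-1] if other else 'C'

    def grudger(own, other):
        # Cooperate until the opponent defects once, then defect forever.
        return 'D' if 'D' in other else 'C'

    STRATEGIES = {'AlwaysDefect': always_defect,
                  'AlwaysCooperate': always_cooperate,
                  'TitForTat': tit_for_tat,
                  'Grudger': grudger}

    def score(strat_a, strat_b, rounds=ROUNDS):
        """Return strat_a's total payoff against strat_b over the repeated game."""
        hist_a, hist_b, total = [], [], 0
        for _ in range(rounds):
            move_a = strat_a(hist_a, hist_b)
            move_b = strat_b(hist_b, hist_a)
            total += PAYOFF[(move_a, move_b)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return total

    totals = {name: sum(score(fa, fb) for fb in STRATEGIES.values())
              for name, fa in STRATEGIES.items()}
    for name, pts in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f'{name:16s} {pts}')
    # Here TitForTat ties Grudger for first place; with other strategy
    # sets or round numbers the ranking can change.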

Now that we have seen how explanations by computer simulations work in principle, let us ask what criteria a computer simulation must fulfill in order to deserve the title of an explanatory simulation. The criteria should be such as to allow us to check whether the explanation is valid, that is, whether the coincidence of the results is due to the congruence of the operating factors (in the empirical situation and in the computer simulation) or whether it is merely accidental.

As criteria that a computer simulation must meet in order to be an explanatory model of an empirical process, I propose the following:

  1. Adequacy Requirement: All (or at least all known[1]) causally relevant factors of the modeled empirical process must be represented in the computer simulation.
     
  2. Robustness or Stability Requirement: The input parameters of the simulation must be measurable with such accuracy that the simulation results are consistent within the range of inaccuracy of measurement.
     
  3. Descriptive Appropriateness or Non-Triviality Requirement: The results of the computer simulation must reflect all or at least some important features (that is, features whose explanation is desired) of the results of the modeled empirical process.

If all of these criteria are met, we can say that there exists a close fit between model and modeled reality. The claim I wish to defend is that only if there is a close fit between model and reality are we entitled to say that the model explains anything. Even though these criteria are very straightforward, a little discussion will be helpful for a better understanding.

Regarding the first criterion, it should be obvious that if not all causally relevant factors are included, then any congruence of simulation results and empirical results can at best be accidental. Two objections might be raised at this point: 1) If there really is a congruence of simulation results and empirical results, should that not allow us to draw the conclusion that the very factors implemented in the computer simulation are indeed all the factors that are causally relevant? 2) If we use computer simulations as a research tool to find out what causes a certain empirical phenomenon, how are we to know beforehand what the causally relevant factors are, and how are we ever to find out, if drawing reverse conclusions from the compliance of the results to the relevant causes is not allowed?

To these objections the following can be answered: If the simulation is used to generate empirical predictions, and if the predictions come true, then this can indeed be taken as a hint that the simulation captures all relevant causes of the empirical process in question. With certain reservations we are then entitled to draw reverse conclusions from the compliance of the results to the exclusive causal relevance of the incorporated factors or mechanisms. The reservations concern the problem that even if a simulation has predictive success, it can still have been based on unrealistic assumptions. Sometimes the predictive success of a simulation can even be increased by sacrificing realism. Therefore, in order to find out whether the factors incorporated in the computer simulation are the causally relevant factors, we should not rely on predictive success alone, but should consult other sources as well, such as our scientific background knowledge about the process in question. Also, if we already know (for whatever reason) that a certain factor is causally relevant for the outcome of the empirical process under investigation, and if this factor is not included in the simulation of this process, then even if the simulation predicts correctly, it cannot be said that it explains correctly.

Furthermore, drawing conclusions from the predictive success of a simulation to its explanatory validity is impermissible in the case of ex-post predictions. For if we only try long enough, we are almost sure to find some computer simulation and some set of input parameters that match a previously fixed set of output data. The task of finding such a simulation amounts to nothing more than finding an arbitrary algorithm that produces a given pattern. But then we will only accidentally have hit on the true causes that were responsible for the results in the empirical process.
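
A toy example may make this vivid. Given any previously fixed finite set of output data - the numbers below are invented - polynomial interpolation will match it ex post exactly, even though the fitted curve tells us nothing about the causes behind the data:

    # Sketch: with enough free parameters, any fixed data set can be matched
    # ex post. Lagrange interpolation reproduces the "empirical results"
    # exactly, yet the fitted polynomial explains nothing about their causes.

    def lagrange_fit(points):
        """Return a polynomial function passing exactly through the (x, y) points."""
        def poly(x):
            total = 0.0
            for i, (xi, yi) in enumerate(points):
                term = yi
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        term *= (x - xj) / (xi - xj)
                total += term
            return total
        return poly

    data = [(0, 2.0), (1, 3.5), (2, 1.0), (3, 4.2), (4, 0.7)]  # invented "results"
    model = lagrange_fit(data)

    for x, y in data:
        print(x, y, round(model(x), 6))  # a perfect ex-post match on every point
    print(round(model(5), 2))            # ... but arbitrary off the fitted data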

Therefore, only if we make sure that at least all factors that are known to be causally relevant are included in the simulation can we take it as an explanation. And usually we cannot ensure this by relying on the conformance of the simulation results and the empirical results alone, without any further considerations. Summing up: If the first criterion is not fulfilled, then the computer simulation does not explain.

The second criterion is even more straightforward. If the model is unstable, then we will not be able to check whether the simulation model is adequate. For if it is not stable within the inevitable inaccuracies of measurement, this means that the model delivers different results within the range of inaccuracy of the measured input parameters. But then we can neither be sure that the model is right when the model results match the empirical results, nor that it is wrong when they don't (unless the empirical results fall outside even the range of possible simulation results for the range of inaccuracy of the input parameters). Let us, for example, imagine that we had a game theoretic model that tells us whether some actors will cooperate or not. Assume further that we had some empirical process at hand in which we know that the actors cooperate, and we would like to know whether they do so for the very reasons the model suggests or, in other words, whether our model can explain why they cooperate. If the model is unstable, then - due to measurement inaccuracy - we do not know whether the empirical process falls within the range of input parameters for which the model predicts cooperation. Then there is no way to tell whether the actors in the empirical process cooperated because of the reasons the model suggests or, quite the contrary, in spite of what the model predicts.
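
What such a stability check might look like can be sketched in a few lines of Python. The "model" here is a deliberately crude stand-in - a hypothetical threshold on a continuation probability w - and all numbers are assumptions; the point is only the procedure of sweeping the measured parameter across its error band and asking whether the prediction remains the same:

    # Sketch of a stability check (criterion 2): a toy model that predicts
    # cooperation whenever a continuation probability w exceeds a threshold.
    # Threshold and error magnitudes are hypothetical assumptions.

    def predicts_cooperation(w, threshold=0.9):
        # Stand-in for a full simulation: cooperate iff the "shadow of
        # the future" is long enough.
        return w >= threshold

    def stable_within_error(measured_w, error, steps=100):
        # Sweep w across the measurement-error band and collect predictions.
        lo, hi = measured_w - error, measured_w + error
        outcomes = {predicts_cooperation(lo + (hi - lo) * k / steps)
                    for k in range(steps + 1)}
        return len(outcomes) == 1  # same prediction across the whole band?

    print(stable_within_error(0.95, 0.02))  # True: the model's answer is usable
    print(stable_within_error(0.90, 0.05))  # False: measurement too coarse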

A special case of this problem of model instability and measurement inaccuracy occurs when we can only determine the ordinal relations of greater than and smaller than for some empirical quantity but not its cardinal value (perhaps because it does not have a cardinal value by its very nature, such as the quantity of utility in economics[2]). In this case the empirical validation of any simulation that crucially depends on the cardinal values of the respective input parameters will be impossible. Briefly put, the moral of the second criterion is: If condition two is not met, we cannot know whether the computer simulation explains.
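
To illustrate: the standard grim-trigger condition for sustainable cooperation in the repeated prisoner's dilemma, w >= (T - R) / (T - P), depends on the cardinal payoff values. Two payoff assignments with the same ordinal ranking T > R > P > S can therefore yield opposite predictions for one and the same (hypothetical) continuation probability:

    # Sketch: two payoff assignments with the SAME ordinal ranking
    # T > R > P > S but different cardinal values. The grim-trigger
    # condition w >= (T - R) / (T - P) then gives opposite predictions,
    # so ordinal measurement alone cannot validate the model.

    def cooperation_sustainable(T, R, P, w):
        return w >= (T - R) / (T - P)

    w = 0.4  # hypothetical continuation probability
    print(cooperation_sustainable(T=5, R=3, P=1, w=w))  # threshold 0.5  -> False
    print(cooperation_sustainable(T=5, R=4, P=1, w=w))  # threshold 0.25 -> True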

In connection with the first criterion, the requirement of model stability (in relation to measurement inaccuracy) gives rise to a kind of dilemma. In many cases an obvious way to make a model more adequate is to include further parameters. Unfortunately, the more parameters are included in the model, the harder it becomes to handle. Often, though not necessarily, a model loses stability when additional parameters are included. Therefore, in order to ensure that the model is adequate (first criterion), we may have to lower the degree of abstraction by including more and more parameters. But then the danger increases that our model loses stability (second criterion).
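
A toy calculation can illustrate the dilemma. If every additional input parameter comes with its own measurement error, the spread of possible simulation outputs tends to grow with the number of parameters; the multiplicative model and error magnitudes below are purely illustrative assumptions:

    # Sketch of the adequacy/stability dilemma: each extra input parameter
    # is measured with error, and the errors compound. A toy model's output
    # spread across the error bands grows with the number of parameters.

    import random

    def toy_model(params):
        # Multiplicative toy dynamics: the output depends on every parameter.
        out = 1.0
        for p in params:
            out *= p
        return out

    def output_spread(n_params, rel_error=0.05, trials=2000):
        true = [1.0] * n_params  # "true" values, each measured +/- 5%
        results = []
        for _ in range(trials):
            measured = [p * random.uniform(1 - rel_error, 1 + rel_error)
                        for p in true]
            results.append(toy_model(measured))
        return max(results) - min(results)

    random.seed(0)
    for n in (1, 5, 20):
        print(n, round(output_spread(n), 3))  # spread grows with n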

There exists no general strategy for avoiding this dilemma. In many cases it may not be avoidable at all. But this should not come as a surprise. It merely reflects the fact that the powers of computer simulations are - as one should certainly expect - at some point limited. With the tool of computer simulations, many scientific problems that would be hard to handle with pure mathematics alone come within the reach of formal treatment. Still, many scientific problems remain outside the realm of what can be described with formal methods, either because of their complexity or because of the nature of the problem. This remains especially true for many areas of the social sciences.

The third criterion requires that the output of the computer simulation reflect the empirical results with all the details that are regarded as scientifically important and not just - as sometimes happens - a much sparser substructure of them. For example, we may want to use game theoretic models like the prisoner's dilemma to study the strategic interaction of states in politics. The game theoretic model will tell us whether the states will cooperate or not, but most probably it will say nothing about the concrete form of cooperation (diplomatic contacts, trade agreements, international contracts etc.) or non-cooperation (embargoes, military action, war etc.). Therefore, even if the model or simulation really was predictively accurate, it at best provides us with a partial explanation, because it does not explain all aspects of the empirical outcome that interest us. In the worst case its explanatory or, as the case may be, its predictive power is almost as poor as that of a horoscope. The prediction of a horoscope that tomorrow “something of importance” will happen easily comes true because of its vagueness. Similarly, if a game theoretic simulation predicts that the parties of a political conflict will stop cooperating at some stage but does not tell us whether this implies, say, the outbreak of war or just the breakup of diplomatic relations, then it only offers us comparatively unimportant information. We could also say that if the simulation results fail to capture any important features of the empirical outcome, then the computer simulation “misses the point”.

Summing up: Only if a computer simulation closely fits the simulated reality - that is, if it adequately models the causal factors involved, if it is stable, and if it is descriptively rich enough to “hit the point” - can it claim to be explanatory.

[1] The restriction to all known causes was suggested by Claus Beisbart to avoid an epistemic impasse when simulations are employed as a tool to find out just what the causally relevant factors of a given empirical process are.

[2] This is a well known restriction that affects a large part of the modeling done in economics.
