Tools or Toys?
Table of Contents
2 The role of models in science
3 Why computer simulations are merely models and not experiments
3.1 Computer simulations are just elaborate models
3.2 Computer simulations are not experiments
3.2.1 The simulations-experiments dispute
3.2.2 Resolving the simulations-experiments dispute
4 The epistemology of simulations at work: How simulations are used to study chemical reactions in the ribosome
5 How do models explain in the social sciences?
6 Common obstacles for modeling in the social sciences
Which of the two positions on the relation between models and experiments is the right one? Or can these views be reconciled? In my opinion, the dispute can be decided quite clearly in favor of those who deny that simulations are experiments. The reason is straightforward: Simulations cannot generate any results that are not already implied in the theories and assumptions that enter into the simulation setup. Typically, these implications are not known to us, and the simulation results may therefore be surprising, just like the results of an experiment. Still, computer simulations cannot deliver anything that was not built into them. There is no causal influence from nature or, more precisely, from the investigated target system itself on the results of the simulation. Consequently, there is also no transfer of information from nature to the simulation results.
That there is a fundamental and irreconcilable categorical difference between simulations and experiments becomes most apparent when we think of the special kind of experiment called an “experimentum crucis”, i.e. the kind of experiment we use to test our most fundamental theories. For example, there is obviously no way in which Young's double-slit experiment (Wikipedia double_slit), which was conducted in order to decide between the corpuscular and the wave theory of light, could be replaced by a computer simulation. Unlike in the experiment, the outcome of a computer simulation would simply depend on which of the alternative theories is preferred by the programmer of the simulation.
But if there is such a fundamental and obvious difference between simulations and experiments, why then do so many people believe that simulations are experiments? A possible reason is that many experiments are indeed just simulations. If, for example, scale models are used to study the properties of some physical system, then one can reasonably maintain that the scale model is a simulation of the “real” system. This is nicely illustrated by a quotation from John von Neumann, who explained the purpose that experiments in wind tunnels served in his time:
“The purpose of the experiment is not to verify a proposed theory but to replace a computation from an unquestioned theory by direct measurement. Thus wind tunnels are used as computing devices to integrate the nonlinear partial differential equations of fluid dynamics.” (quoted from Winsberg 2003, p.\ 114)
One could say that in such cases the experimental setup in effect serves as an analog computer that performs certain calculations. It is no surprise that experimental setups functioning as analog computers can safely be replaced by digital computer simulations. The quotation from von Neumann also nicely highlights an important prerequisite for simulations to replace experiments: There must already be an “unquestioned theory” whose laws can safely be assumed to govern the phenomena that are simulated. As we will see later, this is one of the main problems of “simulation experiments” in the social sciences.
Admittedly, since there are similarities as well as dissimilarities between simulations and experiments, the answer to the question whether simulations are experiments becomes somewhat relative, because it depends on whether greater importance is attributed to the similarities or to the dissimilarities between the two. A very important reason for emphasizing the dissimilarities, however, is that if the differences between simulations and experiments become blurred, scientists (or the broader public, for that matter) might be much more easily inclined to overestimate the cognitive value of simulations and prefer to stick to pure simulation studies instead of carrying out the often much harder work of empirically or experimentally testing their hypotheses. A remark by Peter Hammerstein about the uselessness of the sort of simulation studies of the “evolution of cooperation” that became fashionable in the aftermath of Robert Axelrod's pioneering work (Axelrod 1984) vividly illustrates this problem:
Why is there such a discrepancy between theory and facts? A look at the best known examples of reciprocity shows that simple models of repeated games do not properly reflect the natural circumstances under which evolution takes place. Most repeated animal interactions do not even correspond to repeated games. (Hammerstein 2003, p.\ 83)
Most certainly, if we invested the same amount of energy in the resolution of all problems raised in this discourse, as we do in publishing of toy models with limited applicability, we would be further along in our understanding of cooperation. (Hammerstein 2003, p.\ 92)
It is indeed well-nigh impossible to apply any of the results of the simulations of this particular simulation tradition empirically, if empirical application means more than just drawing vague and superficial analogies (Arnold 2008, p.\ 145ff.). What is even more worrisome is that in some instances simulation scientists even appear to be rather insensitive to the importance of empirical validation, as another example illustrates, in which a journalist summarizes his discussion with a scientist who has simulated opinion dynamics:
None of the models has so far been confirmed in psychological experiments. Should one really be completely indifferent about that? Rainer Hegselmann becomes almost a bit embarrassed by the question. “You know: In the back of my head is the idea that a certain sort of laboratory experiments does not help us along at all.” (Groetker 2005, p.\ 2) 
This attitude is all the more surprising because, given the way the simulation model is constructed, it does not appear impossible in principle to submit it to some form of empirical testing (Hegselmann/Krause 2002). Normally one would expect a scientist to have a natural interest in the question whether his or her model is true or not. It stands to reason that the anti-empirical attitude of some simulation researchers is fostered by confounding the categories of simulation and experiment.
The pragmatic aspect of directing research in the right or wrong direction has received little attention in the philosophical discussion about the relation between simulations and experiments. If this aspect is taken into account, then another important reason for distinguishing these two categories emerges: the danger of drawing premature conclusions about the world from empirically unconfirmed simulation studies.
To sum up: While certain types of experiments can indeed be replaced by simulations, there remains a fundamental difference between simulations and experiments in so far as experiments allow us to put nature to the test, which simulations do not. Therefore, instead of thinking of simulations as experiments or as experiment-like, it would be more adequate to think of simulations as tools for analysing the consequences of theories.
But before we examine what consequences these results have for the employment of simulations in the social sciences, I am going to illustrate what has been said about the epistemology of simulations so far with an example from the natural sciences.
The German original of this passage reads: “Keines der Modelle wurde bisher in psychologischen Experimenten bestätigt. Sollte einem das wirklich völlig egal sein? Rainer Hegselmann macht diese Frage fast ein wenig verlegen. ‚Wissen Sie: In meinem Hinterkopf ist die Idee, dass eine bestimmte Sorte von Laborexperimenten uns gar nicht weiterhilft.‘”
It should be considered less embarrassing for a scientist to test a model and fail the test than not to test one's own models at all. One notices a difference in attitude and research design between Hegselmann's and Krause's simulation study (Hegselmann/Krause 2002) and the study by Siebers, Aickelin, Celia and Clegg quoted earlier (Siebers et al. 2010). To be sure, Hegselmann and Krause did not have the same manpower at hand as the other team, but Hegselmann should at least be aware that empirical testing is essential in science.