Tools for Evaluating the Consequences of Prior Knowledge, but no Experiments. On the Role of Computer Simulations in Science
Table of Contents
2 Common features of simulations and experiments
3 Distinguishing features of experiments and simulations
4 Borderline cases
5 Summary and conclusion: Computer simulations as tools for drawing conclusions from prior knowledge
Although there are many striking similarities, there are also quite a few differences between simulations and experiments. The question is whether any of these differences is epistemically relevant and important enough to place simulations and experiments into different epistemological categories. A difference would be epistemically relevant if, due to this difference, experiments allow us to gain knowledge about the real world that cannot, even in principle, be gained by simulations. If such a difference exists, then we can - and in fact have to - distinguish between simulations and experiments.
The following candidates for distinguishing features of simulations and experiments have been suggested:
The author coming closest to the view falsely ascribed to Humphreys by Winsberg is Wendy Parker, who maintains that “Computer simulation studies ... are ... material experiments in a straightforward sense”, because the “system directly intervened on during a computer simulation study is a material/physical system” (Parker 2009, p. 495). But even this formulation does not imply that Parker considers a computer simulation to be a trial of the computer as a physical system.
A somewhat less extreme view is taken by Morrison, who “locates the materiality not in the machine itself but in the simulation model” (Morrison 2009, p. 45). With this wording, however, her denial that materiality makes a difference rests on a rather liberal use of the word “material”, which blurs the distinction between the representation of a material object and the material object itself. Her argument therefore leaves undisputed the fact that computer simulations work with a numerical and in this sense non-material representation of the target system, while “material” experiments examine a physical object directly. Like Parker (2009), Morrison is nevertheless right insofar as the materiality of experiments does not generally render the results of experiments more credible than those of simulations (Morrison 2009, p. 54f.).
Where does all this leave us? According to Winsberg (2010, p. 62), the property of “materiality” is too vague to mark the difference between simulations and experiments. However, this vagueness seems to be mostly due to the fact that different authors use the term “materiality” in different senses. If “materiality” is strictly confined to what happens on the physical level, then it can serve as a distinguishing feature, because computer simulations represent their target system on a semantic level, which experiments do not. But even then it is not the fact that computer simulations represent the target system on a semantic level as such that prevents them from preempting the epistemic function of experiments in all possible cases, but that, as a consequence of this fact, they cannot operate on the target system directly.
Although, as has just been argued, materiality can be appealed to in order to draw a line between simulations and experiments, an epistemically more relevant difference consists in the fact that “in a simulation one is experimenting with a model rather than the phenomenon itself” (Gilbert/Troitzsch 2005, p. 14). Or, in other words: Simulations do not operate directly on the target system.
It is true that - as Margaret Morrison asserts (Morrison 2009, p. 43) - in many experiments the measuring procedures already assume a model of the target system. But while the outcome of a computer simulation of a target system is exclusively determined by the model of the target system embedded in the simulation, the outcome of an experimental measurement is not determined by the model of the target system alone. Rather, it is determined by the measurement device, which, as the case may be, presumes a model of the target system (Morrison 2009, p. 43), and by the target system itself.
It is also true that not all experiments operate directly on the target system. If one experiments with an electrical harmonic oscillator in order to learn something about a mechanical oscillator (Hughes 1999, p. 138), then this experiment does not operate on the “phenomenon itself”. However, in order to draw a distinction between the two categories of computer simulations and experiments, it suffices that some experiments operate directly on the target system while no computer simulation does. While the latter is obvious, Eric Winsberg has called into question whether there really are any experiments of which it can be said - without further qualification - that they operate directly on the target system. As Winsberg explains: “It might be argued, of course, that Mendel's peas and Galileo's chandelier are instances of the systems of interest ... few of Galileo's contemporaries would have thought of his chandelier as a `freely falling object.' Some, conceivably, might have doubted that cultivated plants are an instance of natural heredity” (Winsberg 2010, pp. 52-53). However, Winsberg's examples do not show that experiments do not operate directly on the target system; rather, they highlight a completely different problem, namely, whether the inductive inference from a particular instance to the general case is warranted. Even if in the two examples a hypothetical sceptic might deny the validity of inductive reasoning, both examples remain examples of experiments that operate directly on the target system. The most that can be concluded from them is that operating directly on the target system is not much of an epistemic advantage for experiments in cases where we are in doubt about what kind of inductive inferences from the experimental results are warranted.
Thus, operating directly on the target system is a feature that distinguishes at least some experiments from computer simulations. And, what is more, it is a feature that extends the epistemic reach of experiments beyond that of simulations. Operating directly on the target system therefore marks an epistemically relevant difference between experiments and simulations.
Yet, a certain kind of epistemic primacy can still be claimed for experiments on behalf of their more direct relation to the empirical world. Experiments are more directly related to the empirical world because the object of an experiment is part of the world, whereas the object of a computer simulation is always a numerical representation of a model of some part of the world. The difference is of minor importance if the object and the target system are not the same, because then the crucial question is whether the object adequately represents the target system, which in this case (and in this case only) is the same question for computer simulations and experiments. But the fact that experiments operate on real-world objects becomes very important when fundamental hypotheses need to be tested. A hypothesis is fundamental if neither the hypothesis nor its negation is implied by known facts and known natural laws. Because it is in principle impossible to test fundamental hypotheses with computer simulations, one can reasonably say that experiments are “epistemically prior” (Winsberg 2010, p. 71) to computer simulations. It should be noted that the status of being a fundamental hypothesis may change over time. For example, Kepler's laws were fundamental at the time of their discovery. But later they could be derived from Newton's theory of gravity. Still, at any point in the history of science there exist some fundamental hypotheses that, in virtue of their being fundamental, cannot be tested by a computer simulation but need to be tested by material experiments.
Thus, the fact that experiments can be employed to test fundamental hypotheses while computer simulations cannot is another distinguishing feature.
In contrast, the expression “new knowledge” is used here as soon as no human agent has so far been aware of it (and in that sense has not known it), regardless of whether it is implied in our previous knowledge or not. Thus, one can reasonably say that computer simulations provide us with new knowledge about the world, even though all of the knowledge that simulations can provide is already contained in the theories and other explicit or implicit assumptions that enter into the construction of the computer simulations, such as modeling assumptions, local theories and (old) empirical data.
A difficulty when discussing this point is that the terminology is by no means fixed and can thus easily be misunderstood. It must therefore be emphasized that “generating new empirical data” is strictly understood in the sense of obtaining new data from the empirical system under study. The situation can be illustrated by a simulation of H-tunnelling on the basis of quantum mechanics conducted by Goumans/Kästner (2010). In this example computer simulations lead to the conclusion that tunneling of H contributes to H-enrichment in outer space. This is new knowledge, because without the simulation results there would be much less reason to consider this true, although it could of course be conjectured. Yet this simulation neither gathered nor generated any new empirical data. The data it produced are derived from quantum mechanics plus further modeling assumptions. One can say that quantum mechanics is itself derived - by means of inductive and abductive reasoning - from the physical experiments that led to the development of quantum mechanics. Thus, in a sense, these experiments indirectly form part of the input of the simulation. But this does not make the simulation any more empirical. Only if the simulation results were data about those empirical systems from which the input data were taken could the simulation be understood as the evaluation component of a highly refined empirical measurement procedure (see section 4.4 below). Since this is not the case here, it really is a simulation, i.e. a non-empirical procedure of deriving knowledge about particular empirical systems.
The position maintained here - that simulations merely explore the consequences of existing knowledge - is denied by Winsberg, who, referring to another example, says: “To think it is true is to assume that anything you learn from a computer simulation based on a theory of fluids is somehow already `contained' in that theory. But to hold this is to exaggerate the representational power of unarticulated theory. It is a mistake to think of simulations as tools for unlocking hidden empirical content” (Winsberg 2010, p. 54). However, save for the qualification that what we can learn from a simulation is not only what is contained in the theory but what is contained in the theory plus further implicit or explicit modeling assumptions, Winsberg is simply wrong here. There just are no other sources from which simulations can draw in order to produce their results.
Thus, another clear and important distinguishing feature of experiments is that experiments generate empirical data and can therefore provide us with new empirical information while simulations cannot.
To sum up the discussion, we find three important distinguishing features of experiments: 1) Experiments can provide us with new empirical data, computer simulations cannot. 2) Some experiments operate directly on the target system, computer simulations never do. 3) Experiments can be used for the testing of fundamental hypotheses (experimentum crucis), computer simulations cannot.
 This has been confirmed to me by Paul Humphreys.
 I am indebted to Johannes Kästner from the Institute of Theoretical Chemistry at the University of Stuttgart for explaining this simulation to me.