Tools for Evaluating the Consequences of Prior Knowledge, but no Experiments. On the Role of Computer Simulations in Science
Table of Contents
2 Common features of simulations and experiments
3 Distinguishing features of experiments and simulations
4 Borderline cases
5 Summary and conclusion: Computer simulations as tools for drawing conclusions from prior knowledge
Although the understanding of computer simulations as “computer experiments” (Gramelsberger 2010) or as a “third way” of doing science will be criticised in this article, it must be admitted that these characterisations of computer simulations are well motivated by a large number of sometimes striking similarities between computer simulations and experiments. In the following, I first examine the similarities between simulations and experiments and then their alleged differences. The differences will be discussed at greater length than the similarities, because it is the claim of this article (disputed by others) that there exist several insurmountable differences between simulations and experiments, while it is not denied that there are indeed many common features.
The common features of simulations and experiments are the following:
An example of an experiment in which the object is not identical with the target system would be a ripple tank used to study the nature of light waves. An example of an experiment in which the object is identical with the target system, or at least an instance of it, would be a pendulum used to study gravity. In the case of computer simulations, the object is always a representation of the target system but never the target system itself. The only possible exception would be a simulation that is not conducted to learn anything about some target system in nature, but merely to study the properties of the model implemented in the simulation. In this case the simulation does not have a target system.
The fact that in order to run a computer simulation all parameters must have determinate values has been described as their “semantic saturation” by Barberousse et al. (2009, p. 572). One could say that from the point of view of nature an experiment is semantically saturated, too, but not from the point of view of the human experimenter.
Regarding the last two points, one might wonder what sense they make in the case of simulations, since the exact replication of simulation results becomes trivial if both the program code and the system specification are given. And conformance of the results with background theory becomes trivial if the theory is built into the simulation (Winsberg 2010, p. 44). However, replicating a simulation in a different system environment, say, with a different simulation package or another set of math libraries, may indeed be useful in order to ascertain that the simulation results do not depend on the idiosyncrasies of a particular computational environment or infrastructure. For experiments, too, replication under varied conditions is considered to confirm the experiment's results more strongly than replication under exactly the same conditions (Franklin/Howson 1984). In contrast to computer simulations, however, replication under the same conditions still bestows some additional inductive confirmation on an experiment. Contradictions between the computer simulation's results and the background theory might reveal implementation errors or unanticipated consequences of approximations and simplifications. Just as with experiments, there is “constant concern for uncertainty and error” (Winsberg 2010, p. 34) in simulations.
Thus, there is a considerable number of important features that computer simulations and experiments have in common, or with regard to which they at least appear very similar. But these similarities would justify placing simulations in the same category as experiments only if there were not, at the same time, fundamental differences between simulations and experiments. Therefore, in the following, the alleged differences between simulations and experiments will be examined.
 In this article the term “target system” is strictly confined to empirical target systems.
 Mary Morgan draws a subtle distinction between simulations that “may surprise the experimenter” and relatively more material experiments that “may yet confound the experimenter” (Morgan 2003). In contrast to Morgan, I do not believe that there is a difference in the kind and strength of “surprise” that computer simulations on the one hand and material experiments on the other may elicit from the experimenter. Rather, the difference she hints at, namely that the relatively more experimental procedure involves real, and therefore in her estimation potentially “confounding”, empirical data, is in this article accounted for by treating the gathering of new empirical data as a distinguishing feature of experiments in contrast to simulations (see point 3 in section 3 below).