Tools for Evaluating the Consequences of Prior Knowledge, but no Experiments. On the Role of Computer Simulations in Science

Eckhart Arnold

1 Introduction
2 Common features of simulations and experiments
3 Distinguishing features of experiments and simulations
4 Borderline cases
5 Summary and conclusion: Computer simulations as tools for drawing conclusions from prior knowledge
Bibliography

2 Common features of simulations and experiments

Although the understanding of computer simulations as “computer experiments” (Gramelsberger 2010) or as a “third way” of doing science will be criticised in this article, it must be admitted that these characterizations of computer simulations are well motivated by a large number of sometimes striking similarities between computer simulations and experiments. In the following I first examine the similarities between simulations and experiments and then their alleged differences. The differences will be discussed at greater length than the similarities, because it is the claim of this article (disputed by others) that there exist several insurmountable differences between simulations and experiments, while it is not denied that there are indeed many common features.

The common features of simulations and experiments are the following:

  1. Methodological structure: Both simulations and experiments operate on an object to learn something about a target system. The object must in some way or other be representative of the target system. In the case of an experiment the object can also be identical with the target system or be an instance of the target system (see below, section 3, point 3). Here, “object” denotes the entity on which a computer simulation or an experiment operates, and “target system” denotes the entity in nature about which we want to learn something through the simulation or experiment.[2]

    An example of an experiment where the object is not identical with the target system would be a ripple tank that is used to study the nature of light waves. An example of an experiment where the object is identical with, or at least an instance of, the target system would be a pendulum that is used to study gravity. In the case of computer simulations the object is always a representation of the target system but never the target system itself. The only possible exception would be a simulation that is not conducted to learn anything about some target system in nature, but merely to study the properties of the model that is implemented in the simulation. In this case the simulation does not have a target system.

     
  2. Controlled Environment: Both simulations and experiments run in a controlled environment. However, in a simulation all causally effective factors are controlled by the simulation setup, while in an experiment all factors but one, the factor under experimentation, are controlled.

    The fact that in order to run a computer simulation all parameters must have determinate values has been described as their “semantic saturation” by Barberousse et al. (2009, p. 572). One could say that from the point of view of nature an experiment is semantically saturated, too, but not from the point of view of the human experimenter.

     
  3. Interventions: Computer simulations, just like experiments, allow interventions on the object (Parker 2009, p. 487; Morgan 2003, p. 223). In fact, the ease of intervening in the model and monitoring the effects is one of the advantages of computer simulations over material experiments.
     
  4. Evaluation tools: Computer simulations apply tools that were formerly thought of as typical of experimental data analysis, such as visualisation, statistics or data mining (Winsberg 2010, p. 33). This, again, emphasizes the similarity between both types of scientific procedure.
     
  5. Error management: Similar techniques of error management are used for both simulations and experiments. Among these are:
    1. Validation of the setup (or the apparatus) against cases with known results.
       
    2. Testing the responsiveness to interventions.
       
    3. Replicating the results.
       
    4. Testing for the conformance of the results with undisputed theoretical and phenomenological background knowledge.

    Regarding the last two points, one might wonder what sense these make in the case of simulations, since the exact replication of simulation results becomes trivial if both the program code and the system specification are given. And conformance of the results with background theory becomes trivial if the theory is built into the simulations (Winsberg 2010, p. 44). However, replicating a simulation in a different system environment, say, with a different simulation package or another set of math libraries, may indeed be useful in order to ascertain that the simulation results do not depend on the idiosyncrasies of a particular computational environment or infrastructure. For experiments, too, replication under varied conditions is considered to confirm the experiment's results more strongly than replication under exactly the same conditions (Franklin/Howson 1984). Only, in contrast to computer simulations, replication under the same conditions still bestows some additional inductive confirmation on an experiment. Contradictions of the computer simulation's results with the background theory might reveal implementation errors or unanticipated consequences of approximations and simplifications. Just as with experiments, there is “constant concern for uncertainty and error” (Winsberg 2010, p. 34) in simulations.

     
  6. Unanticipated results: Both simulations and experiments allow us to learn something new and potentially surprising about their object (Morgan 2003, p. 224)[3] and, if the object is truly representative, also about the target system.
     
  7. Partial autonomy from theory: Just as it has been described for experiments by Hacking (Hacking 1983), simulations “have a life of their own” and are in part “self-vindicating” (Winsberg 2010, p. 45). Winsberg (2001, p. 447) has described simulations as “downward”, “autonomous” and “motley”. At least autonomy and being motley can equally well be ascribed to experiments.
     
  8. Comparable epistemological challenges: Simulations and experiments share the challenge of bridging the gap between their object and the target system, or to put it differently, between the laboratory setup on the one hand and the real world outside the laboratory on the other hand (Arnold 2008, p. 174/175). Again, there are subtle differences in this respect between simulations and experiments: In the case of a simulation the gap to be bridged is that between the numerical representation and the represented target system. In the case of the experiment the question is rather whether the behaviour that the target system exhibits under laboratory conditions can be transferred to real world instances of the target system.

Thus, there is a considerable number of important features which computer simulations and experiments have in common or with regard to which they appear to be at least very similar. But the similarities would justify placing simulations into the same category as experiments only if there are not, at the same time, any fundamental differences between simulations and experiments. Therefore, in the following, the alleged differences between simulations and experiments will be examined.

[2] In this article the term “target system” is strictly confined to empirical target systems.

[3] Mary Morgan makes a subtle distinction between simulations that “may surprise the experimenter” and relatively more material experiments that “may yet confound the experimenter” (Morgan 2003). In contrast to Morgan, I do not believe that there is a difference regarding the kind and strength of “surprise” that computer simulations on the one hand and material experiments on the other hand may elicit from the experimenter. Rather, the difference she hints at, namely that the relatively more experimental procedure involves real, and therefore in her estimate potentially “confounding”, empirical data, is in this article taken care of by considering the gathering of new empirical data as a distinguishing feature of experiments in contrast to simulations (see point 3 in section 3 below).
