Tools or Toys?
Table of Contents

2 The role of models in science
3 Why computer simulations are merely models and not experiments
3.1 Computer simulations are just elaborate models
3.2 Computer simulations are not experiments
4 The epistemology of simulations at work: How simulations are used to study chemical reactions in the ribosome
5 How do models explain in the social sciences?
6 Common obstacles for modeling in the social sciences
Quite a few authors have claimed that computer simulations, being a revolutionary new tool of science, call for a new kind of philosophy or require an epistemology of their own that pays due credit to their distinctive character (Humphreys 2009, Humphreys 2004, Winsberg 2001). The alleged novelty of computer simulations has been examined in detail by Roman Frigg and Julian Reiss (Frigg/Reiss 2009), who come to the conclusion that computer simulations do not raise any new or substantially different philosophical questions from those that are already discussed in the philosophy of science. Here I am mostly concerned with the issue of validation of models. Frigg's and Reiss' most important point regarding validation comes up in connection with Winsberg's notion that the specific epistemological features of simulations are that they are constructed “downward” (i.e. starting from a theory), “autonomous” (from empirical data, which may be unavailable or only sparsely available for the simulated process) and “motley” (i.e. partially independent from theory, freely mixing ad-hoc assumptions with assumptions from the theoretical background) (Winsberg 2001, p.\ 447/448). With respect to these features, Frigg and Reiss contend:
it is hard to see, at least without further qualifications, how justification could derive from construction in this way. There does not seem to be a reason to believe that the result of a simulation is credible just because it has been obtained using a downward, autonomous and motley process. In fact, there are models that satisfy these criteria and whose results are nevertheless not trustworthy. (Frigg/Reiss 2009, p.\ 600)
An important pragmatic point made by Frigg and Reiss is that putting too much emphasis on the question of novelty of simulations may divert the attention of philosophers of science from more important and relevant questions:
Blinkered by the emphasis on novelty and the constant urge to show that simulations are unlike anything we have seen before, we cannot see how the problems raised by simulations relate to existing problems and we so forgo the possibility to have the discussions about simulation make contributions to the advancement of these debates. […]
For instance, if we recognise that the epistemological problems presented to us by simulations have much in common with the ones that arise in connection with models, we can take the insights we gain in both fields together and try to make progress in constructing the sought-after new epistemology. (Frigg/Reiss 2009, p.\ 611)
Frigg's and Reiss' view that computer simulations do not introduce new issues to the philosophy of science has been criticised by Humphreys (Humphreys 2009). Rather than repeating the arguments by Frigg and Reiss, with which I largely agree, I am going to discuss the main counter-arguments by Humphreys as far as they may have a bearing on the question of validation. What remains to be done is to show that simulations are models and not experiments, for this is an issue on which Frigg and Reiss remain neutral.
Those of Humphreys' arguments for the novelty of computer simulations that are potentially relevant for the validation issue concern 1) the epistemic opacity of simulations, 2) the different semantics of simulations, 3) the temporal dynamics of simulations and 4) the crucial difference between “in principle” and “in practice” that, according to Humphreys, deserves special attention once simulations enter the scene.
1) According to Humphreys, computer simulations are epistemically opaque because we cannot monitor every single step (“epistemically relevant elements of the process” (Humphreys 2009, p.\ 618)) of a simulation that may run through millions or billions of “epistemically relevant” steps before it produces its result. Does this have any bearing on the validation of simulations? If it does, it can only mean that simulations are a more dangerous tool than models, because many simulations are opaque in Humphreys' sense. However, since we do know and understand the algorithms programmed into a simulation, and since we can presumably at least monitor a few samples of the “epistemically relevant elements” of the simulation process, this kind of opacity may not pose too much of a problem for the justification of the simulation.
2) Humphreys believes that neither the syntactic nor the semantic view of theories is fully adequate to capture just how simulations relate to target systems. According to him, this relation differs from how traditional models are applied: “It is in replacing the explicitly deductive relation between the axioms and the prediction by a discrete computational process that is carried out in a real computational device that the difference lies.” (Humphreys 2009, p.\ 620). The aspect that Humphreys hints at is the “semi-autonomy” of models from theory (see point 2.1 on page 2.1). So, this aspect is already taken care of.
I am not sure what exactly falls under the category of “traditional models”, but I doubt that it is ultimately just computer simulations that depart from the scheme of an “explicitly deductive relation between the axioms and the prediction”. If this is true, then it seems better to draw the line between the simple application of a theory on the one hand and models and simulations on the other, as I have done before (see page 2.1), rather than between traditional models and computer simulations.
3) That the temporal dynamics of simulations matter is most obvious for real-time simulations, as they are used, for example, in control engineering. Real-time requirements tighten the range of applicable modeling techniques. Often the construction of a real-time simulation involves the development of highly optimized problem-specific algorithms. While this makes validation via well-established modeling techniques more difficult in particular cases, it does not fundamentally change the situation with respect to validation.
4) According to Humphreys, another “novel feature of computational science is that it forces us to make a distinction between what is applicable in practice and what is applicable only in principle” (Humphreys 2009, p.\ 623). It is not exactly clear why this distinction, which can become important in many situations, should become unavoidable only when computational science is considered. As far as the validation of models is concerned, what is “applicable in practice” is, of course, decisive. But again, there is no difference between models and simulations in this respect.