The Dark Side of the Force: When computer simulations lead us astray and "model think" narrows our imagination
Table of Contents
2 Different aims of computer simulations in science
3 Criteria for “explanatory” simulations
4 Simulations that fail to explain
4.1 Axelrod-style simulations of the “evolution of cooperation”
4.1.1 Typical features of Axelrod-style simulations
4.1.2 How Axelrod-style simulations work
4.1.3 The explanatory irrelevance of Axelrod-style simulations in social sciences
4.1.4 Do Axelrod-style simulations do any better in biology?
4.2 Can we simulate the “Social Contract”?
My first example concerns the sort of computer simulations of the “evolution of cooperation” that became very popular after the publication of Robert Axelrod's book of the same title (Axelrod 1984). Axelrod's book is a surprising phenomenon for two reasons. First, because of its extraordinary impact on the scientific community: it spawned a myriad of subsequent studies on the repeated prisoner's dilemma (the model Axelrod used) and the “evolution of cooperation” that proceeded more or less along the same lines and employed similar methods as Axelrod. An annotated bibliography from 1994 (ten years after the first publication of “The Evolution of Cooperation”) lists more than 200 articles that relate directly to Axelrod's study. But Axelrod's approach is surprising for a second reason as well: the almost complete uselessness that his and his followers' computer simulations of the repeated prisoner's dilemma proved to have for empirical research in the field.
How did Axelrod arrive at his results about cooperation, and why did it prove so difficult to support them empirically? In order to find out if and how cooperation can emerge among egoistic agents, Axelrod started off with a game-theoretical model of a certain type of cooperation dilemma, the well-known prisoner's dilemma. Since the one-shot prisoner's dilemma does not offer many strategic opportunities (no rational player will ever cooperate in the one-shot prisoner's dilemma, and any player who does fares worse than if he or she did not), Axelrod built his simulation on top of the repeated prisoner's dilemma. He conducted his famous computer tournaments of the repeated two-player prisoner's dilemma with strategies submitted by many different participants. On top of the computer tournament he built an “evolutionary simulation”, simulating a population-dynamical process among these strategies by using the payoffs they gained in the tournament to calculate their fitness values. Already at this point we may notice that the setup of Axelrod's simulation does not resemble any empirical situation whatsoever. The prisoner's dilemma itself provides a concise abstract description of the essential features of many dilemma situations that occur in reality, but nowhere in the world do we find an arrangement that really corresponds to the computer tournament built on top of it. How, then, are we to draw conclusions from the computer tournament with respect to empirical cooperation dilemmas?
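The two-layered procedure just described can be sketched in a few lines of code. The following is a minimal illustration, not a reconstruction of Axelrod's actual program: the strategy set, the number of rounds, and the payoff values (T=5, R=3, P=1, S=0) are the standard textbook choices, assumed here for concreteness.

```python
# Minimal sketch of an Axelrod-style setup: a round-robin tournament of the
# repeated prisoner's dilemma, whose accumulated payoffs then serve as fitness
# values in a simple replicator step. Payoff values are the conventional ones.

# Payoff to the first player for (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then copy the opponent's previous move."""
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play_match(s1, s2, rounds=200):
    """Play a repeated prisoner's dilemma; return both players' total payoffs."""
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return p1, p2

def tournament(strategies, rounds=200):
    """Round-robin: every strategy meets every other strategy and itself."""
    scores = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i:]:
            pa, pb = play_match(strategies[a], strategies[b], rounds)
            scores[a] += pa
            if b != a:
                scores[b] += pb
    return scores

def replicator_step(population, scores):
    """One 'evolutionary' update: population shares grow in proportion
    to the payoff their strategy earned in the tournament."""
    fitness = {n: population[n] * scores[n] for n in population}
    total = sum(fitness.values())
    return {n: fitness[n] / total for n in population}
```

With only Tit For Tat and Always Defect in the pool, for example, the tournament rewards Tit For Tat (it cooperates with itself for the whole match), and the replicator step accordingly increases its population share.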
The way Axelrod proceeded was to examine the simulation results and to draw generalizing conclusions from them. This is how he arrived at conclusions such as: the strategy Tit For Tat is generally a very good strategy in the repeated prisoner's dilemma; a strategy should be friendly in the sense that it should not be the first to defect; a strategy should punish defection but not be too unforgiving; the evolution of cooperation depends crucially on the continuation of interaction; and the like. Unfortunately, subsequent research showed that none of these conclusions is generally true. It suffices to change the simulation setup only a little, and it pays to be a cheater, or to be unforgiving (as is the case when the simulation is run with all two-state automata as the base strategy set). And, of course, Tit For Tat does not always win the race. The general finding that cooperative strategies as such can be successful in the repeated prisoner's dilemma is just a trivial consequence of the game-theoretical folk theorem (Binmore 1998, p. 313ff.). And all the other generalizing conclusions Axelrod drew were simply not warranted.
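How sensitive such conclusions are to the composition of the strategy field can be shown with a toy example. The field chosen below (Tit For Tat, Always Cooperate, Always Defect) is an assumption for illustration, not one of Axelrod's tournaments; the payoff table is restated so the sketch is self-contained.

```python
# Illustration that "Tit For Tat does not always win": in a round-robin
# tournament where Always Cooperate is present, Always Defect can come out
# on top simply by exploiting the unconditional cooperators.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def match(s1, s2, rounds=200):
    """Repeated prisoner's dilemma; returns both players' total payoffs."""
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return p1, p2

strategies = {
    'TitForTat': lambda my, their: their[-1] if their else 'C',
    'AllC':      lambda my, their: 'C',
    'AllD':      lambda my, their: 'D',
}

# Round-robin including self-play, accumulating each strategy's total score.
scores = {n: 0 for n in strategies}
names = list(strategies)
for i, a in enumerate(names):
    for b in names[i:]:
        pa, pb = match(strategies[a], strategies[b])
        scores[a] += pa
        if a != b:
            scores[b] += pb

# In this particular field, AllD's gains against AllC outweigh the mutual
# cooperation that TitForTat achieves, so AllD outscores TitForTat.
```

Nothing in the repeated prisoner's dilemma itself has changed here; only the set of competitors differs, and the winner changes with it.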
Nonetheless, Axelrod's pioneering work triggered a multitude of similar computer simulations of the prisoner's dilemma and of other games. Most of their authors were too cautious to draw such sweeping conclusions as Axelrod did. Still, regarding their design and the kind of reasoning they rely on, many of these simulations follow the pattern set by Axelrod's role model. To classify this type of simulation, we may speak of Axelrod-style simulations.
Generally speaking, Axelrod-style simulations are computer simulations that share the following typical features:
The details are not important here. There exist many descriptions of Axelrod's procedure, the best of which is probably still Axelrod's own book (Axelrod 1984). Simulations of the repeated prisoner's dilemma similar to Axelrod's computer tournament can easily be found on the web. (Google for “CoopSim” to find one of them.)