## The Dark Side of the Force: When computer simulations lead us astray and "model think" narrows our imagination. (revised version, October 2006)


Robert Axelrod's book “The Evolution of Cooperation” (Axelrod 1984) is a surprising phenomenon for two reasons. First, because of its extraordinary success in terms of its impact on the scientific community: it spawned a myriad of subsequent studies on the repeated prisoner's dilemma (the model Axelrod used) and the “evolution of cooperation” that proceeded more or less along the same lines and employed similar methods as Axelrod. An annotated bibliography compiled ten years after the first publication of “The Evolution of Cooperation” (Axelrod/Dambrosio 1994) lists more than 200 articles that directly relate to Axelrod's study.[3] But Axelrod's approach is also surprising for a second reason: the almost complete uselessness that his and his followers' computer simulations of the reiterated prisoner's dilemma proved to have for empirical research in the field.

How did Axelrod arrive at his results about cooperation, and why did it prove so difficult to support them empirically? In order to find out if and how cooperation can emerge among egoistic agents, Axelrod started off with a game-theoretical model of a certain type of cooperation dilemma, the well-known prisoner's dilemma. Since the one-shot prisoner's dilemma does not offer many strategic opportunities (no rational player will ever cooperate in the one-shot prisoner's dilemma, and any non-rational player who does fares worse than if he or she did not), Axelrod built a simulation based on the repeated prisoner's dilemma. He conducted his famous computer tournaments of the repeated two-player prisoner's dilemma with strategies submitted by many different participants. On top of the computer tournament he built an “evolutionary simulation” that modeled a population-dynamical process among these strategies, using the payoffs they gained in the tournament to calculate their fitness values.[4] Already at this point we may notice that the setup of Axelrod's simulation does not resemble any empirical situation whatsoever. The prisoner's dilemma itself provides a concise abstract description of the essential features of many dilemma situations that occur in reality, but nowhere in this world do we find an arrangement that really corresponds to the computer tournament Axelrod built on top of it. How, then, are we to draw conclusions from the computer tournament with respect to empirical cooperation dilemmas?
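The round-robin mechanics of such a tournament can be sketched in a few lines. The following is a minimal illustration, not a reconstruction of Axelrod's actual tournament: the three strategies and the payoff values (T=5, R=3, P=1, S=0, the standard prisoner's dilemma numbers) are chosen for exposition, and self-play and twin matches, which Axelrod did include, are omitted here.

```python
# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, opp_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return opp_history[-1] if opp_history else 'C'

def always_defect(own_history, opp_history):
    return 'D'

def always_cooperate(own_history, opp_history):
    return 'C'

def play_match(s1, s2, rounds=200):
    """Play a repeated prisoner's dilemma and return the total payoffs."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1 += p1
        score2 += p2
    return score1, score2

def tournament(strategies, rounds=200):
    """Round-robin: every strategy meets every other one once."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = play_match(strategies[a], strategies[b], rounds)
            totals[a] += sa
            totals[b] += sb
    return totals
```

Note that in such a tiny pool the unconditional defector actually comes out ahead of *Tit For Tat*: the outcome depends heavily on which strategies happen to be entered, which already hints at why generalizing from any particular tournament is hazardous.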

The way Axelrod proceeded was to examine the simulation results and
to draw generalizing conclusions from them. This is how Axelrod arrived
at conclusions such as: the strategy *Tit For Tat* is generally
a very good strategy in the repeated prisoner's dilemma; a strategy
should be friendly in the sense that it should not be the first to defect;
a strategy should punish defection but not be too unforgiving; the
evolution of cooperation depends crucially on the continuation of interaction;
and the like (Axelrod 1984, ch. 2, 3).
Unfortunately, subsequent research[5] showed
that none of these conclusions is generally true. It suffices to change
the simulation setup only a little and it pays to be a cheater,
or to be unforgiving (Binmore 1994, p. 194ff.).
And, of course, *Tit For Tat* does not always win the race.
The general finding that cooperative strategies can be successful in
the repeated prisoner's dilemma is as such just a trivial consequence
of the game-theoretical folk theorem (Binmore 1998, p. 313ff.).
And all the other generalizing conclusions Axelrod drew simply were not
warranted.
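How little it takes to upset the results can be seen with a single, well-known modification: letting moves occasionally be flipped by noise (mis-implemented actions). The sketch below, with illustrative parameter values of my own choosing, lets *Tit For Tat* play its twin under such noise; a single slip triggers long echoes of alternating retaliation, so the average payoff falls well below the mutual-cooperation value of 3.

```python
import random

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def noisy_tft_match(rounds=100_000, noise=0.01, seed=0):
    """Average per-round payoff of Tit For Tat against its twin
    when each intended move is flipped with probability `noise`."""
    rng = random.Random(seed)
    last1, last2 = 'C', 'C'   # TFT opens with cooperation
    total = 0
    for _ in range(rounds):
        # Each player copies the other's last (observed) move ...
        m1, m2 = last2, last1
        # ... but noise occasionally flips what is actually played.
        if rng.random() < noise:
            m1 = 'D' if m1 == 'C' else 'C'
        if rng.random() < noise:
            m2 = 'D' if m2 == 'C' else 'C'
        total += PAYOFF[(m1, m2)][0]
        last1, last2 = m1, m2
    return total / rounds
```

Without noise the twins cooperate forever and average exactly 3 per round; with even a one-percent error rate the average drops substantially, because after a slip the players punish each other in alternation until another slip either restores cooperation or locks in mutual defection.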

Nonetheless, Axelrod's pioneering work triggered a multitude of
similar computer simulations of the prisoner's dilemma and other games.
Few of their authors dared to draw conclusions as sweeping as Axelrod's.
Still, regarding their design and the kind of reasoning they rely
on, many of these simulations follow the pattern set by Axelrod's
example. In order to classify this type of simulation, we may speak
of *Axelrod style simulations*.

Generally speaking, *Axelrod style simulations* are computer
simulations that share the following typical features:

- They are constructed from a set of plausible assumptions or on top
of a common mathematical model. In many cases they are derived from
existing Axelrod style simulations by adding new parameters or changing
other boundary conditions. The concrete shape of the model remains
largely arbitrary and at the discretion of the scientist who builds
it.

- They are not related to any particular empirical situation. (And most
certainly there exists no *close fit* to empirical reality in the sense
explained before.) Thus they remain a primarily theoretical endeavor.

- If any conclusions are drawn from the simulation, they are usually drawn by means of inductive generalizations from the simulation results. The simulation is thus used to establish very general points or rules of thumb about its subject matter.

[3] A brief overview of some of the models and simulations of the repeated prisoner's dilemma can also be found in Dugatkin's book “Cooperation among Animals” (Dugatkin 1997, p. 24ff.).

[4] The details are not important here. There exist many descriptions of Axelrod's procedure, the best of which is probably still Axelrod's own book (Axelrod 1984). Simulations of the repeated prisoner's dilemma similar to Axelrod's computer tournament can easily be found on the web. For example: www.eckhartarnold.de/apppages/coopsim.html

[5] See (Binmore 1994), (Binmore 1998) or (Schuessler 1997) for a discussion of some of the subsequent research.