Can the Best-Alternative-Justification solve Hume's Problem?
On the Limits of a Promising Approach

Eckhart Arnold

1 Introduction
2 Schurz' basic approach
3 Schurz' central results
4 Limitations of Schurz' approach: Confinement to the finite
    4.1 Why Schurz' approach cannot be extended to the infinite case
    4.2 A sidenote: Limitations of “one favorite” meta-inductivists
5 Open Questions
6 Conclusion
Bibliography

4.2 A sidenote: Limitations of “one favorite” meta-inductivists

Just how difficult it is to design a meta-inductivist that covers all possible or at least all desirable scenarios becomes apparent when considering a limitation of Schurz' avoidance meta-inductivist, which is the most universal type in a series of “one-favorite meta-inductivists” that Schurz develops in sections four to six of his article.

As Schurz proves mathematically (his theorem 3), the avoidance meta-inductivist aMI ε-approximates the maximal success of the non-deceiving alternative predictors. However, this proof does not cover all strategies that we might intuitively consider as non-deceivers. For example, a strategy that starts as a deceiver and switches to a non-deceiving clairvoyant prediction algorithm only later in the game (after it has been classified as a deceiver by aMI) will remain classified as a deceiver by aMI. Intuitively, though, we would probably not consider it a deceiver any more after it has switched to a non-deceiving clairvoyant algorithm. Further below it will be demonstrated that this can even happen accidentally to a predictor that never deceives (in an intuitive sense).
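This persistence effect can be illustrated with a toy simulation. The following Python sketch is a deliberate simplification, not Schurz's construction: the deception criterion is reconstructed here as the unconditional success rate exceeding the favored-rounds success rate by at least a threshold epsilon, and favoring simply alternates rounds (a stand-in for aMI's switching between several candidate players) until the player counts as a deceiver:

```python
class ToyMetaInductivist:
    """Hypothetical simplification of an avoidance meta-inductivist,
    tracking the record of a single player."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.rounds = self.hits = 0          # all rounds
        self.fav_rounds = self.fav_hits = 0  # favored rounds only

    def is_deceiver(self):
        if self.fav_rounds == 0:
            return False  # never favored: conditional rate undefined
        unconditional = self.hits / self.rounds
        conditional = self.fav_hits / self.fav_rounds  # frozen while unfavored
        return unconditional - conditional >= self.epsilon

    def play_round(self, round_no, player):
        # Favor on alternate rounds, but never favor a recorded deceiver.
        favored = (round_no % 2 == 0) and not self.is_deceiver()
        correct = player(favored)
        self.rounds += 1
        self.hits += correct
        if favored:
            self.fav_rounds += 1
            self.fav_hits += correct


def switching_player(switch_at):
    """Deceives for `switch_at` rounds (fails exactly when favored),
    then predicts correctly in every round."""
    state = {"n": 0}

    def player(favored):
        state["n"] += 1
        if state["n"] <= switch_at:
            return not favored  # deceiving phase
        return True             # honest phase
    return player
```

Running, say, 200 rounds with `switch_at=20` leaves the player stigmatized: its conditional success rate was frozen at the moment it stopped being favored, so its later flawless record never clears its name.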

This limitation is a consequence of the fact that Schurz' definition of “deception” is purely extensional. It is based on the predictor's overt behaviour and not on the deceptive or non-deceptive algorithm the predictor uses: “A non-MI-player P (and the strategy played by P) is said to deceive aMI (or to be a deceiver) at time n iff suc_n(P) − suc_n(P|aMI) ≥ ε” (Schurz 2008, p. 293), where ε is the “deception-threshold” and suc_n(P|aMI) is P's conditional success-rate when aMI has P as a favorite. So, contrary to what we might intuitively think, it is not a necessary condition for being a deceiver to base one's predictions on what favorites the meta-inductivists have.
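The extensional character of the definition can be made explicit in a few lines of Python. The classifier below is a hypothetical reconstruction, not Schurz's formal apparatus: it assumes the criterion suc_n(P) − suc_n(P|aMI) ≥ ε and sees nothing but a record of the player's overt behaviour, i.e. which predictions were correct and in which rounds the player was aMI's favorite:

```python
def success_rate(hits):
    """Fraction of correct predictions; 0.0 for an empty record."""
    return sum(hits) / len(hits) if hits else 0.0


def is_deceiver(record, epsilon=0.1):
    """Reconstructed extensional deception test: the unconditional
    success rate exceeds the success rate in favored rounds by at
    least epsilon.  `record` is a list of (correct, favored) pairs;
    the player's algorithm never enters into the verdict."""
    favored_hits = [c for c, f in record if f]
    if not favored_hits:
        return False  # never favored: conditional rate undefined
    unconditional = success_rate([c for c, _ in record])
    return unconditional - success_rate(favored_hits) >= epsilon
```

A player that predicts well overall but fails exactly when it is favored counts as a deceiver, whatever algorithm lies behind this behaviour.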

As Schurz himself notices, “even an object-strategy (such as OI) may become a deceiver, namely, when a demonic stream of events deceives the object-strategy” (Schurz 2008, p. 293). This is of course to be understood in terms of Schurz' previous definition of deception, because the algorithm that, say, OI uses is the same as in a non-demonic world and would intuitively not be considered deceptive.

But then there is a finite probability that an OI will be classified as a deceiver by aMI even though the stream of world events is not demonic in the sense that the events are computed from the predictions made by the predictors. For there is a finite probability that a random stream of world events accidentally mimics a demonic stream of world events up to round n, so that OI appears as a deceiver up to round n. If n is sufficiently large, then aMI classifies OI as a deceiver. And it will only reevaluate OI's status if OI lowers its unconditional success rate. “For a player P who is recorded as a deceiver will be 'stigmatized' by aMI as a deceiver as long as P does not decrease his unconditional success rate (since P's aMI-conditional success rate is frozen as long as aMI does not favor P)” (Schurz 2008, p. 295). As OI's success rate reflects the frequency of random world events in the binary prediction game, it is unlikely that OI significantly lowers its success rate at a later stage in the game.
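How large this “finite probability” is can be estimated with a simple binomial model. The sketch below is a back-of-the-envelope calculation under assumed parameters, not part of Schurz's argument: it asks how likely it is that a predictor whose true success probability in a random binary stream is p scores at most p − ε over k favored rounds by sheer bad luck, thereby satisfying the reconstructed extensional deception criterion:

```python
from math import comb


def prob_accidental_deceiver(k, p=0.5, epsilon=0.1):
    """Probability that a non-deceptive predictor with success
    probability p looks like a deceiver after k favored rounds,
    i.e. scores at most (p - epsilon) * k correct predictions.
    (Binomial toy model; the unconditional rate is idealized as p.)"""
    threshold = int((p - epsilon) * k)
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(threshold + 1))
```

For k = 20 favored rounds with p = 0.5 and ε = 0.1 this comes to roughly 0.25, so accidental stigmatization is far from a remote possibility during short favored phases; the probability shrinks as k grows, but aMI may well stop favoring a player long before that.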

In such a situation aMI would fail to ε-approximate the maximal success of OI, even though no deception was ever intended and the conditions under which this situation can occur are completely natural (i.e. a non-demonic world, no supernatural abilities like clairvoyance, etc.). Thus, if aMI can fail to be optimal even with respect to OI under completely natural circumstances, the optimality result concerning the performance of aMI with regard to all non-deceivers may not quite deliver what we expect. For example, we cannot say that aMI is optimal save for demonic conditions or deception, if deception is understood in an intuitive sense as described above.
