I think it's pretty obvious that you can't predict someone's decisions if 
you show them the prediction before they make their final choice. So let's 
consider a different flavor of prediction. Suppose every time you make a 
choice, I can predict the decision, write it down before you act, and 
then show it to you afterwards. Neither the infinite recursion argument 
nor the no-fixed-point argument works against this type of prediction. If 
this is actually possible, what would that imply for free will? 
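
As a toy illustration of the no-fixed-point argument, here is a minimal 
Python sketch (the agent and its two options are hypothetical). An agent 
that always picks the opposite of whatever prediction it is shown has no 
fixed point: no shown-in-advance prediction can come out true.

    def contrarian_agent(shown_prediction):
        # The agent sees the prediction first, then defeats it.
        return "B" if shown_prediction == "A" else "A"

    # Whatever we show, the agent falsifies it:
    for prediction in ("A", "B"):
        choice = contrarian_agent(prediction)
        print(prediction, "->", choice, "| correct:", prediction == choice)
    # Both lines print "correct: False": the map from shown prediction
    # to actual choice has no fixed point.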

If you are an AI, this would be fairly easy to do: I'd just make a copy 
of you, run the copy until it makes a decision, and then use that decision 
as the "prediction". But in this case I'm not able to predict the decision 
of the copy itself, unless I make yet another copy and run that one first.
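
Here is a minimal sketch of that copy-and-run scheme, assuming a toy 
deterministic agent (the class and its decision rule are made up for 
illustration):

    import copy

    class DeterministicAgent:
        # Stand-in for an AI whose next decision is a deterministic
        # function of its internal state.
        def __init__(self, seed):
            self.state = seed

        def decide(self):
            self.state = (self.state * 1103515245 + 12345) % 2**31
            return "A" if self.state % 2 == 0 else "B"

    original = DeterministicAgent(seed=42)
    replica = copy.deepcopy(original)   # exact copy, same state

    prediction = replica.decide()       # run the copy first...
    actual = original.decide()          # ...then the original chooses
    assert prediction == actual         # the "prediction" always matches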

The point is that algorithms have minimal run-time complexities. Many 
algorithms have no faster equivalents; the only way to find out their 
results is to actually run them. If you came up with an algorithm that 
could predict someone's decisions with complete accuracy, it would 
probably have to duplicate that person's thought processes exactly, 
perhaps not at the microscopic level, but probably at a level that still 
gives rise to the same conscious experiences. So there is nothing to rule 
out that the prediction algorithm itself has free will. Given that the 
subject of the prediction and the prediction algorithm can't distinguish 
themselves from each other by their subjective experiences alone, they can 
both identify with the prediction algorithm and consider themselves to 
have free will. So you can have free will even if someone is able to 
predict your actions.
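
To make the irreducibility point concrete, here is a minimal sketch using 
an iterated map for which no closed-form shortcut is known (the map and 
all names are illustrative stand-ins). The only known way to "predict" the 
output is to run the very same computation, step for step:

    def decision_process(seed, steps):
        # Stand-in for a thought process: each step depends on the last.
        x = seed
        for _ in range(steps):
            x = (x * x + 1) % 2147483647   # no known faster equivalent
        return "A" if x % 2 == 0 else "B"

    def predictor(seed, steps):
        # The predictor has no shortcut; it just re-runs the process,
        # so it instantiates the same computation it is predicting.
        return decision_process(seed, steps)

    print(predictor(7, 10**6) == decision_process(7, 10**6))  # True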

The more obvious fact that you can't predict your own actions really has
less to do with free will, and more to do with the importance of the lack
of logical omniscience in decision theory. Classical decision theory
basically contradicts itself by assuming logical omniscience: in a
deterministic universe you already know that only one choice is logically
possible at any given time, and with logical omniscience you would know
exactly which one, so there would be no decision left to make. But logical
omniscience is itself logically impossible, because of the
infinite-recursion and no-fixed-point problems above. That's why it's
great to see a decision theory that does not assume logical omniscience.
So please read that paper (referenced in the first post in this thread)
if you haven't already.
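
For a rough feel of why logical omniscience collapses into infinite 
recursion and missing fixed points, here is a diagonalization sketch (the 
oracle interface is hypothetical). Any purported predictor of all agents 
is refuted by an agent that consults it and does the opposite:

    def diagonal_agent(predict):
        # Ask the oracle what I will choose, then choose the opposite.
        forecast = predict(diagonal_agent)
        return "B" if forecast == "A" else "A"

    def candidate_oracle(agent):
        # Any fixed or computed answer fails the same way; an oracle
        # that tries to simulate diagonal_agent(itself) instead
        # recurses forever.
        return "A"

    print(diagonal_agent(candidate_oracle))  # "B": the forecast was wrong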
