On 5/9/2012 12:09 PM, Quentin Anciaux wrote:

2012/5/9 meekerdb <meeke...@verizon.net>

    On 5/9/2012 11:43 AM, Quentin Anciaux wrote:

    2012/5/9 meekerdb <meeke...@verizon.net>

        On 5/9/2012 2:30 AM, Quentin Anciaux wrote:

        2012/5/9 meekerdb <meeke...@verizon.net>

            On 5/8/2012 4:24 PM, Stathis Papaioannou wrote:
            On Wed, May 9, 2012 at 5:52 AM, John Mikes <jami...@gmail.com> wrote:
            Stathis: what's your definition? - JM

            On Sat, May 5, 2012 at 6:56 PM, Stathis Papaioannou <stath...@gmail.com> wrote:
            On Sat, May 5, 2012 at 10:46 PM, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
            I have started listening to Beginning of Infinity and joined the
            list for the book. Right now there is a discussion there:

            Free will in MWI

            I am at the beginning of the book and I do not know for sure, but
            from the answers in this discussion it seems that, according to
            David Deutsch, one can find free will in MWI.
            One can find or not find free will anywhere depending on how one
            defines it. That is the entire issue with free will.
            My definition: free will is when you're not sure you're going to do
            something until you've done it.

So if I carefully weigh my options and decide on one, it's not free will? I'd say free will is making any choice that is not coerced by another agent.


        It's compatible with what Stathis said... until you've made the actual
        choice, you haven't done it and don't know what it will be... the "do
        something" of Stathis can be: you're not sure what you'll choose until
        you've chosen it.

        Are you saying that one *never* knows what they are going to do until
        they do it?

    You have some knowledge of what you'll do... but you can only really know it
    retrospectively. IOW, you are your fastest simulator... if that were not the
    case, it would be possible to implement a faster algorithm able to predict
    what you'll do before you even do it... that seems paradoxical.

    I don't see anything paradoxical about it.  A computer that duplicated your
    neural network, but used electrical or photonic signals (instead of
    electrochemical ones), would be orders of magnitude faster.

It's paradoxical, because if it could, I could know the outcome; and if I could know the outcome, then I could do something else; and if I do something else, then the simulation of that superphotonic computer is wrong. Hence the hypothesis that it could simulate my choice faster than me is impossible (because if it could, it *must* take into account my future knowledge of my choice; if it does not, it is no faster at simulating what I'll do than I am).

That's an incoherent paradox. You've now assumed not only that your brain is simulated, so your action is known in advance, but also that the simulation's output is fed back to your brain so it influences the action. That changes the problem and essentially creates a brain+simulator = brain'. The fact that brain' =/= brain is hardly paradoxical.

    But this has no effect on the compatibilist idea of free will (the kind of free
    will worth having).

        which then, by Stathis' definition, means that every action is free will and
        coercion is impossible?

    Coercion limits your choices, not your will; you can still choose to die (if the
    choice was between your life and something else, for example). You can always
    choose if you can think; it's not because the only available choices are bad
    that your free will has suddenly disappeared.

    So would it be an unfree will if an external agent directly injected chemicals
    or electrical signals into your brain, thereby causing a choice actually made
    by the external agent?


Why is it still "you" if your brain is hooked up to something that allows an external agent to control your body?

    How is this different from an external agent directly injecting information via
    your senses, thereby causing a choice actually made by the agent?

In the first case *you* choose, in the second case you don't.

?? That's the reverse of your previous post in which you held that an external agent threatening you does not remove your free will. You said it just limited your choices, you still chose. Did you read my post correctly?


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.