On 5/9/2012 11:43 AM, Quentin Anciaux wrote:

2012/5/9 meekerdb <meeke...@verizon.net>

    On 5/9/2012 2:30 AM, Quentin Anciaux wrote:

    2012/5/9 meekerdb <meeke...@verizon.net>

        On 5/8/2012 4:24 PM, Stathis Papaioannou wrote:
        On Wed, May 9, 2012 at 5:52 AM, John Mikes <jami...@gmail.com> wrote:
        Stathis: what's your definition? - JM

        On Sat, May 5, 2012 at 6:56 PM, Stathis Papaioannou <stath...@gmail.com> wrote:
        On Sat, May 5, 2012 at 10:46 PM, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
        I have started listening to Beginning of Infinity and joined the
        list for the book. Right now there is a discussion there

        Free will in MWI

        I am at the beginning of the book and I do not know for sure, but from
        answers to this discussion it seems that according to David Deutsch one
        finds free will in MWI.
        One can find or not find free will anywhere depending on how one
        defines it. That is the entire issue with free will.
        My definition: free will is when you're not sure you're going to do
        something until you've done it.

        So if I carefully weigh my options and decide on one, it's not free will?
        I'd say free will is making any choice that is not coerced by another agent.


    It's compatible with what Stathis said... until you've made the actual
    choice, you didn't do it and didn't know what it would be. Stathis's "do
    something" can be read as: you're not sure what you'll choose until you've
    chosen it.

    Are you saying that one *never* knows what they are going to do until they 
do it...

You have some knowledge of what you'll do, but you can only really "know" retrospectively. In other words, you are your own fastest simulator; if that were not the case, it would be possible to implement a faster algorithm able to predict what you'll do before you even do it, which seems paradoxical.

I don't see anything paradoxical about it. A computer that duplicated your brain's neural network, but used electrical or photonic signals (instead of electrochemical ones), would be orders of magnitude faster. But this has no effect on the compatibilist idea of free will (the kind of free will worth having).

    which then by Stathis's definition means that every action is free will and
    coercion is...

Coercion limits your choices, not your will; you can still choose to die (if the choice was between your life and something else, for example). You can always choose if you can think; it's not because the only available choices are bad that your free will suddenly disappears.

So would it be an unfree will if an external agent directly injected chemicals or electrical signals into your brain thereby causing a choice actually made by the external agent?

How is this different from an external agent directly injecting information via your senses, thereby causing a choice actually made by the agent?

