On 16 Aug 2012, at 09:12, Russell Standish wrote:

On Wed, Aug 15, 2012 at 12:15:59PM +0200, Bruno Marchal wrote:

On 15 Aug 2012, at 10:12, Russell Standish wrote:

On Tue, Aug 14, 2012 at 01:01:10PM +0200, Bruno Marchal wrote:

On 14 Aug 2012, at 12:30, Russell Standish wrote:

Assuming the coin is operating inside the agent's body? Why
would that
be considered non-free?

In what sense would the choice be mine if it is random?

It is mine if the random generator is part of me. It is not mine if
the generator is outside of me (eg flipping the coin).

I don't see this. Why would the generator being part of you make it
your choice? You might define "me" and "part of me" first. It is

The self-other distinction is a vital part of consciousness. I don't
think precise definitions of this are needed for this discussion.

not clear if you are using the usual computer science notion of me,
or not, but I would say that if the root of the choice is a random
oracle, then the random oracle makes the choice for me. It does not
matter if the coin is inside or outside my brain, which is a local,
non-absolute notion.

My brain makes a choice, therefore it is my choice. My boss orders me
to do something; it's not really my choice (unless I decide to disobey).

Why would this be any different with random number generators? A coin
flips, and I do something based on the outcome. It is not my choice
(except insofar as I chose to follow an external random event). My
brain makes a random choice based on the chaotic amplification of
synaptic noise. This is still my brain and my choice.

So you identify yourself with a brain, like Searle. With comp I would say that only a person makes choices; the solid material brain is already a construct from an infinity of random choices, but none of them can be said to be mine. Likewise, if I found myself in Moscow instead of Washington after a WM-duplication, I could not say that I had chosen to be in Moscow.

It is like
letting someone else take the decision for you. I really don't see
how randomness is related to free will (the compatibilist one).

Compatibilism, ISTM, is the solution to a non-problem: how to reconcile
free will with a deterministic universe.

The very idea that we have to reconcile free-will with determinism
seems to be a red herring to me.

Agreed. But that is what all the fuss seems to be about. I try not to
engage with it, as it is so century-before-the-last.

I can agree with this. Still, I do like to debunk invalid conceptions of it.

It is a non-problem, because
the universe is not deterministic. (The multiverse is deterministic,
of course, but that's another story).

But then you have to reconcile free will with indeterminacy, and
that does not make much sense.
I don't think free will (as I defined it, of course) has anything to
do with determinacy or indeterminacy. The fact that someone else can
predict my behavior does not make it less "free".

Um, yes it does.

Why would I be less free to eat blueberries if everybody can predict that I will eat them?

You did not reply to my question: take the iterated
WM self-duplication. All the resulting people live the experience
of a random oracle. Why would they be more free than someone
outside the duplication boxes? How could they use that random oracle
to be more free than someone not using it, given that they cannot
select the outcome?

In the setup of your teleporters, the source of randomness comes from
outside of the person, so no, that doesn't have anything to do with free
will. But if you move the source of randomness to the inside somehow, then
sure, it might.

I don't see what inside and outside have to do with the fact that a choice can't be helped by a random coin. A choice is driven by many factors: my personality, my culture, my life, my current appetite, and thousands of other parameters.

It looks like you do defend the "old" notion of free will, which
basically assumes non-comp. Using first person indeterminacy can't
help, imo, but if you have an idea you can elaborate.

I'm not sure what this "old" notion of free will is, but if it
involves immaterial spirits, substance dualism and the like, then
definitely not.

OK. Me too.

I don't see how my form of free will is non-comp.

With comp everything is deterministic from the 3p view, just as arithmetical truth is definite. Then from the 1-view, there are mainly two types of indeterminacy. The first is due to self-multiplication in UD* (alias arithmetical truth), which, as you agree above, can't play a role in free will. The second is the self-indeterminacy based on Turing, which is the one playing a role in free will. But in both cases, there is no indeterminacy in the big picture. If free will necessitated a real 3p free will, comp would be false, as we cannot Turing-emulate it. The QM indeterminacy cannot work here, as it is a self-multiplication like in the first person indeterminacy.

By contrast, your
UD argument seems to argue for its necessary appearance.


Someone asked why this concept is important. It isn't for me, per se,
but I would imagine that someone implementing an agent that must
survive in a messy real world environment (eg an autonomous robot)
will need to consider this issue, and build something like it into
their robot.

Probabilistic algorithms can be more efficient, and can solve problems that deterministic algorithms cannot, but in most cases you can use pseudo-random ones. And if consciousness and free will necessitate a real 3p indeterminacy, then comp is violated, as this cannot be Turing-emulated.
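To illustrate the point about pseudo-randomness (a minimal sketch of my own, in Python, not something from the thread): the Miller-Rabin primality test is a classic probabilistic algorithm, yet when its random witnesses are drawn from a seeded pseudo-random generator the whole procedure is fully deterministic and trivially Turing-emulable, while behaving in practice just as if it used a "real" random oracle.

```python
import random

def is_probably_prime(n, k=20, seed=42):
    """Miller-Rabin primality test. The 'random' witnesses come from
    a seeded PRNG, so every run is deterministic and reproducible."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    rng = random.Random(seed)  # pseudo-random: no 3p indeterminacy at all
    for _ in range(k):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # composite with probability < 4**-k
```

The function name and seed are my own choices for the sketch; the point is only that "probabilistic" here names a proof technique, not a metaphysical commitment to genuine randomness.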




You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.