On Mon, Nov 22, 2010 at 11:40 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 21 Nov 2010, at 19:47, Rex Allen wrote:
>> On Fri, Nov 19, 2010 at 8:32 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> But your reasoning does not apply to free will in the sense I gave: the
>>> ability to choose among alternatives that *I* cannot predict in advance
>>> (so that *from my personal perspective* it is not entirely due to reason
>>> nor due to randomness).
>> So that is a good description of the subjective feeling of free will.
> I was not describing the subjective feeling of free will, which is another
> matter, and which may accompany or not the experience of free will.
> Free-will is the ability to choose among alternatives that *I* cannot
> genuinely predict in advance so that reason fails, and yet it is not random.

The ability to choose among unpredictable alternatives?  What???

In no way does “ability to choose from unpredictable alternatives”
match my conception of free will.  Nor would many among the general
populace agree.

You’re just redefining “free will” in a way that allows you to claim
that it exists, but which bears little relation to the original concept.

In a deterministic universe, there are no alternatives.  Things can
only unfold one way.  Our being unable to predict that unfolding is
neither here nor there.

Again, ignorance is not free will.  Ignorance is just ignorance.

>> But if you question most people closely, this isn't what they mean by
>> “free will”.
> You have interpreted too quickly what I was describing. Free-will as I
> define it is not the subjective feeling of having free-will. It is really
> due to the fact that the choice I will make is not based on reason, nor on
> randomness from my (real) perspective (which exists).

I didn’t say that the options were choices based on “reason or randomness.”

I said:

“Either there is *a reason* for what I choose to do, or there isn't.”

By “a reason” I mean “a cause”.

I don’t mean “reason” in the sense of rationality.

> Subjective does not mean nonexistent. Free-will is subjective or, better,
> subject-related, but it exists and has observable consequences, like
> purposeful murdering, existence of jails, etc. It is the root of moral
> consciousness, or conscience.

How does my inability to predict my choices or alternatives in advance
serve as the root for moral conscience?

>> They mean the ability to make choices that aren't random, but which
>> also aren't caused.
> And this becomes, with the approach I gave: "the ability to make choices
> that aren't random, but for which they have to ignore the cause". And I
> insist: they might even ignore that they ignore the cause. They will say
> "because I want to do that" or things like that.

The vast majority of the populace certainly does not equate free will
with ignorance of causes.

> I disagree that many people would accept your definition, because it would
> entail (even for religious rationalist believers) that free-will does not
> exist, and the debate would have been closed long ago.

If you ask “most people”, they will not agree that human choice is
random, and they will not agree that human choice can be explained by
causal forces.

Rather, they claim that human choice is something not random *and* not
caused.  Though they can’t get any more specific than that.

The debate isn’t settled because they won’t admit that there is no
third option.  They feel free, therefore they *believe* that they must
actually be free.  Free from randomness and free from causal forces.

“I feel free, therefore I must be free.”

That reasoning is what keeps the free will debate alive.

>> They have the further belief that since the choices aren't random or
>> caused, the chooser bears ultimate responsibility for them.
> They are right. That is what the materialist eliminativist will deny, and
> eventually that is why they will deny any meaning to notion like "person",
> free-will, responsibility or even "consciousness".

How does ignorance of what choice you will make lead to ultimate
responsibility for that choice?

I deny the possibility of ultimate responsibility, and I’m not an
eliminative materialist.

But I also deny that mechanism can account for consciousness (except
by fiat declaration that it does).

As to “person”, I take a deflationary view of the term.  There’s less
to it than meets the eye.

>> This further belief doesn't seem to follow from any particular chain
>> of reasoning.  It's just another belief that this kind of person has.
> Because as a person she is conscious and feels a reasonable amount of sense
> of responsibility, which is genuine and legitimate from her first person
> perspective (and from the perspective of machine having a similar level of
> complexity).

This comes back to my earlier point.  She “feels” a sense of
responsibility and therefore believes that she is genuinely and
legitimately responsible.

But the fact that she feels responsibility in no way means that she
actually is responsible.

A further mechanism would have to be provided other than her feelings
for me to believe that she actually was ultimately responsible.

And I’ve never heard of such a mechanism, and I don’t buy your
ignorance-based explanation in the least.

>> Silly, I know.
> It is not silly at all. That is why mechanism is not a reductionism, and
> eventually "saves" the notion of person. That is why consciousness, even if
> matter exists in some fundamental way, is not an epiphenomenon.

Aren’t you making consciousness an epiphenomenon of the digital machine?

>>> When you say "random or not random", you are applying the law of
>>> excluded middle which, although arguably true ontically, is provably wrong for
>>> most personal points of view.  We have p v ~p, but this does not
>>> entail Bp v B~p,  for B used for almost any hypostasis (points of view).
>> I'd think that ontically is what matters in this particular case?
> I don't see why. A murderer remains a murderer independently of the ontic
> level, be it particles, waves, fields, or number relations.

Murder is just a category you’ve made up for your own convenience.  It
has no ontological status separate from you.

Either there’s a reason for the killing act, or there isn’t.  If
there’s a reason, then it was an unavoidable consequence of reality’s
causal structure.

If there isn’t, it’s just a random event and nothing further can be
said about it.

>> Why would I care about whether or why I or anyone else *seem* to have
>> free will from their personal points of view?
> They do *have* free-will. They genuinely make decisions which cannot be
> attributed to reason or randomness, from their point of view and from the
> points of view of any machines having a similar complexity.

Faux will.  Fake will.

If their decision has no cause, it’s random.

If their decision is not random, then it has a cause.

I see no third option.  And your “inability to predict” theory seems
to me to be completely irrelevant to the issue.

Even probabilistic laws are a form of causation.  In such cases the
course of events is genuinely unpredictable (within limits), but this
unpredictability still doesn't amount to free will.

If you shuffle a deck of cards using some irreducibly random method,
and then turn the first card face up - I genuinely can't predict which
particular card it will be.  But I can predict with certainty that the
card will belong to one of the 4 suits and one of the 13 ranks.  No
matter how randomly the deck is shuffled, the card won't come up some
unknown 5th suit.  The randomness, though irreducible, is still
constrained.  It's only random to the extent it isn't determined.
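The card analogy can be sketched in a few lines of Python (the suit and rank names here are purely illustrative):

```python
import random

# A standard 52-card deck: every card belongs to one of 4 suits
# and one of 13 ranks, no matter how the deck is ordered.
suits = ["hearts", "diamonds", "clubs", "spades"]
ranks = ["A", "2", "3", "4", "5", "6", "7",
         "8", "9", "10", "J", "Q", "K"]
deck = [(rank, suit) for suit in suits for rank in ranks]

random.shuffle(deck)  # an unpredictable ordering
top = deck[0]         # which card this is cannot be predicted...

# ...but its suit and rank are certain to lie in the fixed sets:
assert top[0] in ranks
assert top[1] in suits
```

However the shuffle turns out, the assertions never fail: the randomness is irreducible but still constrained by the structure of the deck.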

> If you believe that the fact that the action was determinable in principle
> by some very powerful computer prevents real free-will, then you might say
> that consciousness is also an illusion and you will be led to eliminativism.

Consciousness stands or falls independently of free will.

I see no reason at all to say that consciousness requires free will.  None.

The feeling of free will is an aspect of conscious experience.  But
conscious experience has no dependency on free will.

> A machine will seem to have consciousness, but will not have genuine
> consciousness, with such confusion. There is genuine free-will, because the
> ignorance of the machine is real and genuine, independently of the fact that
> the machine believes in free-will or not, or seems to have free-will or not.
> Such an ignorance cannot be eliminated by adding knowledge to the machine,
> without transforming it into a new and different machine which will still be
> ignorant about herself at another level.

I still don’t see any connection between ignorance and free will.

I also don’t see any connection between ignorance and responsibility.
“You don’t know what motivates your actions...therefore you are
ultimately responsible for them.”


I believe that you should find a new term for your “ability to choose
among alternatives that *I* cannot genuinely predict in advance.”

Because that’s not free will, and by claiming that it is you’re just
muddying the water.

“Ignorant Will”.  That’s catchy.  Use that.
