On 19 Apr 2011, at 07:38, Rex Allen wrote:

On Tue, Apr 19, 2011 at 1:24 AM, Rex Allen <rexallen31...@gmail.com> wrote:
On Mon, Apr 18, 2011 at 12:24 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:

The fact that you say that compatibilist free will is "faux will" or worse "subjective will" means that you *do* believe in incompatibilist free will.

Ah, I see what you're saying.

I've mentioned this before.  I think that libertarians are referring
to *something* when they use "free will".  It's just something that
doesn't exist.  Like unicorns, or the biblical Triune God.

Yeah, but we agree. That's the incompatibilist free-will.

Since then, Artificial Intelligence research has been born, and Mechanist theories have gone through the Gödel and Church-Turing "revolutions". Now many are open to the idea that machines can be conscious, and it is not far-fetched to defend the idea that they can have a sort of free will similar to our own.

A clever computer is a computer which takes its time before choosing its user.


They are referring to an imaginary ability to make decisions that are
neither caused nor random -

Yes, that is close to nonsense. Yet the compatibilist notion explains why we feel it to be so, and why it *is* so, from the machine's point of view. I have no free will ... in the eyes of God, but then I cannot look through the eyes of God (without salvia, say :)



but instead are something else, something
that can't be clearly conceived of or described but which somehow
gives them ultimate responsibility for their actions.

OK, but responsibility is never ultimate.

If you build a nuclear plant in a place with frequent earthquakes, that is stupidity, akin to irresponsibility. It is bad, but then we learn. But if you build it knowing the risk, and hiding it, then you have a responsibility. If you do it again, you have full responsibility.

The bad is not so bad.
What is bad is the bad, and again the bad, and again the bad, and again the bad, and again, ... without ever listening to the other.
That happens.




It isn't a coherent concept, or rational...but that's people for you.

When a concept appears to be inconsistent, I try to reshape it with minimal changes so as to make it consistent again.

Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.