You will find them by clicking on "publications" on my home page
http://iridia.ulb.ac.be/~marchal/
The main one is Informatique théorique et philosophie de
l'esprit (theoretical computer science and philosophy of mind),
Toulouse, 1988. Like my thesis, it is something I was asked to write in French (alas).
On 18 Mar 2010, at 23:04, Stathis Papaioannou wrote:
On 19 March 2010 04:01, Brent Meeker meeke...@dslextreme.com wrote:
On 3/17/2010 11:01 PM, Stathis Papaioannou wrote:
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:
Is it coherent to say a black box accidentally
William,
On 18 Mar 2010, at 18:06, L.W. Sterritt wrote:
Bruno and others,
Perhaps more progress can be made by avoiding self-referential
problems and viewing this issue mechanistically.
I don't see what self-referential problems you are alluding to,
especially when viewing the issue
Bruno,
Your response is most appreciated. Your publications will keep me busy
for a while. You also mentioned earlier some of your publications that
are not on your URL. That reference has gone missing in my
labyrinthine filing system. Would you please post those references
again?
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:
Is it coherent to say a black box accidentally reproduces the I/O? It is
over some relatively small number of I/Os, but over a large enough number
and range to sustain human behavior - that seems very doubtful. One would
On 18 Mar 2010, at 07:01, Stathis Papaioannou wrote:
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:
Is it coherent to say a black box accidentally reproduces the I/O? It is
over some relatively small number of I/Os, but over a large enough number
and range to
On 17 Mar 2010, at 18:34, Brent Meeker wrote:
On 3/17/2010 3:34 AM, Stathis Papaioannou wrote:
On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:
I think this is a dubious argument based on our lack of understanding of
qualia. Presumably one has many thoughts that do
On 17 Mar 2010, at 18:50, Brent Meeker wrote:
On 3/17/2010 5:47 AM, HZ wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not?
On 17 Mar 2010, at 19:12, Brent Meeker wrote:
On 3/17/2010 10:01 AM, Bruno Marchal wrote:
On 17 Mar 2010, at 13:47, HZ wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does what
On 3/17/2010 11:01 PM, Stathis Papaioannou wrote:
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:
Is it coherent to say a black box accidentally reproduces the I/O? It is
over some relatively small number of I/Os, but over a large enough number
and range to sustain
Bruno and others,
Perhaps more progress can be made by avoiding self-referential
problems and viewing this issue mechanistically. Where I start: Haim
Sompolinsky, Statistical Mechanics of Neural Networks, Physics Today
(December 1988). He discussed emergent computational properties of
On 3/18/2010 10:06 AM, L.W. Sterritt wrote:
Bruno and others,
Perhaps more progress can be made by avoiding self-referential
problems and viewing this issue mechanistically. Where I start: Haim
Sompolinsky, Statistical Mechanics of Neural Networks, Physics
Today (December 1988). He
On 18 March 2010 17:06, L.W. Sterritt lannysterr...@comcast.net wrote:
Perhaps more progress can be made by avoiding self-referential problems and
viewing this issue mechanistically.
Undoubtedly.
I guess I'm in the QM camp
that believes that what you can measure is what you can know.
But
David,
I think that I have to agree with your comments. I do think that we
will learn something from the quest for conscious machines, perhaps
not what we had in mind.
Lanny
On Mar 18, 2010, at 10:45 AM, David Nyman wrote:
On 18 March 2010 17:06, L.W. Sterritt lannysterr...@comcast.net
Brent,
There are some quite interesting observations in the paper by Koch and
Tononi, e.g.
Remarkably, consciousness does not seem to require many of the things
we associate most deeply with being human: emotions, memory,
self-reflection, language, sensing the world and acting in it...
On 3/18/2010 12:03 PM, L.W. Sterritt wrote:
Brent,
There are some quite interesting observations in the paper by Koch and
Tononi, e.g.
Remarkably, consciousness does not seem to require many of the things
we associate most deeply with being human: emotions, memory,
self-reflection,
Brent,
This link should work. IEEE sometimes makes their articles available
to non-members and non-subscribers:
http://spectrum.ieee.org/biomedical/imaging/can-machines-be-conscious/3
If this does not work, please let me know and I'll find another path
to the article. I could also go
Brent,
I notice that the link that I forwarded opens on the 3rd page; just
select view all, toward the upper right of the page.
This brief article on consciousness as integrated information may also
be interesting:
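As an aside on that framing: "integrated information" is, very roughly, about how much the state of the whole system tells you beyond what its parts tell you separately. The following is only a toy sketch of that intuition, not Tononi's actual phi; it crudely stands in for integration with the mutual information between two halves of a tiny binary system, and the function name and example distributions are made up for illustration.

import math
from collections import Counter

def toy_integration(samples):
    # samples: list of (a, b) pairs, the joint states of two halves of a tiny system
    n = len(samples)
    p_ab = Counter(samples)
    p_a = Counter(a for a, _ in samples)
    p_b = Counter(b for _, b in samples)
    mi = 0.0
    for (a, b), count in p_ab.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint / ((p_a[a] / n) * (p_b[b] / n)))
    return mi  # bits the whole carries beyond its parts taken separately

# Halves that always mirror each other are maximally "integrated" (1 bit);
# halves that vary independently carry about 0 bits.
print(toy_integration([(0, 0), (1, 1)] * 50))                  # ~1.0
print(toy_integration([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # ~0.0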
Thanks. I got it.
Some assertions seem dubious:
Primal emotions like anger, fear, surprise, and joy are useful and
perhaps even essential for the survival of a conscious organism.
Likewise, a conscious machine might rely on emotions to make choices and
deal with the complexities of the
On 19 March 2010 04:01, Brent Meeker meeke...@dslextreme.com wrote:
On 3/17/2010 11:01 PM, Stathis Papaioannou wrote:
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:
Is it coherent to say a black box accidentally reproduces the I/O? It is
over some relatively small
On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:
I think this is a dubious argument based on our lack of understanding of
qualia. Presumably one has many thoughts that do not result in any overt
action. So if I lost a few neurons (which I do continuously) it might mean
On 17 March 2010 06:09, John Mikes jami...@gmail.com wrote:
Stathis,
I feel we are riding the human restrictive imaging in a complex nature.
While I DO feel completely comfortable to say that there is a neuron through
which connectivity is established to a next segment in our mental
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not? But more importantly, are there known cases of zombies? Perhaps a
silly question
On 17 March 2010 23:47, HZ hzen...@gmail.com wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not? But more importantly, are
On 17 Mar 2010, at 13:47, HZ wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not? But more importantly, are there known cases of
On 3/17/2010 3:34 AM, Stathis Papaioannou wrote:
On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:
I think this is a dubious argument based on our lack of understanding of
qualia. Presumably one has many thoughts that do not result in any overt
action. So if I lost a
On 3/17/2010 5:47 AM, HZ wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not? But more importantly, are there known cases of
On 3/17/2010 10:01 AM, Bruno Marchal wrote:
On 17 Mar 2010, at 13:47, HZ wrote:
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not?
Brent:
why do you believe IN *QUALIA*? They are just as human assumptions (in our
belief system) as *VALUE* (or, for that matter: to take seriously your
short (long?) term memories).
A *ZOMBIE* is the subject of a thought experiment in our humanly
aggrandizing anthropocentric boasting. A dog?
On 3/17/2010 11:39 AM, John Mikes wrote:
Brent:
why do you believe IN *QUALIA*? They are just as human assumptions
(in our belief system) as *VALUE* (or, for that matter: to take
seriously your short (long?) term memories).
I don't believe *IN* anything. They are just something that
Hi Gentlemen,
I start out with the bias that the brain, as a neural network with
~10^11 neurons, given the exogenous and endogenous inputs presented to
it, continuously computes our perception of the world around us.
Some neuroscientists suggest that each neuron in the brain is
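To make the "network continuously computes a percept" picture concrete, here is a minimal sketch in the spirit of the attractor networks Sompolinsky analyzes. It is a toy Hopfield-style net (the size, random patterns, and update schedule are assumptions for illustration, nothing anatomical): a few patterns are stored in Hebbian weights, and the collective dynamics recover a stored pattern from a corrupted input, even when many units start out disturbed.

import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # toy network, not 10^11 neurons
patterns = rng.choice([-1, 1], size=(3, N))  # three stored "memories"

# Hebbian weight matrix with zero self-coupling
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Asynchronous threshold updates drive the state toward a stored attractor."""
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt 20% of a stored pattern, then let the dynamics clean it up.
noisy = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
print(np.mean(recall(noisy) == patterns[0]))   # typically 1.0: the pattern is recovered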
To: everything-list@googlegroups.com
Subject: Re: Jack's partial brain paper
On 16 Mar 2010, at 19:29, Brent Meeker wrote:
On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:
On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
On 18 March 2010 06:32, Stephen P. King stephe...@charter.net wrote:
As I have been following this conversation a question
occurred to me: how is a Zombie (as defined by Chalmers et al.) any
different functionally from the notion of other persons (dogs, etc.) that a
Solipsist
On 18 March 2010 04:34, Brent Meeker meeke...@dslextreme.com wrote:
However I think there is something in the above that creates the "just a
recording" problem. It's the hypothesis that the black box reproduces the
I/O behavior. This implies the black box realizes a function, not a
recording.
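A toy way to see the distinction being drawn (names and the chosen behaviour are illustrative only): a "recording" is a finite table of previously observed I/O pairs and can only replay them, whereas a box that realizes the function keeps producing the right output on inputs that were never observed.

def behaviour(x):
    return x * x                      # the I/O behaviour being imitated

# A "recording": replay of a finite set of previously observed I/O pairs.
recording = {x: behaviour(x) for x in range(10)}

def box_as_recording(x):
    return recording[x]               # KeyError on any input outside the recording

def box_as_function(x):
    return x * x                      # realizes the function itself

print(box_as_function(1234) == behaviour(1234))  # True: a novel input is handled
print(1234 in recording)                         # False: the recording cannot cover it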
On 3/17/2010 9:28 PM, Stathis Papaioannou wrote:
On 18 March 2010 04:34, Brent Meeker meeke...@dslextreme.com wrote:
However I think there is something in the above that creates the "just a
recording" problem. It's the hypothesis that the black box reproduces the
I/O behavior. This implies
I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read the
paper this evening, and I was abruptly stopped by a comment on page 2:
On the second hypothesis [Sudden Disappearing Qualia], the
replacement of a single neuron
On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read the
paper this evening, and I was abruptly stopped by a comment on page 2:
On the second
On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:
On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read the
paper this evening, and I was abruptly
Stathis,
I feel we are riding the human restrictive imaging in a complex nature.
While I DO feel completely comfortable to say that there is a neuron through
which connectivity is established to a next segment in our mental
complexity, and if *that* neuron dies, the connectivity to that
On 16 Mar 2010, at 19:29, Brent Meeker wrote:
On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:
On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment
Hi Gentlemen,
Regarding Jack's partial brain paper, and Free will: Wrong entry:
The IEEE Computational Intelligence Society, one of the Institute of
Electrical and Electronics Engineers groups, publishes three
journals: the IEEE Transactions on Neural Networks, the IEEE
Transactions