Colin Hales writes:
Stathis said
SNIP
and Colin has said that he does not believe that philosophical zombies
can exist.
Hence, he has to show not only that the computer model will lack the 1st
person experience, but also lack the 3rd person observable behaviour of
the real
Stathis said:
snip
If you present an object with identical sensory measurements but get
different results in the chip, then that means what you took as sensory
measurements was incomplete. For example, blind people might be able to
sense the presence of someone who silently walks into the room
James N Rose wrote:
Brent Meeker wrote:
If consciousness is the creation of an inner narrative
to be stored in long-term memory then there are levels
of consciousness. The amoeba forms no memories and so
is not conscious at all. A dog forms memories and even
has some understanding
perceive anything. Writers, philosophers, mathematicians can all be
creative without perceiving anything.
Stathis Papaioannou
Date: Mon, 18 Dec 2006 10:54:05 +1100
From: [EMAIL PROTECTED]
Subject: RE: computer pain
To: everything-list@googlegroups.com
Colin Geoffrey Hales wrote:
What I expect to happen is that the field configuration I find emerging in
the guts of the chips will be different, depending on the object, even
though the sensory measurement is identical. The different field
configurations will correspond to the different
Colin,
You have described a way in which our perception may be more than can
be explained by the sense data. However, how does this explain the
response to novelty? I can come up with a plan or theory to deal with
a novel situation if it is simply described to me. I don't have to
Brent Meeker wrote:
That notion may fit comfortably with your presumptive
ideas about 'memory' -- computer stored, special-neuron
stored, and similar. But the universe IS ITSELF 'memory
storage' from the start. Operational rules of performance
-- the laws of nature, so to speak --
Colin Geoffrey Hales wrote:
So your theory is that the electromagnetic field has an ability to learn
which is not reflected in QED - it's some hitherto unknown aspect of the
field and it doesn't show up in the field violating Maxwell's equations
or QED predictions? And further this aspect
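For reference, "violating Maxwell's equations" here would mean a measurable
departure from the classical field equations (SI units, with sources):

\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\partial \mathbf{B} / \partial t
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t

QED reproduces these in the classical limit, so any learning capacity
intrinsic to the field would have to show up as a deviation from them.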
Colin Hales writes:
Stathis wrote:
I can understand that, for example, a computer simulation of a storm is
not a storm, because only a storm is a storm and will get you wet. But
perhaps counterintuitively, a model of a brain can be closer to the real
thing than a model of a storm. We don't
So the EM fields account for the experiences that accompany the brain
processes. A kind of epiphenomenon.
So why don't my experiences change when I'm in an MRI?
I haven't been through the detail - I hope to verify this in my
simulations to come but...
As far as I am aware MRI magnets
I understand your conclusion, that a model of a brain
won't be able to handle novelty like a real brain,
but I am trying to understand the nuts and
bolts of how the model is going to fail. For
example, you can say that perpetual motion
machines are impossible because they disobey the
Colin Hales writes:
I understand your conclusion, that a model of a brain
won't be able to handle novelty like a real brain,
but I am trying to understand the nuts and
bolts of how the model is going to fail. For
example, you can say that perpetual motion
machines are impossible
Stathis Papaioannou wrote:
Colin Hales writes:
I understand your conclusion, that a model of a brain
won't be able to handle novelty like a real brain,
but I am trying to understand the nuts and
bolts of how the model is going to fail. For
example, you can say that perpetual motion
So you are saying the special something which causes
consciousness and which functionalism has ignored
is the electric field around the neuron/astrocyte.
But electric fields were well understood even a
hundred years ago, weren't they? Why couldn't
a neuron be simulated by something like a
Colin,
I can understand that, for example, a computer simulation of a storm is
not a storm, because only a storm is a storm and will get you wet. But
perhaps counterintuitively, a model of a brain can be closer to the real
thing than a model of a storm. We don't normally see inside a
Colin Geoffrey Hales wrote:
So you are saying the special something which causes
consciousness and which functionalism has ignored
is the electric field around the neuron/astrocyte.
But electric fields were well understood even a
hundred years ago, weren't they? Why couldn't
a neuron be
Brent said:
snip
Of course they describe things - they aren't the things themselves.
But the question is whether the description is complete. Is there
anything about EM fields that is not described by QED?
Absolutely HEAPS! Everything that they are made of and how the components
interact to
Stathis wrote:
I can understand that, for example, a computer simulation of a storm is
not a storm, because only a storm is a storm and will get you wet. But
perhaps counterintuitively, a model of a brain can be closer to the real
thing than a model of a storm. We don't normally see inside a
Colin Geoffrey Hales wrote:
Stathis wrote:
I can understand that, for example, a computer simulation of a storm is
not a storm, because only a storm is a storm and will get you wet. But
perhaps counterintuitively, a model of a brain can be closer to the real
thing than a model of a storm. We
So your theory is that the electromagnetic field has an ability to learn
which is not reflected in QED - it's some hitherto unknown aspect of the
field and it doesn't show up in the field violating Maxwell's equations
or QED predictions? And further this aspect of the EM field is able to
Jamie Rose writes:
Stathis,
As I was reading your comments this morning, an example
crossed my mind that might fit your description of in-place
code lines that monitor 'dysfunction' and exist in-situ as
a 'pain' alert... that would be error-evaluating 'check-sum'
computations.
In a
Brent Meeker writes:
Stathis Papaioannou wrote:
Brent Meeker writes:
I would say that many complex mechanical systems react to pain in a way
similar to simple animals. For example, aircraft have automatic shut
downs and fire extinguishers. They can change the flight controls
Subject: RE: computer pain
To: everything-list@googlegroups.com
Hi Stathis/Jamie et al.
I've been busy elsewhere in self-preservation mode deleting emails
madly... frustrating, with so many threads left hanging... oh well... but
I couldn't go past this particular dialog.
I am having trouble
Yes Stathis, you are right, 'noxious stimulus' and
'experience' are indeed separable - but - if you want to
do a comparative analysis, it's important to identify
global parameters and potential analogs.
My last post's example tried to address those components.
I've seen stress diagrams of
no clue where it was and learn nothing looking remotely normal.
Meanwhile Marvin inside can do perfectly good 'zombie room' science.
RE: Computer Pain
There's a whole axis of modelling orthogonal to the soma membrane which
gets statistically abstracted out by traditional Hodgkin/Huxley models.
The neuron
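For context, the traditional Hodgkin/Huxley model mentioned above treats
the membrane as a single isopotential compartment. A minimal Python sketch
(standard squid-axon textbook parameters, simple Euler stepping) shows what
such a point model keeps and what it averages away:

import numpy as np

# Classic Hodgkin-Huxley point-neuron model (mV, ms, uF/cm^2, mS/cm^2).
# The membrane is one isopotential compartment: all spatial structure,
# including anything orthogonal to the soma membrane, is averaged out.
C_m = 1.0
g_Na, E_Na = 120.0, 50.0
g_K, E_K = 36.0, -77.0
g_L, E_L = 0.3, -54.387

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt = 0.01                               # time step in ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
for step in range(int(50.0 / dt)):      # simulate 50 ms
    I_ext = 10.0 if step * dt > 5.0 else 0.0   # step current after 5 ms
    I_Na = g_Na * m ** 3 * h * (V - E_Na)
    I_K = g_K * n ** 4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
print("membrane potential after 50 ms: %.1f mV" % V)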
Stathis Papaioannou wrote:
Brent Meeker writes:
Stathis Papaioannou wrote:
Brent Meeker writes:
I would say that many complex mechanical systems react to pain in a way
similar to simple animals. For example, aircraft have automatic shut
downs and fire extinguishers. They can change
James N Rose wrote:
Stathis,
The reason for lack of responses is that your idea
goes directly to illuminating why AI systems - as
promulgated under current designs of software
running in hardware matrices - CANNOT emulate living
systems. It's an issue that AI advocates intuitively
and
Subject: Re: computer pain
Stathis,
The reason for lack of responses is that your idea
goes directly to illuminating why AI systems - as
promulgated under current designs of software
running in hardware matrices - CANNOT emulate living
systems. It's an issue that AI advocates intuitively
Brent Meeker writes:
I would say that many complex mechanical systems react to pain in a way
similar to simple animals. For example, aircraft have automatic shut downs
and fire extinguishers. They can change the flight controls to reduce stress
on structures. Whether they feel this
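A minimal Python sketch of the reflex loop Brent describes (thresholds and
names are invented for illustration): sensed damage is mapped straight onto
protective action, with no claim that anything is felt:

def protective_response(stress_fraction, engine_temp_c):
    # Map sensed damage onto protective actions, as an aircraft's
    # automatic systems do. Thresholds here are made-up examples.
    actions = []
    if engine_temp_c > 900.0:
        actions.append("shut down engine")
        actions.append("discharge fire extinguisher")
    if stress_fraction > 0.8:          # fraction of structural load limit
        actions.append("limit control surface deflection")
    return actions

print(protective_response(stress_fraction=0.9, engine_temp_c=950.0))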
Stathis,
As I was reading your comments this morning, an example
crossed my mind that might fit your description of in-place
code lines that monitor 'dysfunction' and exist in-situ as
a 'pain' alert... that would be error-evaluating 'check-sum'
computations.
In a functional way, parallel
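One way to read Jamie's suggestion, as a Python sketch only (not his actual
proposal): the program keeps a checksum over its own state, and a mismatch
acts as an in-situ 'pain' signal:

import zlib

state = bytearray(b"working memory contents")
expected = zlib.crc32(state)     # checksum of the healthy state

state[3] ^= 0xFF                 # simulate corruption ("injury")

if zlib.crc32(state) != expected:
    # Purely functional alert; nothing here implies experience.
    print("pain signal: state corrupted, switching to protective mode")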
Stathis Papaioannou wrote:
Brent Meeker writes:
I would say that many complex mechanical systems react to pain in a way
similar to simple animals. For example, aircraft have automatic shut downs
and fire extinguishers. They can change the flight controls to reduce
stress on
Hi Stathis/Jamie et al.
I've been busy elsewhere in self-preservation mode deleting emails
madly... frustrating, with so many threads left hanging... oh well... but
I couldn't go past this particular dialog.
I am having trouble accepting that you actually believe the below to be the case!
Lines of
No responses yet to this question. It seems to me a straightforward
consequence of computationalism that we should be able to write a program
which, when run, will experience pain, and I suspect that this would be a
substantially simpler program than one demonstrating general intelligence. It
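For concreteness, the functional skeleton such a program would need might
look like the Python sketch below (all names and numbers invented); whether
running it would involve any experience is exactly what is in dispute:

class Agent:
    # Minimal functional analogue of pain: a persistent aversive state
    # that biases behaviour away from noxious input.
    def __init__(self):
        self.pain = 0.0

    def sense(self, stimulus):
        if stimulus == "noxious":
            self.pain = min(1.0, self.pain + 0.5)
        else:
            self.pain = max(0.0, self.pain - 0.1)   # slow recovery

    def act(self):
        return "withdraw" if self.pain > 0.3 else "explore"

agent = Agent()
for s in ["neutral", "noxious", "noxious", "neutral"]:
    agent.sense(s)
    print(s, "->", agent.act(), "(pain=%.1f)" % agent.pain)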
Stathis,
The reason for lack of responses is that your idea
goes directly to illuminating why AI systems - as
promulgated under current designs of software
running in hardware matrices - CANNOT emulate living
systems. It's an issue that AI advocates intuitively
and scrupulously AVOID.
Pain in