Right, but I was speculating that we could probably develop the inputs for
such stimuli. Do you really think that would be so difficult? Or am I
misunderstanding the way you mean "sensation"? I read that as
"undirected/involuntary continuous streaming and processing of sensory
stimuli like that which humans experience". I see no reason why this can't
be accessed via a machine system rather than a biological system, and
processed in an emulation of the brain (or some algorithmic system that
does this in a way that is similar to the algorithmic system used in
biological brains, I guess) rather than a physical brain. There might be
reasons, but you'll need to tell me what they are. :)
The problem is that the info that the body sends to the brain and that the
brain sends to different parts of itself is extremely complex, consisting of
electrical impulses, hormonal signals (portions of the brain sense the
hormonal outputs of various body parts), biochemical changes (blood sugar,
oxygen, etc.), and other environmental data (temperature, etc.). This
endogenous information (as opposed to perceptual data such as vision and
sound, which give us info on the world "out there") would be difficult to
emulate in a computer, I would think, and what would be the point? The
machine would need to monitor its own internal state, which would be key to
its survival, rather than emulate an organic body. Now this may not be
important; it may be enough that the machine has info about itself. But guys
like Damasio and Nick Humphrey seem to be arguing that the feelings we have
are critical to the production and maintenance of the sense of the here and
now that is consciousness. That consciousness requires emotion, and that
emotions arise involuntarily (automatically) from sensation.
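To make the distinction concrete, here is a toy sketch (Python; the sensor
names, numbers, and thresholds are all invented by me, nothing more than an
illustration) of a machine whose "endogenous" channel reports on its own
state rather than on the world out there, running continuously and flagging
states that threaten its "survival":

    import random
    import time
    from dataclasses import dataclass

    @dataclass
    class InternalState:
        # Analogue of interoception: readings about the machine itself,
        # not about the world "out there".
        core_temp_c: float
        battery_fraction: float
        memory_pressure: float

    def read_internal_state() -> InternalState:
        # Placeholder readings; a real system would poll its own hardware.
        return InternalState(
            core_temp_c=40 + random.random() * 40,
            battery_fraction=random.random(),
            memory_pressure=random.random(),
        )

    def monitor(poll_seconds: float = 1.0, cycles: int = 5) -> None:
        # Runs continuously and involuntarily, like the endogenous signals
        # above, and flags states that threaten the machine's "survival".
        for _ in range(cycles):
            state = read_internal_state()
            if state.core_temp_c > 75 or state.battery_fraction < 0.1:
                print("distress:", state)
            else:
                print("nominal:", state)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        monitor()
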
Honestly, I suspect the first bit, the prosthesis of the sensor, would be
easy. It's the modeling of all the processing that goes on between the optic
nerve and the conscious mind that's difficult. Primary
processing/filtering (probably not so hard to figure out), cross-modal
referencing [as in Cytowic's _The Man Who Tasted Shapes_, and probably
harder], emotional and memory interaction (yikes!), and so on; all that
stuff is probably the harder part. Just a guess from an ignorant
"hillbilly," as one friend called me recently.
The processing between the optic nerve and the brain is hard, but the other
stuff is harder because, while the machine may need to process visual
information the way a biological critter does, it does not need to process
the same sort of endogenous information.
But I'm still not clear on what, specifically, would prevent it from being
pretty much like us. Unless there's some clear thing that
would not BE emulable, I can't see why it would necessarily *need* to be
unlike us in many ways.
Again, it may need to be different from us because it has different physical
constraints. You might ask why we "need" consciousness at all ourselves.
Damasio (correct spelling - I just went to my bookshelf) argues that the
ability to monitor how the brain is responding to external and internal info
(the ability to plot the change over time of the state of the brain) provides
the organism with improved survival. It has short-term memory, working
memory, and, in the case of humans, a huge store of autobiographical memory,
which it can use to anticipate how the brain state will change in response to
all sorts of things.
(and Zim, I notice you fixed your reply thing so we can tell what you wrote
vs what was quoted. Good stuff!)
I have fixed nothing. Some of my posting is from work via Netscape. What I
am doing now is directly on AOL. So tell me - is this post OK, or my usual
mixed-up style?