At 9:50 PM -0400 05/11/2000, [EMAIL PROTECTED] wrote:
>But if it is true that consciousness arises as an integral part of having
>sensations then machine consciousness may be very different than ours. That
>is, many machines now monitor their own functions. They can perceive things
>in the environment as well but I don't know if the quality of the sensory
>input is critical to the kind of consciousness we animals have.

Right, but I was speculating that we could probably develop the inputs for
such stimuli. Do you really think that would be so difficult? Or am I
misunderstanding the way you mean "sensation"? I read that as
"undirected/involuntary continuous streaming and processing of sensory
stimuli like that which humans experience". I see no reason why this can't
be accessed via a machine system rather than a biological system, and
processed in an emulation of the brain (or, I guess, some algorithmic system
that does this in a way similar to whatever physical brains use) rather
than a physical brain. There might be
reasons, but you'll need to tell me what they are. :)

Probably the real test would be seeing how well we do in the area of
developing prosthetics. For example, right now I can say the state of the
art for visual prosthesis is very very basic --  having a camera wired into
your brain so you can see rough areas of light and dark (such as a door in
a wall). [this is something I came up with while researching at work...]

Honestly, I suspect the first bit, the prosthesis of the sensor, would be
easy. The hard part is modeling all the processing that goes on between
the optic nerve and the conscious mind. Primary
processing/filtering (probably not so hard to figure out), cross-modal
referencing [as in Cytowic's _The Man Who Tasted Shapes_, and probably
harder], emotional and memory-interaction (yikes!), and so on, all that
stuff is probably the harder part. Just a guess from an ignorant
"hilly-billy" as one friend called me recently.

> I think 2050 is extremely unlikely, but I don't think it's SO much farther
> off, at least necessarily. Maybe 200 years, or 250? I mean barring
> really major crisis incidents, though, so that's not a bet. It could be
> 1000, who knows! It all depends on how deeply we need to emulate the hardware
> of physical brains to get an isomorphic version of consciousness in a
> computer.
>
>The problem may not be in modelling the hardware or software of brains that
>live in bodies. It may be that you need a body to have consciousness, and that
>body may have to function in certain ways. I am not certain how strongly I
>feel about this.

I don't understand, probably because I haven't read D'Amasso. What are the
"certain ways" it would need to function that he/she suggests? I'm not sure
what difference would matter, as long as it didn't go on in such a way that
it might be non-computable (a la Penrose) and thus non-emulable.

>Until I read D'Amasso's book I was convinced that machine
>consciousness was inevitable and it may still be but it may be very different
>than ours. It may be a lot more than simply having to converse with your
>children in a different language.

Certainly. I was just making the point that a kind of difference that we
seem not to be ready to conceive of in our daily lives is not SO
unthinkable for people in the situation itself. I think if we could have,
say, not descendants so much as "far cousins" who were really quite alien to
us, such as machine consciousnesses, then we'd probably be quite happy to
go for it. But I'm still not clear on what would prevent it from being
pretty close to like us, specifically. Unless there's some clear thing that
would not BE emulable, I can't see why it would necessarily *need* to be
unlike us in many ways.

Whee this is fun!!! :)

(and Zim, I notice you fixed your reply thing so we can tell what you wrote
vs what was quoted. Good stuff!)
