On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ <soulcatche...@gmail.com> wrote:

> I see a red rose. You see a red rose. Is your experience of redness
> the same as mine?
> 1. Yes, they are identical.
> 2. They are different as long as the neural organization of our brains is
> slightly different, but you are potentially capable of experiencing my
> redness with some help from a neurosurgeon who can reshape your brain so
> that it matches mine.
> 3. They are different as long as some 'code' of our brains is slightly
> different, but you (and every machine) are potentially capable of
> experiencing my redness if you somehow achieve the same 'code'.
> 4. They are different and absolutely private - you (and anybody else,
> be it a human or a machine) don't and can't experience my redness.
> 5. The question doesn't make any sense because ... (please elaborate)
> 6. ...
> What is your opinion?
>
> My (naive) answer is (3). Our experiences are identical (would the
> correct term be 'ontologically identical'?) as long as they have the
> same symbolic representation and the symbols have the same grounding
> in the physical world. The part about grounding is just an uneducated
> guess; I don't understand the subject and have only an intuitive
> feeling that semantics (what a computation is about) is important and
> somehow determined by the physical world out there.
> Let me explain with an example. Suppose that you:
> 1. simulate my brain in a computer program, so we can say that this
> program represents my brain in your symbols
> 2. simulate a red rose
> 3. feed the "rose data" into my simulated brain.
> I think (or rather believe) that this simulated brain won't see
> my redness - in fact, it won't see anything at all because it isn't
> conscious.
> But if you:
> 1. make a robot that simulates my brain in my symbols, i.e. behaves
> (relative to the physical world) in the same way as I do
> 2. show a rose to the robot,
> then I think that the robot will experience the same redness as I do.
> I would be glad if somebody suggested something to read about 'symbol
> grounding', semantics, etc. I have a lot of confusion here; I've
> always thought that logic is a formal language for 'syntactic'
> manipulation of 'strings' that acquire meaning only in our minds.
>
>
I have to disagree with your intuition that grounding in the physical world
is required.  What would you say about the possibility that our whole
universe is running in some computer?  According to your intuition:

Physical Universe->Your Brain = Conscious
Physical Universe->Robot Brain = Conscious
Physical Universe->Computer Simulation->Software Brain = Not Conscious

What would you say about this setup:

Computer Simulation->Physical Universe->Your Brain

That is to say, what if our physical universe were simulated in some alien's
computer instead of being some primitive "physical" world?

Also, what would the non-conscious software brain say if someone asked it
what it saw?  Would the software simulation of your brain ever feel
bewilderment over its sensations or wonder about consciousness?  Would it
ever compose an e-mail on topics such as the redness of red or would that
activity be impossible for the software brain fed simulated input?

If you expect different behavior in any conceivable situation between the
robot brain fed input from a camera and the software brain fed input from
the simulation, assuming the programming and input are identical, I think
this leads to a contradiction.  Equivalent Turing machines evolve
identically when given the same input.  Therefore there should be no case
in which the robot would write an e-mail questioning the redness of red
but the software simulation would not.
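To make that determinism point concrete, here is a toy sketch in Python.
Everything in it (the brain_step function, the hash-based state update, the
input frames) is made up purely for illustration and is not a model of any
real brain; it only shows that a deterministic program fed identical input
produces an identical transcript, whatever the source of the input:

import hashlib

def brain_step(state, sensory_input):
    # One deterministic update of a toy "brain": the next state depends
    # only on the current state and the input bytes.  Hashing is just a
    # stand-in for whatever the real update rule would be.
    new_state = hashlib.sha256(state + sensory_input).digest()
    utterance = "I see something red (internal report %s)" % new_state.hex()[:8]
    return new_state, utterance

def run(frames):
    # Feed a sequence of sensory inputs to the same program and collect
    # what it "says" along the way.
    state = b"initial state"
    transcript = []
    for frame in frames:
        state, said = brain_step(state, frame)
        transcript.append(said)
    return transcript

# Hypothetical byte streams: one captured by the robot's camera, one
# rendered by a simulation of the same rose.  By assumption, identical.
camera_frames = [b"rose frame 1", b"rose frame 2"]
simulated_frames = [b"rose frame 1", b"rose frame 2"]

# Same program, same input, same transcript -- so there is no situation
# in which one writes about the redness of red and the other does not.
assert run(camera_frames) == run(simulated_frames)
print("\n".join(run(camera_frames)))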

Jason
