On Aug 2, 6:51 pm, meekerdb <meeke...@verizon.net> wrote:

> But that is not obvious and saying so isn't an argument.

You don't have to accept it, but you shouldn't strawman it either.

> > If you do a
> > computational simulation through a similar material that the brain is
> > made of, then you have something similar to a brain. The idea of pure
> > computation independent of some physical medium is not something we
> > should take for granted. It seems like a completely outrageous fantasy
> > to me. Why would such a thing be any more plausible than ghosts or
> > magic?
> I don't take it for granted.  But I can imagine building an intelligent
> robot that acts in every way like a person.  And I know that I could
> replace his computer brain for a different one, built with different
> materials and using different physics, that computed the same programs
> without changing its behavior.  Now you deny that this robot is
> conscious because its brain isn't made of proteins and water and neurons
> - but I could replace part of the computer with a computer made of some
> protein and water and some neurons; which according to you would then
> make the robot conscious.  This seems to me to be an unjustified
> inference.  If it acts conscious with the wet brain and it acted the
> same before, with the computer chip brain, then I infer that it was
> probably conscious before.

Why are you equating how something appears to behave with its
capacity to experience human consciousness? Think of the live neurons
as a pilot light in a gas appliance. You may not need to heat your
water with a bonfire that has to be fed wood, and if you have a
natural gas utility you can substitute a different fuel, but if that
fuel can't ignite, there's not going to be any heat. By your
reasoning, the natural gas could be substituted with carbon dioxide,
since it looks the same, acts like a gas, etc., so you could infer
that it should make the same heat. With the pilot light, you will at
least know whether or not the fuel is viable.

> Do I conclude that it experiences consciousness exactly as I do?  No, I
> think that it might depend on how its programming is implemented, e.g.
> LISP might produce a different experience than FORTRAN, or whether there
> are asynchronous hardware modules.  I'm not sure how Bruno's theory
> applies to this since he looks at the problem from a level where all
> computation is equivalent modulo Church-Turing.

I hear what you're saying, and there's no question that the
programming is instrumental both in simulating intelligence and in
generating a human level of interior experience artificially. All I'm
saying is that LISP or FORTRAN cannot have an experience by itself. A
silicon chip can and does experience something when it runs a program,
just not what we experience when we use the program. It's just as your
TV set experiences something when you watch the news, but what it
experiences is not the news, not a TV program, not colored pixels or
patterns, but electronic-level detection: circuits, voltage,
resistance, capacitance, etc. Put a lot of fancy elaboration on a
circuit and, sure, maybe you get some novelty showing up in the
experience, but I think that the level at which molecules cohere as a
living cell is likely to be the same level at which electronic
detection-level awareness autopoiesizes into actual sensitivity or
proto-feeling. I'm guessing about this, of course, but I think it makes
sense, certainly a lot more sense than the idea that 'consciousness'
gradually appears when there are enough IF-THEN statements.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.