On Mon, May 18, 2009 at 12:30 AM, Brent Meeker <meeke...@dslextreme.com> wrote:
> On the contrary, I think it does. First, I think Chalmers idea that
> vitalists recognized that all that needed explaining was structure and
> function is revisionist history. They were looking for the animating
> spirit. It is in hindsight, having found the function and structure,
> that we've realized that was all the explanation available.
Hmmm. I'm not familiar enough with the history of this to argue one
way or the other. A quick read through the wikipedia article on
vitalism, and some light googling, left me with the impression that
most of the argument centered around function. And also the
difference between organic and inorganic chemical compounds.
Though to the extent that there was something being debated beyond
structure and function, I think that Chalmers makes a good point here:
> There is not even a plausible candidate for a further sort of property of
> life that needs explaining (leaving aside consciousness itself), and
> indeed there never was.
I'm highlighting the parenthetical "leaving aside consciousness itself".
SO. Dennett makes one claim. Chalmers makes what I thought was a
pretty good rebuttal. I've never seen a counter-response from Dennett
on this point, and it's not a historical topic that I know much about.
Do you have some special expertise, or a good source that overturns
this?
Though, even setting that aside, comparing what people thought about
an entirely different topic 150 years ago to this topic now seems like
a clever debating point, but otherwise of iffy relevance.
> We will eventually
> be able to make robots that behave as humans do and we will infer, from
> their behavior, that they are conscious.
What about robots (or non-embodied computer programs) that are equally
complex but (for whatever design reasons) don't exhibit any
"human-like" behaviors? Will we "infer" that they are conscious? How
will we know which types of complex systems are conscious and which
aren't? What is the marker?
We'll just "know it when we see it"? If so, it's only because we have
definite knowledge of our own conscious experience, and we're looking
for behaviors that we can "empathize" with. But is empathy reliable?
It's certainly exploitable; Kismet, for example. So it can generate
false positives, but what might it also miss?
> And we, being their designers,
> will be able to analyze them and say, "Here's what makes R2D2 have
> conscious experiences of visual perception and here's what makes 3CPO
> have self awareness relative to humans."
I would agree that we could say something definite about the
functional aspects, but not about any experiential aspects. Those
would have to be taken on faith. For all we know, R2D2 might have a
case of blindsight AND Anton-Babinski syndrome...in which case he
would react to visual data but have no conscious experience of what he
saw (blindsight), BUT would claim that he did experience it
(Anton-Babinski).
> We will find that there are
> many different kinds of "conscious" and we will be able to invent new ones.
How would we know that we had actually invented new ones? What is it
like to be a robo-Bat?
> We will never "solve" Chalmers hard problem, we'll just realize
> it's a non-question.
Maybe. Time will tell. But even if we all agree that it's a
non-question, that wouldn't necessarily mean that we'd be correct in
doing so.
>> Well, here's where it gets tricky. Conscious experience is associated
>> with information.
> I think that's the point in question. However, we all agree that
> consciousness is associated with, can be identified by, certain
> behavior. So to say that physical systems are too representationally
> ambiguous seems to me to beg the question. It is based on assuming that
> consciousness is information and since the physical representation of
> information is ambiguous it is inferred that physical representations
> aren't enough for consciousness. But going back to the basis: Is
> behavior ambiguous? Sure it is - yet we rely on it to identify
> consciousness (at least if you don't believe in philosophical
> zombies). I think the significant point is that consciousness is an
> attribute of behavior that is relative to an environment.
So I think the possibility (conceivability?) of conscious computer
simulations is what throws a kink into this line of thought.
I'll quote Hans Moravec here:
"A simulated world hosting a simulated person can be a closed
self-contained entity. It might exist as a program on a computer
processing data quietly in some dark corner, giving no external hint
of the joys and pains, successes and frustrations of the person
inside. Inside the simulation events unfold according to the strict
logic of the program, which defines the "laws of physics" of the
simulation. The inhabitant might, by patient experimentation and
inference, deduce some representation of the simulation laws, but not
the nature or even existence of the simulating computer. The
simulation's internal relationships would be the same if the program
were running correctly on any of an endless variety of possible
computers, slowly, quickly, intermittently, or even backwards and
forwards in time, with the data stored as charges on chips, marks on a
tape, or pulses in a delay line, with the simulation's numbers
represented in binary, decimal, or Roman numerals, compactly or spread
widely across the machine. There is no limit, in principle, on how
indirect the relationship between simulation and simulated can be."
You received this message because you are subscribed to the Google Groups
"Everything List" group.