On 12/29/2011 5:29 PM, John Clark wrote:
On Wed, Dec 28, 2011 at 2:54 PM, Craig Weinberg <whatsons...@gmail.com
>>Are you saying that hallucinations, dreams, and delusions don't exist?
>They don't exist, they insist. Their realism supervenes upon the
>subject so that they have no independent ex-istence.
But you can say the same thing about ANYTHING. Making predictions and manipulating the
world is the most we can hope for; nobody has seen deep reality. Our brain just reacts
to the electro-chemical signals from nerves connected to a transducer called an eye. Our
computers react to the electronic signals from wires connected to a transducer called a
camera. Our brain uses theories to explain these signals, and so would intelligent computers.
Theories explain how some sense sensations relate to other sense sensations. For example,
we receive information from our eyes, we interpret that information as a rock moving at
high speed and heading toward a large plate glass window, we invent a theory that
predicts that very soon we will receive another sensation, this time from our ears, that
we will describe as the sound of breaking glass. Soon our prediction is confirmed so the
theory is successful; but we should remember that the sound of broken glass is not
broken glass, the look of broken glass is not broken glass, the feel of broken glass is
not broken glass. What "IS" broken glass? It must have stable properties of some sort or
I wouldn't be able to identify it as a "thing". I don't know what those ultimate stable
properties are, but I know what they are not, they are not sense sensations. I have no
idea what glass "IS". The sad truth is, I can point to "things" but I don't know what a
thing "IS", and I'm not even sure that I know what "IS" is, and an intelligent computer
would be in exactly the same boat I am.
> There's no harm in anthropomorphizing a stuffed animal or emoticon or [...]
What about anthropomorphizing your fellow human beings? It seems to me to be very useful
to pretend that other people have feelings just like I do, at least when they are not
acting unintelligently, as they are when they are sleeping or dead.
> but if you want to understand consciousness or emotion [...]
You have only one example of consciousness that you can examine directly, your own. If
you want to study the consciousness of others, be they made of meat or metal, then like
it or not you MUST anthropomorphize.
> Computers can be thought of as billions of little plastic THANK YOUs [...]
> the microelectronic gears of a logical clock.
You take something grand and glorious, like intelligence or consciousness, and break it
up into smaller and simpler pieces, then you take those pieces and break them up again
into even smaller and simpler pieces, then you repeat the process again, and again, and
again, and again. Eventually you come to something that is not the slightest bit grand
or glorious and you say, "this can not have anything to do with intelligence or
consciousness because it is too small and simple and is no longer grand and glorious".
And you want to understand how something very complicated works, so you break it into
smaller pieces and come to understand how the individual pieces work, but then you
say, "I want to understand this thing, but that explanation can't be right because I
understand it". A foolish argument, is it not?
Right. If you're going to explain something, you had better explain it in terms of
something you understand.
> Information doesn't feel like anything.
Interesting piece of information, how did you obtain it? Did this information about
information come to you in a dream?
> It's an inversion to consider information genuinely real.
There you go again with the "R" word. OK, if it makes you happy, there will never be an AI
that is "really" intelligent, but it could easily beat you in any intellectual pursuit
you care to name; so I guess being "real" isn't very important.
> Consciousness research doesn't go anywhere because it's being approached
It doesn't go anywhere because consciousness theorizing is too easy, any theory will
work just fine; but intelligence theorizing is hard as hell and most intelligence
theories fail spectacularly, so enormous progress has been made in making machines
intelligent. That is also why armchair theorists always talk about consciousness and
never intelligence; consciousness is easy but intelligence is hard.
> Whether or not a machine could be conscious is the wrong question to ask.
I agree; even if the machine isn't conscious, that's its problem, not mine. The question
to ask is "is the machine intelligent?". And the answer is that it is if it behaves that way.
> A machine isn't an actual thing, it's just a design
Yes, a design; in other words, it's just information. And the thing that makes your 3-pound
brain different from 3 pounds of corned beef is the way the atoms are arranged, in
other words, information.
> Intelligence can't evolve without consciousness.
If so then the Turing Test works for consciousness and not just intelligence; so if you
have a smart computer you know it is conscious; but the reverse is not necessarily true,
a conscious computer may or may not be smart.
> Determinism cannot have opinions. What would be the point?
I don't understand the question; what would be whose point?
> Why should you have any preference in how things are arranged if they have always
> been and will always be arranged in the way that they are determined to be?
Because neither you nor an outside observer knows what those prearrangements will lead
to; deterministic or not, the only way to know what you are going to do next is to watch
you and see. And if you don't like everything always happening because of cause and
effect, that's fine; the alternative is that some things do not happen because of cause
and effect, and there is a word for that: "random". If you find that being a pair of dice
is philosophically more satisfying than being a cuckoo clock, that's fine with me; there
is no disputing matters of taste.
> That's circular reasoning. You can't justify the existence of feeling
> or meaning by saying that meaning makes things feel meaningful.
The feeling of freedom comes from the inability to always predict what we are going to
do next, even in an unchanging environment, and this inability would be there even if the
universe were 100% deterministic (it's not), and most people find this feeling pleasant.
What is circular about that?
> The neuron doctrine is just one model of consciousness,
You can say that again! There are more models of consciousness than you can shake a stick at.
> one which has failed to have any explanatory power in reality.
Yes, exactly like every other model of consciousness: not one has the slightest bit of
experimental evidence in its favor; consciousness theories are all equally useless. So
let's talk about intelligence theories, even though that is astronomically more difficult.
Sounds like you agree with my prediction that when we are able to create human-level AI,
questions of consciousness will become uninteresting.
"One cannot guess the real difficulties of a problem before
having solved it."
--- Carl Ludwig Siegel
You received this message because you are subscribed to the Google Groups
"Everything List" group.