On Wed, Dec 28, 2011 at 2:54 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

>>Are you saying that hallucinations, dreams, and delusions don't exist?
>>
>
> They don't exist, they insist. Their realism supervenes upon the
> interpretation of the subject so that they have no independent ex-istence.
>

But you can say the same thing about ANYTHING. Making predictions and
manipulating the world is the most we can hope for; nobody has seen deep
reality. Our brain just reacts to the electrochemical signals from nerves
connected to a transducer called an eye. Our computers react to the
electronic signals from wires connected to a transducer called a TV camera.

Our brain uses theories to explain these signals, and so would intelligent
computers. Theories explain how some sensations relate to other sensations.
For example, we receive information from our eyes, we interpret that
information as a rock moving at high speed and heading toward a large plate
glass window, and we invent a theory that predicts that very soon we will
receive another sensation, this time from our ears, that we will describe as
the sound of breaking glass. Soon our prediction is confirmed, so the theory
is successful; but we should remember that the sound of broken glass is not
broken glass, the look of broken glass is not broken glass, and the feel of
broken glass is not broken glass. What "IS" broken glass? It must have
stable properties of some sort or I wouldn't be able to identify it as a
"thing". I don't know what those ultimate stable properties are, but I know
what they are not: they are not sensations. I have no idea what glass "IS".
The sad truth is that I can point to "things" but I don't know what a thing
"IS", and I'm not even sure that I know what "IS" is; an intelligent
computer would be in exactly the same boat as I am.

> There's no harm in anthropomorphizing a stuffed animal or emoticon or
> whatever
>

What about anthropomorphizing your fellow human beings? It seems to me very
useful to pretend that other people have feelings just as I do; at least
it's useful when they are not acting unintelligently, the way people do when
they are sleeping or dead.

> but if you want to understand consciousness or emotion [...]
>

You have only one example of consciousness that you can examine directly,
your own. If you want to study the consciousness of others, be they made of
meat or metal, then like it or not you MUST anthropomorphize.

> Computers can be thought of as billions of little plastic THANK YOUs
> ornamenting the microelectronic gears of a logical clock.
>

You take something grand and glorious, like intelligence or consciousness,
and break it up into smaller and simpler pieces; then you take those pieces
and break them up again into even smaller and simpler pieces; then you
repeat the process again, and again, and again, and again. Eventually you
come to something that is not the slightest bit grand or glorious and you
say, "this cannot have anything to do with intelligence or consciousness
because it is too small and simple and is no longer grand and glorious".
And you want to understand how something very complicated works, so you
break it into smaller pieces and come to understand how the individual
pieces work, but then you say "I want to understand this thing, but that
explanation can't be right because I understand it". A foolish argument, is
it not?

> Information doesn't feel like anything.
>

An interesting piece of information; how did you obtain it? Did this
information about information come to you in a dream?

> It's an inversion to consider information genuinely real.
>

There you go again with the "R" word. OK, if it makes you happy, there will
never be an AI that is "really" intelligent, but it could easily beat you in
any intellectual pursuit you care to name; so I guess being "real" isn't
very important.

> Consciousness research doesn't go anywhere because it's being approached
> in the wrong way
>

It doesn't go anywhere because consciousness theorizing is too easy: any
theory will work just fine. Intelligence theorizing is hard as hell and most
intelligence theories fail spectacularly, and that is why enormous progress
has been made in making machines intelligent. That is also why armchair
theorists always talk about consciousness and never intelligence;
consciousness is easy but intelligence is hard.

> Whether or not a machine could be conscious is the wrong question to ask.
>

I agree; even if the machine isn't conscious, that's its problem not mine.
The question to ask is "is the machine intelligent?", and the answer is that
it is if it behaves that way.

> A machine isn't an actual thing, it's just a design
>

Yes, a design; in other words, it's just information. And the thing that
makes your 3-pound brain different from 3 pounds of corned beef is the way
the atoms are arranged, in other words, information.

> Intelligence can't evolve without consciousness.
>

If so, then the Turing Test works for consciousness and not just
intelligence; so if you have a smart computer you know it is conscious. But
the reverse is not necessarily true: a conscious computer may or may not be
smart.

> Determinism cannot have opinions. What would be the point?
>

I don't understand the question; what would be whose point?

> Why should you have any preference in how things are arranged if they
> have always
> been and will always be arranged in the way that they are determined to be?
>

Because neither you nor an outside observer knows what those prearrangements
will lead to; deterministic or not, the only way to know what you are going
to do next is to watch you and see. And if you don't like everything always
happening because of cause and effect, that's fine; the alternative is that
some things do not happen because of cause and effect, and there is a word
for that: "random". If you find that being a pair of dice is philosophically
more satisfying than being a cuckoo clock, that's fine with me; there is no
disputing matters of taste.
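
To make that concrete with a toy illustration of my own (not anything you
said, just a sketch): the little rule below is 100% deterministic, yet as
far as anyone knows there is no shortcut for it; the only way to find out
how many steps a given number takes to reach 1 is to run the rule and watch,
which is exactly the position an outside observer is in with you.

    # Minimal sketch: a fully deterministic rule that, as far as anyone
    # knows, can only be predicted by running it step by step.
    def collatz_steps(n: int) -> int:
        """Count the steps the Collatz rule takes to bring n down to 1."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2  # odd: 3n+1, even: halve
            steps += 1
        return steps

    for n in (27, 97, 871):
        print(n, "->", collatz_steps(n), "steps")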

> That's circular reasoning. You can't justify the existence of feeling
> or meaning by saying that meaning makes things feel meaningful.
>

The feeling of freedom comes from the inability to always predict what we
are going to do next, even in an unchanging environment, and this inability
would be there even if the universe were 100% deterministic (it's not); and
most people find this feeling pleasant. What is circular about that?

>  The neuron doctrine is just one model of consciousness,
>

You can say that again! There are more models of consciousness than you can
shake a stick at.

>  one which has failed to have any explanatory power in reality.
>

Yes, exactly like every other model of consciousness: not one has the
slightest bit of experimental evidence in its favor, so consciousness
theories are all equally useless. So let's talk about intelligence theories
instead, even though that is astronomically more difficult.

> A human being doesn't use neurons, it is the collective life experience
> of neurons. They are living organisms, not machines.
>

What about the parts of those neurons? Is the neurotransmitter
acetylcholine a living organism? And what about the parts of that molecule,
is a hydrogen atom a living organism? Does acetylcholine know about
philosophy when you think about Plato, or does acetylcholine just obey the
laws of chemistry?

> It's not the literal sense that matters when we are talking about
> subjectivity.
>

Subjectively you don't feel exactly like you did one year ago, but pretty
much you do, so something must have remained pretty much constant over that
time; and if it wasn't atoms (and it certainly was not) and it wasn't
information, then what was it?

>  Information doesn't exist.
>

Hmm, yet another of those things that do not exist. It seems that lack of
the existence property does not cramp the style of these things very much.

> If you make a mistake though, your friend might catch it, but the
> calculator cannot.
>

Your friend is far more likely to make an error in arithmetic than a
calculator is.

>  You are looking at the exterior behavior of the neuron only.
>

You are looking at the exterior behavior of the microelectronic switches
only.

> Our entire lives are literally created through neurons and we know that
> they are filled with human feeling and experiences
>

What's with this "our" business? I know that I am conscious, and I have a
theory that you are too when you are not sleeping or dead, in other words
when you act intelligently; but I can't prove it, and it's only a theory.

> What humans do is an example of human intelligence. What computers do is
> an example of human intelligence at programming semiconductors.
>

According to that reasoning Einstein was not intelligent; it was Einstein's
teachers who were intelligent. 1952 was a watershed year in the history of
AI: in that year Arthur Samuel wrote a checkers-playing program, and the
interesting thing is that the program could pretty consistently beat Arthur
Samuel at playing checkers.

> The semiconductors know all about voltage and current but nothing about
> the messages and pictures being traded through those systems.
>

Neurons know about synapse voltages and ion concentrations but nothing
about the messages and pictures being traded through the brain.

> Computation is not intelligence. It's really just organized patience.
>

Regardless of what it "just" is, it can "just" outsmart you.

> The computer is an infinitely patient and accurate moron with a well
> trained muscle instead of a mind.
>

A moron that can nevertheless make you or me look like idiots; so if you're
right and computation is not intelligence, then computation is better than
intelligence, because one can outsmart the other.

  John K Clark
