On Feb 10, 3:52 pm, John Clark <johnkcl...@gmail.com> wrote:
> On Thu, Feb 9, 2012 Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > The rule book is the memory.
>
> Yes, but the rule book not only contains an astronomically large database,
> it also contains a super ingenious artificial intelligence program; without
> those things the little man is like a naked microprocessor sitting on a
> storage shelf. It's not a brain, it's not a computer, and it's not doing
> one damn thing.

I think you are radically overestimating the size of the book and the
importance of that size to the experiment. ELIZA was about 20 KB.
http://www.jesperjuul.net/eliza/

If it's a thousand times better than ELIZA, then you've got a 20 MB
rule book. The King James Bible can be downloaded here
http://www.biblepath.com/bible_download.html at 14.33 MB. No time
limit is specified, so we have no way of knowing how long it would
take a book this size to fail the Turing Test.
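The size comparison here is just arithmetic. A quick sketch, taking the
20 KB figure for ELIZA and the 14.33 MB Bible download at face value
(neither figure is independently verified):

```python
# Back-of-envelope check of the rule book sizes discussed above.
# Input figures (20 KB for ELIZA, 14.33 MB for the King James Bible
# download) come from this thread, not from independent measurement.
eliza_kb = 20
rule_book_mb = eliza_kb * 1000 / 1024   # a rule book 1000x the size of ELIZA
bible_mb = 14.33                        # the King James Bible download

print(f"1000x ELIZA rule book: {rule_book_mb:.2f} MB")  # 19.53 MB
print(f"King James Bible:      {bible_mb:.2f} MB")      # 14.33 MB
```

So a rule book a thousand times ELIZA's size is only slightly larger than a
single copy of the Bible, nowhere near astronomically large.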

It might be more useful to borrow a pharmaceutical model, like LD50 or
LD100: how long a conversation do you have to have before 50% of native
speakers fail the system? Is the Turing Test an LD00 test with unbounded
duration, where no native speaker can ever tell the difference no matter
how long they converse? That is clearly impossible. It's context dependent
and subjective. I only assume that everyone here is human because I have
no reason to doubt it, but in a testing situation I would not be confident
that everyone here is human judging only from responses.
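The LD50 analogy can be made concrete: if each native-speaker judge records
the conversation length at which they first spotted the machine, the LD50
duration is the median of those times. A minimal sketch; the judge data
below is entirely made up for illustration:

```python
import statistics

# Hypothetical detection times: minutes of conversation before each
# native-speaker judge first identified the system as a machine.
detection_minutes = [12, 45, 30, 90, 22, 60, 15, 75]

# LD50-style figure: the conversation length at which half of the
# judges have "failed" the system, i.e. the median detection time.
ld50 = statistics.median(detection_minutes)
print(f"LD50 conversation length: {ld50} minutes")  # 37.5 minutes
```

An LD00 test would then demand that this figure be infinite for every
possible judge, which is the unbounded requirement objected to above.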

>
> > >The contents of memory is dumb too - as dumb as player piano rolls.
>
> That's pretty dumb, but the synapses of the brain are just as dumb, and
> the atoms they, and computers and everything else, are made of are even
> dumber.

Player piano rolls aren't living organisms that create and repair vast
organic communication networks. Computers don't do anything by
themselves: they have to be carefully programmed and maintained by
people, and they need human users to make sense of any of their
output. Neurons require no external physical agents to program or use
them.

>
> > The two together only seem intelligent to Chinese speakers outside the
> > door
>
> Only?! Einstein only seemed intelligent to scientifically literate speakers
> in the outside world.

No, he was aware of his own intelligence too. I think you're grasping
at straws.

> It "seems" that, as you use the term, seeming
> intelligent is as good as being intelligent.

So if I imitate Arnold Schwarzenegger on the phone, then that's as
good as my being Schwarzenegger.

> In fact it seems to me that
> believing intelligent actions are not a sign of intelligence is not very
> intelligent.

I understand that you think of it that way, and I think that is a
moronic belief, but I don't think that makes you a moron. It all comes
down to thinking in terms of an arbitrary formalism of language and
working backward to reality, rather than working from concrete
realism and using language to understand it. If you start out defining
intelligence as an abstract function and category of behaviors, rather
than as a quality of consciousness which entails the capacity for those
behaviors and functions, then you end up proving your own assumptions
with circular reasoning.

>
> > A conversation that lasts a few hours could probably be generated from a
> > standard Chinese phrase book, especially if equipped with some useful
> > evasive answers (a la ELIZA).
>
> You bring up that stupid 40 year old program again? Yes ELIZA displayed
> little if any intelligence but that program is 40 years old! Do try to keep
> up.

You keep up. ELIZA is still being updated as of 2007:
http://webscripts.softpedia.com/script/Programming-Methods-and-Algorithms-Python/Artificial-Intelligence-Chatterbot-Eliza-15909.html

I use ELIZA as an example because you can clearly see that it is not
intelligent and you can clearly see that it could superficially seem
intelligent. It becomes more difficult to be sure what is going on
when the program is more sophisticated, because it is a more convincing
fake. The ELIZA example is perfect because it exposes the fundamental
mechanism by which trivial intelligence can be mistaken for the
potential for understanding.
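That mechanism, keyword matching plus canned reflection with no model of
meaning, fits in a few lines. A toy sketch, not Weizenbaum's actual script:

```python
import re

# Toy ELIZA-style responder: match a keyword pattern and echo part of
# the input back inside a canned template. Nothing is understood.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]
DEFAULT = "Please tell me more."  # the evasive fallback

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I am worried about the Chinese Room"))
# -> Why do you say you are worried about the Chinese Room?
```

The reply looks responsive, but the program has only matched a pattern;
that is the sense in which a superficially convincing answer by itself
tells us nothing about understanding.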

> And if you are really confident in your ideas push the thought
> experiment to the limit and let the Chinese Room produce brilliant answers
> to complex questions, if it just churns out ELIZA style evasive crap that
> proves nothing because we both agree that's not very intelligent.

OK, make it a million times the size of ELIZA: a 20 GB set of 1,000
books. I think that would pass an LD50 Turing Test of a five-hour
conversation, don't you?

>
> > The size isn't the point though.
>
> I rather think it is. A book larger than the observable universe and a
> program more brilliant than any written,

Where are you getting that from?

> yet you insist that if understanding
> is anywhere in that room it must be in the by far least remarkable part of
> it, the silly little man.

That's the point of the thought experiment. It points out that the
program which fetches recorded instructions is the least remarkable
part of AI.

>  And remember the consciousness that room
> produces would not be like the consciousness you or I have; it would take
> that room many billions of years to generate as much consciousness as you
> do in one second.

I've already gone over this. If I'm a chef and I walk into a room, the
room doesn't become a restaurant. Why stop at the room; why not say
the entire city speaks Chinese? If consciousness worked this way there
could be no localization at all: the universe would be one big
intelligence that knows everything about everything.

>
> > Speed is a red herring too.
>
> No it is not, and I will tell you exactly why as soon as the sun burns out
> and collapses into a white dwarf. Speed isn't an issue, so you have to
> concede that I won that point.

Are you saying that if Watson takes 2 seconds to answer a question it
is intelligent, but if it takes 2 hours to answer the same question
correctly it is somehow less intelligent? Speed is meaningless for
this thought experiment.

>
>  > if it makes sense for a room to be conscious, then it makes sense that
>
> > anything and everything can be conscious
>
> Yes, providing the thing in question behaves intelligently.

What is the 'thing in question' though? The room is in a building, so
does the building behave intelligently now?


> We only think
> our fellow humans are conscious when they behave intelligently and that's
> the only reason we DON'T think they're conscious when they're sleeping or
> dead; all I ask is that you play by the same rules when dealing with
> computers or Chinese Rooms.

We give humans the benefit of the doubt because we know they are human
rather than a contrived apparatus. If we didn't, then we would not be
behaving intelligently.

>
> >> However Searle does not expect us to think it odd that 3 pounds of grey
> >> goo in a bone vat can be conscious
>
> > Because unlike you, he [Searle] is not presuming the neuron doctrine. I
> > think his position is that consciousness cannot be solely a result of the
> > material functioning of the brain and must be something else.
>
> And yet if you change the way the brain functions, through drugs or surgery
> or electrical stimulation or a bullet to the head, the conscious experience
> changes too.

Sure, but that only shows a correlation between brain function and
consciousness, not causation. If you put a bullet through your
computer, you affect your access to the internet, but not the internet
itself.

> And if the brain can make use of this free floating glowing
> bullshit of yours

You aren't getting that the free floating glow is me making fun of
your view. I was pointing out that it is absurd to talk about a room
or machine being just conscious in general. That it spreads from book
to man to room or something. It is your bullshit that I'm ridiculing.
I guess you missed that.

> what reason is there to believe that computers can't also
> do so? I've asked this question before and the best you could come up with
> is that computers aren't squishy and don't smell bad so they can't be
> conscious. I don't find that argument compelling.

You aren't listening to what I'm saying. You don't understand what
Searle was talking about either. Other people do though. We are alive
because we are made of living organisms. Computers are not alive
because they are inorganic matter that is organized by an external
agent. You can't make a stem cell out of a semiconductor, and indeed
you can't make a good semiconductor out of a complex living organism.
They represent completely opposite and mutually exclusive potentials
of matter. What makes a good machine makes a bad organism. The
fundamental unit of a computer is mathematical, but the fundamental
unit of a cell is biological; one cannot be reduced to the other any
more than a color can be reduced to black and white. A cat can't be
reduced to a tomato. A person can't be built of Legos.

>
> > We know the brain relates directly to consciousness, but we don't know
> > for sure how.
>
> If you don't know how the brain produces consciousness

I don't think the brain produces consciousness. I think awareness
produces consciousness. The brain produces neurotransmitters.

> then how in the
> world can you be so certain a computer can't do it too, especially if the
> computer is as intelligent or even more intelligent than the brain?

Because I understand the relation between brain and mind, and the
difference between that and machines and hardware. Living things try
to write their own stories, machines do not. Machines are automatic
and pre-recorded, not live and aware.

>
> > We can make a distinction between the temporary disposition of the brain
> > and its more permanent structure or organization.
>
> A .44 Magnum bullet in the brain would cause a change in brain organization
> and would seem to be rather permanent. I believe such a thing would also
> cause a rather significant change in consciousness. Do you disagree?

That's what I'm saying. A bullet can do that because it's causing a
physical catastrophe to the brain as a whole, not because it is
reprogramming the organization of the mind. There is no virtual bullet
book you could read that would have a similar effect on your
consciousness as being shot to death.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
