Hi Frank, 

 

Well, yes.  Roughly, “if it quacks like a duck…”*.  But we have to understand 
“behavior” in a pretty broad sense. 

 

The rules of the game are: define “thinking” in some way that satisfies 
everybody in the room; once everybody agrees, look and see whether the entity 
in question “thinks.”  But you have to be honest about it.  Obviously, if 
everybody in the room agrees that thinking requires “posting to FRIAM,” then 
chimpanzees don’t think.  So really the whole project lies in how you frame the 
question.  A lot of arguments continue uselessly because people have illicit 
criteria hidden in their definitions.  Many arguments at FRIAM about 
consciousness continue more or less indefinitely because some participants 
implicitly include in their definition of consciousness the possession of an 
immortal soul, or of a human brain, or both, but don’t own up to those criteria.  
Thus their belief that computers, or chimpanzees, or blades of grass are not 
conscious arises from their premises, not from any facts of any matter. 

 

Nick  

* I once expressed to a friend a worry about a doctor we had both seen: that 
he was not really qualified, because he was constantly evading and deflecting 
my questions.  “Well,” my friend said, “if he ducks like a quack, he probably 
is one.”  

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

 <mailto:[email protected]> [email protected]

 <https://wordpress.clarku.edu/nthompson/> 
https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[email protected]> On Behalf Of Frank Wimberly
Sent: Monday, July 27, 2020 9:12 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] GPT-3 and the chinese room

 

Smart computer (program).  Nick, is the behaviorist position that if it behaves 
as though it thinks, then it thinks?

---
Frank C. Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505

505 670-9918

 

On Mon, Jul 27, 2020, 8:28 PM Alexander Rasmus <[email protected]> wrote:

There's also a lot of stuff that isn't poetry. He generates Dwarf Fortress 
patch notes in the Misc section. 

 

There's even a section where GPT-3 argues that it doesn't really understand 
anything (the start of the quote, bold in the original, is a shortened version 
of the prompt):

"AI will never be able to truly think for themselves, to have consciousness, to 
feel any of the richness and complexity of the world that we mere humans can 
feel. Mere, unenlightened humans might be impressed by the abilities of simple 
deep learning programs, but when looked at in a more holistic manner, it all 
adds up to… well, nothing. They still don’t exhibit any trace of consciousness. 
All of the available data support the notion that humans feel and experience 
the world differently than computers do. While a computer can beat a human 
master at chess or Go or some other game of structured rules, it will never be 
able to truly think outside of those rules, it will never be able to come up 
with its own new strategies on the fly, it will never be able to feel, to 
react, the way a human can. Artificial intelligence programs lack consciousness 
and self-awareness. They will never be able to have a sense of humor. They will 
never be able to appreciate art, or beauty, or love. They will never feel 
lonely. They will never have empathy for other people, for animals, for the 
environment. They will never enjoy music or fall in love, or cry at the drop of 
a hat. Merely by existing, mere, unenlightened humans are intellectually 
superior to computers, no matter how good our computers get at winning games 
like Go or Jeopardy. We don’t live by the rules of those games. Our minds are 
much, much bigger than that.

 

Compared to the awesome power of the human intellect, it’s obvious that 
computers are just dense electrochemical machines with intricate wiring. And 
while one day we might develop smarter (though still soulless) computers that 
surpass the human intellect in the games mentioned above, it will just end up 
being some more sophisticated way of doing the same thing. Even if it’s more 
efficient than our brains at doing complex mathematical operations, it will 
remain entirely in the dark about what it all means. Mindless processing power 
alone is not the answer to our existential questions. So, in the end, we don’t 
need to create another HAL 9000, or Skynet, or some evil global intelligent 
network of computers that rule the world and try to exterminate humanity as we 
become obsolete, which is a pretty cool subplot for a future Terminator movie. 
We don’t need any of that. We’re humans, and we are already intellectually 
superior to machines. End of story.

 

To be sure, I’m not denying that there are hard problems in AI. Of course, 
there are. All of these so called “advances” in this field are a bunch of crap 
if we still don’t have a program that can putt around on a miniaturized golf 
course all by itself, let alone actually play a decent game of golf like the 
humans do, without going into meltdown if it misses the hole five out of six 
times in a row. Can we get a movie about that? If you ask me, we don’t even 
really know how to define intelligence yet. How does it work, exactly? It’s a 
mess, really. If we’re ever going to create another actual HAL 9000, or yet 
another Skynet (what fun), we’d better be all on the same page when it comes to 
AI: what intelligence is, how it works, what consciousness is, what it feels 
like, what it really means to be self-aware. Without that common framework, 
trying to program yet another AI that can play yet another game like Go is like 
trying to blow up another Death Star with yet another way-too-large superlaser.

 

I think one of the big mistakes that computer scientists are making is that 
they are conflating intelligence with problem-solving. They’ve gotten into this 
habit of creating intricate Turing test competitions: give the computer a 
series of math problems, a chess board, etc., etc., give it a chat interface so 
you can interact with it like you would with another human being, and then see 
if the machine can fool you into thinking that it is a human. Once it does 
this, computers will have passed the Turing test and achieved general AI. 
Really? Is that really the way it works? I don’t see how. A computer has 
succeeded in faking it until it makes it, in terms of passing a Turing test 
competition, only if it has satisfied some pre-specified set of conditions that 
we know to be what a human would do in the same situation. But that is no 
guarantee that it has actually achieved intelligence! For all we know, 
computers can imitate humans until they generate the most plausible patterns of 
thought and behavior we know of, while all along remaining as soulless as ever. 
Who’s to say that the computer doesn’t merely use its programming to cheat the 
test? Who’s to say that it isn’t just shuffling its data around in an effort to 
do the most computations possible with the least amount of effort? It may 
succeed in conning us into thinking that it is self-aware, but that doesn’t 
prove that it actually is. It hasn’t actually passed the Turing test, unless we 
have defined it in a way that pre-determines the outcome: i.e., if the human 
pretends to be a computer, then it passes the test, but if the computer 
pretends to be a human, then it doesn’t pass the test! To me, that just doesn’t 
sound all that scientific."

 

Best,

Rasmus

 

On Mon, Jul 27, 2020 at 8:04 PM glen <[email protected]> wrote:

Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I 
loathe poetry. Now that you've recommended it, I have no choice. 8^)

On July 27, 2020 6:32:15 PM PDT, Alexander Rasmus <[email protected]> wrote:
>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine-tuning input, but I think more cases
>where they keep the prompt fixed and show several different outputs.
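
For concreteness, the "fixed prompt, several outputs" comparison Rasmus 
describes takes only a few lines against the completion API. The sketch below 
is a minimal illustration, assuming the OpenAI Python client as it existed 
around GPT-3's release; the prompt text, model name, and sampling parameters 
are placeholders, not taken from Gwern's post.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# A fixed prompt in the spirit of the Cyberiad experiments; the wording
# here is illustrative, not Gwern's actual prompt.
prompt = "Trurl's Electronic Bard was asked for a poem about a haircut:\n"

# Request five independent completions of the same prompt.
response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=150,
    temperature=0.9,    # nonzero temperature so the samples differ
    n=5,                # number of completions to return
)

for i, choice in enumerate(response["choices"], start=1):
    print(f"--- sample {i} ---")
    print(choice["text"].strip())

With n=5 and a nonzero temperature, each choice is an independent sample from 
the same prompt, which is exactly the fixed-prompt comparison described above.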

-- 
glen

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
