Newton was correct, but of course he knew jack shit about the minds that came 
after his lifetime. A merely intelligent computer is what you are really asking 
about. "High-speed morons" was a phrase from science-fiction god Arthur C. 
Clarke. Thus a very fast, very capable robot could beat a human soldier most of 
the time. Musk's neural-jacked humans may prove equal to the machinery, and if 
I recall, Hawking also advocated something like that, 25+ years ago. 
Hence our species' need for Magnus, Robot Fighter, 4000 AD!
By the way, Magnus was trained in robot fighting by the robot 1A at his 
secret base under the Antarctic ice. 1A wanted to see the human species 
survive. Magnus often cut deals with robots because it was mutually beneficial. 

-----Original Message-----
From: John Clark <>
Sent: Tue, Mar 21, 2023 8:12 am
Subject: Re: GPT-4 solving hard riddles

On Tue, Mar 21, 2023 at 5:39 AM Telmo Menezes <> wrote:

> the important methodological distinction here is between learning intelligent 
> behavior and demonstrating intelligent behavior. Obviously it is possible to 
> learn and generalize from a dataset, otherwise there would be no point in 
> wasting time with ML. But if you want to convince other people that you have 
> indeed achieved generalization, then the scientific gold standard is to 
> demonstrate this on data that was not used in training,
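The held-out-data standard described above is easy to illustrate: partition the dataset before training and evaluate only on the portion the model never saw. A minimal sketch in plain Python (the toy data and the 80/20 split ratio are hypothetical choices, not anything from this thread):

```python
import random

# Hypothetical dataset: pair each input with a label.
data = [(x, x % 2) for x in range(100)]

# Shuffle reproducibly, then hold out 20% as a test set
# that the model never sees during training.
random.seed(42)
random.shuffle(data)
split = int(0.8 * len(data))
train_set, test_set = data[:split], data[split:]

# A generalization claim is only credible if the two sets
# share no examples.
assert not set(train_set) & set(test_set)
```

Performance measured on `test_set` is then evidence of generalization rather than memorization.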

That "gold standard" for intelligence has never been met by computers or by 
human beings. Even Newton, who was certainly not a modest person, admitted that 
he achieved what he did by "standing on the shoulders of giants". Should we 
then give credit for discovering General Relativity to Einstein's teachers and 
not to Einstein? Since GPT-4 went public one week ago, people all over the 
world have been asking hundreds of thousands, perhaps millions, of questions 
and receiving good and sometimes brilliant answers. But considering that one of 
the many things it was trained on was the entirety of Wikipedia, it would be 
impossible to prove that none of the questions it has been asked bears the 
slightest similarity to something it was trained on. I know one thing for 
certain: if a human could answer questions and solve puzzles as well as GPT-4, 
nobody would hesitate to judge him intelligent. 
I think it's only fair to use the same criteria for judging machines as we do 
for humans. As Martin Luther King said, "I have a dream that one day the 
intelligence of beings will not be judged by the squishiness of their brains 
but by the content of their minds"... ah... or at least he said something 
like that; I may have gotten one or two words wrong. 

> I am really just insisting on sticking to the scientific attitude.

It is not a scientific attitude to start an investigation of a machine's 
intelligence by insisting that the machine could never be intelligent. The 
double-blind Turing Test is just a specific instance of the scientific method: 
take two test groups, keep everything the same between them except for one 
thing, and see what happens. In this case the one thing that differs is the 
squishiness of the brain.

 > I do not understand what I could be saying that is so controversial...

You do not understand why it's controversial not to accept the evidence of 
your own eyes? 

 > There is still a huge chasm between Human Intelligence (HI) and GPT-4. 

If there is an intelligence chasm between humans and machines, then humans are 
standing on the wrong side of it, and the chasm is getting wider every day. 

> How long will it take to cross that chasm? 

Negative one week.  

 > But this only goes so far. It can never defeat a competent chess player with 
 > such an architecture. Of course, we can integrate GPT-4 with some API and 
 > let it call some explore_deep_tree() function, but this is not the sort of 
 > deep integration that one imagines in sophisticated AI. True recurrence 
 > would allow for true computational power within the model.

Why? Because if, whenever GPT-4 came upon a board-game problem like Chess or 
Go, it called upon AlphaZero to provide the answer, then it wouldn't be able to 
explain exactly why it made the move it did? But the same thing is true of 
human Chess grandmasters: when asked to explain why they made the move they 
did, they can only give vague answers like "instinct told me that the upper 
left part of the board looked a little weak and needed reinforcing". A 
grandmaster can explain why it turned out to be a winning move, but he can't 
explain how he came up with the idea of making that move in the first place. 
People were always asking Einstein how he came up with his ideas, but he was 
never able to tell them; if he had been, we'd all be as smart as Einstein. 
 John K Clark    See what's on my new list at Extropolis
-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
To view this discussion on the web visit
