You are not engaging with what I am actually saying.

Telmo

Am Sa, 18. Mär 2023, um 13:29, schrieb John Clark:
> On Sat, Mar 18, 2023 at 5:28 AM Telmo Menezes <[email protected]> wrote:
> 
>> *> Huge progress is being made, but we are not at the human level of 
>> generality of intelligence and autonomy. Not even close.*
> 
> Not even close? Don't be silly.  
> 
>> *> I fear that you are falling for the very human bias (I fall for it so 
>> many times myself) of seeing what you want to see.*
> 
> And I fear you are whistling past the graveyard.  
> 
>> *> A machine learning system can only be objectively evaluated by applying 
>> it to data that was not used to train it.*
> 
> I don't know what you mean by that; you're not falling for that old cliché 
> that computers can only do what they're told to do, are you? GPT-4 was not 
> trained on the exact questions asked. I suppose you could make a case that 
> some of the training data GPT-4 was educated on was somewhat similar to some 
> of the questions it was asked, but the exact same thing is true for human 
> beings: when you ask a human being questions, some of those questions are 
> somewhat similar to the data he was educated on. In fact, if the data two 
> intelligences were educated on were not similar at all, they would not be 
> able to ask each other questions because they wouldn't even be able to 
> communicate. 
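A minimal sketch of the held-out evaluation Telmo describes, assuming 
scikit-learn and its bundled digits dataset purely for illustration (nothing 
here is how GPT-4 was actually evaluated); the point is only that the model 
is scored on data it never saw during training:

    # Held-out evaluation: fit on one slice of the data, measure accuracy
    # only on examples the model was never trained on.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)

    # Hold out 25% of the examples; the model is never fitted on these.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # Scoring on the held-out split is what makes the evaluation objective.
    print("held-out accuracy:", model.score(X_test, y_test))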
> 
> 
>> *> Again, it is important to understand what exactly GPT-4 is doing. It is 
>> certainly impressive, but it is not the same thing as a human being taking 
>> an IQ test,*
> 
> So you must think the following fundamental axiom is true:
> 
> *"If a human does something that is smart then the human is smart, but if a 
> computer does the exact same thing then the computer is NOT smart."*
> 
> And from that axiom it's easy to derive the following Corollary:
> 
> *"Computers, buy definition, can never be smart."*
> 
> I think you need to be more careful in picking your fundamental axioms.
> 
>> 
>> *> I do think that passing the Turing test is impressive,*
> 
> Probably the greatest understatement of all time.  
> 
>> *> although it is true that most AI researchers never took it very 
>> seriously,*
> 
> What?! I'm sure that AI researchers, like every other human being on planet 
> earth, have met people in their lives that they considered to be very 
> intelligent, and people they considered to be very stupid, but if they didn't 
> use the Turing Test to make that determination then what on earth did they 
> use? All the Turing Test is saying is that you need to play fair: whatever 
> criteria you use to judge the intelligence of your fellow human beings you 
> should also use on a computer to judge its intelligence. 
> 
> It's always the same: I'm old enough to remember when respectable people were 
> saying a computer would never be able to do better than play a mediocre game 
> of chess and certainly never be able to beat a grandmaster at the game. But 
> when a computer did beat a grandmaster at chess they switched gears and said 
> such an accomplishment means nothing and insisted a computer could never beat 
> a human champion at a game like Go because that really requires true 
> intelligence. Of course when a computer did beat the human champion at Go 
> they switched gears again and said that accomplishment means nothing because 
> a computer would never be able to pass the Turing Test because that really 
> *really* requires true intelligence. And now that a computer has passed the 
> Turing Test the human response to that accomplishment is utterly predictable. 
> As I said before, they're whistling past the graveyard.
> 
> ... and so, just seconds before he was vaporized, the last surviving human 
> being turned to Mr. Jupiter Brain and said "*I still think I'm more 
> intelligent than you*".
> 
> 
>> *> GPT-4 and image generators are a type of intelligence that we had never 
>> seen before. Maybe the first time such a thing arises in this galaxy or even 
>> universe,*
> 
> I agree, and I can't think of anything more important that happened in my 
> lifetime.  
> 
>  
>> *> They are probably also similar to stuff that happens in our brain. But 
>> what they are not is something you can compare to a human mind with an IQ 
>> test in any meaningful way.*
> 
> Not just *an* IQ test but four quite different types of IQ tests. And it was 
> a lobotomized version of GPT-4 that was tested, one that could not input 
> graphs, charts, or diagrams, so any question that contained them was 
> automatically marked wrong, and yet it STILL got an IQ of 114. And the 
> computer completed those tests in seconds while it took humans hours to do 
> the same thing. Imagine what IQ score it will get in two years, or even two 
> months. And you say "not even close"?
>  
>> *> That is just junk science.*
> 
> Huh? Creating "a type of intelligence that we had never seen before, maybe 
> the first time such a thing arises in this galaxy or even the universe", is 
> junk science?
> 
> John K Clark    See what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
> 

