On Sat, Mar 18, 2023 at 3:49 AM smitra <[email protected]> wrote:

>
>
> *> The way one would be able to see that the system despite performing
> extremely well does not have the intellectual capabilities of a human
> being, would be to follow up on gaps in its knowledge and see if it can
> learn from its mistakes and master new subjects.*


Some humans have the capacity to do that, but most do not, so you couldn't
say that's the defining characteristic of being human.


> *> I'll be convinced if they succeed making such a system do original
> research in, say, theoretical physics or mathematics*


Protein folding. The four-color map problem. The Boolean Pythagorean
triples problem.
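For readers unfamiliar with the last example: the Boolean Pythagorean triples problem asks whether {1, ..., n} can be split into two sets so that neither set contains a Pythagorean triple (a, b, c) with a² + b² = c²; Heule, Kullmann, and Marek's 2016 SAT-assisted proof showed it cannot be done for n = 7825. A minimal sketch of the underlying check, with illustrative helper names of my own choosing:

```python
from math import isqrt

def pythagorean_triples(n):
    """All triples (a, b, c) with a^2 + b^2 = c^2 and a < b < c <= n."""
    triples = []
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            c2 = a * a + b * b
            c = isqrt(c2)
            if c <= n and c * c == c2:
                triples.append((a, b, c))
    return triples

def has_monochromatic_triple(coloring, triples):
    """coloring maps each integer in {1..n} to 0 or 1; a triple is
    monochromatic when all three members get the same color."""
    return any(coloring[a] == coloring[b] == coloring[c]
               for a, b, c in triples)
```

The actual proof, of course, did not brute-force colorings this way; it encoded the question as a propositional formula and had a SAT solver produce a machine-checkable refutation, which is the point: a result no human found unaided.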

*> I would be more impressed by a system that may make many more mistakes
> like that than this GPT system made, but where there is a follow-up
> conversation where the mistakes are pointed out and the system shows that
> it has learned*


GPT-4 doesn't know everything, but I'm sure you will admit it knows some
things; and if it didn't have the capacity for learning, it wouldn't know
anything. It does know some things, and that's why they call it a "machine
learning" program.

John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2TEV7Y_1OLtZhnVkutQ612FaWMug_wA-9rJn59o51evg%40mail.gmail.com.
