On Fri, Dec 12, 2025 at 7:37 AM 'Tomasz Rola' via Everything List <
[email protected]> wrote:

>> I would say that getting a score of 118 on the Putnam is FAR beyond
>> the abilities of 99.99+% of the humans on this planet; so "superhuman"
>> would not be a completely inaccurate word to describe such an ability.
>
>
>
> Sure, solving more problems should be more difficult and would require
> bigger ability - in this particular field, i.e. problem solving.


Problem solving is more than just a "particular field"; intelligence is a
measure of how good something is at problem solving.

> while a horse is stronger than me, so it has a bigger ability in the field
> of moving a ton of cargo, it is still no reason to treat a horse as some
> kind of demigod.


Do you really believe that is a good analogy for what we are currently
observing in the field of AI?!


>
>
> This is not to say that there has been no progress - there was progress,
> quite impressive progress, during more or less a single lifetime.


Over a single lifetime? There has been quite impressive progress in AI
during the previous month!

> it is very hard to double the performance of a cluster. You may see an
> example in the Wikipedia article about scalability, where it is shown how
> doubling CPUs from 4 to 8 only gives a 22% speed increase (for a very
> specific kind of computation, as described there).
>

Recent developments have proven that one of the following things must be
true:

1) The article is flat-out wrong.
2) The article is correct, but AI is not one of the "very specific kinds
of computation" that it is referring to.
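For what it's worth, the 22% figure is just what Amdahl's law predicts when a
chunk of the work is stuck being serial. Here is a minimal sketch; the 70%
parallelizable fraction is my assumption, chosen because it reproduces the
quoted number, and the article's exact parameters may differ:

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# is parallelizable and the remaining (1 - p) must run serially.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.7                 # assumed parallelizable fraction
s4 = speedup(p, 4)      # speedup on 4 CPUs vs. 1 CPU (~2.11x)
s8 = speedup(p, 8)      # speedup on 8 CPUs vs. 1 CPU (~2.58x)
gain = s8 / s4 - 1.0    # going from 4 to 8 CPUs: only ~22.6% faster
print(f"4 CPUs: {s4:.2f}x, 8 CPUs: {s8:.2f}x, relative gain: {gain:.1%}")
```

Large AI training runs are close to embarrassingly parallel (p near 1), which
would make them exactly the kind of computation such a pessimistic example
does not cover - that is, option 2.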

> The whole talk about building nuclear plants for powering clusters to run
> bigger models on them


That's because no matter how smart and efficient an AI is, more power will
always enable it to perform more computations and therefore become
smarter.


> seems (IMHO) to indicate "they" have hit the wall and cannot easily
> improve anymore.


Interesting theory, but it's wrong because it disagrees with observations.
As Richard Feynman said, "It doesn't matter how beautiful your theory is, it
doesn't matter how smart you are. If it doesn't agree with experiment, it's
wrong."

> This time, it may be different.



It is. This time nobody talks about the Turing Test anymore, because AI
blew past that benchmark about 3 years ago, and it had long been considered
the gold standard for AI. It took about 70 years to go from the first
primitive electronic computer to beating the Turing Test, but since then,
during the previous 3 years, I have observed the rate of AI improvement
accelerate dramatically.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1AK0%2BL6kLOgoW_iF0M4wh0XQH4iatLSCwNj4QR1zWwLQ%40mail.gmail.com.
