On Wed, Dec 25, 2024 at 12:06 AM PGC <[email protected]> wrote:

*> You see what I'm getting at. It’s true that from a purely outcome-based
> perspective—either a problem is solved, or it isn’t—it can seem irrelevant
> whether an AI is “really” reasoning or simply following patterns. This I'll
> gladly concede to John's argument. If Einstein’s “real reasoning” and an
> AI’s “simulated reasoning” both yield a correct solution, the difference
> might appear purely philosophical.*


*Thank you.  *


> *> However, when we look at how that solution is derived—whether there’s a
> coherent, reusable framework or just a brute-force pattern assembly within
> a large but narrowly defined training distribution—we begin to see
> distinctions that matter. Achievements like superhuman board-game play and
> accurate protein folding often leverage enormous coverage in training or
> simulation data.*


*In other words, an AI, like a human being, needs to be educated. No human
being can be introduced to Chess for the very first time and immediately
play it at a grandmaster level; it takes a human years of study and it
takes an AI hours of study.    *

> * > This can conceal limited adaptability when confronted with radically
> unfamiliar inputs, and even if a large model appears to have used
> “higher-order abstraction,” it may not have produced a stable or
> generalizable theory—only an ad hoc solution that might fail under new
> conditions.*


*The same thing is true for human scientists. When Max Planck first
introduced the concept in 1900 that energy was quantized, his idea fit the
data for the blackbody radiation curve, but it caused all sorts of other
problems; for example, according to classical electromagnetism an orbiting
electron should constantly lose energy by radiating electromagnetic waves
and crash into the nucleus, but that doesn't happen.  Even Planck thought
his idea was just a mathematical trick that allowed him to predict what the
blackbody radiation curve would look like at a given temperature. Quantum
Mechanics wasn't fully fleshed out until 1927, and even today some think
it's not entirely complete. Usually, before you find a theory that can give
you a perfect answer all of the time, you find a theory that can give you
an approximate answer most of the time. *


*> in high-risk domains like brain surgery, we risk encountering unforeseen
> circumstances well beyond any training distribution.*


*Regardless of whether your surgeon is human or robotic, if, during your
brain surgery, he or it encounters something never seen before in his or
its education or previous practice, then you're probably toast.  *



>  * > AI’s specialized “intelligence” is inefficient compared to a
> competent human team*


*Compared with humans, current AIs are certainly inefficient if you're
talking about energy consumption, but not if you're talking about time
consumption. No human being could start with zero knowledge of Chess and
become a grandmaster of the game after two hours of self-study, especially
if during those two hours he couldn't talk to other Chess players and had
no access to Chess books; but an AI can.  *



> *> Other domains off the top in which this robust kind of generalization
> is critical are Self-Driving Vehicles in unmapped terrains or extreme
> weather:*


*Self-Driving cars are already safer than the average human driver
in unmapped terrains or extreme weather, but for legal and societal reasons
they will not become ubiquitous until they are MUCH safer than even the
very safest human driver.  *



> * > Another one would be financial trading during market crises: Extreme
> market shifts can quickly invalidate learned patterns, and “intuitive” AI
> decisions might fail without robust abstraction and lose billions.*


*In those extreme circumstances where seconds and even milliseconds become
important I would feel far more comfortable with an AI managing my money
than a human. *



> *> How about nuclear power plant operations? Rare emergency scenarios, if
> not seen in training, could lead to catastrophic outcomes*


*And that's exactly what caused the Chernobyl disaster and the Three Mile
Island meltdown, and in both cases the operators of the reactors were human
beings. The Fukushima nuclear accident was not caused by a reactor operator
error but by the human reactor designers who thought it was a good idea to
build a nuclear plant very near the ocean and put the emergency diesel
generators in the basement, even though the site was directly above a major
earthquake fault line. *

* John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>*


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv12zLuT66TBVcq3duXFsBDd435%2BxBuS8WykrvBJGq8aAA%40mail.gmail.com.