On Tue, Oct 1, 2024 at 7:23 AM PGC <[email protected]> wrote:

*> I don't care what my statements sound like. It's about the argument. I'm
> not making statements like "superintelligence is around the corner",*


*I would maintain it's physically impossible to overhype the importance of
artificial intelligence.*

*> in which case the burden of proof lies with those hyping those
> statements.*


*There is no burden; things are heading in an inevitable direction and,
short of starting a thermonuclear war, nobody is going to be able to stop
it.*



> *> The exchange with Brent is instructive: can a human level intelligence
> be separated from its arguable 3.5 billion year history?*


*Yes.  *

*> Wouldn't that have to be accounted for? *


*No. *

*> If the current state of development is any indicator, where they keep
> enlarging the mathematical linguistic context which informs the response,
> then that's a lot of data for just one AI, even if you argue that early
> stages of the planet are not necessary.*
>

*True, that's a lot of data, but I don't see your point. For over a decade
the amount of computational ability that an AI has at its disposal has been
doubling every six months; that's considerably faster than Moore's law, and
there is no indication that's going to stop anytime soon. And that's not
all: thanks to improvements in software, improvements largely driven by the
AIs themselves rather than the humans, who have only a hazy understanding
of what's going on, every 8 months an AI can reach the same benchmarks
using only half the computational power.*
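The two trends above compound. A back-of-the-envelope sketch of what the claimed rates imply over a decade (the 6-month compute doubling and 8-month efficiency halving are the figures asserted in this thread, not measured constants; Moore's law is taken as doubling roughly every 24 months for contrast):

```python
# Compounding the two claimed trends:
# - compute available to an AI doubles every 6 months
# - software efficiency halves the compute needed every 8 months
# Moore's law (~24-month doubling) is shown for comparison.

def growth_factor(years, doubling_months):
    """Multiplicative growth after `years`, doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

years = 10
compute = growth_factor(years, 6)       # hardware/spending trend
efficiency = growth_factor(years, 8)    # algorithmic trend
moore = growth_factor(years, 24)

print(f"Compute over {years} yrs:    {compute:,.0f}x")      # 1,048,576x
print(f"Efficiency over {years} yrs: {efficiency:,.0f}x")   # 32,768x
print(f"Combined capability:         {compute * efficiency:,.0f}x")
print(f"Moore's law alone:           {moore:,.0f}x")        # 32x
```

On those assumptions the effective capability grows by a factor of 2^35, about a billion times more than Moore's law alone would give over the same period.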



> *> And then superintelligence demands something like "can accomplish
> arbitrary tasks/problems much better than a human and/or all humans". The
> only phenomenon that has reached that level that we have evidence for is
> the development of civilization and science by billions of lifeforms
> reaching humans over, taking your figure, 500 million years.*
>


*True. And that is precisely why I say it is physically impossible to
overhype the importance of AI.  *

*> And to demonstrate that somebody is on the path towards modelling and/or
> surpassing that, you'd need to show how.*
>

*That will never happen; even at this very early stage nobody has a
detailed understanding of how AIs work.*


> *> I not sure adding verbal/mathematical memory suffices.*
>

*By contrast, I am very sure of that. As I have already shown, it can be
proven with mathematical precision that the upper limit to the amount of
information needed to make an entire human being is only 750 megs, and the
algorithm that humans use to extract knowledge from their environment must
be much, much smaller than that, probably less than 1 MB. There have been
important developments in the field of AI, such as the invention of
transformers, but that only advanced things by a couple of years; the
primary reason we didn't have AIs like we have today back in the 1960s is
that the hardware simply wasn't able to provide the needed amount of
computation. Frank Rosenblatt invented the Perceptron way back in 1957, and
its basic architecture was similar to what we use today, but it couldn't do
much because Rosenblatt's hardware was pathetically primitive and
agonizingly slow.*
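For anyone wanting to check the 750 meg figure, it follows from the size of the human genome (assuming roughly 3 billion base pairs at 2 bits per base, with no compression, which is why it's an upper limit rather than an estimate of the true information content):

```python
# Upper bound on the information needed to specify a human:
# the genome has ~3 billion base pairs, and each base is one of
# A, C, G, T, i.e. 2 bits. No compression is applied, so this is
# an upper limit on the information content, not an estimate of it.

base_pairs = 3_000_000_000
bits = base_pairs * 2
megabytes = bits / 8 / 1_000_000

print(f"{megabytes:.0f} MB")  # 750 MB
```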

*I recently watched an old Nova documentary about AI from the 1970s on
YouTube, in which a researcher said that to develop an AI we'd need an
Einstein, or maybe 10 Einsteins, and about 1000 very good engineers, and
that it was important the Einsteins come before the engineers. But it
turned out all we needed was the engineers; Einstein was unnecessary.*

  John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1TALebSdvmCuE_CwadWFHgsthbUKWoAEdOAtJ1LWntMA%40mail.gmail.com.