Re: Coming Singularity

2024-03-29 Thread Russell Standish
On Fri, Mar 29, 2024 at 09:55:28AM -0400, John Clark wrote:
> On Thu, Mar 28, 2024 at 9:27 PM Russell Standish  
> wrote:
>  
> 
> >"So to compare apples with apples - the human brain contains around 700 
> trillion (7E14) synapses"
> 
> 
> I believe 700 trillion is a more than generous estimate of the number of
> synapses in the human brain, but I'll let it go.  
>  
> 
> 
> >"which would roughly correpond to an AI's parameter count
> 
> 
> 
> NO! Comparing the human brain's synapses to the number of parameters that
> an AI program like GPT-4 has is NOT comparing apples to apples, it's
> comparing apples to oranges because the brain is hardware but GPT-4 is
> software. So let's compare the brain hardware that human intelligence is
> running on with the brain hardware that GPT-4 is running on, that is to
> say let's compare synapses to transistors. I'll use your very generous
> estimate and say the human brain has 7*10^14 synapses, but the largest
> supercomputer in the world, the Frontier Computer at Oak Ridge, has about
> 2.5*10^15 transistors, over three times as many. And we know from
> experiments that a typical synapse in the human brain "fires" between 5
> and 50 times per second, but a typical transistor in a computer "fires"
> about 4 billion times a second (4*10^9). That's why the Frontier Computer
> can perform 1.1*10^18 floating point calculations per second and why the
> human brain cannot.

There is a big difference between the way transistors are wired in a
CPU and the way neurons are wired up in a brain. The brain is not
optimised at all to do floating point calculations, which is why even
the most competent "computer" (in the old-fashioned sense) can only
manage less than one floating point operation per second. Conversely,
using floating point operations to perform neural network computations
is not exactly efficient either. We're using GPUs today because they
can perform these very fast, it's a massively parallel operation, and
GPUs are cheap for what they are. In the future, I would expect we'd
have dedicated neural processing units, based on memristors or
whatever. Indeed Intel is now flogging chips with "NPU"s, but how much
of that is real and how much is marketing spin I can't say.

Comparing synapses with ANN parameters is only relevant to the
statement "we can simulate a human brain sized ANN by X
date". Kurzweil didn't say that (for some reason I thought he did); he
said human intelligence parity (which I suppose could be taken to be
average intelligence, or an IQ of 100). In a human brain, a lot of
neurons are handling body operations - controlling muscles,
interoception, proprioception, endocrine control etc - so the actual
figure relevant to language processing is likely to be far smaller
than the figure given. But only by an order of magnitude, I would say.

> 
> I should add that although there have been significant improvements in
> the field of AI in recent years, the most important being the "Attention
> Is All You Need" paper, I believe that even if transformers had never
> been discovered the AI explosion that we are currently observing would
> only have been delayed by a few years because the most important thing
> driving it forward is the enormous brute-force increase in raw computing
> speed.
> 
> 
> > "He [Ray Kurzweil]  was predicting 2029 to be the time when AI will
> attain human level intelligence."
> 
> 
> It now looks like Ray was being too conservative and 2024 or 2025 would
> be closer to the mark, and 2029 would be the time when an AI is smarter
> than the entire human race combined.
> 

2025 should see the release of GPT5. It is still at least two orders
of magnitude short of the mark IMHO. It is faster though - training
GPT5 will have taken about 2 years, whereas it takes nearly 20 years
to train a human.
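
To make the extrapolation explicit, here is a minimal sketch in Python,
using the round numbers from my earlier post (~7E14 parameters as a
synapse-count proxy for parity, ~2E12 parameters for GPT5 in 2025, one
order of magnitude of growth per 4 years); these are assumptions, not
measurements:

import math

target = 7e14        # synapse-count proxy for human parity
current = 2e12       # assumed GPT5 parameter count
year = 2025          # assumed GPT5 release year
years_per_oom = 4    # GPT3 -> GPT5: ~one order of magnitude in ~4 years

gap_oom = math.log10(target / current)    # ~2.5 orders of magnitude
parity = year + gap_oom * years_per_oom   # ~2035 on these assumptions

# If only ~1/10 of synapses are relevant to language/cognition (see my
# earlier comment), the target drops to 7e13 and parity arrives ~4
# years sooner.
print(f"gap: {gap_oom:.2f} OOM, parity around {parity:.0f}")

On these numbers it lands in the early-to-mid 2030s, the same ballpark
as my earlier 2033 figure, depending on where you start counting.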

> 
> 
> > "I would still say that creativity (which is an essential prerequisite)
> is still mysterious"
> 
> 
> It doesn't matter if humans find creativity to be mysterious because we
> have an existence proof that a lack of understanding of creativity does
> not prevent humans from making a machine that is creative.

That may be the case, but understanding something does accelerate
progress dramatically over blind "trial and error". It is the main
reason for the explosion in technical prowess over the last 400 years.

> Back in 2016 when a computer
> beat Lee Sedol, the top human champion at the game of GO, the thing that
> everybody was talking about was move 37 of the second game of the five game
> tournament. When the computer made that move the live expert commentators were
> shocked and described it as "practically nonsensical" and "something no human
> would do", and yet that crazy "nonsensical" move was the move that enabled the
> computer to win.  Lee Sedol said move 37 was "an incredible move" and was
> completely unexpected and made it impossible for him to win, although it took
> him a few more moves before he realized that. 

Re: Coming Singularity

2024-03-29 Thread Jason Resch
On Fri, Mar 29, 2024, 1:42 AM Dylan Distasio  wrote:

> I think we need to be careful with considering LLM parameters as analogous
> to synapses.   Biological neuronal systems have very significant
> differences in terms of structure, complexity, and operation compared to
> LLM parameters.
>
> Personally, I don't believe it is a given that simply increasing the
> parameters of an LLM is going to result in AGI or parity with overall
> human potential.
>

I agree it may not be apples to apples to compare synapses to parameters,
but of all the comparisons to make it is perhaps the closest one there is.


> I think there is a lot more to figure out before we get there, and LLMs
> (assuming variations on current transformer based architectures) may end
> up a dead end without other AI breakthroughs combining them with other
> components and inputs (as in sensory inputs).
>

Here is where I think we may disagree. I think the basic LLM model, as
currently used, is all we need to achieve AGI.

My motivation for this belief is that all forms of intelligence reduce
to prediction (that is, given a sequence of observables, determining
the most likely next thing to see).

Take any problem that requires intelligence to solve and I can show you how
it is a subset of the skill of prediction.
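
As a toy illustration of that reduction (the names solve, predict_next,
and dummy_predict below are hypothetical stand-ins I made up for this
sketch, not any particular model's API):

# Toy illustration: an arbitrary task cast as next-token prediction.
# predict_next(tokens) is a hypothetical stand-in for a trained LLM
# that returns the most likely next token given the sequence so far.

def solve(prompt, predict_next, max_tokens=100, stop="<end>"):
    """Frame a task as: predict what comes after its description."""
    tokens = list(prompt)            # e.g. "2+2="
    answer = []
    for _ in range(max_tokens):
        nxt = predict_next(tokens)   # the single primitive: prediction
        if nxt == stop:
            break
        answer.append(nxt)
        tokens.append(nxt)
    return "".join(answer)

# A trivial canned predictor, just to make the sketch executable:
def dummy_predict(tokens):
    return {"2+2=": "4"}.get("".join(tokens[-4:]), "<end>")

print(solve("2+2=", dummy_predict))  # -> 4

The point is only that question answering, translation, playing a game,
and so on all fit this same loop; all of the intelligence lives in the
quality of the predictor.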

Since human language is universal in the forms and types of patterns it
can express, there is no limit to the kinds of patterns an LLM can learn
to recognize and predict. Think of all the thousands, if not millions,
of types of patterns that exist in the training corpus. The LLM can
learn them all.

We have already seen this. Despite not being trained for anything beyond
prediction, modern LLMs have learned to write code, perform arithmetic,
translate between languages, play chess, summarize text, take tests, draw
pictures, etc.

The "universal approximation theorem" (UAT) is a result in the field of
neural networks which says that with a large enough neural network, and
with enough training, a neural network can learn any function. Given this,
the UAT, and the universality of language to express any pattern, I believe
the only thing holding back LLMs today is their network size and amount of
training. I think the language corpus is sufficiently large and diverse in
the patterns it contains that it isn't what's holding us back.
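
For reference, the classical statement (Cybenko 1989 / Hornik 1991)
concerns a single hidden layer with a fixed non-polynomial activation
sigma, and note it only guarantees that approximating weights exist; it
says nothing about whether training will find them. In LaTeX:

\[
\forall f \in C([0,1]^n),\ \forall \varepsilon > 0,\ \exists N,\
\alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :\quad
\sup_{x \in [0,1]^n} \Bigl|\, f(x) - \sum_{i=1}^{N} \alpha_i\,
\sigma(w_i^{\top} x + b_i) \,\Bigr| < \varepsilon .
\]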

An argument could be made that we have already achieved AGI. We have AI
that passes the bar in the 90th percentile, passes math olympiad tests
in the 99th percentile, programs better than the average Google coder,
scores 155 on a verbal IQ test, etc. If we took GPT-4 back to the 1980s
to show it off, would anyone at the time say it is not AGI? I think we
are only blinded to the significance of what has happened because we
are living through history now and the history books have not yet
covered this time.

Jason



> We may find out that the singularity is a lot further away than it
> seems, but I guess time will tell. Personally, I would be very
> surprised to see it within the next decade.
>
> On Thu, Mar 28, 2024 at 9:27 PM Russell Standish 
> wrote:
>
>>
>> So to compare apples with apples - the human brain contains around 700
>> trillion (7E14) synapses, which would roughly correspond to an AI's
>> parameter count. GPT5 (due to be released sometime next year) will
>> have around 2E12 parameters, still 2-3 orders of magnitude to go.
>> Assuming current rates of AI improvement continue (GPT3->GPT5, one
>> order of magnitude increase in parameter count in 4 years), it will
>> take until about 2033 for AI to achieve human parity.



Re: Coming Singularity

2024-03-29 Thread John Clark
On Thu, Mar 28, 2024 at 9:27 PM Russell Standish 
wrote:


> * >"So to compare apples with apples - the human brain contains around
> 700 trillion (7E14) synapses"*


I believe 700 trillion is a more than generous estimate of the number of
synapses in the human brain, but I'll let it go.


*>"which would roughly correpond to an AI's parameter count*



NO! Comparing the human brain's synapses to the number of parameters that
an AI program like GPT-4 has is NOT comparing apples to apples, it's
comparing apples to oranges because the brain is hardware but GPT-4 is
software. So let's compare the brain hardware that human intelligence is
running on with the brain hardware that GPT-4 is running on, that is to
say let's compare synapses to transistors. I'll use your very generous
estimate and say the human brain has 7*10^14 synapses, but the largest
supercomputer in the world, the Frontier Computer at Oak Ridge, has about
2.5*10^15 transistors, over three times as many. And we know from
experiments that a typical synapse in the human brain "fires" between 5
and 50 times per second, but a typical transistor in a computer "fires"
about 4 billion times a second (4*10^9). That's why the Frontier Computer
can perform 1.1*10^18 floating point calculations per second and why the
human brain cannot.
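
For concreteness, here is the back-of-the-envelope arithmetic in a few
lines of Python, using the figures quoted above with the synapse rate
taken at the upper end of the 5-50 Hz range (bearing in mind that a
synaptic event and a transistor switch are not equivalent units of
computation):

# Back-of-the-envelope comparison using the figures quoted above.
synapses = 7e14           # generous estimate of human synapse count
synapse_hz = 50           # upper end of the quoted 5-50 firings/second
transistors = 2.5e15      # approximate count for Frontier at Oak Ridge
transistor_hz = 4e9       # approximate switching rate quoted above

brain_rate = synapses * synapse_hz          # ~3.5e16 synaptic events/s
machine_rate = transistors * transistor_hz  # ~1e25 transistor switches/s

print(f"brain:   {brain_rate:.1e} synaptic events/s")
print(f"machine: {machine_rate:.1e} transistor switches/s")
print(f"ratio:   {machine_rate / brain_rate:.1e}")  # ~3e8

Many transistor switches (and many idle transistors) lie behind each
floating point operation, which is why Frontier's quoted 1.1*10^18 FLOPS
is far below its raw switching rate.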

I should add that although there have been significant improvements in the
field of AI in recent years, the most important being the "Attention Is All
You Need" paper, I believe that even if transformers had never been
discovered the AI explosion that we are currently observing would only have
been delayed by a few years because the most important thing driving it
forward is the enormous brute-force increase in raw computing speed.

> "*He [Ray Kurzweil]  was predicting 2029 to be the time when AI will
> attain human level intelligence.*"


It now looks like Ray was being too conservative and 2024 or 2025 would be
closer to the mark, and 2029 would be the time when an AI is smarter than
the entire human race combined.


*> "I would still say that creativity (which is an essential prerequisite)
> is still mysterious"*


It doesn't matter if humans find creativity to be mysterious because we
have an existence proof that a lack of understanding of creativity does not
prevent humans from making a machine that is creative. Back in 2016 when a
computer beat Lee Sedol, the top human champion at the game of GO, the
thing that everybody was talking about was move 37 of the second game of
the five game tournament. When the computer made that move the live expert
commentators were shocked and described it as "practically nonsensical" and
"something no human would do", and yet that crazy "nonsensical" move was
the move that enabled the computer to win. Lee Sedol said move 37 was "an
incredible move" and was completely unexpected and made it impossible for
him to win, although it took him a few more moves before he realized that.
If a human had made move 37, every human GO expert on the planet would've
said it was the most creative move they had ever seen.

> "But singularity requires that machines design themselves"


Computers are already better at writing software than the average human,
and major chip design and manufacturing companies like NVIDIA, AMD, Intel,
Cerebras and TSMC are investing heavily in chip design software.



> "Anyway my 2c - I know John is keen to promote the idea of
> singularity this decade - but I don't see it myself."


One thing I know for certain, whenever the Singularity occurs most people
will be surprised, otherwise it wouldn't be a Singularity.

John K Clark
See what's on my new list at Extropolis

