On Fri, Mar 29, 2024, 1:42 AM Dylan Distasio <[email protected]> wrote:

> I think we need to be careful with considering LLM parameters as analogous
> to synapses.   Biological neuronal systems have very significant
> differences in terms of structure, complexity, and operation compared to
> LLM parameters.
>
> Personally, I don't believe it is a given that simply increasing the
> parameters of a LLM is going to result in AGI or parity with overall human
> potential.
>

I agree it may not be apples to apples to compare synapses to parameters,
but of all the comparisons to make it is perhaps the closest one there is.
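To make the comparison concrete, Russell's back-of-envelope scaling estimate further down the thread can be reproduced in a few lines. This is a rough sketch: the synapse and parameter figures are his, and a synapse is surely not worth exactly one parameter.

```python
from math import log10

# Figures from Russell's estimate in this thread (both are rough):
human_synapses = 7e14   # ~700 trillion synapses in the human brain
gpt5_params = 2e12      # projected GPT-5 parameter count
years_per_oom = 4       # GPT-3 -> GPT-5: one order of magnitude in ~4 years

gap_ooms = log10(human_synapses / gpt5_params)  # orders of magnitude remaining
years_to_parity = gap_ooms * years_per_oom

print(f"gap: {gap_ooms:.2f} orders of magnitude")
print(f"~{years_to_parity:.0f} more years at the current rate")
```

That puts raw-count parity in the early-to-mid 2030s, roughly consistent with his 2033 figure.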


> I think there is a lot more to figure out before we get there, and LLMs
> (assuming variations on current transformer based architectures) may end up
> a dead end without other AI breakthroughs combining them with other
> components, and inputs (as in sensory inputs).
>

Here is where I think we may disagree. I think the basic LLM model, as
currently used, is all we need to achieve AGI.

My motivation for this belief is that all forms of intelligence reduce to
prediction: given a sequence of observables, determine what is the most
likely next thing to see.

Take any problem that requires intelligence to solve and I can show you how
it is a subset of the skill of prediction.
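As a toy illustration of "intelligence as prediction" (a deliberately minimal sketch, nothing like how a transformer actually works): even a bigram counter turns "what comes next?" into a learnable statistic.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the corpus."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, token):
    """Return the most frequently observed next token."""
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real LLM replaces the counting table with a learned function over whole contexts, but the task it is trained on is exactly this one.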

Since human language is universal in the forms and types of patterns it can
express, there is no limit to the kinds of patterns an LLM can learn to
recognize and predict. Think of the thousands, if not millions, of types of
patterns that exist in the training corpus. The LLM can learn them all.

We have already seen this. Despite not being trained for anything beyond
prediction, modern LLMs have learned to write code, perform arithmetic,
translate between languages, play chess, summarize text, take tests, draw
pictures, etc.

The "universal approximation theorem" (UAT) is a result in the field of
neural networks which says that a large enough feedforward network can
approximate any continuous function to any desired accuracy (it guarantees
the capacity exists, not that training will find it). Given the UAT, and
the universality of language to express any pattern, I believe the only
things holding back LLMs today are network size and the amount of training.
I think the language corpus is sufficiently large and diverse in the
patterns it contains that it isn't what's holding us back.

An argument could be made that we have already achieved AGI. We have AI
that passes the bar exam in the 90th percentile, scores in the 99th
percentile on math olympiad qualifying tests, programs better than the
average Google coder, scores a 155 on a verbal IQ test, etc. If we took
GPT-4 back to the 1980s to show it off, would anyone at the time have said
it is not AGI? I think we are only blind to the significance of what has
happened because we are living through it now, and the history books have
not yet covered this time.

Jason



> We may find out that the singularity is a lot further away than it seems,
> but I guess time will tell.    Personally, I would be very surprised to see
> it within the next decade.
>
> On Thu, Mar 28, 2024 at 9:27 PM Russell Standish <[email protected]>
> wrote:
>
>>
>> So to compare apples with apples - the human brain contains around 700
>> trillion (7E14) synapses, which would roughly correspond to an AI's
>> parameter count. GPT5 (due to be released sometime next year) will
>> have around 2E12 parameters, still 2-3 orders of magnitude to
>> go. Assuming the current rate of AI improvement continues (GPT3->GPT5,
>> 4 years, is one order of magnitude increase in parameter count), it
>> will take until 2033 for AI to achieve human parity.
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/everything-list/20240329012651.GE2357%40zen
>> .
>>
