Again, there is no reason to tie one neuron to one memory, or even to one
pattern of information, and I am strictly against comparing the brain to
artificial neural networks: there is no evidence that the facsimiles
present in ANNs reflect what is actually happening in living brains. The
most I have seen is that if you bend over backwards and recite Infinite
Jest nine times while squinting really hard, it *kind of* looks like the
brain can do a form of backpropagation if you force it to.

Even granting the comparison to ANNs, you cannot model non-linearity with
a single perceptron unless that perceptron uses a polynomial activation
function (which neither brains nor standard ANNs use); to suggest that a
single perceptron could practically encode the pattern of a memory seems
absurd.
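
To make that concrete, here is a minimal sketch (my own toy example, not
something from this thread, written with NumPy) of the limitation I mean:
a single perceptron with a step activation can't even fit XOR, the
textbook non-linearly-separable pattern, let alone encode a memory.

import numpy as np

# XOR: the classic pattern a single linear-threshold unit cannot separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step activation
        w += lr * (yi - pred) * xi         # classic perceptron update
        b += lr * (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds, "vs target", list(y))  # never all four correct

No amount of training fixes that; the decision boundary of one such unit
is a single hyperplane.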

On the point of compression, I have no idea where you arrived at that
figure. As per the Chinchilla scaling laws, it has been known for some
time that the compute-optimal data-to-parameter ratio is far from what
GPT-3 used (roughly 20 tokens per parameter). Llama 2 has a 70B-parameter
variant trained on [IIRC] 2 trillion tokens and it outperforms GPT-3. It
makes no sense to derive compression ratios from how OpenAI or Meta
happened to train their systems; for all we know even Chinchilla is
suboptimal. So I have no idea how you arrive at 0.1 bpc, and definitely no
idea how you arrived at the figure that a human brain stores 10^19...
"characters" of information??

I wouldn't call language "figured out". LLMs are impressive, and neural
networks have incredible use cases, but they are still far from
understanding language at any human-like level. As with image generation
models, they can definitely produce something that looks like art if you
squint, until you look a little closer and realize there's one too many
hands.

On Sun, Sep 10, 2023, 6:09 PM Matt Mahoney <mattmahone...@gmail.com> wrote:

> On Sat, Sep 9, 2023, 5:57 PM mm ee <csmicahelli...@gmail.com> wrote:
>
>> There is no reason to believe that 1 bit or 1 synaptic connection
>> corresponds to a single pattern of a memory
>>
>
> Not one synapse, one neuron. Human memory is associative. Synapses
> represent associations between concepts, at least in the connectionist
> model that makes neural networks easy to understand. But connectionism
> doesn't have a mechanism for learning new concepts and adding neurons. We
> solve the problem by having neurons represent linear combinations of
> concepts and synapses represent linear combinations of associations.
>
> A rule of thumb for programming neural networks is to use on the order of
> 1 synapse or weight or parameter per bit of compressed training data. Too
> small and you forget. Too big and you overfit. GPT3 uses 175B parameters
> to train on 500B tokens of text, suggesting a compression ratio of 0.1 bits
> per character. A human level language model in theory should need 1B
> parameters to train on 1 GB of text at 1 bpc. ChatGPT knows far more than
> any human could remember. And compression ratio gets better as the training
> set gets bigger. All of human knowledge is 10^19 characters compressing to
> 10^17 bits at 0.01 bpc because 99% of what you know is shared or written
> down (why it costs 1% of lifetime income to replace an employee).
>
> The mystery is why does the brain need 6 x 10^14 synapses to store 10^9
> bits of long term memories? Maybe because neurons are slow so you make
> multiple copies of bits to move them closer to where they are needed. Like
> a server farm stores 1M copies of Linux on disk, RAM, cache, and registers.
> Or your body has 10^13 copies of your DNA and still has to make multiple
> copies of a gene to mRNA before transcribing it.
>
> So if we can optimize LLM storage by using faster components, maybe we can
> do the same for vision at video speeds. We know that we can only store
> visual information at 5 to 10 bits per second, same as language. We figured
> out language by abandoning symbolic reasoning and training semantics before
> grammar. Maybe we can solve vision by modeling a fovea and eye movements.
