LLMs aren't compressors. They are compressed representations of data,
and their usage amounts to conditional decompression, where the
condition is the prompt.
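
To make the conditional-decompression framing concrete, here is a toy
Python sketch. The lookup table and tokens are invented for illustration
(a real LLM's "table" is implicit in its weights): the stored model is
the compressed representation, and the prompt selects which continuation
gets reconstructed.

# Toy sketch of "conditional decompression": the model (here a hand-built
# table of conditional distributions, standing in for learned weights) is
# the compressed representation; the prompt conditions what comes back out.
model = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<eos>": 1.0},
    ("the", "dog"): {"ran": 0.8, "sat": 0.2},
    ("the", "dog", "ran"): {"<eos>": 1.0},
}

def decompress(prompt):
    """Greedy decoding: the prompt picks which stored text re-emerges."""
    context = list(prompt)
    while True:
        dist = model.get(tuple(context))
        if dist is None:
            break
        token = max(dist, key=dist.get)  # greedy = deterministic decoding
        if token == "<eos>":
            break
        context.append(token)
    return context

print(decompress(("the",)))        # ['the', 'cat', 'sat']
print(decompress(("the", "dog")))  # ['the', 'dog', 'ran']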

The learning algorithms that create LLMs (as "LLM" is currently
understood in the vernacular) are compressors, with a bias that
requires them to be lossy.
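
A back-of-the-envelope calculation shows why the loss is unavoidable.
The corpus and model sizes below are illustrative round numbers, not
measurements of any particular system:

# Illustrative arithmetic: the weights cannot hold the corpus exactly.
corpus_tokens = 10e12   # ~10 trillion training tokens (assumed)
bits_per_token = 16     # naive raw encoding of the corpus
params = 70e9           # a 70B-parameter model (assumed)
bits_per_param = 16     # fp16 weights

corpus_bits = corpus_tokens * bits_per_token
model_bits = params * bits_per_param
print(f"corpus: {corpus_bits / 8e12:.0f} TB")      # corpus: 20 TB
print(f"model:  {model_bits / 8e9:.0f} GB")        # model:  140 GB
print(f"ratio:  {corpus_bits / model_bits:.0f}x")  # ratio:  143x
# The weights hold two orders of magnitude fewer bits than the raw
# corpus, so exact reconstruction is impossible: the map is lossy.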

While it isn't inconceivable that there will one day be lossless
compressors generating unbiased language models, that day is, at present,
receding under the watchful eye of our Brain Police
<https://youtu.be/sM9nx3rUdSg>:

https://youtu.be/J-CTR0xHr98

On Wed, Sep 20, 2023 at 11:21 PM <[email protected]> wrote:

> https://twitter.com/DimitrisPapail/status/1704516092293452097
>
> What if we had 1 big model in the cloud that everyone accesses? So say it
> compresses your file to like really small, your computer now has like so
> much more room, and doesn't need to store the big LLM.
>
> What do yous think?
