Sorry, I'm not sure what you're saying. It's not clear to me whether this is
intended as a criticism of me or of someone else. I also lack the context to
draw the connection between what I've said and the topic of
compression/decompression.

On Mon, Jul 22, 2024 at 5:17 PM James Bowery <[email protected]> wrote:

>
>
> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford <[email protected]> wrote:
>
>> ...
>>
>> I spend a lot of time with LLMs these days, since I pay my bills by
>> training them....
>>
>
> Maybe you could explain why people who get their hands dirty training
> LLMs, and who are therefore acutely aware of the profound difference
> between training and inference (if for no other reason than that training
> takes orders of magnitude more resources), seem to think these benchmark
> tests should cover only the inference side of things. The Hutter Prize
> has, *since 2006*, covered both training *and* inference, because a
> winner must both train (compress) and infer (decompress).
>
> Are the "AI experts" really as oblivious to the obvious as they appear and
> if so *why*?
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T6510028eea311a76-M00cc8927f38d88c0c8994483>
>
