I see you asked GPT4 three times why it is important to include the size of
the decompressor executable in a compression contest. GPT4 is a lot like
humans who make up BS when they don't know the answer.

The answer, of course, is so you can't hide information about the target
file in the program. Otherwise it would have known about my BARF compressor
(Better Archiver with Recursive Functionality), which compresses the
Calgary corpus files to 1 byte (or to zero bytes in 2 passes, because it is
also universal and guaranteed to compress any nonempty file by at least 1
byte).
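
To make the trick concrete, here is a rough Python sketch (not the real
BARF code; the filenames and contents are placeholders) of how a
"compressor" can cheat when the decompressor's size is not counted: it
simply ships the target files inside the decompressor and emits a 1-byte
index for any file it recognizes.

import hashlib

# The "decompressor" ships with the corpus embedded in it, so the
# "compressed" output for a file it recognizes is a single index byte.
EMBEDDED = {
    b"\x00": b"... full contents of corpus file 0 ...",  # placeholder
    b"\x01": b"... full contents of corpus file 1 ...",  # placeholder
}
LOOKUP = {hashlib.sha256(v).digest(): k for k, v in EMBEDDED.items()}

def compress(data):
    key = LOOKUP.get(hashlib.sha256(data).digest())
    # Known file -> 1 byte.  Unknown file -> stored behind a 1-byte tag.
    return key if key is not None else b"\xff" + data

def decompress(code):
    return EMBEDDED[code] if len(code) == 1 else code[1:]

Once the size of the decompressor (including its embedded table) is
counted, this scheme scores worse than honest compression.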

But to meet the contest's goal of bounding Kolmogorov complexity, it should
allow either the source or the executable to be submitted in a compressed
archive in a specified set of the best available standard formats such as
zip, bzip2, rar, 7z, zpaq, etc. If the submission is not archived, then its
size should include the lengths of the filenames, so you can't use the
trick I used in BARF to compress recursively.
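
Here is a minimal sketch of that scoring rule, under my own assumptions
about how it would be applied (the approved-format list and function name
are illustrative, not part of any actual contest rules):

import os

# Hypothetical approved archive formats for submitting the decompressor.
APPROVED = {".zip", ".bz2", ".rar", ".7z", ".zpaq"}

def contest_size(paths):
    # A single archive in an approved format scores as its own size.
    if len(paths) == 1 and os.path.splitext(paths[0])[1].lower() in APPROVED:
        return os.path.getsize(paths[0])
    # Otherwise, count each file's size plus the length of its name,
    # so nothing can be smuggled into the filenames themselves.
    return sum(os.path.getsize(p) + len(os.path.basename(p)) for p in paths)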

GPT4 has a way to go before it can start rewriting its code without our
help.


On Tue, Aug 15, 2023, 5:54 PM James Bowery <[email protected]> wrote:

>
>
> On Tue, Aug 15, 2023 at 3:58 PM <[email protected]> wrote:
>
>> ...
>> But I guess, even if it is smart enough to say what you say, GPT-4
>> seems to know it is the BETTER way, perhaps? :)
>>
>
> That's what I meant in the introductory paragraph when I wrote:
>
>> ... apply LLMs to improve ML, resulting in profound societal
>> transformations including AGI, but already GPT4 is more intelligent on the
>> key issue of allocation of capital in ML, and no one in a position to
>> allocate capital is paying any attention to what it says (below)...
>
>
> Here GPT4 is handing them the way to improve GPT4 on a silver platter.
> Unfortunately, the delicious food is under a silver cloche that must be
> removed before benefitting from the chef's work.  That cloche is simply
> this:
>
> "But... but... but... how are you going to losslessly compress so many
>> terabytes of data????"
>
>
> The answer, of course, is as I wrote previously about The Hutter Prize for
> Lossless Compression of Human Knowledge:
>
>
>>    - Every programming language is described in Wikipedia.
>>
>>
>>    - Every scientific concept is described in Wikipedia.
>>
>>
>>    - Every mathematical concept is described in Wikipedia.
>>
>>
>>    - Every historic event is described in Wikipedia.
>>
>>
>>    - Every technology is described in Wikipedia.
>>
>>
>>    - Every work of art is described in Wikipedia -- with examples.
>>
>>
>>    - There is even the Wiki*data* project that provides Wikipedia a
>>    substantial amount of digested statistics about the real world.
>>
>> Are you going to argue that *comprehension* of all that knowledge is
>> insufficient to *generatively* speak the *truth* consistent with all
>> that knowledge -- and that this notion of "truth" will not be *at least*
>> comparable to that *generatively* spoken by large language models such as
>> ChatGPT?
>
>
> This is why I say the frontier of ML research is *data efficiency*.  We
> haven't even begun to tap the knowledge available in a tiny fraction of
> the LLM corpora.  Worse, we haven't even begun to recognize that by being
> *data obese* we are hiding the degree to which LLMs are just plain *dumb*.
