On Tue, Aug 15, 2023 at 3:58 PM <[email protected]> wrote:
> ...
> But I guess, even if it is smart enough to say what you say, I guess
> GPT-4 seems to know it is the BETTER way, perhaps maybe? :)

That's what I meant in the introductory paragraph when I wrote:

> ... apply LLMs to improve ML, resulting in profound societal
> transformations including AGI -- but already GPT-4 is more intelligent
> on the key issue of allocation of capital in ML, and no one in the
> position to allocate capital is paying any attention to what it says
> (below)...

Here GPT-4 is handing them the way to improve GPT-4 on a silver platter.
Unfortunately, the delicious food is under a silver cloche that must be
removed before benefitting from the chef's work. That cloche is simply
this:

> But... but... but... how are you going to losslessly compress so many
> terabytes of data????

The answer, of course, is as I wrote previously about The Hutter Prize
for Lossless Compression of Human Knowledge:

> - Every programming language is described in Wikipedia.
> - Every scientific concept is described in Wikipedia.
> - Every mathematical concept is described in Wikipedia.
> - Every historic event is described in Wikipedia.
> - Every technology is described in Wikipedia.
> - Every work of art is described in Wikipedia -- with examples.
> - There is even the Wiki*data* project that provides Wikipedia a
>   substantial amount of digested statistics about the real world.
>
> Are you going to argue that *comprehension* of all that knowledge is
> insufficient to *generatively* speak the *truth* consistent with all
> that knowledge -- and that this notion of "truth" will not be
> *at least* comparable to that *generatively* spoken by large language
> models such as ChatGPT?

This is why I say the frontier of ML research is *data efficiency*. We
haven't even begun to tap the knowledge available in a tiny fraction of
the LLM corpora.
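The premise behind compression-as-comprehension can be sketched in a few lines: a program that better predicts the regularities in a text losslessly compresses it into fewer bits, so compressed size is a measurable proxy for how much structure the compressor has "understood." A minimal stdlib sketch, using zlib and lzma as stand-ins for weaker and stronger statistical models (the sample string is illustrative only -- the actual Hutter Prize target is a Wikipedia snapshot, not this toy data):

```python
import lzma
import zlib

# Toy stand-in for encyclopedic text; highly regular, so a good
# model/compressor should squeeze it far below its raw size.
text = ("Every scientific concept is described in Wikipedia. " * 100).encode("utf-8")

for name, compress, decompress in [
    ("zlib", zlib.compress, zlib.decompress),
    ("lzma", lzma.compress, lzma.decompress),
]:
    packed = compress(text)
    assert decompress(packed) == text  # lossless: the round trip is exact
    print(f"{name}: {len(text)} bytes -> {len(packed)} bytes")
```

The Hutter Prize tightens this idea by scoring the size of the compressed output *plus* the decompressor itself, so a winning entry cannot cheat by hiding knowledge of the corpus inside the program.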
Worse, we haven't even begun to recognize that by being *data obese* we
are hiding the degree to which LLMs are just plain *dumb*.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T8e755985a1fa3a8b-Mb46b071ebda90bf07a0ef556
Delivery options: https://agi.topicbox.com/groups/agi/subscription
