Could your ideas be used to improve text compression? Current LLMs just
predict text tokens using huge neural networks, but any new theory could
be tested on a smaller scale, on something like the Hutter Prize or the
Large Text Compression Benchmark. The current leaders are based on
context mixing: combining many independent predictions of the next bit
or token. Your predictor could be tested either on its own or mixed with
existing models to show an incremental improvement. You don't need to
win the prize to show a positive result.
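
For concreteness, here is a minimal sketch (Python) of the logistic
mixing that PAQ-style compressors use: each model outputs P(next bit =
1), the mixer combines the predictions in the logit domain, and the
weights adapt online to reduce coding cost. The three constant inputs
below are placeholders for whatever models you would actually combine;
a real entry derives them from hashed contexts.

import math

def stretch(p):
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    return math.log(p / (1.0 - p))        # logit

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))     # inverse logit

class Mixer:
    def __init__(self, n, lr=0.002):
        self.w = [0.0] * n                # mixing weights, learned online
        self.lr = lr
        self.x = [0.0] * n                # last stretched inputs

    def mix(self, probs):
        self.x = [stretch(p) for p in probs]
        return squash(sum(w * xi for w, xi in zip(self.w, self.x)))

    def update(self, p, bit):
        err = bit - p                     # gradient of code length wrt logit
        for i, xi in enumerate(self.x):
            self.w[i] += self.lr * err * xi

m = Mixer(3)
p = m.mix([0.6, 0.8, 0.3])                # placeholder model outputs
m.update(p, 1)                            # adapt after seeing bit = 1

A new predictor would just be one more input to the mixer, so even a
small, consistent edge shows up as fewer output bits.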

The problem with current LLMs is that they require far more training
text than a human sees, and they require separate training and
prediction steps. We know they are on the right track because they make
the same kinds of math and coding errors as humans, and of course they
pass the Turing test and equivalent academic tests. Can we do this on
1 GB of text, with a corresponding reduction in computation? Any new
prediction algorithm would be a step in this direction.

Yes, it's work. But experimental research always is. The current Hutter
Prize entries are based on decades of research starting with my
PAQ-based compressors.

Prediction measures intelligence. Compression measures prediction.
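
To make that concrete: with an ideal arithmetic coder, the compressed
size of a file in bits equals the predictor's cumulative log loss,
-sum log2 P(symbol | history), so scoring a predictor as a compressor
needs nothing more than its probabilities. A sketch, where the
predict/update interface is a hypothetical stand-in for your model:

import math

def ideal_code_length(data, model):
    bits = 0.0
    history = []
    for symbol in data:
        p = model.predict(history)[symbol]   # P(symbol | history)
        bits += -math.log2(p)                # ideal arithmetic-coded cost
        model.update(history, symbol)        # hypothetical online update
        history.append(symbol)
    return bits          # compare against 8 * len(data) for raw bytes

Anything that lowers this number on enwik9 is, by definition, a better
predictor of that text.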

On Thu, May 2, 2024, 5:31 AM YKY (Yan King Yin, 甄景贤) <
[email protected]> wrote:

> On Thu, May 2, 2024 at 6:02 PM YKY (Yan King Yin, 甄景贤) <
> [email protected]> wrote:
>
>> The basic idea that runs through all this (ie, the neural-symbolic
>> approach) is "inductive bias". It is an important foundational concept
>> and may be demonstrable through experiments, some of which have
>> already been done (ie, invariant neural networks).  If you believe it
>> in principle, then the approach can accelerate LLMs, which is a
>> multi-billion-dollar business now.
>>
>
> PS:  this is a scientific hypothesis: it is falsifiable and can be
> proven or disproven, but it is very costly to test directly given
> current resources.  Nevertheless it can be *indirectly* supported by
> experiments.
