Could your ideas be used to improve text compression? Current LLMs are just
predicting text tokens with huge neural networks, but I think any new
theories could be tested on a smaller scale, something like the Hutter
Prize or the Large Text Compression Benchmark. The current leaders are
based on context mixing, comb
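For readers unfamiliar with context mixing: the top Hutter Prize entries (PAQ-family compressors) predict one bit at a time by combining several context models' probabilities in the logit domain with online-learned weights, then arithmetic-code the bit. The sketch below is a toy illustration of that idea, not any actual contest entry: two hypothetical count-based models (order-0 and order-1 over bits) are mixed logistically, and we tally the ideal code length in bits rather than emitting a real arithmetic-coded stream.

```python
import math

def stretch(p):
    """Map a probability to the logit domain."""
    return math.log(p / (1 - p))

def squash(x):
    """Inverse of stretch: logit back to probability."""
    return 1 / (1 + math.exp(-x))

class CountModel:
    """Toy bit predictor: p(next bit = 1) from counts seen in a context."""
    def __init__(self):
        self.counts = {}

    def predict(self, ctx):
        n0, n1 = self.counts.get(ctx, (0, 0))
        return (n1 + 1) / (n0 + n1 + 2)  # Laplace smoothing

    def update(self, ctx, bit):
        n0, n1 = self.counts.get(ctx, (0, 0))
        self.counts[ctx] = (n0 + (bit == 0), n1 + (bit == 1))

def compress_cost(bits):
    """Ideal code length in bits under a two-model logistic mix."""
    order0 = CountModel()   # context: none
    order1 = CountModel()   # context: previous bit
    w = [0.0, 0.0]          # mixer weights, learned online
    lr = 0.02               # mixer learning rate (arbitrary choice)
    prev, total = 0, 0.0
    for bit in bits:
        s = [stretch(order0.predict(())), stretch(order1.predict(prev))]
        p = squash(sum(wi * si for wi, si in zip(w, s)))
        p = min(max(p, 1e-6), 1 - 1e-6)        # avoid log(0)
        total += -math.log2(p if bit else 1 - p)
        err = bit - p                           # gradient of log loss
        w = [wi + lr * err * si for wi, si in zip(w, s)]
        order0.update((), bit)
        order1.update(prev, bit)
        prev = bit
    return total
```

On a highly predictable input like 400 alternating bits, the order-1 model quickly dominates the mix and the total code length drops well below the raw 400 bits, which is the whole point: better prediction is better compression.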
On Thu, May 2, 2024 at 6:02 PM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:
> The basic idea that runs through all this (i.e., the neural-symbolic
> approach) is "inductive bias": it is an important foundational concept
> and may be demonstrable through some experiments... som
On Wed, May 1, 2024 at 10:29 PM Matt Mahoney
wrote:
> Where are you submitting the paper? Usually they want an experimental
> results section. A math journal would want a new proof and some motivation
> for why the theorem is important.
>
> You have a lot of ideas on how to apply math to AGI b