If I understood your assertions correctly, I'd think that a quantum-based (fractal), evolutionary (chemistry-like) model would be suitable for extending cohesive cognition to x levels.
If the bounded result emerges as synonymous with an LLM or NN, it would be useful. However, if it emerges as an as-yet-unnamed recombinatory lattice, it would be groundbreaking. My primary thought here relates to inherent constraints on visualizing quantum systems. Once the debate between simple and complex systems ends (e.g., when an absolutist perspective is advanced), the observational system stops learning. Volume does not equate to real progress.

With an evolutionary model, "brainsnaps in time" may be possible. This suggests that scaling would be manageable within relativistic and relevance boundaries/targets. At the least, trackable evolutionary pathways and the predictable tendencies of a system should become visible, and manageability would increase.

On Tue, May 7, 2024, 06:22 Rob Freeman <[email protected]> wrote:

> Addendum: another candidate for this variational model for finding
> distributions to replace back-prop (and consequently with the
> potential to capture predictive structure which consists of chaotic
> attractors, though they don't appreciate the need yet). There's
> Extropic, which is proposing using heat noise. And another,
> LiquidAI. If it's true LiquidAI have nodes which are little
> reservoir computers, potentially that might work on a similar
> variational estimation/generation of distributions basis. Joscha
> Bach is involved with that, though I don't know in what capacity.
>
> James: "Physics Informed Machine Learning". "Building models from
> data using optimization and regression techniques".
>
> Fine. If you have a physics to constrain it to. We don't have that
> "physics" for language.
>
> Richard Granger, you say? The brain is constrained to be a "nested
> stack"?
>
> https://www.researchgate.net/publication/343648662_Toward_the_quantification_of_cognition
>
> Language is a nested stack? Possibly. Certainly you get a (softish)
> ceiling of recursion starting at level 3. The famous level 2: "The
> rat the cat chased escaped" (OK) vs. level 3: "The rat the cat the
> dog bit chased escaped" (borderline not OK).
>
> How does that contradict my assertion that such nested structures
> must be formed on the fly, because they are chaotic attractors of
> predictive symmetry on a sequence network?
>
> On the other hand, can fixed, pre-structured, nested stacks explain
> contradictory (semantic) categories, like "strong tea" (OK) vs.
> "powerful tea" (not OK)?
>
> Unless stacks form on the fly, and can contradict, how can we
> explain that "strong" can be a synonym (fit in the stack?) for
> "powerful" in some contexts, but not others?
>
> On the other hand, a constraint like an observed limitation on
> nesting might be a side effect of the other famous soft restriction,
> the one on dependency length. A restriction on dependency length is
> an easier explanation for nesting limits, and fits with the model
> that language is just a sequence network, which gets structured
> (into substitution groups/stacks?) on the fly.
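A quick aside to make that dependency-length point concrete. Here is a minimal Python sketch (the N1/V1 placeholder tokens and the toy counting are mine, purely illustrative, not from the thread): total subject-verb dependency length in a center embedding of the rat/cat/dog pattern grows as the square of the nesting depth, so a soft cap on dependency length predicts a soft nesting ceiling without any explicit bound on depth.

# Toy model: words of a depth-n center embedding, ignoring determiners.
# "The rat the cat the dog bit chased escaped" -> N1 N2 N3 V3 V2 V1.

def center_embedded(n):
    """Return the word list of a depth-n center-embedded clause."""
    nouns = [f"N{i}" for i in range(1, n + 1)]   # N1 = outermost subject
    verbs = [f"V{i}" for i in range(n, 0, -1)]   # innermost verb first
    return nouns + verbs

def total_dependency_length(n):
    """Sum of subject-to-verb distances over all n clauses."""
    words = center_embedded(n)
    return sum(words.index(f"V{i}") - words.index(f"N{i}")
               for i in range(1, n + 1))

for n in range(1, 5):
    print(n, center_embedded(n), total_dependency_length(n))
# Depths 1..4 give totals 1, 4, 9, 16: load grows as n**2, while a
# flat right-branching paraphrase of the same content stays linear.

Nothing in this sketch decides between the fixed-stack story and the sequence-network story; it only shows that the two make the same prediction at level 3.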
> On Mon, May 6, 2024 at 11:06 PM James Bowery <[email protected]> wrote:
> >
> > Let's give the symbolists their due:
> >
> > https://youtu.be/JoFW2uSd3Uo?list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa
> >
> > The problem isn't that symbolists have nothing to offer; it's just
> > that they're offering it at the wrong level of abstraction.
> >
> > Even in the extreme case of LLMs having "proven" that language
> > modeling needs no priors beyond the Transformer model and some
> > hyperparameter tweaking, there are language-specific priors,
> > acquired over decades if not centuries, that are intractable to
> > learn.
> >
> > The most important, if not conspicuous, one is Richard Granger's
> > discovery that Chomsky's hierarchy elides the one grammar category
> > that human cognition seems to use.
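And to pin down the "strong tea" vs. "powerful tea" point above, a toy distributional sketch (the mini-corpus is invented for illustration): substitution classes computed on the fly from shared contexts license "powerful" for "strong" before "argument" but not before "tea", a distinction a single fixed synonym stack cannot make.

from collections import defaultdict

# Invented bigram corpus, just enough to show the asymmetry.
corpus = [
    "strong tea", "strong coffee", "strong argument", "strong engine",
    "powerful argument", "powerful engine",
]

contexts = defaultdict(set)
for bigram in corpus:
    adjective, noun = bigram.split()
    contexts[adjective].add(noun)

# Contexts where the two adjectives substitute for each other...
print("interchangeable before:",
      sorted(contexts["strong"] & contexts["powerful"]))
# ...and contexts where they do not.
print("'strong' only before:",
      sorted(contexts["strong"] - contexts["powerful"]))
# interchangeable before: ['argument', 'engine']
# 'strong' only before: ['coffee', 'tea']

A real version would use co-occurrence statistics over a large corpus rather than a hand-built list, but the shape of the argument is the same.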
