Quan. You may be talking sense, but you've got to tone down the buzzwords by a whole bunch. It's suspicious when you jam so many in together.
If you think there's a solution there, what are you doing about it in practice? Be more specific.

For instance, within the span of what I understand here, I might guess at relevance for Coecke's "Togetherness":

From quantum foundations via natural language meaning to a theory of everything
https://arxiv.org/pdf/1602.07618.pdf

Or Tomas Mikolov's (key instigator of word2vec?) attempts to get funding to explore evolutionary computational automata:

Tomas Mikolov - "We can design systems where complexity seems to be growing"
https://youtu.be/CnsqHSCBgX0?t=10859

(Another one from AGI-21. It can be hard to motivate yourself to listen to a whole conference, but when you pay attention, there can be interesting stuff on the margins.)

There's also an Artificial Life, ALife, community. Which seems to be quite big in Japan. A group down in Okinawa under Tom Froese, anyway. (Though they seem to go right off the edge and focus on some kind of community consciousness.)

But also in the ALife category I think of Bert Chan, recently moved to Google(?):
https://biofish.medium.com/lenia-beyond-the-game-of-life-344847b10a72

All of that. And what Dreyfus called Heideggerian AI. Associated with Rodney Brooks, and his "Fast, Cheap, and Out of Control" artificial organism bots. It had a time in Europe especially: Luc Steels, Rolf Pfeifer? The recently lost Daniel Dennett.

Why Heideggerian AI failed and how fixing it would require making it more Heideggerian
Hubert L. Dreyfus
https://cid.nada.kth.se/en/HeideggerianAI.pdf

How would you relate what you are saying to all of these? I'm sympathetic to them all. Though I think they miss the insight of predictive symmetries. Which language drives you to. And which LLMs stumbled on too. And missing that is what has held them up. Held them up for 30 years or more.

ALife had a spike around 1995. Likely influencing Ben and his Chaotic Logic book, too. They had the complex system idea back then, they just didn't have a generative principle to bring it all together. Meanwhile LLMs have kind of stumbled on the generative principle. Though they remain stuck in the back-prop paradigm, and unable to fully embrace the complexity.

I put myself in the context of all those threads. Though I kind of worked back to them, starting with the language problem, and finding the complexity as I went. As I say, language drives you to deal with predictive symmetries.

I think ALife has stalled for 30 years because it hasn't had a central generative principle. What James might call a "prior". Language offers a "prior" (predictive symmetries). Combine that with ALife complex systems, and you start to get something.

But that's to go off on my own tangent again. Anyway, if you can be more specific, or put what you're saying in the context of something someone else is doing, you might get more traction.

On Thu, May 9, 2024 at 3:10 PM Quan Tesla <[email protected]> wrote:
>
> Rob, not butting in, but rather adding to what you said (see quotation below).
>
> The conviction across industries that hierarchy (systems robustness) persists
> only in descending and/or ascending structures, though true, can be proven to
> be somewhat incomplete.
>
> There's another computational way to derive systems-control hierarchy(ies)
> from. This is the quantum-engineering way (referred to before), where
> hierarchy lies hidden within contextual abstraction, identified via case-based
> decision making and represented via compound functionality outcomes.
>
> Hierarchy as a centre-outwards, in the sense of emergent, essential
> characteristic of a scalable system. Not deterministically specified.
>
> In an evolutionary sense, hierarchies are N-nestable and self-discoverable.
> With the addition of integrated vectors, knowledge graphs may also be
> derived, instead of crafted.
>
> Here, I'm referring to 2 systems hierarchies in particular. 'A', a hierarchy
> of criticality (aka constraints) and 'B', a hierarchy of priority (aka
> systemic order).
>
> Over the lifecycles of a growing system, as it mutates and evolves in
> relevance (optimal semantics), hierarchy would start resembling - without
> compromising - NNs and LLMs.
>
> Yes, a more-holistic envelope then, a new, quantum reality, where
> fully-recursive functionality isn't only guaranteed, but correlation and
> association become foundational, architectural principles.
>
> This is the future of quantum systems engineering, which I believe quantum
> computing will eventually lead all researchers to. Frankly, without it,
> we'll remain stuck in the quagmire of early-1990s+ functional
> analysis-paralysis, by any name.
>
> I'll hold out hope for that one, enlightened developer to make that quantum
> leap into exponential systems computing. A sea change is needed.
>
> Inter alia, Rob Freeman said: "And it seems by chance that the idea seems
> consistent with the emergent structure theme of this thread. With the
> difference that with language, we have access to the emergent system,
> bottom-up, instead of top down, the way we do with physics, maths."
>
> On Thu, May 9, 2024, 11:15 Rob Freeman <[email protected]> wrote:
>>
>> On Thu, May 9, 2024 at 6:15 AM James Bowery <[email protected]> wrote:
>> >
>> > Shifting this thread to a more appropriate topic.
>> >
>> > ---------- Forwarded message ---------
>> >> From: Rob Freeman <[email protected]>
>> >> Date: Tue, May 7, 2024 at 8:33 PM
>> >> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
>> >> To: AGI <[email protected]>
>> >
>> >> I'm disappointed you don't address my points, James. You just double
>> >> down that there needs to be some framework for learning, and that
>> >> nested stacks might be one such constraint.
>> > ...
>> >> Well, maybe for language a) we can't find top-down heuristics which
>> >> work well enough and b) we don't need to, because for language a
>> >> combinatorial basis is actually sitting right there for us, manifest,
>> >> in (sequences of) text.
>> >
>> > The origin of the Combinatorial Hierarchy, thence ANPA, was the Cambridge
>> > Language Research Unit.
>>
>> Interesting tip about the Cambridge Language Research Unit. Inspired
>> by Wittgenstein?
>>
>> But this history means what?
>>
>> > PS: I know I've disappointed you yet again for not engaging directly with
>> > your line of inquiry. Just be assured that my failure to do so is not because
>> > I in any way discount what you are doing -- hence I'm not "doubling down"
>> > on some opposing line of thought -- I'm just not prepared to defend
>> > Granger's work as much as I am prepared to encourage you to take up your
>> > line of thought directly with him and his school of thought.
>>
>> Well, yes.
>>
>> Thanks for the link to Granger's work. It looks like he did a lot on
>> brain biology, and developed a hypothesis that the biology of the
>> brain, split into different regions, is consistent with aspects of
>> language suggesting limits on nested hierarchy.
>>
>> But I don't see that it engages in any way with the original point I made
>> (in response to Matt's synopsis of OpenCog language understanding):
>> that OpenCog language processing didn't fail because it didn't do
>> language learning (or even because it didn't attempt "semantic"
>> learning first). That it was somewhat the opposite. That OpenCog
>> language failed because it did attempt to find an abstract grammar.
>> And LLMs succeed to the extent they do because they abandon the search
>> for abstract grammar, and just focus on prediction.
>>
>> That's just my take on the OpenCog (and LLM) language situation.
>> People can take it or leave it.
>>
>> Criticisms are welcome. But just saying, oh, but hey, look at my idea
>> instead... Well, it might be good for people who are really puzzled
>> and looking for new ideas.
>>
>> I guess it's a problem for AI research in general that people rarely
>> attempt to engage with other people's ideas. They all just assert
>> their own ideas. Like Matt's reply to the above: "Oh no, the real
>> problem was they didn't try to learn semantics..."
>>
>> If you think OpenCog language failed instead because it didn't attempt
>> to learn grammar as nested stacks, OK, that's your idea. Good luck
>> trying to learn abstract grammar as nested stacks.
>>
>> Actual progress in the field stumbles along by fits and starts. What's
>> happened in 30 years? Nothing much. A retreat to statistical
>> uncertainty about grammar in the '90s with HMMs? A first retreat to
>> indeterminacy. Then, what, 8 years ago, the surprise success of
>> transformers: a cross-product of embedding vectors which ignores
>> structure and focuses on prediction. Why did it succeed? You, because
>> transformers somehow advance the nested stack idea? Matt, because
>> transformers somehow advance the semantics-first idea?
>>
>> My idea is that they advance the idea that a search for an abstract
>> grammar is flawed (in practice if not in theory).
>>
>> My idea is consistent with the ongoing success of LLMs. Which get
>> bigger and bigger, and don't appear to have any consistent structure.
>> But also with their failures. That they still try to learn that
>> structure as a fixed artifact.
>>
>> Actually, as far as I know, the first model in the LLM style of
>> indeterminate grammar as a cross-product of embedding vectors was
>> mine.
>>
>> ***If anyone can point to an earlier precedent I'd love to see it.***
>>
>> So LLMs feel like a nice vindication of those early ideas to me. Even
>> without embracing the full extent of them. They still don't grasp the
>> full point. But I don't see reason to be discouraged by that.
>>
>> And it seems by chance that the idea seems consistent with the
>> emergent structure theme of this thread. With the difference that with
>> language, we have access to the emergent system bottom-up, instead of
>> top-down, the way we do with physics, maths.
>>
>> But everyone is working on their own thing. I just got drawn in by
>> Matt's comment that OpenCog didn't do language learning.
>>
>> -Rob
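A minimal sketch of the two ideas Rob leans on above, "prediction as a cross-product of embedding vectors" and "predictive symmetries", under toy assumptions: a handful of invented words, random numpy vectors standing in for learned embeddings, and a single context word where a real transformer would condition on a whole sequence through trained projections. Nothing here is from any actual LLM, or from Rob's own model; it only shows the shape of the claim: prediction computed from inner products between vectors, with any "grammar" left implicit in which words turn out to be predictively interchangeable.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary with random embeddings -- hypothetical stand-ins
    # for trained vectors; a real system would learn these from text.
    vocab = ["the", "cat", "dog", "sat", "ran", "on", "mat"]
    E = {w: rng.normal(size=8) for w in vocab}

    def predict_next(context_word):
        """Prediction as a cross-product of embedding vectors: score every
        candidate by inner product with the context, then softmax.
        No grammar rules, no parse tree -- just vectors."""
        scores = np.array([E[context_word] @ E[w] for w in vocab])
        exp = np.exp(scores - scores.max())  # numerically stable softmax
        return exp / exp.sum()

    def predictive_symmetry(w1, w2):
        """Two words are 'predictively symmetric' to the extent they induce
        the same distribution over what follows -- substitutable in context,
        without any grammatical category ever being stored explicitly."""
        p1, p2 = predict_next(w1), predict_next(w2)
        return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2)))

    print(predictive_symmetry("cat", "dog"))
    # With trained embeddings this would approach 1.0 for words that are
    # interchangeable in context; with these random vectors it is arbitrary.

On this reading, the "abstract grammar" OpenCog chased would amount to freezing those symmetry classes into fixed categories, whereas the LLM-style move is to leave them implicit in the vectors and recompute them, in effect, at prediction time.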
