Rob, thank you for being candid. My verbiage isn't deliberate. I don't seek traction, or funding, for what I do. There's no real justification for your mistrust.
Perhaps, let me provide some professional background instead. As an independent researcher, I follow scientific developments across multiple domains, seeking coherence and sense-making for my own scientific endeavour, now spanning 25 years. AGI has been a keen interest of mine since 2013. For AGI, I advocate pure machine consciousness, shying away from biotech approaches.

My field of research interest stems from a previous career in cross-cultural training and the many challenges it presented in the '80s. As designer, administrator, manager and trainer, one could say I fell in love with optimal learning methodologies and associated technologies. Changing careers, I started in mainframe operating and advanced to programming, systems analysis and design, information and business engineering, ultimately becoming a contracting consultant. My one consistent research area remained knowledge engineering, especially tacit-knowledge engineering. Today, I promote the idea of a campus specializing in quantum systems engineering. I'm generally regarded as a pracademic of sorts. Like many of us practitioners here, I too was fortunate to learn with a number of founders and world-class methodologists.

In 1998, my job in banking was researcher/architect to the board of a five-bank merger, today part of the Barclays Group. As futurist architect and peer reviewer, I was introduced to quantum physics, specifically in the context of the discovery of the quark. I realized that future, exponential complexity was approaching, especially for knowledge organizations. I researched possible solutions worldwide, but found none at that time, which concerned me deeply. Industries seemed to be rushing into the digital revolution without a reliable, methodological management foundation in place. As architect, I had nothing to offer as a useful, ten-year futures outlook either. I didn't feel competent to be the person to address that apparent gap. A good colleague of mine was a proven IE methodologist and consultant to IBM Head Office. I approached him twice with my concerns, asking him to adapt his proven IE methodology to address the advancing future. He didn't take my concerns seriously at all. For the next year, the future seemed ever clearer to me, yet I couldn't find anyone to develop a future aid for enterprises, whether as a roadmap toolkit or as a coping mechanism for a complex-adaptive reality. The world was hung up on UML and object-oriented technologies. In desperation, I decided, even though I was probably less suitable for the job, to develop the future toolkit I had a vision of.

That start was 25 years ago. Today, I have a field-tested methodology in hand, which, if I had to give it a name, I'd call "Essence". As new science emerges, I update it with relevant algorithms and look for a pro-bono project of sufficient complexity to test it on. E.g., I focused on establishing a predictable baseline for the COVID-19 experience. Furthermore, during the last 18 months, I assisted a visionary in Cleveland with converting his holistic, 4D diagrammatical representation into mature system models (he's still working on his lexicon), in support of their community-based, Cleveland inner-city rejuvenation project. During that test, I added vector specification to the quantum-enabled systems engineering method. That addition now offers deabstraction management to X dimensions. My research continues, my intent being to marry my methodology with Feynman diagrams and Haramein's latest unified field theory.
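To make "derived, rather than crafted" (as I put it further down this thread) a little more concrete, without giving away anything of Essence itself, here is a rough, generic sketch of what deriving a nested hierarchy from component vectors can look like. Everything here is a hypothetical stand-in: the component names, the random feature vectors, and the off-the-shelf agglomerative clustering. It is not my method, only the flavour of it.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree

    # Hypothetical system components, each described by a feature vector.
    components = ["intake", "triage", "housing", "training", "placement"]
    rng = np.random.default_rng(0)
    vectors = rng.random((len(components), 8))  # stand-in for real features

    # Agglomerative clustering derives a nested hierarchy from the vectors.
    tree = to_tree(linkage(vectors, method="ward"))

    def show(node, depth=0):
        # Print the derived hierarchy; leaves carry component names.
        print("  " * depth + (components[node.id] if node.is_leaf() else "*"))
        if not node.is_leaf():
            show(node.get_left(), depth + 1)
            show(node.get_right(), depth + 1)

    show(tree)

The point of the toy: the hierarchy is discovered from the vectors, to whatever nesting depth the data supports, rather than specified deterministically up front.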
My modest contributions have been published independently, but as publications go, my public-domain knowledge dates back to 10 years ago. Old stuff. I do protect my personal IP; the investment was considerable. E.g., for the past 10 years I've been actively involved in informal, applied learning with a retired professor at NCSU. We grok the latest thinking and advances. In this manner, I discovered a new pattern in nature, which we called the Po1 (the pattern of oneness). This is a fractal-of-fractals pattern, which potentially holds great promise for future society, inasmuch as it could help to extract energy and matter from space and distribute them around the globe, to spacecraft, and to other planets. Unfortunately, it also holds great promise for warcraft, which I'm personally not interested in. This view has frustrated progress, as I refuse to be drawn into speculations about neutron bombs. As such, I don't discuss details of the Po1, or even write them down. I've even "brain encrypted" them against remote viewing.

IMO, when I see the frustration on this group among supersmart, exceptionally talented, yet stubborn and sometimes short-sighted individuals, I sometimes feel compelled to try and provide a nudge. Even Ben could do with it. We all could. After all, we're scientists first, ever learning and coming to some truth of matters. One key area I nudged was CAS object association. I resolved this challenge years ago in my soft-systems method, and it works beautifully. So, I attempted to provide a pointer or two. No big deal. The notion of symmetry from asymmetry I'm still learning about.

However, Ben has been consistently correct, IMO, about one thing: AGI has to be developed as recombinatory emergence of the vector energy inherent in the outcomes of classical and quantum physics. Sorry, Mr. Reich, more big words. Here, I'm using every term as it is currently understood by science. What, then, seems to be a key problem in developing AGI? I think it is the lack of a holistic, quantum-engineering methodology. It's all a scramble to retrofit code to emerging scientific reality, not a jumping of the curve. Your "language" models, which would represent hard and soft knowledge artifacts, perhaps abstracted (optimized) into a symbolic schema of choice and thereafter encrypted with x-bit encryption, require a methodology such as mine. You may have your own, of course.

For socialized AGI, we'll need to combine disparate, discrete core research. One researcher cannot cover all the bases; I'm aware of this. However, the reality of IP misappropriation and outright theft has put the brakes on collective, altruistic collaboration. Too many PhDs, or startups in the wings. Why do the research if it could simply be scanned from public documents? Here, I'd like to take a dig at IBM Boulder for misappropriating some of my IP directly and selling it as their own in their Architect 2010 product. It remains my bugbear. Perhaps a significant donation to the industry then, and in my case as a pro-Westerner (no offense to the other wind directions), publicly to benefit Western interests first. These are bottom-line matters. How many volunteers do we see raising hands to donate a useful component to Western AGI? We have to level the playing field here, somehow.

Where can I go read about your research and outputs, Rob? I'd like to understand your specialization a little better.
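P.S. Rob, on your phrase below, "a cross-product of embedding vectors which ignores structure and focuses on prediction": here is a minimal numpy sketch of how I read that, as plain single-head dot-product attention over token embeddings, with no grammar imposed anywhere. Dimensions and values are arbitrary placeholders, not anyone's actual model.

    import numpy as np

    def attention(E):
        # E: (tokens, d) embedding matrix. The scores are the cross-product
        # of the embedding vectors; softmax turns them into prediction weights.
        scores = E @ E.T / np.sqrt(E.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ E  # context mix; no fixed structure is ever stored

    E = np.random.default_rng(1).random((5, 16))  # five toy token embeddings
    print(attention(E).shape)  # (5, 16)

If I've misread you, correct me. The weights are recomputed per context, which I take to be your point about prediction winning out over abstract grammar.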
On Fri, May 10, 2024, 08:58 Rob Freeman <[email protected]> wrote:
> Quan.
>
> You may be talking sense, but you've got to tone down the buzzwords by a whole bunch. It's suspicious when you jam so many in together.
>
> If you think there's a solution there, what are you doing about it in practice?
>
> Be more specific. For instance, within the span of what I understand here I might guess at relevance for Coecke's "Togetherness":
>
> From quantum foundations via natural language meaning to a theory of everything
> https://arxiv.org/pdf/1602.07618.pdf
>
> Or Tomas Mikolov's (key instigator of word2vec?) attempts to get funding to explore evolutionary computational automata.
>
> Tomas Mikolov - "We can design systems where complexity seems to be growing" (Another one from AGI-21. It can be hard to motivate yourself to listen to a whole conference, but when you pay attention, there can be interesting stuff on the margins.)
> https://youtu.be/CnsqHSCBgX0?t=10859
>
> There's also an Artificial Life, ALife, community. Which seems to be quite big in Japan. A group down in Okinawa under Tom Froese, anyway. (Though they seem to go right off the edge and focus on some kind of community consciousness.) But also in the ALife category I think of Bert Chan, recently moved to Google(?).
>
> https://biofish.medium.com/lenia-beyond-the-game-of-life-344847b10a72
>
> All of that. And what Dreyfus called Heideggerian AI. Associated with Rodney Brooks, and his "Fast, Cheap, and Out of Control" Artificial Organism bots. It had a time in Europe especially, Luc Steels, Rolf Pfeifer? The recently lost Daniel Dennett.
>
> Why Heideggerian AI failed and how fixing it would require making it more Heideggerian
> Hubert L. Dreyfus
> https://cid.nada.kth.se/en/HeideggerianAI.pdf
>
> How would you relate what you are saying to all of these?
>
> I'm sympathetic to them all. Though I think they miss the insight of predictive symmetries. Which language drives you to. And what LLMs stumbled on too. And that's held them up. Held them up for 30 years or more.
>
> ALife had a spike around 1995. Likely influencing Ben and his Chaotic Logic book, too. They had the complex system idea back then, they just didn't have a generative principle to bring it all together.
>
> Meanwhile LLMs have kind of stumbled on the generative principle. Though they remain stuck in the back-prop paradigm, and unable to fully embrace the complexity.
>
> I put myself in the context of all those threads. Though I kind of worked back to them, starting with the language problem, and finding the complexity as I went. As I say, language drives you to deal with predictive symmetries. I think ALife has stalled for 30 years because it hasn't had a central generative principle. What James might call a "prior". Language offers a "prior" (predictive symmetries.) Combine that with ALife complex systems, and you start to get something.
>
> But that's to go off on my own tangent again.
>
> Anyway, if you can be more specific, or put what you're saying in the context of something someone else is doing, you might get more traction.
>
> On Thu, May 9, 2024 at 3:10 PM Quan Tesla <[email protected]> wrote:
> >
> > Rob, not butting in, but rather adding to what you said (see quotation below).
> >
> > The conviction across industries that hierarchy (systems robustness) persists only in descending and/or ascending structures, though true, can be proven to be somewhat incomplete.
> >
> > There's another computational way from which to derive systems-control hierarchy(ies). This is the quantum-engineering way (referred to before), where hierarchy lies hidden within contextual abstraction, identified via case-based decision making and represented via compound functionality outcomes. Hierarchy as a centre-outwards, emergent, essential characteristic of a scalable system. Not deterministically specified.
> >
> > In an evolutionary sense, hierarchies are N-nestable and self-discoverable. With the addition of integrated vectors, knowledge graphs may also be derived, instead of crafted.
> >
> > Here, I'm referring to two systems hierarchies in particular: 'A', a hierarchy of criticality (aka constraints), and 'B', a hierarchy of priority (aka systemic order).
> >
> > Over the lifecycles of a growing system, as it mutates and evolves in relevance (optimal semantics), hierarchy would start resembling - without compromising - NNs and LLMs.
> >
> > Yes, a more-holistic envelope then, a new, quantum reality, where fully-recursive functionality isn't only guaranteed, but correlation and association become foundational, architectural principles.
> >
> > This is the future of quantum systems engineering, which I believe quantum computing will eventually lead all researchers to. Frankly, without it, we'll remain stuck in the quagmire of early-1990s+ functional analysis-paralysis, by any name.
> >
> > I'll hold out hope for that one enlightened developer to make the quantum leap into exponential systems computing. A sea change is needed.
> >
> > Inter alia, Rob Freeman said: "And it seems by chance that the idea seems consistent with the emergent structure theme of this thread. With the difference that with language, we have access to the emergent system, bottom-up, instead of top down, the way we do with physics, maths."
> >
> > On Thu, May 9, 2024, 11:15 Rob Freeman <[email protected]> wrote:
> >>
> >> On Thu, May 9, 2024 at 6:15 AM James Bowery <[email protected]> wrote:
> >> >
> >> > Shifting this thread to a more appropriate topic.
> >> >
> >> > ---------- Forwarded message ---------
> >> >> From: Rob Freeman <[email protected]>
> >> >> Date: Tue, May 7, 2024 at 8:33 PM
> >> >> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
> >> >> To: AGI <[email protected]>
> >> >
> >> >> I'm disappointed you don't address my points James. You just double down that there needs to be some framework for learning, and that nested stacks might be one such constraint.
> >> > ...
> >> >> Well, maybe for language a) we can't find top down heuristics which work well enough and b) we don't need to, because for language a combinatorial basis is actually sitting right there for us, manifest, in (sequences of) text.
> >> >
> >> > The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge Language Research Unit.
> >>
> >> Interesting tip about the Cambridge Language Research Unit. Inspired by Wittgenstein?
> >>
> >> But this history means what?
> >>
> >> > PS: I know I've disappointed you yet again for not engaging directly your line of inquiry. Just be assured that my failure to do so is not because I in any way discount what you are doing -- hence I'm not "doubling down" on some opposing line of thought -- I'm just not prepared to defend Granger's work as much as I am prepared to encourage you to take up your line of thought directly with him and his school of thought.
> >>
> >> Well, yes.
> >>
> >> Thanks for the link to Granger's work. It looks like he did a lot on brain biology, and developed a hypothesis that the biology of the brain, split into different regions, is consistent with aspects of language suggesting limits on nested hierarchy.
> >>
> >> But I don't see that it engages in any way with the original point I made (in response to Matt's synopsis of OpenCog language understanding): that OpenCog language processing didn't fail because it didn't do language learning (or even because it didn't attempt "semantic" learning first). That it was somewhat the opposite. That OpenCog language failed because it did attempt to find an abstract grammar. And LLMs succeed to the extent they do because they abandon a search for abstract grammar, and just focus on prediction.
> >>
> >> That's just my take on the OpenCog (and LLM) language situation. People can take it or leave it.
> >>
> >> Criticisms are welcome. But just saying, oh, but hey look at my idea instead... Well, it might be good for people who are really puzzled and looking for new ideas.
> >>
> >> I guess it's a problem for AI research in general that people rarely attempt to engage with other people's ideas. They all just assert their own ideas. Like Matt's reply to the above... "Oh no, the real problem was they didn't try to learn semantics..."
> >>
> >> If you think OpenCog language failed instead because it didn't attempt to learn grammar as nested stacks, OK, that's your idea. Good luck trying to learn abstract grammar as nested stacks.
> >>
> >> Actual progress in the field stumbles along by fits and starts. What's happened in 30 years? Nothing much. A retreat to statistical uncertainty about grammar in the '90s with HMMs? A first retreat to indeterminacy. Then, what, 8 years ago the surprise success of transformers, a cross-product of embedding vectors which ignores structure and focuses on prediction. Why did it succeed? You, because transformers somehow advance the nested stack idea? Matt, because transformers somehow advance the semantics-first idea?
> >>
> >> My idea is that they advance the idea that a search for an abstract grammar is flawed (in practice if not in theory).
> >>
> >> My idea is consistent with the ongoing success of LLMs. Which get bigger and bigger, and don't appear to have any consistent structure. But also their failures. That they still try to learn that structure as a fixed artifact.
> >>
> >> Actually, as far as I know, the first model in the LLM style of indeterminate grammar as a cross-product of embedding vectors was mine.
> >>
> >> ***If anyone can point to an earlier precedent I'd love to see it.***
> >>
> >> So LLMs feel like a nice vindication of those early ideas to me. Without embracing the full extent of them. They still don't grasp the full point. I don't see reason to be discouraged in it.
> >>
> >> And it seems by chance that the idea seems consistent with the emergent structure theme of this thread. With the difference that with language, we have access to the emergent system, bottom-up, instead of top down, the way we do with physics, maths.
> >>
> >> But everyone is working on their own thing. I just got drawn in by Matt's comment that OpenCog didn't do language learning.
> >>
> >> -Rob
