In quantum systems, symmetry emerges from asymmetry. The logic of 
transitioning from one such quantum state to another remains at the forefront 
of physics. The thermodynamic approach aligns with this way of reasoning 
about AGI.

Perhaps we need to keep asking ourselves: "What is the Western AGI?" versus 
"What is the Eastern AGI?" Inter alia, all design starts with a systems view 
and a clear identification of the core system and its constraints.

Given the above fractal question, it seems highly likely that asking a dev 
team in the Far East versus a team in the West would yield different answers. 
In truth, AGI isn't one thing. If it must be compared to the development of 
the atomic bomb, then it should be accepted that more than three versions of 
a mainstream-AGI truth would be discernible.

For now, there's no right or wrong way, only experimentation.

However, this remains a valid question, worthy of reliable input: "What is 
future AGI?" Thus, we remove the research biases and take a stab at defining 
a system of the future, one that can be elucidated as knowledge grows and 
scientific progress is announced. What is hampering real progress? I think it 
is vested interests. That "takeback" energy loop is the real problem here. 
Does an AGI care about what it gets out of being an AGI? If done correctly, 
it doesn't. It simply fulfils its singular functionality. AGI doesn't have to 
have real emotions. All it has to do is convince human beings that it does. 
In other words, for personality, try coding a highly functional sociopathic 
tendency towards an intermediate point on the stochastic scale.

I think I might've just hit the nail on the head. Think of AGI as a sociopathic 
system.

First consciousness, therefore AGI. You're doing it the wrong way round. AGI 
won't emerge from all your code. It's not your AGI. On the contrary, AGI 
already exists, and would exist as a quantum-physical gestalt, and would 
therefore absorb and inherently digest all "AGI" code. If AGI emerges from 
anything, it does so from the geometry of spacetime.

It seems all physicists are unknowingly working towards a common goal, under 
different names.

If the AGI system you're busy developing isn't a function of light, it's 
probably obsolete. Moore's law isn't AGI-compatible. Only nature's laws can 
provide sufficient power, without the thermal complications, to drive a 
real AGI.
________________________________
From: Matt Mahoney <[email protected]>
Sent: Monday, 06 May 2024 16:49
To: AGI <[email protected]>
Subject: Re: [agi] Hey, looks like the goertzel is hiring...

The problem with AGI is Wolpert's law. A can predict B or B can
predict A but not both. When we try to understand our own brains,
that's the special case of A = B. You can't. It is the same with AGI.
If you want to create an agent smarter than you, it can predict you
but you can't predict it. Otherwise, it is not as intelligent as you.
That is why LLMs work but we don't know how.
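
A rough formal gloss of the result being invoked (my paraphrase of Wolpert's
inference-device theorem; the notation is mine, not the email's): writing
A \succ B for "device A can strongly infer every function of device B",

    % no two distinct devices can mutually strongly infer each other:
    \neg\,(A \succ B \;\land\; B \succ A)
    % and the special case A = B rules out full self-prediction:
    \neg\,(A \succ A)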

OpenCog's approach to language modeling was the traditional pipeline
of lexical tokenizing, grammar parsing, and semantics in that order.
It works fine for compilers but not for natural language. Children
learn to segment continuous speech before they learn any vocabulary
and they learn semantics before grammar. There are plenty of examples.
How do you parse "I ate pizza with pepperoni/a fork/Bob"? You can't
parse without knowing what the words mean. It turns out that learning
language this way takes a lot more computation because you need a
neural network with separate layers for phonemes or letters, tokens,
semantics, and grammar in that order.
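
A toy sketch of that ambiguity in Python (the tiny lexicon is an invented
stand-in for learned semantics, not anything from a real parser):

    # "I ate pizza with X" has two syntactic parses; grammar alone
    # cannot choose between them -- only the meaning of X can.
    SEMANTICS = {                  # toy world knowledge, illustrative only
        "pepperoni": "topping",    # noun attachment: pizza [with pepperoni]
        "a fork": "instrument",    # verb attachment: ate [with a fork]
        "Bob": "companion",        # verb attachment: ate [with Bob]
    }

    def parse(x):
        noun_attach = "(I (ate (pizza (with %s))))" % x
        verb_attach = "(I ((ate pizza) (with %s)))" % x
        # Both parses are licensed by the syntax; semantics decides.
        return noun_attach if SEMANTICS[x] == "topping" else verb_attach

    for x in SEMANTICS:
        print("with %-10s -> %s" % (x, parse(x)))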

How much computation? For a text-only model, about 1 GB of text. For
AGI, the human brain has 86B neurons and 600T connections at 10 Hz.
You need about 10 petaflops, 1 petabyte and several years of training
video. If you want it faster than raising a child, then you need more
compute. That is why we had the AGI winter. Now it is spring. Before
summer, we need several billion of those to automate human labor and
our $1 quadrillion economy.
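
As a back-of-envelope check of those figures (assuming, as is conventional,
one synaptic operation per connection per spike and roughly one byte per
connection weight; only the 600T and 10 Hz come from the text above):

    synapses = 600e12                  # 600T connections, as stated above
    rate_hz = 10                       # ~10 Hz firing rate, as stated above
    ops_per_sec = synapses * rate_hz   # one multiply-add per synapse per spike
    memory_bytes = synapses * 1        # ~1 byte per connection weight

    print("%.0e ops/s (~%d petaflops)" % (ops_per_sec, ops_per_sec / 1e15))
    print("%.1f petabytes" % (memory_bytes / 1e15))
    # -> 6e+15 ops/s (~6 petaflops), the same order as the "about 10" above
    # -> 0.6 petabytes, the same order as the "1 petabyte" above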

On Mon, May 6, 2024 at 12:11 AM Rob Freeman <[email protected]> wrote:
>
> On Sat, May 4, 2024 at 4:53 AM Matt Mahoney <[email protected]> wrote:
> >
> > ... OpenCog was a hodgepodge of a hand coded structured natural language 
> > parser, a toy neural vision system, and a hybrid fuzzy logic knowledge 
> > representation data structure that was supposed to integrate it all 
> > together but never did after years of effort. There was never any knowledge 
> > base or language learning algorithm.
>
> Good summary of the OpenCog system Matt.
>
> But there was a language learning algorithm. Actually there was more
> of a language learning algorithm in OpenCog than there is now in LLMs.
> That's been the problem with OpenCog. By contrast LLMs don't try to
> learn grammar. They just try to learn to predict words.
>
> Rather than the mistake being that they had no language learning
> algorithm, the mistake was OpenCog _did_ try to implement a language
> learning algorithm.
>
> By contrast, the success with LLMs came to those who just tried to
> predict words. Using a kind of vector cross product across word
> embedding vectors, as it turns out.
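
The operation being gestured at here is presumably the scaled dot product
of transformer attention (a cross product in the strict linear-algebra
sense is something else); a minimal sketch, with all dimensions invented:

    import numpy as np

    np.random.seed(0)
    d = 4                          # toy embedding width, invented
    q = np.random.randn(d)         # query vector for the current token
    K = np.random.randn(3, d)      # key vectors for three context tokens

    scores = K @ q / np.sqrt(d)    # scaled dot products, one per token
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    print(weights)                 # attention: how much each context token
                                   # contributes to predicting the next word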
>
> Trying to learn grammar was linguistic naivety. You could have seen it
> back then. Hardly anyone in the AI field has any experience with
> language; actually, that's the problem. Even now with LLMs. They're
> all linguistic naifs. A tragedy of wasted effort for OpenCog. Formal
> grammars for natural language are unlearnable. I had been telling Linas
> that since 2011. I posted about it here numerous times. They spent a
> decade, and millions(?), trying to learn a formal grammar.
>
> Meanwhile vector language models, which don't coalesce into formal
> grammars, swooped in and scooped the pool.
>
> That was NLP. But more broadly in OpenCog too, the problem seems to be
> that Ben is still convinced AI needs some kind of symbolic
> representation to build chaos on top of. A similar kind of error.
>
> I tried to convince Ben otherwise the last time he addressed the
> subject of semantic primitives in this AGI Discussion Forum session
> two years ago, here:
>
> March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
> discussion on semantic primitives
> https://singularitynet.zoom.us/rec/share/qwLpQuc_4UjESPQyHbNTg5TBo9_U7TSyZJ8vjzudHyNuF9O59pJzZhOYoH5ekhQV.2QxARBxV5DZxtqHQ?startTime=1647613120000
>
> Starting timestamp 1:24:48, Ben says, disarmingly:
>
> "For f'ing decades, which is ridiculous, it's been like, OK, I want to
> explore these chaotic dynamics and emergent strange attractors, but I
> want to explore them in a very fleshed out system, with a rich
> representational capability, interacting with a complex world, and
> then we still haven't gotten to that system ... Of course, an
> alternative approach could be taken as you've been attempting, of ...
> starting with the chaotic dynamics but in a simpler setting. ... But I
> think we have agreed over the decades that to get to human level AGI
> you need structure emerging from chaos. You need a system with complex
> chaotic dynamics, you need structured strange attractors there, you
> need the system's own pattern recognition to be recognizing the
> patterns in these structured strange attractors, and then you have
> that virtuous cycle."
>
> So he embraces the idea cognitive structure is going to be chaotic
> attractors, as he did when he wrote his "Chaotic Logic" book back in
> 1994. But he's still convinced the chaos needs to emerge on top of
> some kind of symbolic representation.
>
> I think there's a sunk cost fallacy at work. So much is invested in
> the paradigm of chaos appearing on top of a "rich" symbolic
> representation. He can't try anything else.
>
> As I understand it, Hyperon is a re-jig of the software for this
> symbol based "atom" network representation, to make it easier to
> spread the processing load over networks.
>
> As a network representation, the potential is there to merge the
> insight of no formal symbolic representation, which has worked for
> LLMs, with chaos on top, which was Ben's earlier insight.
>
> I presented on that potential at a later AGI Discussion Forum session.
> But mysteriously the current devs failed to upload the recording for
> that session.
>
> > Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters will 
> > make it irrelevant.
>
> Here I disagree with you. LLMs are at their own dead end. What they
> got right was to abandon formal symbolic representation. They likely
> generate their own version of chaos, but they are unaware of it. They
> are still trapped in their own version of the "learning" idea. Any
> chaos generated is frozen and tangled in their enormous
> back-propagated networks. That's why they exhibit no structure, why
> they hallucinate, and why their processing of novelty is limited to
> rough mapping onto previous knowledge. The solution will require a different
> way of identifying chaotic attractors in networks of sequences.
>
> A Hyperon style network might be a better basis to make that advance.
> It would have to abandon the search for a symbolic representation.
> LLMs can show the way there. Make prediction, not representation, the
> focus. Just start with any old (sequential) tokens. But in contrast to
> LLMs, instead of back-prop to find groupings which predict, we can
> find groupings that predict in another way. Simple. It's mostly just
> abandoning back-prop and using another way to find (chaotic attractor)
> groupings which predict, on the fly.
>
> When that insight will come, I don't know. We now have the company
> Extropic, which is attempting to model distributions using heat noise.
> Heat noise instead of back-prop. Modelling predictive symmetries in a
> network using heat noise might lead them to it.
>
> Really, any kind of noise in a network might be used to find these
> predictive symmetry groups on the fly. Someone may stumble on it soon.
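
For what such a "grouping which predicts" could look like, a toy
illustration (my construction, not Rob's algorithm or Extropic's hardware):
group tokens by the similarity of their successor distributions, found by
counting alone, with no back-prop:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Successor distribution for each token: P(next | token).
    succ = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        succ[a][b] += 1

    # Group tokens whose successor distributions are identical.
    groups = defaultdict(list)
    for tok, counts in succ.items():
        total = sum(counts.values())
        signature = tuple(sorted((w, c / total) for w, c in counts.items()))
        groups[signature].append(tok)

    for members in groups.values():
        print(members)   # e.g. ['cat', 'dog'] share "sat" as their successor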
>
> When they do, that'll make GPU clusters irrelevant. Nvidia down. And
> no more talk of 7T investment in power generation needed. Mercifully!
>
> -Rob



--
-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mfd9253b355201931408cf768
Delivery options: https://agi.topicbox.com/groups/agi/subscription
