On Wed, Mar 2, 2022 at 12:47 PM Mark Wigzell <[email protected]> wrote:

> Hi Linas,
>
> So with regards to the foundational thinking of how the AtomSpace can
> truly unite the sensory data with the motors, the background knowledge,
> what theory is governing that? I see that the AtomSpace allows linkages to
> be made, and it allows a common data representation/language. But what is
> the "glue" that causes these commonly held but algorithmically distinct
> islands of atomspace to coalesce?
>

I'm not sure how to answer that question. At the lowest level of technical
detail, the answer is "any way you want". Sounds flippant, I suppose. The
AtomSpace was designed so that you can do things however you want, using
whatever data representation you want, and still get decent performance.
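To make that a little more concrete, here's a tiny Atomese (Scheme) sketch; the predicate and concept names are ones I just made up for illustration, not from any existing dataset. The point is only that a perceptual fact and a motor directive can sit side by side in one AtomSpace, in the same representation, ready to be linked however you want:

```scheme
; Illustrative only: a perception and a motor directive expressed
; in the same atom language, so anything can link to anything.
(use-modules (opencog))

; A perceptual fact.
(Evaluation (Predicate "sees")
    (List (Concept "robot") (Concept "red ball")))

; A motor directive, in the very same representation.
(Evaluation (Predicate "turn-head")
    (List (Concept "robot") (Number 30)))
```

Nothing forces these to coalesce; the AtomSpace just makes it cheap to write the links that would.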

At the medium level, there is the issue that Dave Xanatos has mentioned and
faced: how do you glue together multiple, disparate systems into a
functional whole? The answer is "very carefully", and, as everyone who has
ever tried this eventually learns, the result is complex and fragile and
not general. Certainly, lots of lone-wolf developers have tried this path,
as have many, many universities and corporations. It's very much the
mainstream-thinking path.  If you throw 50 or 100 or 500 developers at it,
you can actually do OK: you get stuff like Siri or Alexa, or self-driving
cars or assorted autonomous military technologies. What the heck: Hanson
Robotics Sophia had its run in the sun following this design mindset. So,
sure, it can be scaled, at least a little bit.

Some number of years ago, I decided that this architecture, of carefully
hand-crafted subsystems assembled by technicians into a simulacrum of a
human being, is not really the correct approach to AGI. So I'm working on
something else. I'm trying to figure out how to
perceive structure in raw environmental data. This broadly encompasses
audio, video, speech, blueprints, drawings, language, astronomical
telescope data, disassembled binaries of computer viruses, economic data,
social graphs, whatever. "Data" in the large.  In reality, I've only been
able to take a few minor steps, in limited domains. I can clearly see the
next several steps ahead. They are described in greater detail in one short
ten-page paper:
https://github.com/opencog/learn/blob/master/learn-lang-diary/agi-2022/grammar-induction.pdf
with longer and more detailed descriptions scattered about here and there.

Care to rephrase your question?

-- Linas

-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
