Hi Michele,

On Tue, Mar 23, 2021 at 6:01 AM Michele Thiella <[email protected]>
wrote:

>
> " There is little/no actual "AGI" in that code base."
> Regarding AGI, I am aware of the distance that separates us from the goal.
> But in my simplistic view (don't kill me for this phrase), human
> intelligence is just an excellent inference on an excellent knowledge base
> maintained by excellent learning. Excellence is a matter of progress.
>

Oh, please let me kill you! That's where all the fun is!  Based on
discussions with many people, there is a widespread misunderstanding of
what AGI is, and of how it might be achieved. Although what you said is
superficially, simplistically correct, I want to point out that
"excellence" cannot be achieved by hand-crafting knowledge bases. Very few
people seem to understand this; many seem to believe that somehow just
slapping a bunch of parts together will result in AGI, as if designing AGI
were like designing an airplane: just a matter of "excellent design", and
it will fly by itself. This is not the case.

Thus, I was trying to be careful in distinguishing the "scaffolding", which
is hand-crafted, from actual AGI type work. The scaffolding is needed to
bring data into a format where an AGI type system can interface with it.
At every point of design, you have to ask: is this piece of code just some
more hand-crafted (human-crafted) special-case code that is being used to
convert the external world into a form that a computer algorithm can
interact with? Or is this piece of code "AGI" (or as close to AGI as we can
get right now)?  So I am trying to draw a contrast between "those things
that are AGI" and "ancillary support services".

> This is to say that I think you are closer to AGI than it looks.
>

Thank you.  But my personal view is that the part of opencog that is
closest to true AGI is the code and algos in opencog/learn and
opencog/generate. However, from what I can tell, no one else shares my
view; or at least, Ben doesn't, and he's the most important one to
convince.


> - In reply to Nil: (on Slack I'm named Raschild)
>

A bunch of people convinced me to hang out on discord, so that is where I
am these days.  From what I can tell, no one working on opencog is using
slack.

> I had seen the beginning of the work and it is very interesting. In the
> next few days I will look at the current state.
> Two quick questions:
> 1) How complicated is it to work directly with Ros + Gazebo compared to
> Malmo and Gym?
>

I have only used ROS. The design is straightforward.  If a ROS event comes
in (some face is perceived; there is some loud noise, other environmental
change) there is a python snippet (ROS is easiest to use with python) that
converts that event into Atomese, and sends that Atomese to the cogserver
(the cogserver is a network server, nothing more). So for example, a loud
sound might be converted to `(StateLink (PredicateNode "ambient sound")
(ConceptNode "loud sound"))` Then, on the opencog side, processing does
whatever you've set it up to do with this kind of information.  Exactly how
sophisticated you want to be is up to you.
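As a concrete sketch of such a shim, here are a few lines of Python. The function names and the event handling are hypothetical, invented for illustration; only the Atomese StateLink format matches the example above, and the actual delivery to the cogserver (a plain network write in a real setup) is left as a comment.

```python
# Hypothetical sketch of a ROS-to-Atomese shim. The function names and
# event shape here are invented for illustration; only the Atomese
# StateLink format follows the example in the text.

def event_to_atomese(predicate, concept):
    """Render a perceived event as an Atomese s-expression string."""
    return ('(StateLink (PredicateNode "{}") (ConceptNode "{}"))'
            .format(predicate, concept))

def on_loud_sound(_event):
    """What a rospy subscriber callback might do with a sound event."""
    atomese = event_to_atomese("ambient sound", "loud sound")
    # send_to_cogserver(atomese)  # in reality: write the string to the
    #                             # cogserver's network socket; omitted
    return atomese
```

The point of the sketch is that the shim stays this small: one formatting function and one callback per event type, nothing more.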

For output, it's even easier: `(cog-evaluate! (EvaluationLink
(GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments
...)))`, which calls a python function "twiddle_ROS_message" to send some
data somewhere in ROS.
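A hypothetical body for that python callback might look like the following; the argument handling and the commented-out publisher call are my assumptions for illustration, not actual Eva or rocca code.

```python
# Hypothetical sketch of the python function named in the
# GroundedPredicateNode above. In a live system the arguments arrive
# as Atoms and a rospy publisher would ship the result out to ROS;
# both are simplified away here.

def twiddle_ROS_message(*args):
    """Package arguments from Atomese into an outbound ROS-ish message."""
    payload = {"args": list(args)}
    # publisher.publish(payload)  # a real shim would publish to a topic
    return payload
```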

My remarks about "excellent design" and "AGI" above mean that the python
wrappers for converting ROS data to Atomese should be minimal: they should
do just enough to bring external information into the AtomSpace. You want
to avoid a game of writing large, complex python scripts. So when you ask
"How complicated is it to work directly with ROS + Gazebo, compared to
Malmo and Gym?", the answer should be "about the same" and "not
complicated", because there should be only minimalistic shims converting
to/from Atomese and the message formats these other systems use.  If you
are creating something complicated in these systems, you are not doing
AGI; you are doing robotics.


> 2) Are Values already usable instead of OctoMap and SpaceTime server?
>

For rocca, I have no clue. For the core AtomSpace, Values are a fully
complete and functional component. They work.

The OctoMap and SpaceTime servers are kind-of broken, kind-of useless in
their current form. I wanted these servers to implement prepositions:
above, below, next to, behind, in front, left of, right of, bigger,
smaller.  This would allow a natural-language subsystem (or a reasoning
subsystem) to work "naturally". However, this implementation was never
done.  Currently, all that octomap/spacetime do is store xyz 3D
values. This is fairly useless, as there are dozens of external systems
used in robotics that do 3D much better and faster: point clouds, SLAM
(simultaneous localization and mapping), and so on.

Again: scaffolding vs AGI. So, 3D location is part of the external world,
and the scaffolding must interface to the external world, and take 3D data
and convert it into a format that the AGI code can operate on.  If you have
AGI code that can work directly with 3D point clouds, then great! No
scaffolding is needed! If you (like me) have proto-AGI code that wants to
work with symbolic-natural-language, then some scaffolding is needed to
convert point-clouds into prepositions.  Some day in the future, maybe we
can remove some of the scaffolding.
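To illustrate what such point-cloud-to-preposition scaffolding might look like in miniature, here is a hedged Python sketch. It assumes each object has already been reduced to an (x, y, z) center (itself a large scaffolding step), and the axis conventions are arbitrary choices of mine.

```python
# Minimal, hypothetical scaffolding: reduce two 3D object positions to
# the symbolic prepositions discussed above. Real input would be point
# clouds or SLAM poses; here each object is just an (x, y, z) center,
# with z as "up" and x increasing to the right (arbitrary conventions).

def prepositions(a, b):
    """Symbolic spatial relations of object a, relative to object b."""
    rels = []
    if a[2] > b[2]:
        rels.append("above")
    elif a[2] < b[2]:
        rels.append("below")
    if a[0] < b[0]:
        rels.append("left of")
    elif a[0] > b[0]:
        rels.append("right of")
    return rels
```

A downstream step would then wrap each relation in an EvaluationLink, so that the symbolic-language side can reason over it.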

However, up until now, almost all work that has been done, that is being
done, is on scaffolding. If you are not careful, you will find yourself
doing the same. This is not bad: it's educational, it's important, and it
helps show where the boundary lies between the scaffolding and the AGI. If
nothing else, this is called "learning at the school of hard knocks":
"I built one and it didn't work, but I learned something". At the forefront
of knowledge, that's the only school that is open. That's what science is.


> I started with the reasoning: I am currently learning the inference rules
> and how they work with the atomspace, I have seen part of the examples in
> ure and pln and I was trying to understand the blocksworld problem
> developed by Anatoly Belikov here:
> https://github.com/noskill/ure/tree/planning/examples/ure/planning
>

Reasoning and inference is a very dangerous place to start, and may kill
your project before it even gets started. There are several reasons for
this.

* Reasoning presumes that you have already decided on a representation for
your data (either hand-crafted, or somehow automatically learned). Once
you have this representation, then you can reason over it. But do you have
this representation? No, you don't. You might borrow one from blocks-world,
or borrow the one from Eva, or borrow the one from rocca (or the one from
agi-bio, which represents DNA, RNA and proteins).  You then have the
problem of pulling in external data and placing it into your representation,
where "external data" is vision, sound, text, or RNA/DNA genetic sequences.
This is scaffolding.

* Reasoning presumes that you have inference rules. Where did these come
from? Did you hand-craft them? PLN has a bunch of inference rules that
Ben and friends hand-crafted 10-15 years ago, and that Nil has carefully
implemented in C++ code. They work, kind-of, whenever you have a
hand-crafted representation for your data that is PLN-compatible. Nil has
spent a huge amount of time (the last 10 years) getting the hand-crafted
rules to fit with the hand-crafted representation, and getting reasoning
to work efficiently and quickly. But if your representation does
not fit the PLN structure, then it won't work.  (None of my language work
was ever able to fit with PLN. My new AGI work (at opencog/learn) will
almost surely not fit with PLN; the goal there is to learn brand-new
inference rules, instead of using the hand-crafted ones.)

* The actual implementation of the URE is "hard-core comp-sci", or maybe
"good old-fashioned comp sci": it's a set of algorithms for applying
rewrite rules to a network. There are many non-opencog systems that do
something similar, such as SAT solvers, constraint-satisfaction systems,
ASP (answer-set programming), the "lambda cube", higher-order logic,
theorem-proving systems, etc. It's hard-core; it's not easy.  Many of these
systems are much, much faster, and much more flexible, *if* your data
representation is not PLN but something else: e.g. boolean expressions
or prolog-like assertions. So we are back again to "what is your internal
model?"

For example, in robotics, for a robot inside an office building, a common
inference task is "is the door open? If the door is open then roll through
it, else grasp the door handle and open the door."  The standard
grad-school robotics approach to solve this is to use ROS or something
similar to "see" the door, and then to use ASP (answer-set programming) to
perform very fast crisp-logic reasoning and inference. It works. It's what
90% of all university robotics departments use. It is reasoning and
inference. It's not AGI.
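Stripped of the solver, that crisp door inference amounts to something like the sketch below. A real lab would write it as ASP rules and hand it to a solver such as clingo; the fact and action names here are made up for illustration.

```python
# Hedged sketch of the crisp-logic door decision described above,
# reduced to plain Python so the inference is visible. Fact and action
# names are invented; real systems would use an ASP solver.

def plan(facts):
    """Derive an action sequence from a set of crisp facts."""
    if "door_open" in facts:
        return ["roll_through"]
    return ["grasp_handle", "open_door", "roll_through"]
```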


> * Ideally my goal was to extend the "model of the world" to work more with
> objects than people and to extend the "self-model" to execute navigation
> and manipulation plans. In all of this, I haven't yet explored the learning.
>

For Eva, the self-model and world-model are all part of the same thing, and
they were hand-crafted (not learned).  The goal was to interface language
to movement and perception. The inspiration was to use concepts and ideas
from Melcuk's "Meaning-Text Theory" (MTT) for the world-model.

Getting this to work involved a sequence of rickety and fragile
transformations: from sound to text (via google voice-to-text) which is
inaccurate. From text to a parse-tree (via link-grammar). From parse-tree
to the internal model. From the internal model to robot motion/action.
Changing anything anywhere was conceptually hard: no one else understood
what the heck I was doing, including, among others, "the management" (Ben
and David), and without management support, the going gets tough.  Also,
it was abstract enough and complex enough that other programmers were
unwilling to learn how it worked, and so were unwilling to help.  If you
personally want to work on this, then be aware that it is abstract and
complex. And fragile. (Part of the goal of "good engineering" is to
compartmentalize the complexity so that it becomes "easy to use" and
non-fragile. This code base needed a bit more "good engineering" than it
ever got.)

My goal with the opencog/learn project is to automate all of the above,
including the reasoning, inference, and world-model; but so far it is
still far away from that. I think I know how to do these things, but now I
have to
... do them.

-- Linas

-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
