OK, there's a lot here!
I'm trying to absorb more than 10 years of your work in a few months.

On Tuesday, March 23, 2021 at 18:10:24 UTC+1, linas wrote:

> Oh, please let me kill you! That's where all the fun is!  Based on 
> discussions with many people, there is a wide-spread misunderstanding of 
> what AGI is or how it might be achieved. Although what you said is 
> superficially, simplistically correct, I want to point out that 
> "excellence" cannot be achieved by hand-crafting knowledge bases. Very few 
> people seem to understand this, and seem to believe that somehow just 
> slapping a bunch of parts together will result in AGI. That designing AGI 
> is like designing an airplane, that it's just a matter of "excellent 
> design" and it will fly by itself. This is not the case.
>
> Thus, I was trying to be careful in distinguishing the "scaffolding", 
> which is hand-crafted, from actual AGI type work. The scaffolding is needed 
> to bring data into a format where an AGI type system can interface with 
> it.  At every point of design, you have to ask: is this piece of code just 
> some more hand-crafted (human-crafted) special-case code that is being used 
> to convert the external world into a form that a computer algorithm can 
> interact with? Or is this piece of code "AGI" (or as close to AGI as we can 
> get right now)?  So I am trying to draw a contrast between "those things 
> that are AGI" and "ancillary support services".
>

I'm starting to understand what you mean.
Probably all the code I'm thinking of is "scaffolding" -- that is, it "brings 
data into a format where an AGI type system can interface with it".
Maybe it's not clear to me what AGI code is. I had only seen /learn and 
/generate from the README, maybe because I found them hard at that time 
(and at this time, for sure).
I agree that hand-crafting KBs and "excellence" don't mix. But then, your 
proposal would be to achieve "excellence" with a knowledge base built by 
what? I suppose the learn and generate repos are the answer. I'll take a 
closer look at those repos!

 

>> I had seen the beginning of the work and it is very interesting. In the 
>> next few days I will look at the current state.
>> Two quick questions:
>> 1) How complicated is it to work directly with Ros + Gazebo compared to 
>> Malmo and Gym? 
>>
>
> I have only used ROS. The design is straight-forward.  If a ROS event 
> comes in (some face is perceived; there is some loud noise, other 
> environmental change) there is a python snippet (ROS is easiest to use with 
> python) that converts that event into Atomese, and sends that Atomese to 
> the cogserver (the cogserver is a network server, nothing more). So for 
> example, a loud sound might be converted to `(StateLink (PredicateNode 
> "ambient sound") (ConceptNode "loud sound"))` Then, on the opencog side, 
> processing does whatever you've set it up to do with this kind of 
> information.  Exactly how sophisticated you want to be is up to you.
>
> For output, it's even easier: `(cog-evaluate! (EvaluationLink 
> (GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments 
> ...))))` which calls a python function "twiddle_ROS_message" to send some 
> data somewhere in ROS.
>
> My remarks about "excellent design" and "AGI" above means that python 
> wrappers for converting ROS data to Atomese should be minimal, or that they 
> should do just enough to bring in external information into the AtomSpace. 
> You want to avoid a game of writing large, complex python scripts. So when 
> you ask "How complicated is it to work directly with Ros + Gazebo compared 
> to Malmo and Gym?" The answer should be "about the same" and "not 
> complicated" because there should be only minimalistic shims to convert 
> to/from Atomese and the message formats these other systems use.  If you 
> are creating something complicated in these systems, you are not doing AGI, 
> you are doing robotics.
>
>
I saw the Python wrappers from ROS to Atomese in the Eva folder (I used ROS 
with C++ in a robotics course, and yes, Python is simpler) and they really 
are minimal. Better like this; I'll try to stay on the same wavelength.
I understand the interaction from ROS to Atomese. But there is one thing I 
don't completely understand: to activate a GroundedPredicateNode so that it 
executes a Python function, can I use its STI? Should that be automatic? 
How does it work?
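To check my understanding, a minimal version of such a shim might look like the sketch below. The topic name, the message handling in the comment, and the `scm hush` shell command are my guesses; the Atomese string follows your StateLink example, and 17001 is the cogserver's default port, if I remember correctly:

```python
# Sketch of a minimal ROS -> Atomese shim, per the description above.
# The topic name and the "scm hush" command are assumptions on my part.

def event_to_atomese(state_name, value):
    """Convert a perceived event into an Atomese StateLink s-expression."""
    return ('(StateLink (PredicateNode "{}") (ConceptNode "{}"))'
            .format(state_name, value))

def send_to_cogserver(atomese, host="localhost", port=17001):
    """Push an Atomese string into the cogserver's scheme shell.
    The cogserver is just a network server, so a plain socket suffices."""
    import socket
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"scm hush\n")              # enter the scheme shell
        sock.sendall(atomese.encode() + b"\n")   # assert the atom

# In a real ROS node, a subscriber callback would tie the two together:
#   rospy.Subscriber("/sound_level", String,
#       lambda msg: send_to_cogserver(
#           event_to_atomese("ambient sound", msg.data)))

print(event_to_atomese("ambient sound", "loud sound"))
```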

> Again: scaffolding vs AGI. So, 3D location is part of the external world, 
> and the scaffolding must interface to the external world, and take 3D data 
> and convert it into a format that the AGI code can operate on.  If you have 
> AGI code that can work directly with 3D point clouds, then great! No 
> scaffolding is needed! If you (like me) have proto-AGI code that wants to 
> work with symbolic-natural-language, then some scaffolding is needed to 
> convert point-clouds into prepositions.  Some day in the future, maybe we 
> can remove some of the scaffolding.
>
> However, up until now, almost all work that has been done, that is being 
> done, is on scaffolding. If you are not careful, you will find yourself 
> doing the same. This is not bad: it's educational, and it's important, and 
> it helps show where the boundary is between the scaffolding and the AGI. -- 
> if nothing else, this is called "learning at the school of hard knocks" -- 
> "I built one and it didn't work, but I learned something". At the forefront 
> of knowledge, that's the only school that is open. That's what science is.
>
>
Ideally, is there an AGI-code idea that works directly with 3D point clouds?
I would also suppose that working with symbolic natural language, and thus 
propositions, is more efficient! Point clouds are heavy and it takes a lot 
of work to extract information from them, so why would we want that?
 
I'll pay attention to the boundary between scaffolding and AGI, but I'll 
have to try it first-hand to really understand what we are talking about.


> Reasoning and inference is a very dangerous place to start, and may kill 
> your project before it even gets started. There are several reasons for 
> this.
>

 I'm feeling it!

> * Reasoning presumes that you have already decided on a representation for 
> your data (either hand-crafted it, or automatically learned, somehow.) Once 
> you have this representation, then you can reason on it. But do you have 
> this representation? No, you don't. You might borrow one from blocks-world, 
> or borrow the one from Eva, or borrow the one from rocca (or the one from 
> agi-bio, which represents DNA, RNA and proteins).  You then have the 
> problem of pulling external data and placing it into your representation, 
> where "external data" is vision, sound, text, or RNA/DNA genetic sequences. 
> This is scaffolding. 
>
> * Reasoning presumes that you have inference rules. Where did these come 
> from? Did you hand-craft them? PLN has a bunch of hand-crafted inference 
> rules that Ben and friends hand-crafted 10-15 years ago, and Nil has 
> carefully implemented in C code. They work, kind-of, whenever you have a 
> hand-crafted representation for your data that is PLN-compatible. Nil 
> spends a lot of time, a huge amount of time (the last 10 years) getting the 
> hand-crafted rules to fit with the hand-crafted representation, and to get 
> reasoning working efficiently and quickly. But if your representation does 
> not fit the PLN structure, then it won't work.  (None of my language work 
> was ever able to fit with PLN. My new AGI work (at opencog/learn) will 
> almost surely not fit with PLN; the goal there is to learn brand-new 
> inference rules, instead of using the hand-crafted ones.) 
>
> * The actual implementation of the URE is "hard-core comp-sci" or maybe 
> "good old-fashioned comp sci": it's a set of algorithms to apply some 
> rewrite rules to a network. There are many non-opencog systems that do 
> something similar, such as SAT-solvers, constraint satisfaction systems, 
> ASP-answer-set programming, the "lambda cube", higher-order logic, 
> theorem-proving systems, etc. It's hard core, it's not easy.  Many of these 
> systems are much much faster, and are much more flexible, *if* your data 
> representation is not PLN, but is something else: e.g. boolean expressions 
> or prolog-like assertions. So we are back again to "what is your internal 
> model"? 
>
> For example, in robotics, for a robot inside an office building, a common 
> inference task is "is the door open? If the door is open then roll through 
> it, else grasp the door handle and open the door."  The standard 
> grad-school robotics approach to solve this is to use ROS or something 
> similar to "see" the door, and then to use ASP (answer-set programming) to 
> perform very fast crisp-logic reasoning and inference. It works. It's what 
> 90% of all university robotics departments use. It is reasoning and 
> inference. It's not AGI.
>

I don't think the following is exactly a representation of the data, but...
I thought I was starting with a trivial representation:
objects are described by (ConceptNode "English-object-name"), and 
primitive robot actions by GroundedPredicateNodes that call Python 
functions which actually perform those actions via ROS.
A vision algorithm recognizes certain objects and returns their English 
names and 3D coordinates.
The robot receives goals to complete via English sentences through 
Relex2Logic. Once the inference rules are written, the robot tries to solve 
the goals. When it doesn't know what to do, it acts randomly, builds up 
its KB from the sensors, and continues to make inferences.

The following is an example in a very pseudo-language. 
That's what my mind produces when planning the resolution of this problem. 
It certainly has many wrong ideas, concepts, and ways of doing and dealing 
with things... 
What are the critical errors that I've made?
What are the main differences from Eva?


Atomspace:
  Concepts: "name" - "3D pose"
  - bottle - Na
  - table - Na
  (Predicate: "over" List ("bottle") ("table"))
  Actions:
  - Go random
  - Go to coord
  - Grab obj

Goal: (bottle in hand)    // = grab bottle

Inference rules: all the necessary rules, e.g.
* grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj in 
hand) ...
* coord-rule: if x is at "coord1" and y is over x, then y is at "coord1"

-> So, the robot tries backward chaining to find a behavior tree to run. It 
doesn't find one: it lacks knowledge, it doesn't know where the bottle is 
(let's leave out partial trees).
-> Go random ...
-> Vision sensor recognizes table
-> atomspace update: table in coord (1,1,1)
-> forward chaining -> bottle in coord (1,1,1)
-> backward chaining finds a tree, that is:
Go to coord (1,1,1) + Grab obj
-> goal achieved
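To make the loop concrete to myself, here is a toy, self-contained Python rendition of it. This is not the URE and not real Atomese; facts are plain tuples and the two rules are hard-coded, just to check my own reasoning:

```python
# Toy rendition of the plan loop above; not the URE, not real Atomese.
# Facts are (predicate, args) tuples; the two rules are hard-coded.

def forward_chain(kb):
    """coord-rule: if x is at coord C and y is over x, then y is at C."""
    added = True
    while added:
        added = False
        for (p1, a1) in list(kb):
            if p1 != "at":
                continue
            x, coord = a1
            for (p2, a2) in list(kb):
                if p2 == "over" and a2[1] == x:
                    fact = ("at", (a2[0], coord))
                    if fact not in kb:
                        kb.add(fact)
                        added = True
    return kb

def plan_grab(kb, obj):
    """Backward step for the grab-rule: we need the object's coordinates."""
    for (p, a) in kb:
        if p == "at" and a[0] == obj:
            return ["go-to {}".format(a[1]), "grab {}".format(obj)]
    return None  # missing knowledge: explore randomly, then retry

kb = {("over", ("bottle", "table"))}
assert plan_grab(kb, "bottle") is None        # bottle location unknown
kb.add(("at", ("table", (1, 1, 1))))          # vision sensor sees the table
forward_chain(kb)                             # infers where the bottle is
plan = plan_grab(kb, "bottle")                # go to (1,1,1), then grab
```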
 

>
>> * Ideally my goal was to extend the "model of the world" to work more 
>> with objects than people and to extend the "self-model" to execute 
>> navigation and manipulation plans. In all of this, I haven't yet explored 
>> the learning.
>>
>
> For Eva, the self-model and world-model are all part of the same thing, 
> and they were hand-crafted (not learned).  The goal was to interface 
> language to movement and perception. The inspiration was to use concepts 
> and ideas from Melcuk's "Meaning-Text Theory" (MTT) for the world-model. 
>
> Getting this to work involved a sequence of rickety and fragile 
> transformations: from sound to text (via google voice-to-text) which is 
> inaccurate. From text to a parse-tree (via link-grammar). From parse-tree 
> to the internal model. From the internal model to robot motion/action. 
> Changing anything anywhere was both conceptually hard (no one else 
> understood what the heck I was doing, including, among others, "the 
> management" (Ben and David) and without management support, the going gets 
> tough.)  Also, it was abstract enough and complex enough that other 
> programmers were unwilling to learn how it worked, and so were unwilling to 
> help.  If you personally  want to work on this, then be aware that it is 
> abstract and complex. And fragile. (Part of the goal of "good engineering" 
> is to compartmentalize the complexity so that it becomes "easy to use" and 
> non-fragile. This code bases needed a little bit more "good engineering" 
> than it ever got.)
>
> My goal with the opencog/learn project is to automate all of the above, 
> including the reasoning, inference, and world-model, but it is far away 
> from that, so far. I think I know how to do these things, but now I have to 
> ... do them.
>
> -- Linas
>

I haven't looked at Meaning-Text Theory yet (a serious omission, I think!). 
I'll fix that!
What I have described points precisely in that direction, it seems to me, 
but it was still only an idea. I can still change it; I will have to speak 
to my supervisors to evaluate the new possibilities you have shown me!
In the meantime, thanks to everyone; my knowledge base is improving a lot 
too!

Michele
 
