Hi Linas, 
I hope I'm not disturbing you or taking up too much of your time!

On Wednesday, March 24, 2021 at 20:11:49 UTC+1, linas wrote:

> So, naively, simplistically, a "true AGI system" should be capable of 
> learning the "knowledge base", instead of relying on humans to craft one.
>
> Now comes the blurry parts: the learning algorithm itself is hand-crafted, 
> so isn't that a form of cheating? We are once again relying on humans to do 
> the work. For example, for neural nets, effectively all of them are trained 
> on a selection of images curated by human beings. The neural net learns how 
> to recognize a photo of a horse, but it was trained on a human-curated 
> training set. So, again, that's "cheating". Have you heard the expression 
> "it's turtles all the way down"? Well, for neural nets, it's hand-crafted 
> datasets all the way down. It's people all the way down. The goal of 
> building a true AGI is to avoid this.  Step 1 is to avoid hand-crafted 
> training sets. Step 2 is to avoid hand-crafted algorithms.  I'm working on 
> Step 1. I suppose that Step 2 is beyond the abilities of what can be done 
> today. It's a bit blurry.
>

Here I begin to struggle not to talk nonsense.

So, if I understand your point 1 correctly, considering the README of the 
learn repo, the basic idea is unsupervised learning for natural language, 
with the next goal of extending the domain from text to "all things in the 
world".
So (still using unsupervised learning?) the robot would build its own 
representation of the world through its observations.
Now the input set is no longer hand-crafted, but the algorithm still has 
the "turtles problem". Your point 2 would solve that too, right?
The only option I can think of is that the AGI writes its own algorithms... 
but then, recursively, the AGI would have to invent itself...

But then, without "true-AGI" learning, I'll never have a "true-AGI" 
knowledge base, and without that I won't be able to continue, right?
Why work on point 1 if point 2 is a prerequisite?
It seems like a no-win situation. Maybe I'm just a pessimist!
There must be another way... After all, our own knowledge base was also 
helped along by our parents in some way.


>> But there is one thing I don't completely understand: to trigger a 
>> GroundedPredicateNode to execute a Python function, I can use its STI, 
>> right? It should be automatic; how does it work? 
>>
>
> It is not automatic.  You have to `cog-execute!` whatever code you want to 
> trigger. There are several ways of doing this.
> 1) by hand .. obviously.
> 2) write some scheme or some python code that loops over whatever needs to 
> be looped over, searching for high or low STI or any other Value or 
> StateLink or whatever might be changing, and calls cog-execute! as needed.
> 3) Do the above, but entirely in Atomese.  There is a way to do an 
> infinite loop in atomese -- it is actually a tail-recursive call to the 
> function itself. It should be possible to do everything you need to do in 
> "pure atomese". Now, atomese was never meant to be a full-scale programming 
> language, like python or scheme, so it is missing many commonplace ideas 
> that make python/scheme/c++/etc. "human friendly". But it does have enough 
> to make most things possible, and many things "easy" (-ish)
>
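If I understand option 2, it would look roughly like the toy Python sketch 
below. The objects and names are entirely my own invention, just stand-ins 
for AtomSpace atoms; the real thing would call cog-execute! on actual Atoms:

```python
# Toy sketch of option 2: a loop that polls atoms for high STI and
# triggers their grounded callbacks. Plain Python objects stand in
# for AtomSpace atoms; nothing here is the real AtomSpace API.

class Atom:
    def __init__(self, name, sti=0.0, callback=None):
        self.name = name
        self.sti = sti            # stand-in for short-term importance
        self.callback = callback  # stand-in for a GroundedPredicateNode

def poll_and_execute(atoms, threshold=0.5):
    """One pass of the watch loop: fire callbacks of high-STI atoms."""
    fired = []
    for atom in atoms:
        if atom.sti >= threshold and atom.callback is not None:
            atom.callback()       # the "cog-execute!" step
            fired.append(atom.name)
    return fired

log = []
atoms = [
    Atom("greet", sti=0.9, callback=lambda: log.append("hello")),
    Atom("idle",  sti=0.1, callback=lambda: log.append("zzz")),
]
print(poll_and_execute(atoms))  # ['greet']
```

Only the high-STI atom fires; the low-STI one is skipped on this pass.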
> The Eva/Sophia code did version 3. I forget where the main loop is; it's 
> only 3 lines of code total, so it's easy to miss. It might be in one of the 
> repos that was moved around. Everything else was controlled by 
> SequentialAndLinks, which stepped through a tree of decisions, triggering a 
> GroundedPredicate whenever some condition was met.
>
> There were three design goals:
> a) Make sure atomese had everything it needed to control a robot
> b) Make sure that the atomese was simple enough that other algorithms 
> could analyze it and modify it. For example, it should be possible (in 
> principle) for MOSES or URE or PLN or some other system to analyze and 
> modify the robot-control code. (in practice, this was never done)  
>
> Keeping the robot code in the form of a decision tree should mean that it 
> is simple enough that other systems could analyze that tree, edit that 
> tree, modify it, extend it, and thus create brand-new robot behaviors out 
> of "thin air".
>
> c) Make sure that the design of atomese itself was simple enough and 
> usable enough to allow a) and b) above. This is an ongoing project. 
>
>
Ah ok, now it makes a lot more sense! I really like solution 3).
I'm experimenting a little with the potential of Atomese (sometimes at 
random), and it's nice to write. I'm also learning Scheme, which I knew 
nothing about.
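To check my understanding of the SequentialAnd idea, here is a toy sketch 
in plain Python. All the predicate and action names are made up; nothing 
here comes from the real Eva code:

```python
# Toy sketch of a SequentialAnd decision chain: evaluate steps in
# order, stop at the first failure, and trigger an action only when
# the whole condition chain succeeds. All names are invented.

def sequential_and(*steps):
    """Fail-fast conjunction, like a SequentialAndLink."""
    def run():
        for step in steps:
            if not step():
                return False
        return True
    return run

actions = []

def someone_visible():
    return True   # pretend sensor predicate

def is_new_face():
    return True   # pretend sensor predicate

def do_greet():
    actions.append("wave")   # stand-in for a GroundedPredicate action
    return True

greet_behavior = sequential_and(someone_visible, is_new_face, do_greet)
print(greet_behavior(), actions)  # True ['wave']
```

If any earlier predicate returned False, do_greet would never run; a tree 
of such chains gives the decision-tree behavior.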

Can I ask you to say something about the tree of decisions in Eva? Was it a 
separate Scheme/Python module that analyzed the SequentialAnd?
While I'm at it, I can't place some components in your architecture:
I read Moshe Looks's thesis on MOSES and what I could find on OpenPsi, but 
in practice, what were they used for?
Finally, in practice, what does PLN do or have beyond the URE?


> Before reasoning is possible, one must have a world-model. This model has 
> several parts to it:
> * The people in the room, and their 3D coordinates
> * The objects on the table and their 3D coordinates.
> * The self-model (current position of robot, and of its arms, etc.)
> The above is updated rapidly, by sensor information.
>
> Then there is some long-term knowledge:
> * The names of everyone who is known. A dictionary linking names to faces.
>
> Then there is some common-sense knowledge:
> * you can talk to people,
> * you can pick up bottles on a table
> * you cannot talk to bottles
> * you cannot pick up people.
> * bottles can be picked up with the arm. 
> * facial expressions and arm movements can be used to communicate with 
> people.
>
> The world model needs to represent all of this. It also needs to store all 
> of the above in a representation that is accessible to natural language, so 
> that it can talk about the position of its arm, the location of the bottle, 
> and the name of the person it is talking to.
>
> Reasoning is possible only *after* all of the above has been satisfied, 
> not before.  Attempts to do reasoning before the above has been built will 
> always come up short, because some important piece of information will be 
> missing, or it will be stored somewhere, in some format that the reasoning 
> system does not have access to.
>
> The point here is that people have been building "reasoning systems" for 
> the last 30 or 40 years. They are always frail and fragile. They are always 
> missing key information.  I think it is important to try to understand how 
> to represent information in a uniform manner, so that reasoning does not 
> stumble.
>
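As an aside, I tried to picture this "uniform representation" point as a 
toy triple store in Python. This is entirely my own invention, not how the 
AtomSpace actually stores things, but it shows one store answering both a 
motor-style query and a language-style query:

```python
# Toy picture of "uniform representation": every kind of knowledge --
# 3D poses, names, common sense -- lives in one store of triples, so
# reasoning and language generation query the same data. The contents
# below are invented for illustration.

world = [
    ("bottle",  "position", (1.0, 1.0, 1.0)),
    ("bottle",  "on",       "table"),
    ("face-42", "name",     "John"),
    ("bottle",  "can-be",   "picked-up"),
    ("person",  "can-be",   "talked-to"),
]

def query(store, subject=None, relation=None):
    """Return matching triples; None acts as a wildcard."""
    return [(s, r, o) for (s, r, o) in store
            if (subject is None or s == subject)
            and (relation is None or r == relation)]

# The same store serves a motor query and a language query:
print(query(world, subject="bottle", relation="position"))
print(query(world, relation="name"))
```

Nothing is hidden in a format the "reasoner" can't reach, which I take to 
be the point.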

> Atomspace:
>
>>   Concepts: "name" - "3D pose"
>>   - bottle - Na
>>   - table - Na
>>   (Predicate: "over" List ("bottle") ("table"))
>>   Actions:
>>   - Go random
>>   - Go to coord
>>   - Grab obj
>>
>> Goal: (bottle in hand)    // = grab bottle
>>
>> Inference rules: all the necessary rules, i.e.
>> * grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj 
>> in hand) ...
>> * coord-rule: if x is in "coord1" and y is over x then y is in "coord1"
>>
>> -> So, robot try backward chaining to find the behavior tree to run. It 
>> doesn't find it, it lacks knowledge, it doesn't know where the bottle is 
>> (let's leave out partial trees).
>> -> Go random ...
>> -> Vision sensor recognizes table
>> -> atomspace update: table in coord (1,1,1)
>> -> forward chaining -> bottle in coord (1,1,1)
>> -> backward chaining finds a tree, that is
>> Go to coord (1,1,1) + Grab obj
>> -> goal achieved
>>
>
> This is a more-or-less textbook robotics homework assignment. It has 
> certainly been solved in many different ways by many different people using 
> many different technologies, over the last 40-60 years. Algorithms like 
> A-star search are one of the research results of trying to solve the above. 
> The AtomSpace would be a horrible technology to solve the above problem, 
> it's too slow, too bulky, too complicated.
>
> The chaining steps can be called "inference", but it is inference devoid 
> of natural language, devoid of "true understanding". My goal is to have a 
> conversation with the robot:
>
> "What do you see?"
> "A bottle"
> "where is it?"
> "on the table"
> "can you reach it?"
> "no"
> "could you reach it if you move to a different place?"
> "yes"
> "where would you move?"
> "closer to the bottle"
> "can you please move closer to the bottle?"
> (robot moves)
>
>
This is now clear to me, but why natural language?
If I didn't want interactions with humans, could I do it differently?
A certain variation of the sensor values already represents "the forward 
movement"; I don't need to associate a name with it if I don't speak. 
Likewise, for the Atom "bottle" I could use its ID instead.
I don't understand why removing natural language implies inference devoid 
of "true understanding".
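To make that concrete: the chaining in my toy scenario above can run on 
bare IDs and coordinates, no words at all. Here is a plain-Python sketch 
(my own invented data structures, not AtomSpace code):

```python
# My toy scenario's forward-chaining step on bare IDs and coordinates.
# coord-rule: if x is at coordinate c and y is over x, then y is at c.

facts = {
    "over": [("bottle", "table")],  # (y, x): y is over x
    "coord": {},                    # object id -> (x, y, z)
}

def forward_chain(facts):
    """Apply the coord-rule until nothing new can be derived."""
    changed = True
    while changed:
        changed = False
        for y, x in facts["over"]:
            if x in facts["coord"] and y not in facts["coord"]:
                facts["coord"][y] = facts["coord"][x]
                changed = True

facts["coord"]["table"] = (1, 1, 1)   # vision sensor recognizes the table
forward_chain(facts)
print(facts["coord"]["bottle"])       # (1, 1, 1)

# Backward chaining from the goal (bottle in hand) can now ground the
# plan: Go to coord (1, 1, 1), then Grab obj.
```

No natural language appears anywhere in that loop, which is exactly why I 
don't see where "true understanding" gets lost.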

A silly example: if I speak Italian with a Frenchman, neither of us 
understands the other. But a bottle remains a bottle for both of us, and if 
I offer him my hand he will probably shake it... or he will leave without 
saying goodbye.

I'm probably missing something big, but until I bang my head against it, I 
won't see it.


> This can be solved by carefully hand-crafting a chatbot dialog tree. (The 
> ghost chatbot system in opencog was designed to allow such dialog trees to 
> be created) Over the decades, many chatbots have been written. Again: there 
> are common problems:
>
> -- the text is hard-coded, and not linguistic.  Minor changes in wording 
> cause the chatbot to get confused.
> -- there is no world-model, or it is ad hoc and scattered over many places
> -- no ability to perform reasoning
> -- no memory of the dialog ("what were we talking about?" - well, chatbots 
> do have a one-word "topic" variable, so the chatbot can answer "we are 
> talking about baseball", but that's it. There is no "world model" of the 
> conversation, and no "world model" of who the conversation was with ("On 
> Sunday, I talked to John about a bottle on a table and how to grasp it") 
>
> Note that ghost has all of the above problems. It's not linguistic, it has 
> no world-model, it has no defined representation that can be reasoned over, 
> and it has no memory.
>
> 20 years ago, it was hard to build a robot that could grasp a bottle. It 
> was hard to create a good chatbot.
>
> What is the state of the art, today? Well, Tesla has self-driving cars, 
> and Amazon and Apple have chatbots that are very sophisticated.  There is 
> no open source for any of this, and there are no open standards, so if you 
> are a university grad student (or a university professor) it is still very 
> very hard to build a robot that can grasp a bottle, or a robot that you can 
> talk to.  And yet, these basic tasks have become "engineering"; they are no 
> longer "science".  The science resides at a more abstract level.
>
> --linas
>

I find that abstract level incredible, both in its beauty and in its 
difficulty!

Michele
 

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/5ac81cf1-c4cd-40cd-9438-55d8dc3d95f5n%40googlegroups.com.
