Sorry for the late reply; I've been busy.
On Thu, Mar 25, 2021 at 2:03 PM Michele Thiella <[email protected]>
wrote:
> Hi Linas,
> I hope I'm not disturbing you or wasting too much of your time!
>
Not at all. It allows me to practice explaining things. Knowing something
is not all that useful, if you can't explain it to someone else!
> On Wednesday, March 24, 2021 at 20:11:49 UTC+1, linas wrote:
>
>> So, naively, simplistically, a "true AGI system" should be capable of
>> learning the "knowledge base", instead of relying on humans to craft one.
>>
>> Now comes the blurry parts: the learning algorithm itself is
>> hand-crafted, so isn't that a form of cheating? We are once again relying
>> on humans to do the work. For example, for neural nets, effectively all of
>> them are trained on a selection of images curated by human beings. The
>> neural net learns how to recognize a photo of a horse, but it was trained
>> on a human-curated training set. So, again, that's "cheating". Have you
>> heard the expression "it's turtles all the way down"? Well, for neural nets,
>> it's hand-crafted datasets all the way down. It's people all the way down.
>> The goal of building a true AGI is to avoid this. Step 1 is to avoid
>> hand-crafted training sets. Step 2 is to avoid hand-crafted algorithms.
>> I'm working on Step 1. I suppose that Step 2 is beyond the abilities of
>> what can be done today. It's a bit blurry.
>>
>
> I begin to struggle not to say nonsense.
>
> Thus, if I understand your point 1 correctly, considering the README of
> the learn repo, the basic idea is unsupervised learning for natural
> language, with the next goal of extending the domain from text to "all
> things in the world".
>
Yes.
> So, (still using unsupervised learning?) let the robot build its
> representation of the world through its observations.
>
Yes.
> Now, the input set is no longer hand-crafted, but the algorithm still has
> the "turtles problem" ... Your point 2 would solve that too, right?
> The only option I can think of is that AGI itself writes its algorithms ..
> but recursively AGI would have to invent itself
>
Uhh....
The input I'm working with has never been hand-crafted -- the input text is
just .. text.
The hand-crafting refers to training sets: thus, it is common in neural-net
training to take 100 photos of a hamburger, and 100 photos of a horse, and
have some grad student apply these labels - hamburger or horse - and use
that as the training set. Eventually, the neural net learns to tell apart
hamburgers and horses. (but that's all).
The analogous task in linguistics is to have a grad student go through a
text corpus, and mark all the nouns and all the verbs. Eventually, the
machine-learning algo learns how to tell apart nouns from verbs.
I'm trying to work with an input stream that has not been marked up -- it's
just raw input, raw text.
The algorithm I'm using tries to stay as close as possible to "physics" --
using principles of entropy & probability to do its work. Thus, hopefully,
the algo is not biasing the results.
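For a concrete sense of what "entropy & probability" on raw text can look like, here's a minimal sketch -- a generic illustration with a made-up toy corpus, not the actual code in the learn repo. It computes the pointwise mutual information of adjacent word pairs, the kind of statistic that can hint at word groupings without any human-supplied labels:

```python
import math
from collections import Counter

# Hypothetical toy corpus; the real pipeline ingests raw, unannotated text.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count single words and adjacent word pairs -- no human labels anywhere.
word_counts = Counter(corpus)
pair_counts = Counter(zip(corpus, corpus[1:]))
n_words = len(corpus)
n_pairs = n_words - 1

def mutual_information(left, right):
    """Pointwise MI of an adjacent pair: log2 of P(l,r) / (P(l) * P(r))."""
    p_pair = pair_counts[(left, right)] / n_pairs
    p_left = word_counts[left] / n_words
    p_right = word_counts[right] / n_words
    return math.log2(p_pair / (p_left * p_right))

# "the cat" co-occurs more often than chance predicts, so its MI is positive.
mi = mutual_information("the", "cat")
```

Pairs with high MI "stick together" more than chance would predict; that signal comes from the statistics of the text itself, not from a curator.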
Sure, some futuristic day, a human-level AGI will be able to write code,
but that is too futuristic to affect the current design work.
> But then, without "true-AGI" learning, I'll never have a "true-AGI"
> knowledge base, and without that I'll not be able to continue, right?
>
I don't understand the question.
> Why work on point 1 if point 2 is a prerequisite?
>
It's not a pre-requisite!
> It seems like a no-win situation. Maybe I'm just a pessimist!
>
I don't understand.
> There will be another way... In the end, our knowledge base was also helped
> by our parents in some way.
>
? I don't understand what our parents have to do with this...
>>> But there is a thing that I don't completely understand: to activate a
>>> GroundedPredicateNode to execute a Python function, I can use its STI,
>>> right? It should be automatic; how does it work?
>>>
>>
>> It is not automatic. You have to `cog-execute!` whatever code you want
>> to trigger. There are several ways of doing this.
>> 1) by hand .. obviously.
>> 2) write some scheme or some python code that loops over whatever needs to
>> be looped over, searching for high or low STI or any other Value or
>> StateLink or whatever might be changing, and call cog-execute! as needed.
>> 3) Do the above, but entirely in Atomese. There is a way to do an
>> infinite loop in atomese -- it is actually a tail-recursive call to the
>> function itself. It should be possible to do everything you need to do in
>> "pure atomese". Now, atomese was never meant to be a full-scale programming
>> language, like python or scheme, so it is missing many commonplace ideas
>> that make python/scheme/c++/etc. "human friendly". But it does have enough
>> to make most things possible, and many things "easy" (-ish)
>>
>> The Eva/Sophia code did version 3. I forget where the main loop is; it's
>> only 3 lines of code total, so it's easy to miss. It might be in one of the
>> repos that was moved around. Everything else was controlled by
>> SequentialAndLinks, which stepped through a tree of decisions, triggering a
>> GroundedPredicate whenever some condition was met.
>>
>> There were three design goals:
>> a) Make sure atomese had everything it needed to control a robot
>> b) Make sure that the atomese was simple enough that other algorithms
>> could analyze it and modify it. For example, it should be possible (in
>> principle) for MOSES or URE or PLN or some other system to analyze and
>> modify the robot-control code. (in practice, this was never done)
>>
>> Keeping the robot code in the form of a decision tree should mean that it
>> is simple enough that other systems could analyze that tree, edit that
>> tree, modify it, extend it, and thus create brand-new robot behaviors out
>> of "thin air".
>>
>> c) Make sure that the design of atomese itself was simple enough and
>> usable enough to allow a) and b) above. This is an ongoing project.
>>
>>
> Ah ok now it makes a lot more sense! I really like solution 3).
>
I do too! It is certainly one of the things that makes the AtomSpace
distinct from anything else out there. There are plenty of graph databases
these days. They just can't do this.
> I'm experimenting a little with the potential of Atomese (sometimes at
> random), but it's nice to write. I'm also learning Scheme, which I didn't
> know anything about.
>
The primary benefit of scheme is that it is functional programming, and
learning how to code in a functional programming language completely
changes your world-view of what a program is, and what software is. If you
only know C/C++/java/python, then you have a very narrow, very restricted
view of the world. You're missing a large variety of important concepts in
software. Yes, learning functional programming is "good for you".
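As a tiny illustration of the shift in world-view (a generic sketch, not OpenCog code), here is the same computation twice -- imperatively, with step-by-step mutation, and functionally, as a composition of pure functions:

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# Imperative style: mutate an accumulator, one step at a time.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: the same computation, as a composition of pure
# functions -- no mutation, just values flowing through transformations.
total_fp = reduce(lambda acc, n: acc + n * n,
                  filter(lambda n: n % 2 == 0, numbers),
                  0)
```

In the functional version, the program *is* the description of the data flow; nothing is overwritten along the way. Scheme pushes you to write in the second style all the time.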
>
> Can I ask you to say something about tree of decisions in Eva? Was it a
> separate scheme/python module that analyzed SequentialAnd?
>
No, it was just plain Atomese.
Many Atoms have an execute method (actually, all Atoms have an execute
method, but it is non-trivial only on some of them.)
The execute method on SequentialAnd simply steps through each Atom in its
outgoing set, and asks "are you true?" -- by calling execute, and seeing if
it returns "true". If some atom in the outgoing list returns "false", then
SequentialAnd stops and returns false. Otherwise, it continues till it
reaches the end of the list, and then returns true.
There is no "external module" to perform this analysis.
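The stepping behavior described above can be sketched in a few lines -- a toy Python model of the semantics, not the actual C++ implementation, with made-up condition names:

```python
def sequential_and(atoms):
    """Toy model of SequentialAnd's execute step: ask each member Atom
    "are you true?" in order; stop at the first one that answers False."""
    for atom in atoms:
        if not atom():        # execute the atom, inspect its answer
            return False      # short-circuit: later atoms never run
    return True               # reached the end of the outgoing set

# Hypothetical condition checks standing in for GroundedPredicates.
log = []
def check(name, result):
    def atom():
        log.append(name)      # record that this atom was executed
        return result
    return atom

steps = [check("see-face", True),
         check("face-known", False),
         check("say-hello", True)]
```

Running `sequential_and(steps)` returns False and never executes the third atom -- that short-circuit is exactly what lets a decision tree of SequentialAndLinks prune whole branches of behavior.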
> While I'm at it, I can't place some components in your architecture:
> I read Moshe Looks' thesis on MOSES and what I found on OpenPsi. But in
> practice what were they used for?
>
I used MOSES to analyze medical notes from a hospital (free-text doctor and
nurse notes) and predict patient outcomes. Some other people used MOSES to
try to predict the stock market. Ben/Nil used it to hunt down genes that
correlate with long life.
OpenPsi was used as an inspiration for a kind of combined
prioritization-plus-human-emotion-modelling system. It was, and still is,
problematic for failing to separate these two ideas. There are many
practical problems in AtomSpace applications that lead to a combinatorial
explosion of possibilities, and one part of OpenPsi seems to be effective
in deciding which of those possibilities should be explored first.
Unfortunately, the design combined it with a really terrible model of human
psychology, and this led to a mass of confusion that was never fully
resolved. It doesn't help that the creator of MicroPsi came back and said
that OpenPsi has no resemblance to MicroPsi whatsoever. There are some
good ideas in there, but the implementation remains problematic.
> Finally, in practice what does PLN do/have more than URE?
>
I suppose Nil answered this already, but ... PLN defines a certain specific
set of truth-value formulas. URE doesn't care about truth value formulas.
URE can chain together rules -- arbitrary collections of rules. PLN is a
specific collection of rules, and they are not only specific rules, but
they are coupled with specific formulas for determining the truth value.
So, for example, consider chaining implications: if A implies B and B
implies C, then A implies C. This is a "rule" that recognizes an input of
two pairs (A,B) and (B,C), and creates the pair (A,C); if the truth of A is
T, it marks the truth of C as being T. A variant of this is Bayesian
deduction, where the truth values are replaced by conditional probabilities.
URE doesn't care what kind of rule it is, or what happens to the truth
values. The rules could be nonsense, and the formulas could be crazy, and
URE would still try to chain them.
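To make the division of labor concrete, here is a toy sketch -- hypothetical names, not the URE API. The chainer applies the deduction rule blindly (URE's role), while a pluggable formula (PLN's role) decides what truth value the conclusion gets:

```python
def chain(implications, formula):
    """Toy forward chainer for the deduction rule: from A->B and B->C,
    derive A->C. The chainer does not care what `formula` computes;
    the formula alone decides the derived truth value."""
    kb = dict(implications)          # (premise, conclusion) -> truth value
    changed = True
    while changed:
        changed = False
        for (a, b), s_ab in list(kb.items()):
            for (b2, c), s_bc in list(kb.items()):
                if b == b2 and a != c and (a, c) not in kb:
                    kb[(a, c)] = formula(s_ab, s_bc)   # pluggable formula
                    changed = True
    return kb

# Two interchangeable truth-value formulas: the chainer runs unchanged.
crisp = lambda s1, s2: s1 and s2     # boolean logic
bayes = lambda s1, s2: s1 * s2       # crude conditional-probability product
```

Plugging in `bayes` with `{("A","B"): 0.9, ("B","C"): 0.8}` derives `("A","C")` with strength 0.72; plugging in `crisp` with booleans derives plain true/false. Swap in crazy formulas and the chainer still chains -- which is the point being made about URE.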
> Before reasoning is possible, one must have a world-model. This model has
>> several parts to it:
>> * The people in the room, and their 3D coordinates
>> * The objects on the table and their 3D coordinates.
>> * The self-model (current position of robot, and of its arms, etc.)
>> The above is updated rapidly, by sensor information.
>>
>> Then there is some long-term knowledge:
>> * The names of everyone who is known. A dictionary linking names to faces.
>>
>> Then there is some common-sense knowledge:
>> * you can talk to people,
>> * you can pick up bottles on a table
>> * you cannot talk to bottles
>> * you cannot pick up people.
>> * bottles can be picked up with the arm.
>> * facial expressions and arm movements can be used to communicate with
>> people.
>>
>> The world model needs to represent all of this. It also needs to store
>> all of the above in a representation that is accessible to natural
>> language, so that it can talk about the position of its arm, the location
>> of the bottle, and the name of the person it is talking to.
>>
>> Reasoning is possible only *after* all of the above has been satisfied,
>> not before. Attempts to do reasoning before the above has been built will
>> always come up short, because some important piece of information will be
>> missing, or will be stored somewhere, in some format that the reasoning
>> system does not have access to.
>>
>> The point here is that people have been building "reasoning systems" for
>> the last 30 or 40 years. They are always frail and fragile. They are always
>> missing key information. I think it is important to try to understand how
>> to represent information in a uniform manner, so that reasoning does not
>> stumble.
>>
>
>> Atomspace:
>>
>>> Concepts: "name" - "3D pose"
>>> - bottle - Na
>>> - table - Na
>>> (Predicate: "over" List ("bottle") ("table"))
>>> Actions:
>>> - Go random
>>> - Go to coord
>>> - Grab obj
>>>
>>> Goal: (bottle in hand) // = grab bottle
>>>
>>> Inference rules: all the necessary rules, i.e.
>>> * grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj
>>> in hand) ...
>>> * coord-rule: if x is in "coord1" and y is over x then y is in "coord1"
>>>
>>> -> So, the robot tries backward chaining to find the behavior tree to run.
>>> It doesn't find it; it lacks knowledge, it doesn't know where the bottle is
>>> (let's leave out partial trees).
>>> -> Go random ...
>>> -> Vision sensor recognizes table
>>> -> atomspace update: table in coord (1,1,1)
>>> -> forward chaining -> bottle in coord (1,1,1)
>>> -> backward chaining finds a tree, that is
>>> Go to coord (1,1,1) + Grab obj
>>> -> goal achieved
>>>
>>
>> This is a more-or-less textbook robotics homework assignment. It has
>> certainly been solved in many different ways by many different people using
>> many different technologies, over the last 40-60 years. Algorithms like
>> A-star search are one of the research results of trying to solve the above.
>> The AtomSpace would be a horrible technology to solve the above problem:
>> it's too slow, too bulky, too complicated.
>>
>> The chaining steps can be called "inference", but it is inference devoid
>> of natural language, devoid of "true understanding". My goal is to have a
>> conversation with the robot:
>>
>> "What do you see?"
>> "A bottle"
>> "where is it?"
>> "on the table"
>> "can you reach it?"
>> "no"
>> "could you reach it if you move to a different place?"
>> "yes"
>> "where would you move?"
>> "closer to the bottle"
>> "can you please move closer to the bottle?"
>> (robot moves)
>>
>>
> This is now clear to me, but why natural language?
>
If your machine is incapable of talking, it would be hard to argue that
it's smart. Now, dogs, cats, crows and octopi can't talk, and for
centuries, some people (many people) believed they weren't smart. Well, now
I think we all know better, but still, the best way to prove how smart or
stupid you are is to open your mouth.
> if i didn't want interactions with humans could i do it differently?
>
Well, you could build a self-driving car. But I don't think Elon Musk is
claiming that FSD is AGI.
> A certain variation of the sensor values already represents "the forward
> movement"; I do not need to associate a name with it if I don't speak, and
> for the Atom "bottle" I could use its ID instead.
> I don't understand why removing natural language implies having an
> inference devoid of "true understanding".
>
You know the expression "writing about music is like dancing about
architecture"? Well, you could build a robot that dances, but you would
have a hard time convincing anyone that it's smart, that it's anything other
than a clever puppet.
>
> Stupid example: if I speak Italian with a Frenchman, neither of us
> understands the other. But a bottle remains a bottle for both, and if I give
> him my hand he will probably shake it too ... or he will leave without saying
> goodbye.
>
It's all very contextual. If you speak Italian, and you see a human, you
assume that what you see has all the other properties of being a human. If
you speak Italian, and you see a robot with a mechanical arm, you assume
that it has all the typical properties of a robot: stupid and lifeless,
just a machine.
-- Linas
>
> I'm probably missing something big, but until I bang my head against
> it, I won't see it.
>
>
> This can be solved by carefully hand-crafting a chatbot dialog tree. (The
>> ghost chatbot system in opencog was designed to allow such dialog trees to
>> be created) Over the decades, many chatbots have been written. Again: there
>> are common problems:
>>
>> -- the text is hard-coded, and not linguistic. Minor changes in wording
>> cause the chatbot to get confused.
>> -- there is no world-model, or it is ad hoc and scattered over many places
>> -- no ability to perform reasoning
>> -- no memory of the dialog ("what were we talking about?" - well,
>> chatbots do have a one-word "topic" variable, so the chatbot can answer "we
>> are talking about baseball", but that's it. There is no "world model" of
>> the conversation, and no "world model" of who the conversation was with
>> ("On Sunday, I talked to John about a bottle on a table and how to grasp
>> it")
>>
>> Note that ghost has all of the above problems. It's not linguistic, it
>> has no world-model, it has no defined representation that can be reasoned
>> over, and it has no memory.
>>
>> 20 years ago, it was hard to build a robot that could grasp a bottle. It
>> was hard to create a good chatbot.
>>
>> What is the state of the art, today? Well, Tesla has self-driving cars,
>> and Amazon and Apple have chatbots that are very sophisticated. There is
>> no open source for any of this, and there are no open standards, so if you
>> are a university grad student (or a university professor) it is still very
>> very hard to build a robot that can grasp a bottle, or a robot that you can
>> talk to. And yet, these basic tasks have become "engineering"; they are no
>> longer "science". The science resides at a more abstract level.
>>
>> --linas
>>
>
> I find the abstract level incredible, both in terms of beauty and
> difficulty!
>
> Michele
>
>
>
--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.