Hi Linas, 
don't worry, no rush!

On Friday, April 2, 2021 at 20:05:23 UTC+2, linas wrote:

>> But then, without a "true-AGI" learning algorithm, I'll never have a 
>> "true-AGI" knowledge base, and without that I won't be able to continue, 
>> right?
>>
>
> I don't understand the question.
>
>> Why work on point 1 if point 2 is a prerequisite?
>>
>
> It's not a pre-requisite!
>
>> It seems like a no-win situation. Maybe I'm just a pessimist!
>>
>
> I don't understand.
>
>> There will be another way... In the end, our knowledge base was also 
>> helped by our parents in some way.
>>
>
> ? I don't understand what our parents have to do with this...
>
>
It was just a personal reflection. I mean that I cannot carry a project all 
the way to AGI without a learning algorithm, because the knowledge base 
would otherwise have to be hand-crafted.

Regarding the parents, I didn't know how to explain it.
My idea is that maybe I wouldn't rule out supervised learning, because 
human learning is sometimes guided by a teacher, who shows you the image of 
a horse and also tells you that it is a horse.

 

> The primary benefit of scheme is that it is functional programming, and 
> learning how to code in a functional programming language completely 
> changes your world-view of what a program is, and what software is.  If you 
> only know C/C++/java/python, then you have a very narrow, very restricted 
> view of the world. You're missing a large variety of important concepts in 
> software. Yes, learning functional programming is "good for you".
>

I took a course on the semantics and type system of a functional 
mini-language. Now I'm learning to write practical code!
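As an aside, even Python gives a first taste of the functional style: treating functions as values and composing them instead of mutating state. A toy sketch, illustrative only and unrelated to OpenCog itself:

```python
from functools import reduce

def compose(*fns):
    """Compose functions right-to-left: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

double = lambda x: 2 * x
increment = lambda x: x + 1

# No loops, no mutable state: the pipeline is just function application.
double_then_increment = compose(increment, double)
print(double_then_increment(5))  # 11, i.e. increment(double(5))
```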
 

>> Can I ask you to say something about the decision tree in Eva? Was it a 
>> separate scheme/python module that analyzed SequentialAnd?
>>
>
> No, it was just plain Atomese.
>
> Many Atoms have an execute method (actually, all Atoms have an execute 
> method, but it is non-trivial only on some of them).
>
> The execute method on SequentialAnd simply steps through each Atom in its 
> outgoing set, and asks "are you true?" -- by calling execute, and seeing if 
> it returns "true". If some atom in the outgoing list returns "false", then 
> SequentialAnd stops and returns false. Otherwise, it continues till it 
> reaches the end of the list, and then returns true.
>
> There is no "external module" to perform this analysis.
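If I have understood correctly, the execute semantics you describe amount to short-circuit evaluation. In plain Python it might be sketched like this (purely illustrative; the names are mine, not the actual Atomese implementation):

```python
def sequential_and(outgoing, execute):
    """Step through the outgoing set in order, executing each atom;
    stop and return False at the first atom that is not true."""
    for atom in outgoing:
        if not execute(atom):
            return False
    return True

# Toy "atoms": each one simply executes to its stored boolean.
checks = [True, True, False, True]
print(sequential_and(checks, lambda atom: atom))  # False (stops at the third)
```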
>
>> While I'm at it, I can't place some components in your architecture: I 
>> read Moshe Looks's thesis on MOSES and what I found on OpenPsi. But in 
>> practice, what were they used for?
>>
>
> I used MOSES to analyze medical notes from a hospital (free-text doctor 
> and nurses notes) and predict patient outcomes. Some other people used 
> MOSES to try to predict the stock market. Ben/Nil used it to hunt down 
> genes that correlate with long life.
>
> OpenPsi was used as an inspiration for a kind-of combined 
> prioritization-plus-human-emotion-modelling system. It was, and still is, 
> problematic for failing to separate these two ideas. There are many 
> practical problems in AtomSpace applications that lead to a combinatorial 
> explosion of possibilities, and one part of open-psi seems to be effective 
> in deciding which of these possibilities should be explored first.  
> Unfortunately, the design combined it with a really terrible model of human 
> psychology, and this led to a mass of confusion that was never fully 
> resolved. It doesn't help that the creator of micro-psi came back and said 
> that open-psi has no resemblance to micro-psi whatsoever. There are some 
> good ideas in there, but the implementation remains problematic.
>  
>
>> Finally, in practice, what does PLN do or have beyond URE?
>>
>
> I suppose Nil answered this already, but ... PLN defines a certain 
> specific set of truth-value formulas. URE doesn't care about truth value 
> formulas.
>
> URE can chain together rules, -- arbitrary collections of rules. PLN is a 
> specific collection of rules, and they are not only specific rules, but 
> they are coupled with specific formulas for determining the truth value.
>
> So, for example, consider chaining implications: if A implies B and B 
> implies C, then A implies C. This is a "rule" that recognizes an input of 
> two pairs (A,B) and (B,C), and creates the pair (A,C): if the truth of A 
> is T, it marks the truth of C as being T. A variant of this is Bayesian 
> deduction, where the truth values are replaced by conditional probabilities.
>
> URE doesn't care what kind of rule it is, or what happens to the truth 
> values. The rules could be nonsense, and the formulas could be crazy, and 
> URE would still try to chain them.
>
>  
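To check my understanding of that deduction rule, here is a rough Python sketch; the pair matching is mine, and the simple set-membership logic stands in for PLN's actual truth-value formulas:

```python
def deduce(implications):
    """Given pairs (A, B) meaning "A implies B", derive every new pair
    (A, C) for which both (A, B) and (B, C) are already present."""
    derived = set()
    for (a, b) in implications:
        for (b2, c) in implications:
            if b == b2 and (a, c) not in implications:
                derived.add((a, c))
    return derived

rules = {("A", "B"), ("B", "C")}
print(deduce(rules))  # {('A', 'C')}
```

Chaining, as URE does, would then just mean re-applying such rules until nothing new is derived.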
Thanks for these explanations; I'm continuing to expand my knowledge!


If your machine is incapable of talking, it would be hard to argue that 
> it's smart. Now, dogs, cats, crows and octopi can't talk, and for 
> centuries, some people (many people) believed they weren't smart. Well, now 
> I think we all know better, but still, the best way to prove how smart or 
> stupid you are is to open your mouth.
>   
>
>> If I didn't want interactions with humans, could I do it differently?
>>
>
> Well, you could build a self-driving car.  But I don't think Elon Musk is 
> claiming that FSD is AGI.
>
>> A certain variation of the sensor values already represents "forward 
>> movement"; I do not need to associate a name with it if I don't speak, 
>> and for the Atom "bottle" I could use its ID instead. 
>> I don't understand why removing natural language implies having an 
>> inference devoid of "true understanding". 
>>
>
> You know the expression "writing about music is like dancing about 
> architecture"? Well, you could build a robot that dances, but you would 
> have a hard time convincing anyone that it's smart, that it's anything other 
> than a clever puppet.
>  
>
>>
>> Stupid example: if I speak Italian with a Frenchman, neither of us 
>> understands the other. But a bottle remains a bottle for both, and if I 
>> offer him my hand, he will probably shake it too ... or he will leave 
>> without saying goodbye.
>>
>
> It's all very contextual. If you speak Italian, and you see a human, you 
> assume that what you see has all the other properties of being a human. If 
> you speak Italian, and you see a robot with a mechanical arm, you assume 
> that it has all the typical properties of a robot: stupid and lifeless, 
> just a machine. 
>
> -- Linas
>

Ok, so in summary: either I make it talk, or I have to invent another way 
to demonstrate its intelligence!
I will have to think more about all this and digest the concepts that have 
been discussed. 

In the meantime, thanks for everything Linas.

Michele
