Hi Linas,

Thanks for the response below, and good counterexamples. 

Agreed on your point regarding the modifiers and models. I've been trying 
to consider both natural language and basic scenes (e.g. camera views) this 
week, but even in the most trivial cases I'm finding that I need many 
additional models and extensions to represent even a short sentence of 
English effectively, often with limited continuity between my efforts in 
each case.
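
To make the modifier problem concrete, here is a rough sketch in plain 
Python (not any particular KR library; the sentence, field names, and the 
`arity` helper are all invented purely for illustration) of how a flat 
subject-verb-object triple stops being adequate once adverbs, adjectives, 
and mental-state clauses enter:

```python
# Toy illustration (no KR library assumed): flat triples vs. nested facts.

# "John reads books" fits a flat subject-verb-object triple:
flat = ("John", "reads", "books")

# "Mary thinks that John rarely reads old books" does not: the inner
# clause must be reified into an object, and the modifiers need their
# own (ad hoc, invented) attachment points.
nested = (
    "Mary", "thinks",
    {
        "triple": ("John", "reads", "books"),
        "verb_modifier": "rarely",    # adverb on the inner verb
        "object_modifier": "old",     # adjective on the inner object
    },
)

def arity(fact):
    """Count the atomic terms in a possibly nested fact."""
    if isinstance(fact, str):
        return 1
    values = fact.values() if isinstance(fact, dict) else fact
    return sum(arity(v) for v in values)

print(arity(flat))    # 3
print(arity(nested))  # 7
```

Even this toy nesting is ad hoc: each new phenomenon (tense, quantifiers, 
relative clauses) would need yet another field, which is exactly the blow-up 
I keep running into.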

I hadn't really considered your second point on grounding in the real 
world, outside of considering automated techniques to build the KR. I've 
not seen anything in traditional KR which considers that... it's always 
domain knowledge that's represented, not a representation of how the domain 
concepts are mapped back to the world.

Your work on English sentences and learning an internal KR structure sounds 
mega interesting. I thought at length about language after reading some odd 
bits on Universal Grammar, Merge, and I-language. The poverty-of-stimulus 
problem in language acquisition was of particular interest, and I had 
considered that the human language acquisition mechanism and its internal 
representations might have special significance in an AGI design.

I will ask in a few months for sure.

Thanks

Adam

On Sunday, 2 April 2017 19:01:08 UTC+1, linas wrote:
>
> Hi Adam,
>
> My personal instinct is that a human-curated KR system is kind-of 
> pointless. Let me explain why.  I've actually tried to create one several 
> times now, and have been dissatisfied with the results.
>
> The first time, I thought I could do it with "semantic triples" -- 
> subject-verb-object type structures, and this seems to work, sort of, at 
> the simplest, most naive levels.  But very quickly one discovers that one 
> needs to deal with modifiers -- adverbs, adjectives, prepositions, 
> relative clauses, models of mental states (John thinks that ...).  It 
> turns out that the reason natural language is complicated is that there 
> is an actual need for that complexity to convey factual knowledge.  It is 
> hard to capture that complex structure.
>
> But if you still do want to hand-create such a system, the best place to 
> start would be with the DSynt layer of the MTT -- "Meaning Text Theory" of 
> linguistics.
>
> One of the many problems with traditional KR systems is that they fail to 
> deal with grounding in the physical world: that discussions about object X 
> actually pertain to an actual object in the field of view of some camera, 
> or maybe some sound heard on a microphone.  Or that talk about some 
> action is actually about an action that must be undertaken in the real 
> world: say, for example, you wanted to tell a self-driving car to turn 
> left. 
>
> My current plan/hope is to build a system that can develop its own 
> internal KR automatically, instead of having a human design one. 
> Prototypes of this concept have been published in academic journals 
> for decades, starting with papers 10 or 20 years old that discuss the 
> automated learning of synonymous words and synonymous phrases.
>
> I'm actually starting work on this now, full time, but am still at the 
> very earliest stages.  Ask me again in a few months.
>
> Anyway, if one does have a system capable of learning a KR system by 
> itself, then the best possible KR system is just a large collection of 
> short English-language sentences asserting facts.  That's it. 
> Just read the corpus.  The system will assign the facts into slots, as 
> needed.
>
>  --linas
>
>
>
> On Sat, Apr 1, 2017 at 2:47 PM, 'Adam Gwizdala' via opencog <
> [email protected]> wrote:
>
>> Hey OpenCog,
>>
>> I've been following your work for a few years now, great effort, and some 
>> solid justification for your design principles. Keep on truckin' with it :-)
>>
>> I'm currently working to define my thesis, which is going to focus on 
>> concept pattern mining, DL and ontology learning, specifically in the AGI 
>> context.
>>
>> In particular, I wanted to develop a KR standard for AGI (like OWL2 on 
>> steroids) which is extensible enough to enable AGI researchers to 
>> collaborate effectively and plug in learning algorithms or other modules 
>> more readily, but which also enables low-level types/relationships to be 
>> defined so that economics or probability concepts (for example) can be 
>> implemented. I still wanted to keep track of the formalisation (e.g. 
>> inference, satisfiability, chaining, uniform interpolation, etc.: all the 
>> good stuff we get from a formalised KR like OWL, where it applies).
>>
>> As part of my pre-work I am considering the AtomSpace in detail, due to 
>> some of its properties: e.g. its large-scale KR, its query engine, and 
>> its bias towards modular/hybrid AGI. But also because any standard would 
>> need to meet advanced requirements like those found in OpenCog to be an 
>> effective standard.
>>
>> I have a couple of questions I was hoping someone could answer, to help 
>> me decide how to progress:
>>
>> Given that you guys have gone through the process of implementing the 
>> AtomSpace, do you think that such a 'standard AGI KR' would be practical 
>> in real terms? Or would it just be a bit too much of a monster to define, 
>> with too steep a learning curve to encourage a new user base?
>>
>> Also, in many of the AtomSpace-related publications there is frequent 
>> mention of performance trade-offs and data-persistence dynamics. Do you 
>> feel that distributed computation and general HPC should be considered a 
>> central principle of such a standard KR? E.g. in the same way OWL is 
>> 'web-biased', the AGI standard should be 'HPC-biased'.
>>
>> Given that perspectives on AGI research differ significantly between 
>> individuals, do you think a KR standard which tries to unify 
>> viewpoints/requirements would end up being so generalised that you might as 
>> well just not bother? 
>>
>> Thanks
>>
>> Adam Gwizdala 
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "opencog" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected] <javascript:>.
>> To post to this group, send email to [email protected] 
>> <javascript:>.
>> Visit this group at https://groups.google.com/group/opencog.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/opencog/5064103e-2eb0-41a1-a9dc-feeec578b962%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/opencog/5064103e-2eb0-41a1-a9dc-feeec578b962%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
