Hi Linas, 

Thank you for your responses, and the pointer. 

It seems to me that your example further pinpoints my question:

A quasi-linear walk through a semantic network is essentially a structure 
(or path) constructed through the use of grammar, to get at a possible 
reading of a sentence that would make sense to a person within a "semantic 
space", without, however, capturing meaning per se. A lexicon, say, "merely" 
captures the rules of construction of particular given verbs and nouns, 
*based* on their humanly interpreted meaning.

Hence, grammar's purpose really seems "only" to be to construct a meaningful 
path, rather than to tell us what the meaning of the knowledge embodied in 
that path is. The latter seems to require another "kind" of semantics/meaning 
(and perhaps some might say that there are turtles all the way down -- or 
at least until some grounding). 

Does my intuition make sense?

thank you,

Daniel


On Thursday, 20 April 2017 16:59:38 UTC+3, linas wrote:
>
> Semantics and syntax are two different things. Syntax allows you to parse 
> sentences. Semantics is more about how concepts inter-relate to each other. 
> --  a network. A sentence tends to be a quasi-linearized walk through such 
> a network. For example, take a look at the "deep" and the "surface" 
> structures in meaning-text theory.  From there, one asks "what kind of 
> speech acts are there?" and "why do people talk?", and this would be the 
> "next level", beyond the homework exercise I mentioned in the previous 
> email.
>
> --linas  
>
> On Wed, Apr 19, 2017 at 7:23 PM, Daniel Gross <[email protected]> wrote:
>
>> Hi Ben, 
>>
>> Thank you for your response. I started reading the paper and was 
>> wondering if you could help me clarify a confusion I apparently have when 
>> it comes to the meaning of meaning: 
>>
>> How is linguistic meaning connected to human embodied meaning that we 
>> would call human (or AGI) understanding? 
>>
>> Linguistic meaning seems to be about the linguistic meta-language that 
>> shows how a human would parse a sentence unambiguously, so that a human 
>> can, in principle, understand the meaning of a sentence. What is, say, 
>> instructed by a sentence, as understood by a human, seems not to be 
>> captured, but would require more machinery.
>>
>> In this sense, linguistic machinery seems to embody (as a theory of mind) 
>> how humans understand (in a cognitive economical manner), rather than what 
>> humans understand --at least this is what confuses me ...
>>
>> Any thoughts would be much appreciated.
>>
>> thank you,
>>
>> Daniel
>>
>>
>>
>>
>>
>> On Wednesday, 19 April 2017 09:16:42 UTC+3, Ben Goertzel wrote:
>>>
>>> We have a probabilistic logic engine (PLN) which works on (optionally 
>>> probabilistically labeled) logic expressions....   This logic engine 
>>> can also help with extracting semantic information from natural 
>>> language or perceptual observations.  However, it's best used together 
>>> with other methods that carry out "lower levels" of processing in 
>>> feedback and cooperation with it... 
>>>
>>> In the case of vision, Ralf Mayet is leading an effort to use a 
>>> modified InfoGAN deep NN to extract semantic information from 
>>> images/videos/sounds to pass into PLN, the Pattern Miner, and so forth. 
>>>
>>> In the case of textual language, Linas is leading an effort to extract 
>>> a first pass of semantic and syntactic information from unannotated 
>>> text corpora via this general approach: 
>>>
>>> https://arxiv.org/abs/1401.3372 
>>>
>>> The same approach should work when non-textual groundings are included 
>>> in the corpus, or when the learning is real-time experiential rather 
>>> than batch-based.... but there's plenty of nitty-gritty work here... 
>>>
>>> ben goertzel 
>>>
>>> On Wed, Apr 19, 2017 at 7:23 AM, Daniel Gross <[email protected]> 
>>> wrote: 
>>> > Hi Linas, 
>>> > 
>>> > How do you propose to learn an ontology from the data -- also, what 
>>> > purpose would, in your opinion, the learned ontology serve? Or stated 
>>> > differently, in what way are you thinking to engender higher-level 
>>> > cognitive capabilities via machine-learned neuron bundles (and 
>>> > implicit ontologies, perhaps)? 
>>> > 
>>> > thank you, 
>>> > 
>>> > Daniel 
>>> > 
>>> > 
>>> > On Wednesday, 19 April 2017 03:40:47 UTC+3, linas wrote: 
>>> >> 
>>> >> 
>>> >> 
>>> >> On Tue, Apr 18, 2017 at 3:22 PM, Alex <[email protected]> wrote: 
>>> >>> 
>>> >>> Maybe we can solve the problem of modelling classes (and of using 
>>> >>> OO and UML notions for knowledge representation) with the following 
>>> >>> (pseudo)code: 
>>> >>> 
>>> >>> - We can define a ConceptNode "Object" that consists of a set of 
>>> >>> properties and functions. 
>>> >>> 
>>> >>> - We can require that any class, e.g. Invoice, inherits from 
>>> >>> Object: 
>>> >>>   IntensionalInheritanceLink 
>>> >>>     Invoice 
>>> >>>     Object 
>>> >>> 
>>> >>> - We can require that any more specific class, e.g. VATInvoice, 
>>> >>> inherits from the more general class: 
>>> >>>   IntensionalInheritanceLink 
>>> >>>     VATInvoice 
>>> >>>     Invoice 
>>> >>> 
>>> >>> - We can require that any instance inherits from a concrete 
>>> >>> class: 
>>> >>>   ExtensionalInheritanceLink 
>>> >>>     invoice_no_2314 
>>> >>>     VATInvoice 
>>> >> 
>>> >> 
>>> >> If you wish, you can do stuff like that. OpenCog per se is agnostic 
>>> >> about how you do this; you can do it however you want. The proper 
>>> >> way to do this is discussed in many places; for example here: 
>>> >> https://en.wikipedia.org/wiki/Upper_ontology 
>>> >> 
>>> >> I'm not particularly excited about building ontologies by hand; it's 
>>> >> much more interesting (to me) to understand how they can be learned 
>>> >> automatically, from raw data. 
>>> >>> 
>>> >>> 
>>> >>> But I don't know yet what can and what cannot be the parent for 
>>> >>> extensional and intensional inheritance. Can an entity be 
>>> >>> extensionally inherited from a more complex object, or can it be 
>>> >>> extensionally inherited from an empty set-placeholder only? When we 
>>> >>> introduce the notion of a set, then a further question always 
>>> >>> arises: does OpenCog make a distinction between sets and proper 
>>> >>> classes? 
>>> >> 
>>> >> 
>>> >> Why? This "distinction" only matters if you want to implement set 
>>> >> theory. My pre-emptive strike to halt this train of thought is this: 
>>> >> why would you want to implement set theory, instead of, say, model 
>>> >> theory or universal algebra, or category theory, or topos theory? 
>>> >> Why the heck would distinguishing a set-theoretical set from a 
>>> >> set-theoretical proper class matter? (Which, oh by the way, is 
>>> >> similar but not the same thing as a category-theoretic proper 
>>> >> class...) 
>>> >> 
>>> >> You've got multiple ideas going here at once: the best way to 
>>> >> hand-craft some ontology; the best theoretical framework to do it 
>>> >> in; the philosophy of knowledge representation in general... and, my 
>>> >> personal favorite: how do I get the machine to do this 
>>> >> automatically, without manual intervention? 
>>> >> 
>>> >>> 
>>> >>> 
>>> >>> There is a second problem as well: there is only one, mixed 
>>> >>> InheritanceLink. One can use SubsetLink for extensional inheritance 
>>> >>> (still, it feels strange), but syntactic sugar is certainly 
>>> >>> necessary for intensional inheritance, because it is hard to write 
>>> >>> and read SubsetLinks of property sets again and again 
>>> >>> (http://wiki.opencog.org/w/InheritanceLink). 
>>> >> 
>>> >> 
>>> >> If the machine has learned an ontology with a million subset links 
>>> >> in it, no human being is ever going to read, or want to read, that 
>>> >> network. It'll be like looking at a bundle of neurons: the best you 
>>> >> can do is say "oh wow, a bundle of neurons!" 
>>> >> 
>>> >> --linas 
>>> >>> 
>>> >> 
>>> >> 
>>>
>>>
>>>
>>> -- 
>>> Ben Goertzel, PhD 
>>> http://goertzel.org 
>>>
>>> "I am God! I am nothing, I'm play, I am freedom, I am life. I am the 
>>> boundary, I am the peak." -- Alexander Scriabin 
>>>
>>
>
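The layered inheritance pseudocode quoted above (invoice_no_2314 → VATInvoice → Invoice → Object) can be sketched as a toy in plain Python. This is only an illustration of the idea, not OpenCog's actual API (which stores such links as atoms in the AtomSpace); the `links` triples and the `inherits` helper are names invented here:

```python
# Toy model: each inheritance link is a (link_type, child, parent) triple,
# mirroring the Atomese-style pseudocode quoted in the thread above.
links = {
    ("IntensionalInheritanceLink", "Invoice", "Object"),
    ("IntensionalInheritanceLink", "VATInvoice", "Invoice"),
    ("ExtensionalInheritanceLink", "invoice_no_2314", "VATInvoice"),
}

def inherits(child, parent, link_types):
    """True if `parent` is reachable from `child` via the given link types."""
    frontier, seen = [child], set()
    while frontier:
        node = frontier.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        # Follow only the requested kinds of inheritance upward.
        frontier.extend(p for (t, c, p) in links
                        if c == node and t in link_types)
    return False

both = {"IntensionalInheritanceLink", "ExtensionalInheritanceLink"}
print(inherits("invoice_no_2314", "Object", both))  # the instance reaches the top class
print(inherits("Invoice", "VATInvoice", both))      # inheritance is not symmetric
```

Passing different `link_types` sets lets one query the intensional and extensional hierarchies separately, which is one way of keeping the "mixed InheritanceLink" concern visible in the model.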

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/9ec7d280-dcc6-44f1-b5ae-8b9731d280a0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
