On Thu, Apr 20, 2017 at 11:19 AM, Daniel Gross <[email protected]> wrote:

> Hi Linas,
>
> Thank you for your responses, and the pointer.
>
> It seems to me that your example further pinpoints my question:
>
> A quasi-linear walk through a semantic network is essentially a
> constructed structure (or path), built through the use of grammar, to get
> at a possible reading of a sentence that would make sense to a person
> within a "semantic space", without, however, capturing meaning per se. A
> lexicon, say, "merely" captures the rules of construction of particular
> given verbs and nouns, *based* on their human-interpreted meaning.
>
> Hence, grammar's purpose seems really "only" to be to construct a
> meaningful path, rather than to tell us what the meaning of the knowledge
> embodied in that path is. The latter seems to require another "kind" of
> semantics/meaning
> (and perhaps some might say that there are turtles all the way down -- or
> at least until some grounding).
>
> does my intuition make sense,
>

Yes, sort of.  If one is a structuralist (and I guess I am one), then *all*
knowledge is encoded as structure.   Therefore, anything that contains
structure does in fact encode some amount of knowledge; and the question is
"how much knowledge"?

To illustrate: what do we know about "turtles"? They have four feet, or
maybe flippers. They have a hard shell, except when they don't. Most people
bring up one or more "mental images" (literally, photo-like
representations) when they think of turtles, although the precise images
are highly variable from individual to individual.  What more can one say
about turtles? The layman, not much; the specialist, a whole lot more.

OK, so what about four feet? what can one say about feet? ... flippers ...?
well those are just more networks of knowledge: just more inter-related
facts.

Is the meaning of "turtle" anything more than just this network of
mostly-factual beliefs?  I claim that it isn't. If you disagree, it
might be because you don't understand what I mean by a "network of
mostly-factual beliefs", and we can try to clarify this further.  That
network is very much tied into my sense of self, into my core belief-system
and world-view, which will be different from yours, and that is, in turn,
different from that of wikipedia.  In particular, wikipedia does not eat,
sleep or breathe, and most people would argue that it's not even alive.
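To make the "network of mostly-factual beliefs" idea concrete, here is a
minimal toy sketch in Python. It is not OpenCog's actual representation, and
all the relation names and facts are invented for the example; the point is
only that a concept's "meaning" can be modeled as nothing over and above the
set of factual relations it participates in.

```python
# Toy belief network: (subject, relation, object) triples.
# All names and facts here are hypothetical, chosen for illustration.
beliefs = [
    ("turtle", "has-part", "four feet"),
    ("turtle", "has-part", "hard shell"),
    ("turtle", "evokes", "mental image"),
    ("four feet", "is-a", "body part"),
    ("hard shell", "protects", "turtle"),
]

def neighborhood(concept, triples):
    """The local 'meaning' of a concept: every fact that mentions it."""
    return [t for t in triples if concept in (t[0], t[2])]

print(neighborhood("turtle", beliefs))
```

Note that `neighborhood("four feet", beliefs)` is just another, smaller
network of the same kind, which is the "turtles all the way down" point:
each node's meaning bottoms out only in more relations.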

Where do we go from here? Well, "turtle" is associated with images, so AGI
needs an image-processing subsystem. And one can hold a turtle in one's
hands, and so sensory-motor capabilities are needed.  Humans get emotional
when they do things, so if an AGI wants to understand how humans feel when
they hold turtles in their hands, they may want to have some model of
hormones and emotional outbursts typical of humans: it's all part of
understanding turtle-ness.  Again, turtle-ness for humans is different from
turtle-ness for wikipedia, since wikipedia knows more than any human, and
yet, wikipedia has no hormones or circulatory system and doesn't express
emotional outbursts. Heck, wikipedia cannot hold a turtle in its hands,
because it has no hands. Heck, wikipedia has no mechanical attachments
whatsoever, and if it did, it would be incapable of moving them.  Nor does
it have vision, or smell, or touch.  It would need all of that to understand
turtle-ness more fully.

Wikipedia has no dynamical system; it cannot alter itself. Or it almost
sort-of can: it has "bots" which alter it, but those bots are under human
guidance, and are very very weak. Imagine a bot that could observe and
read, and then edit wikipedia articles, based on the new knowledge it
obtained. That would be a pretty large step towards what we call "AGI".

Gosh, I make it sound so simple. What's the problem? What's taking so long?
How come no one has done this yet?

--linas

>
> thank you,
>
> Daniel
>
>
> On Thursday, 20 April 2017 16:59:38 UTC+3, linas wrote:
>>
>> Semantics and syntax are two different things. Syntax allows you to parse
>> sentences. Semantics is more about how concepts inter-relate with each
>> other -- a network. A sentence tends to be a quasi-linearized walk through
>> such a network. For example, take a look at the "deep" and the "surface"
>> structures in meaning-text theory.  From there, one asks "what kind of
>> speech acts are there?" and "why do people talk?", and this would be the
>> "next level", beyond the homework exercise I mentioned in the previous
>> email.
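The "quasi-linearized walk" described in the quoted paragraph can be
sketched in a few lines of Python. This is a toy illustration only: the
graph, the words, and the single-edge walk policy are all invented, and
real linearization (as in meaning-text theory) is far richer. The sketch
just shows a path through a semantic network being flattened into a
sentence-like word sequence.

```python
# Toy semantic network: node -> list of (relation, neighbor) edges.
# All nodes, relations, and the walk itself are invented for illustration.
network = {
    "cat": [("chases", "mouse")],
    "mouse": [("eats", "cheese")],
    "cheese": [],
}

def linearize(start, graph):
    """Walk the graph from `start`, emitting words in visit order."""
    words, node = [start], start
    while graph[node]:
        relation, nxt = graph[node][0]  # follow the first outgoing edge
        words += [relation, nxt]
        node = nxt
    return " ".join(words)

print(linearize("cat", network))  # "cat chases mouse eats cheese"
```

The grammar's job, in this picture, is to pick and order such a walk so
that a listener can reconstruct the path through their own network.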
>>
>> --linas
>>
>> On Wed, Apr 19, 2017 at 7:23 PM, Daniel Gross <[email protected]> wrote:
>>
>>> Hi Ben,
>>>
>>> Thank you for your response. I started reading the paper and was
>>> wondering if you could help me clarify a confusion I apparently have when
>>> it comes to the meaning of meaning:
>>>
>>> How is linguistic meaning connected to the human embodied meaning that
>>> we would call human (or AGI) understanding?
>>>
>>> Linguistic meaning seems to be about the linguistic meta-language that
>>> shows how a human would parse a sentence unambiguously, so that a human
>>> can, in principle, understand the meaning of a sentence; although what
>>> is, say, instructed by a sentence, as understood by a human, seems not to
>>> be captured, but would require more machinery.
>>>
>>> In this sense, linguistic machinery seems to embody (as a theory of
>>> mind) how humans understand (in a cognitive economical manner), rather than
>>> what humans understand --at least this is what confuses me ...
>>>
>>> any thoughts would be much appreciated ...
>>>
>>> thank you,
>>>
>>> Daniel
>>>
>>>
>>>
>>>
>>>
>>> On Wednesday, 19 April 2017 09:16:42 UTC+3, Ben Goertzel wrote:
>>>>
>>>> We have a probabilistic logic engine (PLN) which works on (optionally
>>>> probabilistically labeled) logic expressions....   This logic engine
>>>> can also help with extracting semantic information from natural
>>>> language or perceptual observations.  However, it's best used together
>>>> with other methods that carry out "lower levels" of processing in
>>>> feedback and cooperation with it...
>>>>
>>>> In the case of vision, Ralf Mayet is leading an effort to use a
>>>> modified InfoGAN deep NN to extract semantic information from
>>>> images/videos/sounds to pass into PLN, the Pattern Miner, and so forth
>>>>
>>>> In the case of textual language, Linas is leading an effort to extract
>>>> a first pass of semantic and syntactic information from unannotated
>>>> text corpora via this general approach
>>>>
>>>> https://arxiv.org/abs/1401.3372
>>>>
>>>> The same approach should work when non-textual groundings are included
>>>> in the corpus, or when the learning is real-time experiential rather
>>>> than batch-based.... but there's plenty of nitty-gritty work here...
>>>>
>>>> ben goertzel
>>>>
>>>> On Wed, Apr 19, 2017 at 7:23 AM, Daniel Gross <[email protected]>
>>>> wrote:
>>>> > Hi Linas,
>>>> >
>>>> > How do you propose to learn an ontology from the data -- also, what
>>>> > purpose would, in your opinion, the learned ontology serve? Or, stated
>>>> > differently, in what way are you thinking to engender higher-level
>>>> > cognitive capabilities via machine-learned neuron bundles (and
>>>> > implicit ontologies, perhaps)?
>>>> >
>>>> > thank you,
>>>> >
>>>> > Daniel
>>>> >
>>>> >
>>>> > On Wednesday, 19 April 2017 03:40:47 UTC+3, linas wrote:
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Apr 18, 2017 at 3:22 PM, Alex <[email protected]> wrote:
>>>> >>>
>>>> >>> Maybe we can solve the problem about modelling classes (and using
>>>> OO and
>>>> >>> UML notions for knowledge representation) with the following
>>>> (pseudo)code
>>>> >>>
>>>> >>> - We can define a ConceptNode "Object" that consists of a set of
>>>> >>> properties and functions
>>>> >>>
>>>> >>> - We can require that any class, e.g. Invoice, inherits from Object:
>>>> >>>   IntensionalInheritanceLink
>>>> >>>     Invoice
>>>> >>>     Object
>>>> >>>
>>>> >>> - We can require that any more specific class, e.g. VATInvoice,
>>>> >>> inherits from the more general class:
>>>> >>>   IntensionalInheritanceLink
>>>> >>>     VATInvoice
>>>> >>>     Invoice
>>>> >>>
>>>> >>> - We can require that any instance inherits from a concrete class:
>>>> >>>   ExtensionalInheritanceLink
>>>> >>>     invoice_no_2314
>>>> >>>     VATInvoice
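The extensional/intensional distinction in the quoted proposal can be
modeled with plain sets, as a rough sketch. This is not OpenCog's actual
type system or truth-value machinery; the class names, instance names, and
property names are all invented. The sketch only captures the usual
intuition behind the two link flavors: extensional inheritance compares
instance sets, while intensional inheritance compares property sets.

```python
# Toy model (not OpenCog's actual types) of the two inheritance flavors.
# All data below is hypothetical, chosen for illustration only.
instances = {
    "Invoice": {"invoice_no_2314", "invoice_no_0007"},
    "VATInvoice": {"invoice_no_2314"},
}
properties = {
    "Invoice": {"has_number", "has_total"},
    "VATInvoice": {"has_number", "has_total", "has_vat_rate"},
}

def extensionally_inherits(child, parent):
    """Every instance of `child` is also an instance of `parent`."""
    return instances[child] <= instances[parent]

def intensionally_inherits(child, parent):
    """`child` carries all of `parent`'s defining properties."""
    return properties[parent] <= properties[child]

print(extensionally_inherits("VATInvoice", "Invoice"))  # True
print(intensionally_inherits("VATInvoice", "Invoice"))  # True
```

Note the direction flip: the more specific class has *fewer* instances but
*more* properties, which is why the two notions need separate links.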
>>>> >>
>>>> >>
>>>> >> If you wish, you can do stuff like that. opencog per se is agnostic
>>>> >> about how you do this; you can do it however you want. The proper way
>>>> >> to do this is discussed in many places; for example here:
>>>> >> https://en.wikipedia.org/wiki/Upper_ontology
>>>> >>
>>>> >> I'm not particularly excited about building ontologies by hand; it's
>>>> >> much more interesting (to me) to understand how they can be learned
>>>> >> automatically, from raw data.
>>>> >>>
>>>> >>>
>>>> >>> But I don't know yet what can and what cannot be the parent for
>>>> >>> extensional and intensional inheritance. Can an entity be
>>>> >>> extensionally inherited from a more complex object, or can it be
>>>> >>> extensionally inherited from an empty set-placeholder only? When we
>>>> >>> introduce the notion of a set, then a further question always
>>>> >>> arises: does OpenCog make a distinction between sets and proper
>>>> >>> classes?
>>>> >>
>>>> >>
>>>> >> Why? This "distinction" only matters if you want to implement set
>>>> >> theory. My pre-emptive strike to halt this train of thought is this:
>>>> >> Why would you want to implement set theory, instead of, say, model
>>>> >> theory or universal algebra, or category theory, or topos theory?
>>>> >> Why the heck would distinguishing a set-theoretical set from a
>>>> >> set-theoretical proper class matter? (Which, oh by the way, is
>>>> >> similar to but not the same thing as a category-theoretic proper
>>>> >> class...)
>>>> >>
>>>> >> You've got multiple ideas going here at once: the best way to
>>>> >> hand-craft some ontology; the best theoretical framework to do it in;
>>>> >> the philosophy of knowledge representation in general... and, my
>>>> >> personal favorite: how do I get the machine to do this automatically,
>>>> >> without manual intervention?
>>>> >>
>>>> >>>
>>>> >>>
>>>> >>> There is a second problem as well - there is only one, mixed,
>>>> >>> InheritanceLink. One can use SubsetLink for extensional inheritance
>>>> >>> (still, it feels strange), but some syntactic sugar is certainly
>>>> >>> necessary for intensional inheritance, because it is hard to write
>>>> >>> and read SubsetLinks of property sets again and again
>>>> >>> (http://wiki.opencog.org/w/InheritanceLink).
>>>> >>
>>>> >>
>>>> >> If the machine has learned an ontology with a million subset links
>>>> >> in it, no human being is ever going to read, or want to read, that
>>>> >> network. It'll be like looking at a bundle of neurons: the best you
>>>> >> can do is say "oh wow, a bundle of neurons!"
>>>> >>
>>>> >> --linas
>>>> >>>
>>>> >>>
>>>> >>> --
>>>> >>> You received this message because you are subscribed to the Google
>>>> >>> Groups "opencog" group.
>>>> >>> To unsubscribe from this group and stop receiving emails from it,
>>>> >>> send an email to [email protected].
>>>> >>> To post to this group, send email to [email protected].
>>>> >>> Visit this group at https://groups.google.com/group/opencog.
>>>> >>> To view this discussion on the web visit
>>>> >>> https://groups.google.com/d/msgid/opencog/a6d0102e-9ca1-4204-8dd4-75a9fb2ec06b%40googlegroups.com.
>>>> >>>
>>>> >>> For more options, visit https://groups.google.com/d/optout.
>>>> >>
>>>> >>
>>>>
>>>>
>>>>
>>>> --
>>>> Ben Goertzel, PhD
>>>> http://goertzel.org
>>>>
>>>> "I am God! I am nothing, I'm play, I am freedom, I am life. I am the
>>>> boundary, I am the peak." -- Alexander Scriabin
>>>>
>>>
>>

