OK, let me try to clarify more thoroughly...

Firstly, I take as the premise of my discussion here that we are
building an AGI system which has explicit, abstract logical inference
as a significant component (e.g. PLN).  If you want to argue for a
purely subsymbolic AGI path, I'm not going to dispute the viability of
such a path, but I don't think it's optimal, and in any case my
suggestion of Lojban for OpenCog is premised on PLN being a big
component of OpenCog...

The question then is how to map natural-language relationships into
logical relationships.  Four approaches are obvious given current
technologies:

1) Hand-code mapping rules in some form

2) Learn mapping rules via supervised learning, from a training corpus

3) Learn mapping rules via unsupervised learning, from e.g. a big
corpus of texts or speech

4) Learn mapping rules via an embodied system's experience, i.e. via
reinforcement and imitation learning combined with unsupervised
learning

...

(4) is obviously the most appealing to me.  For (4) to work, one
probably needs to hand-code mappings from nonlinguistic perception
(e.g. vision and audition) into logical representation; but this is
perhaps less problematic than hand-coding mappings from language into
logic, because vision and audition have simpler structures in certain
respects.
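To make the perception-to-logic idea a bit more concrete, here is a toy sketch (in Python, with entirely hypothetical names; this is not actual OpenCog/Atomese code) of how a hand-coded rule might turn a structured visual percept into simple predicate-logic assertions:

```python
# Illustrative sketch only: a hand-coded rule mapping a structured
# visual percept into simple predicate-logic assertions.  All names
# here are hypothetical, not actual OpenCog/Atomese API.

def percept_to_logic(percept):
    """Turn a structured visual percept into (predicate, args) tuples."""
    return [
        ("isa",   (percept["object_id"], percept["category"])),
        ("color", (percept["object_id"], percept["color"])),
        ("at",    (percept["object_id"], percept["location"])),
    ]

# "a red ball at location (3, 4)"
percept = {"object_id": "obj1", "category": "ball",
           "color": "red", "location": (3, 4)}
print(percept_to_logic(percept))
```

The point of the sketch is just that a percept's structure (object, category, attributes, location) decomposes into logical predicates far more directly than a natural-language sentence does.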

Without hand-coded mappings from nonlinguistic perception into logic,
it's hard to see how (4) would work *unless* one were also willing to
have the logic itself emerge via reinforcement / imitation /
unsupervised learning.  That is, unless one were willing to give up
starting from a fixed logic like PLN and let the logic be learned...
I think this is possible, but IMO it gets into "evolution of a brain
architecture" territory rather than "learning within a brain
architecture" territory...

What I am hoping to do is seed (4) with a combination of (1) and (3).

Specifically, regarding (3), Linas and I already wrote a paper
pointing in the direction of what we want to do....

https://arxiv.org/abs/1401.3372

However, at the moment I don't personally see how that approach is
going to let us learn something analogous to the RelEx2Logic rules.  I
think it can let us learn something analogous to the link-parser
grammar plus the RelEx rules.  But I don't see how the unsupervised
learning paradigm we describe there is going to learn rules that
connect to PLN logic specifically...  I can sort of imagine how this
might happen, but it seems really hard...

So then we could use our unsupervised learning method for (3), and
then do (4) just for learning R2L-type rules.  That might be viable...

However, Lojban seems to me like it could yield a robust way of doing
(1), which could potentially accelerate the overall process of making
an AGI that really understands language...

Our current R2L rule-base is kind of a mess and is also very
incomplete.  So if we're going to do practical NLP dialogue
applications with OpenCog in the near future, we need to either
extend/improve R2L or replace it.  Taking approach (4), or "(4) on top
of (3)", is too researchy and difficult to be relevant to near-term
application development, though it's an important research direction...

The value of Lojban for an R2L-type layer is based on the facts that:

A) Lojban maps directly into predicate logic, and hence into
PLN-friendly Atomese

B) Lojban expresses everything that natural language expresses, in
ways that are reasonably elegant, already worked out by other people,
and honed by decades of practice

(The current system of R2L outputs, on the other hand, is rather
unsystematic and messy, and turning it into something elegant and
coherent would be a lot of work...)

C) by generating RelEx2Lojban or LinkGrammar2Lojban rules from a
parallel English/Lojban corpus, one avoids hand-coding any rules...
instead one can use this sample corpus to generate an R2L-like layer
for any syntax parser, including one learned via (3) or (3)+(4)... or
Google's newly released parser... or whatever...

D) unlike hand-coding R2L rules, this approach is largely
language-independent (to extend it to a new language, one only needs
to create a parallel corpus pairing Lojban with that language)
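As a toy sketch of the corpus-driven idea in (C) and (D) (purely illustrative: the relation names and alignment data below are invented, and in reality the pairs would be extracted from parsed parallel English/Lojban text), one could count how often each English dependency relation aligns with each Lojban argument place, and keep the dominant mappings as candidate rules:

```python
from collections import Counter

# Illustrative sketch: each training pair aligns an English dependency
# relation (from a syntax parser) with the Lojban predicate place its
# argument ended up in.  In practice these pairs would come from a
# parsed parallel corpus; here they are made up.
aligned_pairs = [
    ("nsubj", "x1"), ("nsubj", "x1"), ("nsubj", "x2"),
    ("dobj", "x2"), ("dobj", "x2"),
    ("iobj", "x2"),
]

def extract_rules(pairs, min_support=2):
    """For each English relation, keep its most frequent Lojban place,
    provided the pairing was observed at least min_support times."""
    counts = Counter(pairs)
    best = {}
    for (rel, place), n in counts.items():
        if n >= min_support and (rel not in best or n > best[rel][1]):
            best[rel] = (place, n)
    return {rel: place for rel, (place, _) in best.items()}

print(extract_rules(aligned_pairs))  # {'nsubj': 'x1', 'dobj': 'x2'}
```

The same extraction code is language-independent: swap in a parallel corpus for another language, and it yields mapping rules for that language's parser instead.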

...

Regarding B, please do not minimize this point.  FrameNet doesn't do
this, CycL doesn't do this, SUMO doesn't do this... the system of R2L
outputs doesn't currently do this... Lojban does this...

I hope this long email at least conveys my line of thinking a bit better...

...

The point is not relabeling ConceptNodes with Lojban word-names
instead of English word-names.  The point is that Lojban contains

B1) a more complete and commonsensical list of argument structures for
verbs than FrameNet provides

B2) systematic, commonsensical ways of dealing with everyday uses of
time, space, conjunction, possession, comparisons, etc. etc. in formal
logic

It's not the Lojban word-names that matter, it's the precisely-stated
logical relationships between the Lojban words...
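To illustrate (purely a sketch, not actual OpenCog code): each Lojban gismu has a fixed, documented place structure, e.g. tavla is "x1 talks to x2 about subject x3 in language x4", so a parsed bridi maps mechanically onto an applied predicate:

```python
# Illustrative sketch: Lojban gismu place structures give each predicate
# a fixed, ordered argument list, so a parsed bridi translates
# mechanically into predicate logic.  (Not real OpenCog/PLN code.)

# A tiny fragment of the gismu lexicon: predicate -> named places x1..xn.
PLACE_STRUCTURES = {
    "tavla": ["speaker", "listener", "topic", "language"],
    "klama": ["goer", "destination", "origin", "route", "means"],
}

def bridi_to_logic(selbri, sumti):
    """Map a parsed bridi (predicate plus ordered arguments) onto a
    predicate-logic term, pairing each argument with its place name."""
    places = PLACE_STRUCTURES[selbri]
    return (selbri, dict(zip(places, sumti)))

# "mi tavla do" -- "I talk to you" (places x3 and x4 left unfilled)
print(bridi_to_logic("tavla", ["mi", "do"]))
# -> ('tavla', {'speaker': 'mi', 'listener': 'do'})
```

It is exactly this fixed predicate-plus-places discipline, not the word labels themselves, that makes the mapping into PLN-style logic so direct.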

-- Ben

On Sat, Jul 9, 2016 at 12:09 PM, Matt Chapman <[email protected]> wrote:
> How does storing ConceptNode atoms with lojbanic labels improve over storing
> atoms with English labels? For practical applications, it seems like it
> would unnecessarily increase the size of the atomspace, and for training
> data, I expect there are vastly many more English to $X translation examples
> than Lojban to $X. Lojban is a fun toy, but like Linas, I don't see the
> problem that is being solved here. Sure Lojban has fewer rules to encode,
> but you still end up manually encoding them, as far as I can tell. Maybe it
> feels less like cheating because writing lojban feels like writing code to
> begin with...
>
> All the Best,
>
> Matt
>
> --
> Standard Disclaimer:
> Please interpret brevity as me valuing your time, and not as any negative
> intention.
>
> On Fri, Jul 8, 2016 at 5:03 PM, Linas Vepstas <[email protected]>
> wrote:
>>
>> FWIW, I am virulently anti-lojban, because mostly I believe it doesn't
>> solve any problems that we actually have. --linas
>>
>> On Fri, Jul 8, 2016 at 9:12 AM, Jim Rutt <[email protected]> wrote:
>>>
>>> I like this idea very much.  I'm currently considering Lojban as a
>>> "knowledge engineering" language for a "really smart AI for games" project
>>> I'm starting to spin up.  Prior to full on AGI I see some fruitful problems
>>> to be solved using "sort of AGIish" software that depends on human created
>>> domain specific declarative knowledge.  My hypothesis is that there is a
>>> useful and talented - and not too expensive - class of human talent that can
>>> learn Lojban well who would not be appropriate for using tools that are less
>>> human language-like.  These might include very bright but highly
>>> anti-quantitative liberal arts grads.  Lojban strikes me as a potentially
>>> quite good adapter between the world of humans and the world of machines.
>>>
>>> ko pilno lo clearer pensi la lojban
>>>
>>> jim
>>>
>>>
>>> On Fri, Jul 8, 2016 at 7:44 AM, Ben Goertzel <[email protected]> wrote:
>>>>
>>>> Here is a modest proposal, which would replace Relex2Logic with
>>>> something vaguely similar in spirit but much superior,
>>>>
>>>> http://wiki.opencog.org/wikihome/index.php/Lojbanic_Relex2Logic
>>>>
>>>> Actually it's a bit closer to the spirit of the bad old RelEx2Frame,
>>>> but with the significant difference that Lojban is a language with
>>>> complete coverage of everyday semantics, whereas FrameNet is sorely
>>>> limited and hasn't been honed by usage...
>>>>
>>>> -- Ben
>>>>
>>>>
>>>> --
>>>> Ben Goertzel, PhD
>>>> http://goertzel.org
>>>>
>>>> Super-benevolent super-intelligence is the thought the Global Brain is
>>>> currently struggling to form...
>>>
>>>
>>>
>>>
>>> --
>>> ===========================
>>> Jim Rutt
>>> JPR Ventures
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "opencog" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at https://groups.google.com/group/opencog.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/opencog/CAPzPGw7p7MKC2d0MucQf8s4AimtSu-rNtkmm8Pprf03iJhJVdA%40mail.gmail.com.
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>
>>
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...

