Re: [agi] Understanding Natural Language

2006-11-24 Thread J. Storrs Hall, PhD.
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:
 You talked mainly about how sentences require vast amounts of external
 knowledge to interpret, but it does not imply that those sentences cannot
 be represented in (predicate) logical form. 

Substitute "bit string" for "predicate logic" and you'll have a sentence that 
is just as true and not a lot less useful.

 I think there should be a 
 working memory in which sentences under attention would bring up other
 sentences by association.  For example, if "a person is being kicked" is in
 working memory, that fact would bring up other facts such as "being kicked
 causes a person to feel pain and possibly to get angry", etc.  All this is
 orthogonal to *how* the facts are represented.
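
(A minimal sketch, purely for illustration, of the associative retrieval described above: facts held in working memory pull in stored facts that share a term with them. The fact format and the toy long-term store are assumptions of this sketch, in Python, not anything proposed in the thread.)

# Toy associative memory: facts are tuples of symbols; a fact held in
# working memory "brings up" any long-term fact sharing a symbol with it.
LONG_TERM = [
    ("kicked", "causes", "pain"),
    ("kicked", "causes", "anger"),
    ("pain", "causes", "crying"),
    ("rain", "causes", "wet-ground"),
]

def associate(working_memory, long_term=LONG_TERM):
    recalled = []
    for fact in working_memory:
        shared = set(fact)
        for stored in long_term:
            if shared & set(stored) and stored not in recalled:
                recalled.append(stored)
    return recalled

print(associate([("person", "is", "kicked")]))
# -> [('kicked', 'causes', 'pain'), ('kicked', 'causes', 'anger')]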

Oh, I think the representation is quite important. In particular, logic lets 
you in for gazillions of inferences that are totally inapropos and no good 
way to say which is better. Logic also has the enormous disadvantage that you 
tend to have frozen the terms and levels of abstraction. Actual word meanings 
are a lot more plastic, and I'd bet internal representations are damn near 
fluid.

 What you have described is how facts in working memory invoke other facts
 to form a complex scenario.  This is what classical AI calls frames; I
 call it working memory.  As Ben pointed out, one of the major challenges in
 AGI is how to control the vast number of facts that follow from or associate
 with the current facts,

What Minsky said was the more important part of his notion -- more important 
than the frames themselves -- was what he called frame-arrays in the early 
papers (I think he adopted some other name, like frame-systems, later).  A frame-array is like a movie with 
frames for the, ah, frames. It can represent what you see as you turn in a 
room, or what happens as you watch a fight. If you look up and down in the 
room, the array may be 2-D; given other actions it may be n-D.

What Minsky doesn't understand, for my money, is that the brain has enough 
oomph to have the equivalent of a fairly substantial processor for every 
frame-array in memory, so they can all be comparing themselves to the item 
of attention all the time. Given that, you can produce a damn good 
predictive model with (a) a representation that allows you to interpolate in 
some appropriate space between frames, and (b) enough experience to have 
remembered arrays in the vicinity of the actual experience you're trying to 
extrapolate. Then take the weighted average of the arrays in the neighborhood 
of the given experience that best approximates it, which gives you a model 
for how it will continue.
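
(A rough Python sketch of the memory-based prediction scheme just described, assuming frame-arrays are stored as trajectories of feature vectors and that "neighborhood" means plain Euclidean distance over the frames seen so far; the names and the inverse-distance weighting are illustrative assumptions, not a worked-out proposal.)

import numpy as np

def predict_next_frame(stored_arrays, current, k=5, eps=1e-9):
    """Predict how the current situation continues by interpolating among
    remembered frame-arrays.

    stored_arrays: list of arrays of shape (T, D), each a remembered
        trajectory of T frames, every frame a D-dimensional feature vector.
    current: array of shape (t, D), the frames observed so far.
    """
    t = len(current)
    candidates = []
    for arr in stored_arrays:
        if len(arr) <= t:
            continue  # too short to tell us how the situation continues
        dist = np.linalg.norm(arr[:t] - current)  # similarity to what we've seen
        candidates.append((dist, arr[t]))         # arr[t] = that memory's "next frame"
    candidates.sort(key=lambda pair: pair[0])
    nearest = candidates[:k]
    if not nearest:
        raise ValueError("no remembered array is long enough to extrapolate from")
    weights = np.array([1.0 / (d + eps) for d, _ in nearest])  # closer memories count more
    nexts = np.stack([nxt for _, nxt in nearest])
    return (weights[:, None] * nexts).sum(axis=0) / weights.sum()

Interpolating between levels of abstraction, which the next paragraph flags as an open question, is exactly what a flat scheme like this does not provide.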

The open questions are representation -- I'm leaning towards CSG in Hilbert 
spaces at the moment, but that may be too computationally demanding -- and 
how to form abstractions. As I noted in the original essay, a key need is to 
be able to do interpolation not only between situations at the same level, 
but between levels as well.

--Josh



Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Ben Goertzel

Oh, I think the representation is quite important. In particular, logic lets
you in for gazillions of inferences that are totally inapropos and no good
way to say which is better. Logic also has the enormous disadvantage that you
tend to have frozen the terms and levels of abstraction. Actual word meanings
are a lot more plastic, and I'd bet internal representations are damn near
fluid.


Logic is a highly generic term ...

I agree with your statement re crisp predicate logic as typically
utilized, but uncertain term logic does provide guidance regarding
which inferences are apropos. It also, however, gets rid of the
elegance and compactness that YKY likes: an uncertain logic
representation of a simple sentence may involve tens of thousands of
contextual, uncertain relationships, possibly including the obvious
ones involved in the standard crisp predicate logic representation...
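
(Purely to illustrate the contrast, in Python: a crisp reading of "John kicks Mary" is a single predicate, while an uncertain, contextual reading attaches strength and context to many related assertions. The format and numbers below are an invented sketch, not the actual term-logic representation of any particular system.)

# Crisp predicate-logic reading: one fact, simply true or false.
crisp = ("kick", "John", "Mary")

# Uncertain, contextual reading: many weighted relationships, of which the
# crisp fact is only one.  Each entry is (relationship, context, strength,
# confidence); the values are made up for the example.
uncertain = [
    (("kick", "John", "Mary"),   "observed-event", 0.95, 0.90),
    (("angry", "John"),          "before-event",   0.70, 0.40),
    (("feels-pain", "Mary"),     "after-event",    0.80, 0.50),
    (("aggressive-act", "kick"), "generic",        0.90, 0.80),
    (("near", "John", "Mary"),   "during-event",   0.99, 0.90),
    # ... and, per the point above, potentially thousands more such links.
]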

-- Ben G



Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Eric Baum

Richard Eric Baum wrote:
 I don't think the proofs depend on any special assumptions about
 the  nature of learning.
 
 I beg to differ.  IIRC the sense of learning they require is
 induction over example sentences.  They exclude the use of real
 world knowledge, in spite of the fact that such knowledge (or at
 least primitives involved in the development of real world
knowledge) are posited to play a significant role in the learning
 of grammar in humans.  As such, these proofs say nothing
 whatsoever about the learning of NL grammars.
 
 I fully agree the proofs don't take into account such stuff.  And I
 believe such stuff is critical. Thus I've never claimed language
 learning was proved hard, I've just suggested evolution could have
 encrypted it.
 
 The point I began with was, if there are lots of different locally
 optimal codings for thought, it may be hard to figure out which one
 is programmed into the mind, and thus language learning could be a
 hard additional problem on top of producing an AGI. The AGI has to
 understand what the word "foobar" means, and thus it has to have
 (or build) a code module meaning "foobar" that it can invoke with
 this word. If it has a different set of modules, it might be sunk
 in communication.
 
 My sense about grammars for natural language is that there are
 lots of different equally valid grammars that could be used to
 communicate.  For example, there are the grammars of English and of
 Swahili. One isn't better than the other. And there is a wide
 variety of other kinds of grammars that might be just as good, that
 aren't even used in natural language, because evolution chose one
 convention at random.  Figuring out what that convention is, is
 hard; at least, linguists have tried hard to do it and failed.  And
 this grammar stuff is pretty much on top of the meanings of the
 words. It serves to disambiguate, for example for error correction
 in understanding. But you could communicate pretty well in pidgin,
 without it, so long as you understand the meanings of the words.
 
 The grammar learning results (as well as the experience of
 linguists, who've tried very hard to build a model for natural
 grammar), I think, indicate that this problem is hard, and it
 seems that this problem is superimposed on top of the real-world
 knowledge aspect.

Richard Eric,

Richard Thank you; I think you have focussed down on the exact nature
Richard of the claim.

Richard My reply could start from a couple of different places in
Richard your above text (all equivalent), but the one that brings out
Richard the point best is this:

 And there is a wide variety of other kinds of grammars that might
 be just as good, that aren't even used in natural language, because
 evolution chose one convention at random.
Richard
Richard ^^

Richard This is precisely where I think the false assumption is
Richard buried.  When I say that grammar learning can be dependent on
Richard real world knowledge, I mean specifically that there are
Richard certain conceptual primitives involved in the basic design of
Richard a concept-learning system.  We all share these primitives,
Richard and [my claim is that] our language learning mechanisms start
Richard from those things.  Because both I and a native Swahili
Richard speaker use languages whose grammars are founded on common
Richard conceptual primitives, our grammars are more alike than we
Richard imagine.

Richard Not only that, but if the Swahili speaker and I suddenly
Richard met and tried to discover each other's languages, we would be
Richard able to do so, eventually, because our conceptual primitives
Richard are the same and our learning mechanisms are so similar.

Richard Finally, I would argue that most cognitive systems, if they
Richard are to be successful in negotiating this same 3-D universe,
Richard would do best to have much the same conceptual primitives
Richard that we do.  This is much harder to argue, but it could be
Richard done.

Richard As a result of this, evolution would not by any means have
Richard been making random choices of languages to implement.  It
Richard remains to be seen just how constrained the choices are, but
Richard there is at least a prima facie case to be made (the one I
Richard just sketched) that evolution was extremely constrained in
Richard her choices.

Richard In the face of these ideas, your argument that evolution
Richard essentially made a random choice from a quasi-infinite space
Richard of possibilities needs a great deal more to back it up.  The
Richard grammar-from-conceptual-primitives idea is so plausible that
Richard the burden is on you to give a powerful reason for rejecting
Richard it.

Richard Correct me if I am wrong, but I see no argument from you on
Richard this specific point (maybe there is one in your book  but
Richard in that case, why say without qualification, as if it was
Richard obvious, that evolution made a random selection?).

Richard Unless you can destroy the 

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Andrii (lOkadin) Zvorygin

It was a true solar-plexus blow, and completely knocked out, Perkins
staggered back against the instrument-board. His outflung arm pushed the
power-lever out to its last notch, throwing full current through the
bar, which was pointed straight up as it had been when they made their
landing.


LOJban: zo'e seDARxi foloMIDjubeloCUTne gi'e CIRko leVACri leFEPri
Prolog: 
gihe(['fa'],darxi(_,_,_,lo('MIDju',[be(lo('CUTne'))])),cirko(_,le('VACri'),le('FEPri'))).
English: unknown is hit at locus that which really is the middle of
the chest and unknown is a loser of air at/near lungs.

LOJban: .i la.PERkinz. FALdzu seka'aleTUTcizeiTANbo
Prolog: gihe(['fa'],faldzu(la('PERkinz')), klama(_,le(zei('TUTci','TANbo'
English: Perkins (fell kind of walked, I don't know what stagger
really means, dictionary didn't help much, walking with uncertainty?
Probably not the intended meaning here.) (with destination) tool kind
of (board/plank).


DARxi dax da'i hit
   x1 hits/strikes/[beats] x2 with instrument [or
   body-part] x3 at locus x4

CIRko cri  lose
   x1 loses person/thing x2 at/near x3; x1 loses
   property/feature x2 in conditions/situation x3

I hope Ben Goertzel has introduced you to Lojban already -- I haven't
checked the logs but saw his lojbanplusplus proposal.

I personally don't understand why everyone seems to insist on using
ambiguous, illogical languages to express things when there are viable
alternatives available. The masses can get their English translation
rendered out of a logical language very easily. It's harder to make a
translation function from English to Lojban than one from Lojban to
English, though it is possible, and I'm sure it will be done, as that
would mean we could have a Prolog database of facts representing any
English text -- such as, say, the book you were referring to.

I'm currently working on a Lojban parser in Prolog. I've just recently
started learning Prolog, though the program is going rather well,
considering. I currently have support for CMAvo (read: grammar word(s))
and GISmu (read: root word(s)); in the example I gave I also used a
LUJvo (read: compound word(s)) and CMEne (read: name(s)), all of which
I have support for in my Haskell (with Parsec) Lojban parser (there is
one more Lojban word class, fu'ivla, for foreign words; I'll get support
for that as well). I started coding the Prolog parser maybe a week
ago, so it should be able to parse all Lojban text before the new
year.

Well, after that there will be a pretty trivial design phase for the
Lojban-to-Prolog translator, which will support direct conversion.
Then comes a rewrite of the Lojban-to-Prolog parser and translator in
Lojban. We'll have the first-ever programming language that is used
for human-to-human interaction (I use it every day): http://lojban.org.
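
(A drastically simplified Python sketch of the two ideas above -- guessing a word's Lojban class from its shape, and mapping a one-selbri bridi into a Prolog-style term like the ones earlier in this message. It ignores most real Lojban morphology (diphthongs, apostrophes, rafsi, full cmene rules), and the function names are invented for illustration.)

VOWELS = "aeiou"

def word_class(w):
    """Very rough Lojban word-class guess; real morphology is much richer."""
    if w.startswith(".") or w.endswith("."):
        return "cmene"                      # names are pause-delimited
    shape = "".join("V" if c.lower() in VOWELS else "C" for c in w if c.isalpha())
    if shape in ("CVCCV", "CCVCV"):
        return "gismu"                      # five-letter root word
    if "CC" not in shape and len(w) <= 4:
        return "cmavo"                      # short grammar word
    return "lujvo/fu'ivla"                  # compound or borrowed word

def bridi_to_prolog(tokens):
    """Map a toy bridi such as ['mi', 'klama', 'le', 'zarci'] to a
    Prolog-style term, treating le/la as one-place wrappers."""
    selbri, args, i = None, [], 0
    while i < len(tokens):
        t = tokens[i]
        if t in ("le", "la"):
            args.append(f"{t}('{tokens[i + 1]}')")
            i += 2
        elif selbri is None and word_class(t) == "gismu":
            selbri = t
            i += 1
        else:
            args.append(t)
            i += 1
    return f"{selbri}({', '.join(args)})."

print(bridi_to_prolog(["mi", "klama", "le", "zarci"]))
# -> klama(mi, le('zarci')).   ("I go to the market.")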


mibaziKLAma(I short time interval future am goer)
.imu'o(over to you)mi'e.LOkadin.(my name is Lokadin.)



On 11/24/06, Ben Goertzel [EMAIL PROTECTED] wrote:

 Oh, I think the representation is quite important. In particular, logic lets
 you in for gazillions of inferences that are totally inapropos and no good
 way to say which is better. Logic also has the enormous disadvantage that you
 tend to have frozen the terms and levels of abstraction. Actual word meanings
 are a lot more plastic, and I'd bet internal representations are damn near
 fluid.

Logic is a highly generic term ...

I agree with your statement re crisp predicate logic as typically
utilized, but uncertain term logic does provide guidance regarding
which inferences are apropos It also however gets rid of the
elegance and compactness that YKY likes: an uncertain logic
representation of a simple sentence may involve tens of thousands of
contextal, uncertain relationships, possibly including the obvious
ones involved in the standard crisp predicate logic representation...

-- Ben G





--
ta'o(by the way)  We With You Network at: http://lokiworld.org .i(and)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Ben Goertzel

Richard,

I know it's peripheral to your main argument, but in this example ...


Suppose that the computational effort that evolution needs to build
different sized language understanding mechanisms scales as:

2.5 * (N/7 + 1)^6 planet-years

... where different sized is captured by the value N, which is the
number of conceptual primitives used in the language understanding
mechanism, and a planet-year is one planet worth of human DNA randomly
working on the problem for one year.  (I am plucking this out of the
air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

N   R

1   2.23E+000
7   6.40E+001
10  2.05E+002
50  2.92E+005
100 1.28E+007
300 7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)

I am assuming that the appropriate measure of size of problem is number
of conceptual primitives that are involved in the language understanding
mechanism (a measure picked at random, and as far as I can see, as
likely a measure as any, but if you think something else should be the
N, be my guest).

If there were 300 conceptual primitives in the human LUM, resource
requirement would be 7 billion planet-years.  That would be bad.

But if there are only 7 conceptual primitives, it would take 64 years.
Pathetically small and of no consequence.

The function is polynomial, so in a sense you could say this is an
NP-hard problem.
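
(A few lines, just as a sketch, to evaluate the quoted growth term for the listed values of N. Note that the tabulated R values correspond to (N/7 + 1)^6 on its own, without the 2.5 prefactor -- which hardly matters, since the example is explicitly plucked out of the air.)

# Reproduce the quoted table from the growth term (N/7 + 1)**6.
for n in (1, 7, 10, 50, 100, 300):
    r = (n / 7 + 1) ** 6
    print(f"N = {n:3d}   R = {r:.2E} planet-years")
# N =   1   R = 2.23E+00 planet-years
# ...
# N = 300   R = 7.12E+09 planet-years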


I don't think you're using the term NP-hard correctly.

http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP


The class P consists of all those decision problems that can be solved
on a deterministic sequential machine in an amount of time that is
polynomial in the size of the input; the class NP consists of all
those decision problems whose positive solutions can be **verified**
in polynomial time given the right information.


[This page also reviews, and agrees with, many of your complaints
regarding the intuitive interpretation of P as easy and NP as hard]

http://en.wikipedia.org/wiki/NP-hard


In computational complexity theory, NP-hard (Non-deterministic
Polynomial-time hard) refers to the class of decision problems H such
that for every decision problem L in NP there exists a polynomial-time
many-one reduction to H, written L ≤m H. If H itself is in NP, then H is
called NP-complete.


-- Ben G



Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

I know it's peripheral to your main argument, but in this example ...


Suppose that the computational effort that evolution needs to build
different sized language understanding mechanisms scales as:

2.5 * (N/7 + 1)^6 planet-years

... where different sized is captured by the value N, which is the
number of conceptual primitives used in the language understanding
mechanism, and a planet-year is one planet worth of human DNA randomly
working on the problem for one year.  (I am plucking this out of the
air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

N   R

1   2.23E+000
7   6.40E+001
10  2.05E+002
50  2.92E+005
100 1.28E+007
300 7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)

I am assuming that the appropriate measure of size of problem is number
of conceptual primitives that are involved in the language understanding
mechanism (a measure picked at random, and as far as I can see, as
likely a measure as any, but if you think something else should be the
N, be my guest).

If there were 300 conceptual primitives in the human LUM, resource
requirement would be 7 billion planet-years.  That would be bad.

But if there are only 7 conceptual primitives, it would take 64 years.
Pathetically small and of no consequence.

The function is polynomial, so in a sense you could say this is an
NP-hard problem.


I don't think you're using the term NP-hard correctly.

http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP


The class P consists of all those decision problems that can be solved
on a deterministic sequential machine in an amount of time that is
polynomial in the size of the input; the class NP consists of all
those decision problems whose positive solutions can be **verified**
in polynomial time given the right information.


[This page also reviews, and agrees with, many of your complaints
regarding the intuitive interpretation of P as easy and NP as hard]

http://en.wikipedia.org/wiki/NP-hard


In computational complexity theory, NP-hard (Non-deterministic
Polynomial-time hard) refers to the class of decision problems H such
that for every decision problem L in NP there exists a polynomial-time
many-one reduction to H, written L ≤m H. If H itself is in NP, then H is
called NP-complete.



I'd certainly welcome clarification, and I may have gotten this wrong... 
but I'm not quite sure where you are directing my attention here.


Are you targeting the fact that NP-Hard is defined with respect to 
decision problems, or to the reduction aspect?


My understanding of NP-hard is that it does strictly only apply to 
decision problems ... but what I was doing was trying to interpret the 
loose sense in which Eric himself was using NP-Hard, so if I have 
stretched the definition a little, I would claim I was inheriting 
something that was already stretched.


But maybe that was not what you meant.  I stand ready to be corrected, 
if it turns out I have goofed.




Richard Loosemore.

