Re: Re: [agi] What would it take to move rapidly toward advanced AGI?

2010-10-18 Thread Andrii Zvorygin
On Sat, Oct 16, 2010 at 5:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 However, programming
  languages are fundamentally different from natural language in that
 (1) they have a precise grammar and semantics,

That's an advantage,
since you can precisely define what you wish to express.


 Only when talking to computers. Humans don't learn language that way. Humans
 learn incrementally yet are still able to use partially learned languages.

Ya, well, I don't see any difference.
Computers also learn incrementally:
they start with the BIOS, then the kernel, then system services.
The same goes for compilers:
they have a base vocabulary (assembly)
that's extended by the standard library,
which can be further extended by third-party libraries.

Humans likewise learn the meaning of words
through a complex set of sensory cues
associated with each word,

just as a computer can learn the meaning of a word
by reading in a library or dictionary what it means.

Partially learned languages can also be used,
since functional correlations between one word and another can be identified.
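A minimal sketch of the two ideas above: a program "learns" a word from a dictionary entry, and relates partially learned words by how much vocabulary their definitions share. The three-entry dictionary is invented for illustration; a real system would use a full lexical resource.

```python
# Hypothetical toy dictionary; a real system would load a lexical database.
dictionary = {
    "dog":  "a domesticated animal that barks",
    "cat":  "a domesticated animal that meows",
    "bark": "the sound a dog makes",
}

def learn(word):
    """Learn a word's meaning as the set of words in its definition."""
    return set(dictionary[word].split())

def correlation(w1, w2):
    """Count defining words shared by two entries: a crude
    'functional correlation' between partially learned words."""
    return len(learn(w1) & learn(w2))

print(correlation("dog", "cat"))   # 4 shared defining words
```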

 and (3) the complexity is on the order of 10^5 to 10^6 bits vs. 10^9 bits
 for natural language.

Again, it's an advantage,
as it would require fewer resources to do so.

 You can't express much with such a small language.


I'd say it's difficult to express things with the gooey inconsistency
found in natural language,
simply due to people not having a common dictionary,
or many people not using dictionaries at all.

It's far easier to express yourself precisely
when there are standard words
and a consistent grammar.

Another advantage of HSPL
is that it's easy to learn,
due to its simplicity.

So it could be used as an intermediary language,
especially for international negotiations,
where clarity and precision
may be valued.


---
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/8660244-d750797a
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Re: [GI] Digest for general-intellige...@googlegroups.com - 10 Messages in 2 Topics

2010-10-18 Thread Linas Vepstas
On 17 October 2010 18:20, Ben Goertzel b...@goertzel.org wrote:
 In other words, using formal grammar actually makes it harder to establish
 the connection at the NL-logic interface. IE, it is harder to translate NL
 sentences to formal grammar than to formal logic.

 KY

 Quite the opposite, actually.

 Translating an NL sentence to a *set* of grammatical trees,
 representing syntactically possible parses, is almost a solved
 problem.  E.g. the Stanford parser or the link parser do that.

 Then, translating each of these grammatical trees into a *set* of
 formal logic expressions, each representing a possible semantic
 interpretation of the tree, is a partially-solved problem.  E.g.
 OpenCog's RelEx and RelEx2Frame components and Cyc's NL subsystem both
 do that (in different ways), though not perfectly.

 So based on the current state of the art, it seems that turning NL
 into a formal grammar (e.g. a dependency grammar) is significantly
 less problematic than turning NL into logic, because forming the logic
 representation requires resolving additional ambiguity, beyond that
 which must be resolved to form the formal-grammar representation
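The two-stage pipeline described above can be caricatured in a few lines: one sentence yields a (here, single, hand-written) syntactic parse, and each parse yields a *set* of candidate logic readings, so the extra ambiguity appears only at the second stage. The parser and the readings are toy stand-ins, not the Stanford parser, the link parser, or RelEx.

```python
# Stage 1: sentence -> dependency triple (subject, verb, object).
# A real parser would return a set of full trees; this toy assumes
# a three-word "subject verb object" sentence and one parse.
def parse(sentence):
    words = sentence.rstrip(".").split()
    return [(words[0], words[1], words[2])]

# Stage 2: each tree -> several candidate logic readings.
# This is where the additional ambiguity must be resolved.
def to_logic(tree):
    subj, verb, obj = tree
    return [
        f"{verb}({subj}, {obj})",  # simple predicate reading
        f"exists e. {verb}(e) & agent(e, {subj}) & patient(e, {obj})",  # event reading
    ]

for tree in parse("John loves Mary."):
    print(to_logic(tree))
```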

Agree; but I would like to add several remarks:

-- part of the difficulty of applying logic to NL is the need to handle
spatial reasoning (A is next to B and B is next to C, therefore ...? C is
not far from A)

-- part of the difficulty of applying logic to NL is the need to handle
more abstract reasoning (A is the mayor of B and mayors are people,
therefore A is a person)  (OpenCyc does this ... not badly)

-- Some philosophers of mathematics, e.g. Carlo Cellucci (see 18
Unconventional Essays on the Nature of Mathematics), will stridently
point out that, while classical logic is the format in which proofs are
stated, it is not at all the method by which mathematicians generate
new ideas -- they use reasoning by analogy, by allegory, by induction,
and many others, to generate hypotheses which might be possible
solutions to problems.
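The first two remarks might be sketched as follows: a defeasible spatial rule and a taxonomic rule over a handful of invented facts (reading "major" as "mayor"). Neither rule is meant as more than an illustration of the kind of inference involved.

```python
# Invented facts for illustration.
next_to = {("A", "B"), ("B", "C")}
is_a = {"mayor": "person"}      # mayors are people
roles = {"Alice": "mayor"}      # Alice is the mayor of some town

def not_far(x, z):
    """Spatial heuristic: if x is next to some y and y is next to z,
    conclude (plausibly, not deductively) that z is not far from x."""
    return any((x, y) in next_to and (y, z) in next_to
               for y in {b for (_, b) in next_to})

def category(entity):
    """Abstract reasoning: follow the is_a link from an entity's role."""
    return is_a.get(roles.get(entity))

print(not_far("A", "C"))     # True, as a plausible conclusion
print(category("Alice"))     # person
```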

I think that we should realize that the same techniques should be
applied in AGI: we use reasoning by analogy not because it gives
formally correct answers, but because it generates reasonable
hypotheses which may or may not be true, but which can be
examined in greater detail to see if they are true. These other,
non-rigorous reasoning methods are all part of what we might
call intuition -- a set of hard-to-explain reasons why we think
something might be true -- which must then be subjected to more
rigorous analysis to see if yet more evidence can be found.

In short, real life, just like mathematics, is all about problem-solving
and not theorem-proving (which is the last step of creating math,
not the first).

--linas




[agi] Re: [GI] Digest for general-intellige...@googlegroups.com - 10 Messages in 2 Topics

2010-10-18 Thread Abram Demski
Linas,

It seems to me that analogy falls rather simply out of relational
probabilistic reasoning. Say we want to make an analogy between two entities
A and B. We essentially look for predicates that hold for both A and B; i.e.,
we look for a way to fill in the blank in "A is like B, because _". Then, if
we want to predict something about B, we know A belongs to the same
reference class and can provide one piece of evidence concerning the
behavior of entities in that reference class.
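A minimal sketch of this proposal, over an invented predicate table: shared predicates fill the blank in "A is like B, because _", and predicates of A not yet known of B become hypotheses about B, one piece of evidence each.

```python
# Hypothetical predicate table for illustration.
predicates = {
    "whale":   {"is_large", "lives_in_water", "is_mammal"},
    "dolphin": {"lives_in_water", "is_mammal", "is_social"},
    "shark":   {"is_large", "lives_in_water", "has_gills"},
}

def analogy(a, b):
    """Predicates completing 'a is like b, because _'."""
    return predicates[a] & predicates[b]

def predict(b, a):
    """Predicates of a not yet known of b: hypotheses about b,
    each supported by one piece of evidence (a's membership in
    the reference class a and b share)."""
    return predicates[a] - predicates[b] if analogy(a, b) else set()

print(sorted(analogy("whale", "dolphin")))   # ['is_mammal', 'lives_in_water']
print(sorted(predict("dolphin", "whale")))   # ['is_large'] -- a hypothesis, not a certainty
```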

--A

On Mon, Oct 18, 2010 at 5:10 PM, Linas Vepstas linasveps...@gmail.comwrote:

 [quoted message from Linas Vepstas trimmed]

-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic


