Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread David Hart
Matthias,

You've presented a straw man argument to criticize embodiment. As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
discount the potential utility of this approach in many scenarios), but
rather to provide an environment for the grounding of symbols (which include
concepts important for doing mathematics), similar to the way in which
humans (from infants through to adults) learn through play and also through
guided education.

'Abstraction' is so named because it involves generalizing from the
specifics of one or more domains (d1, d2), and is useful when it can be
applied (with *any* degree of success) to other domains (d3, ...). Virtual
embodied interactive learning utilizes virtual objects and their properties
as a way of generating these specifics for artificial minds to use to build
abstractions, to grok the abstractions of others, and ultimately to build a
deep understanding of our reality (yes, 'deep' in this sense is used in a
very human-mind-centric way).

Of course, few people claim that machine learning with the help of virtually
embodied environments is the ONLY way to approach building an AI capable of
doing mathematics (and communicating with humans about mathematics), but
it is an approach that has *many* good things going for it, including
proving tractable via measurable incremental improvements (even though it is
admittedly still at a *very* early stage).

-dave

On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

   It seems to me that many people think that embodiment is very important
 for AGI.

 For instance some people seem to believe that you can't be a good
 mathematician if you haven't had some embodied experience.



 But this would have a rather strange consequence:

 If you give your AGI a difficult mathematical problem to solve, then it
 would answer:



 Sorry, I still cannot solve your problem, but let me walk with my body
 through the virtual world.

 Hopefully, I will then understand your mathematical question and, even more
 hopefully, I will be able to solve it after some further embodied
 experience.



 AGI is the ability to solve different problems in different domains. But
 such an AGI would need to gain experience in domain d1 in order to solve
 problems of domain d2. Does this really make sense if all the information
 necessary to solve problems of d2 is in d2? I think an AGI which has to gain
 experience in d1 in order to solve a problem of domain d2, when d2 contains
 everything needed to solve that problem, is no AGI. How should such an AGI
 know what experiences in d1 are necessary to solve the problem of d2?



 In my opinion a real AGI must be able to solve a problem in a domain d
 without leaving that domain, provided the domain contains everything needed
 to solve the problem.



 From this we can define a simple benchmark which is not sufficient for AGI
 but which is **necessary** for a system to be an AGI system:



 Within the domain of chess there is everything there is to know about chess.
 So to become a good chess player, learning from playing chess alone must be
 sufficient. Thus, an AGI which is not able to enhance its abilities in chess
 from playing chess alone is no AGI.



 Therefore, my first steps in the roadmap towards AGI would be the
 following:

 1.   Make a concept for your architecture of your AGI

 2.   Implement the software for your AGI

 3.   Test whether your AGI is able to become a good chess player from
 learning in the domain of chess alone (a minimal sketch of such a check
 follows below).

 4.   If your AGI can't even learn to play good chess then it is no AGI,
 and it would be a waste of time to experiment with your system in more
 complex domains.
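
 A minimal sketch of what step 3 could look like as an automated check,
 assuming the python-chess package and a hypothetical Learner object with
 choose_move() and learn() methods (illustrative names, not an existing API):

import random
import chess  # assumes the python-chess package is installed

class RandomLearner:
    """Stand-in learner: picks random legal moves and ignores feedback.
    A real candidate AGI would replace this class."""
    def choose_move(self, board):
        return random.choice(list(board.legal_moves))
    def learn(self, moves, result):
        pass  # a real learner would update its model from the finished game

def play_game(white, black):
    """Play one game between two players; return the result and move list."""
    board, moves = chess.Board(), []
    players = {chess.WHITE: white, chess.BLACK: black}
    while not board.is_game_over():
        move = players[board.turn].choose_move(board)
        board.push(move)
        moves.append(move)
    return board.result(), moves

def chess_only_benchmark(learner, baseline, games=1000):
    """Step 3/4 check: the learner sees nothing but chess positions and game
    results; measure whether its score against a fixed baseline improves."""
    score = 0.0
    for _ in range(games):
        result, moves = play_game(learner, baseline)
        score += {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}[result]
        learner.learn(moves, result)  # the only feedback is in-domain
    return score / games

if __name__ == "__main__":
    print(chess_only_benchmark(RandomLearner(), RandomLearner(), games=10))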



 -Matthias










Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Trent Waddington
On Wed, Oct 22, 2008 at 3:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 It seems to me that many people think that embodiment is very important for
 AGI.

I'm not one of these people, but I at least learn what their
arguments are.  You seem to have made up an argument which you've then
knocked down (poorly) and claimed success.

Which, BTW, is a very human thing to do and is not something an AGI
could learn without being embodied and surrounded by other people who
do it ;)

Trent




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I see no argument in your text against my main claim, that an AGI
should be able to learn chess from playing chess alone. That is what I call
a straw man reply.

 

My main point against embodiment is simply the huge effort it requires. You
could work for years with this approach and a certain AGI concept before you
recognize that it doesn't work.

 

If you apply your AGI concept in a small domain, even one that is not
necessarily AGI-complete, you arrive much faster at a benchmark of whether
your concept is even worth the difficult studies with embodiment.

 

Chess is a very good domain for this benchmark because it is very easy to
program and it is very difficult to outperform human intelligence in this
domain.

 

- Matthias

 

 

 

 


AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
The restriction is far from arbitrary. If your AGI is in a spaceship or on a
distant planet and has to solve the problems in that domain, then it has no
chance to leave the domain.

 

If this domain contains all the information necessary to solve the problem,
then an AGI *must* be able to solve the problem without leaving the domain.
Otherwise it would have an essential lack of intelligence and would not be a
real AGI.

 

By the way:

Generalization is a mythical thing, because you can never draw conclusions
about still-unvisited state-action pairs from the state-action pairs you have
visited. The reason this often works anyway is simply the regularities in the
environment. But of course you cannot presume that these regularities hold
for arbitrary domains. The only thing you can do is use your past experiences
and *hope* they will apply in still-unknown domains.

 

- Matthias

 

 

From: David Hart [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 11:27
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

I see no reason to impose on AGI the arbitrary restriction that it needs to
possess the ability to learn to perform in a given domain by learning only
from within that domain. An AGI should be able, by definition, to adapt
itself to function across different and varied domains, using its
multi-domain knowledge and experience to improve its performance in any
single domain. Choosing a performance metric from only a single domain as a
benchmark for an AGI is antithetical to this definition, because, e.g.,
software that can perform well at chess without being adaptable to other
domains is not AGI, but merely narrow AI, and such simplistic single-domain
benchmarks can be easily tricked by collections of well orchestrated narrow
AI programs. Rather, good benchmarks should be composite benchmarks with
component sub-benchmarks spanning multiple and varied domains.

A human analogue of the multi-domain AGI concept is nicely paraphrased by
Robert A. Heinlein: A human being should be able to change a diaper, plan
an invasion, butcher a hog, conn a ship, design a building, write a sonnet,
balance accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new problem,
pitch manure, program a computer, cook a tasty meal, fight efficiently, die
gallantly. Specialization is for insects. 

-dave




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not claim that an AGI might not have biases equivalent to the genes in
your example. The point is that AGI is the union of all AI sets. If I have a
certain domain d and a problem p, and I know that p can be solved using
nothing other than d, then an AGI must be able to solve problem p in d;
otherwise it is not AGI.

- Matthias

Bob Mottram wrote


In the case of humans embodied experience also includes the
experience accumulated by our genes over many generations of
evolutionary time.  This means that even if you personally have not
had much embodied experience during your lifetime evolution has shaped
your brain wiring ready for that sort of cognition to take place (for
instance the ability to perform mental rotations).






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Trent Waddington
On Wed, Oct 22, 2008 at 6:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I see no argument in your text against my main argumentation, that an AGI
 should be able to learn chess from playing chess alone. This I call straw
 man replies.

No-one can learn chess from playing chess alone.

Chess is necessarily a social activity.

As such, your suggestion isn't even sensible, let alone reasonable.

Trent




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 2:10 PM, Trent Waddington
[EMAIL PROTECTED] wrote:

 No-one can learn chess from playing chess alone.

 Chess is necessarily a social activity.

 As such, your suggestion isn't even sensible, let alone reasonable.


Current AIs learn chess without engaging in social activities ;-).
And chess might be a good drosophila for AI, if it's treated as such (
http://www-formal.stanford.edu/jmc/chess.html ).
This was uncalled for.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
If you give the system the rules of chess then it has everything it needs to
know to become a good chess player.
It may play against itself, against a common chess program, or against
humans.


- Matthias


Trent Waddington [mailto:[EMAIL PROTECTED] wrote


No-one can learn chess from playing chess alone.

Chess is necessarily a social activity.

As such, your suggestion isn't even sensible, let alone reasonable.

Trent




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not regard chess as being as important as a drosophila for AI. It would
just be a first milestone where we can get a fast proof of concept for an AGI
approach. The faster we can sort out bad AGI approaches, the sooner we will
obtain a successful one.

 

Chess has the advantage of being an easily programmable domain.

The domain of chess is not AGI-complete but crucial problems for AGI can be
found in chess as well.

AGI can be trained automatically against strong chess programs because those
engines offer an open API.

Performance can be evaluated by Elo rating, i.e. the standard rating system
for chess players.

 

But I do not emphasize performance evaluation too much. The milestone would
be passed successfully if the AGI, running on a current PC, could beat
average human chess players after it has played many thousands of games
against chess programs.
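
A rough sketch of that evaluation loop, assuming the python-chess package, a
locally installed UCI engine (e.g. a stockfish binary on the PATH), and a
hypothetical agent object exposing choose_move(); the fixed engine_elo value
is an assumed opponent strength, not a measured one:

import chess
import chess.engine  # python-chess UCI wrapper

def elo_update(rating, opponent_rating, score, k=20):
    """Standard Elo update: expected score from the rating gap, then adjust."""
    expected = 1.0 / (1.0 + 10 ** ((opponent_rating - rating) / 400.0))
    return rating + k * (score - expected)

def rate_agent(agent, engine_path="stockfish", engine_elo=1600,
               games=100, movetime=0.05):
    """Play the agent against a UCI engine and return a running Elo estimate."""
    rating = 1200.0                                  # provisional start
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        for g in range(games):
            board = chess.Board()
            agent_is_white = (g % 2 == 0)            # alternate colours
            while not board.is_game_over():
                if (board.turn == chess.WHITE) == agent_is_white:
                    board.push(agent.choose_move(board))
                else:
                    reply = engine.play(board, chess.engine.Limit(time=movetime))
                    board.push(reply.move)
            result = board.result()                  # "1-0", "0-1" or "1/2-1/2"
            if result == "1/2-1/2":
                score = 0.5
            else:
                score = 1.0 if (result == "1-0") == agent_is_white else 0.0
            rating = elo_update(rating, engine_elo, score)
    finally:
        engine.quit()
    return rating

Alternating colours and using the standard Elo update keeps the estimate
simple; a real benchmark would also control the engine's strength settings
and time limits.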

 

It would be a big step towards AGI if someone could build a chess-playing
program out of learning software that is pattern-based and not inherently
built for chess.

 

I think such a program would gain much attention in the AI community, which
is also necessary to accelerate AGI research.

Of course, successful experiments with embodiment would probably gain more
attention. But the development cycle from concept to experiment takes much
longer with embodiment than with an easy-to-program and automatically
testable chess domain.

 

We should expect that we will have to go through this cycle many times, and
therefore it is essential that each cycle require as little effort and time
as possible.

 

- Matthias

 

 

Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote



Current AIs learn chess without engaging in social activities ;-).

And chess might be a good drosophila for AI, if it's treated as such (

http://www-formal.stanford.edu/jmc/chess.html ).

This was uncalled for.

 






AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I agree that chess is far from sufficient for AGI. But I have mentioned this
already at the beginning of this thread.

The important role of chess for AGI could be to rule out bad AGI approaches
as fast as possible.

 

Before you go to more complex domains you should consider chess as a first
important milestone, one which helps you avoid going a long way towards a
dead end with the wrong approach to AGI.

 

If chess is so easy because it is completely described, with complete
information about the state available and fully deterministic dynamics, then
it is all the more important that your AGI can learn such an easy task before
you try something more difficult.

 

 

-Matthias

 

 

 Derek Zahn [mailto:[EMAIL PROTECTED] wrote



I would agree with this and also with your thesis that a true AGI must be
able to learn chess in this way.  However, although this ability is
necessary it is far from sufficient for AGI, and thinking about AGI from
this very narrow perspective seems to me to be a poor way to attack the
problem.  Very few of the things an AGI must be able to do (as the Heinlein
quote points out) are similar to chess -- completely described, complete
information about state available, fully deterministic.  If you aim at chess
you might hit chess but there's no reason that you will achieve anything
higher.
 
Still, using chess as a test case may not be useless; a system that produces
a convincing story about concept formation in the chess domain (that is,
that invents concepts for pinning, pawn chains, speculative sacrifices in
exchange for piece mobility, zugzwang, and so on without an identifiable
bias toward these things) would at least be interesting to those interested
in AGI.
 
Mathematics, though, is interesting in other ways.  I don't believe that
much of mathematics involves the logical transformations performed in proof
steps.  A system that invents new fields of mathematics, new terms, new
mathematical ideas -- that is truly interesting.  Inference control is
boring, but inventing mathematical induction, complex numbers, or ring
theory -- THAT is AGI-worthy.
 



Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 You may not like Therefore, we cannot understand the math needed to define
 our own intelligence., but I'm rather convinced that it's correct. 

Do you mean to say that there are parts that we can't understand, or that the 
totality is too large to fit and can't be cleanly and completely decomposed 
into pieces (i.e. it's a complex system ;-)?

Personally, I believe that the foundational pieces necessary to 
construct/boot-strap an intelligence are eminently understandable (if not even 
fairly simple), but that the resulting intelligence, which a) organically grows 
from its interaction with an environment from which it can only extract 
partial, dirty, and ambiguous data, and b) does not have the time, 
computational capability, or data to make itself even remotely consistent past 
a certain level, IS large and complex enough that you will never truly 
understand it (which is where I have sympathy with Richard Loosemore's 
arguments -- but I don't buy that the interaction of the pieces is necessarily 
so complex that we can't make broad predictions that are accurate enough to be 
able to engineer intelligence).

If you say parts we can't understand, how do you reconcile that with your 
statements of yesterday about what general intelligences can learn?




[agi] A huge amount of math now in standard first-order predicate logic format!

2008-10-22 Thread Ben Goertzel
I had not noticed this before, though it was posted earlier this year.

Finally Josef Urban translated Mizar into a standard first-order logic
format:

http://www.cs.miami.edu/~tptp/MizarTPTP/

Note that there are hyperlinks pointing to the TPTP-ized proofs of each
theorem.

This is math with **no steps left out of the proofs** ... everything
included ...

This should be a great resource for AI systems that want to learn about math
by reading definitions/theorems/proofs without needing to grok English
language or diagrams...

Translating this TPTP format into something easily loadable into OpenCog,
for
example, would not be a big trick
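
For the loading half of that, a minimal sketch of pulling the fof(name, role,
formula) statements out of a TPTP file might look like the following; the
regex and the period-terminator heuristic are rough assumptions, and mapping
the formula strings onto OpenCog Atoms is deliberately left out:

import re

# Matches one annotated formula of the form:  fof(name, role, formula).
FOF = re.compile(r"^\s*fof\(\s*([^,]+)\s*,\s*([^,]+)\s*,(.*)\)\s*\.\s*$",
                 re.DOTALL)

def read_tptp(path):
    """Yield (name, role, formula_text) for each fof(...) statement."""
    buf = ""
    with open(path) as f:
        for line in f:
            line = line.split("%")[0]          # strip TPTP comments
            if not line.strip():
                continue
            buf += line
            if buf.rstrip().endswith("."):     # statements end with a period
                m = FOF.match(buf)
                if m:
                    name, role, formula = (s.strip() for s in m.groups())
                    yield name, role, formula
                buf = ""

# e.g. a line like   fof(t1_example, theorem, ! [A] : subset(A,A) ).
# would yield ("t1_example", "theorem", "! [A] : subset(A,A)").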

Doing useful inference on the data, on the other hand, is another story ;-)

To try this in OpenCog, we gotta wait for Joel to finish porting the
backward-chainer
from NM to OpenCog ... and then, dealing with all this data would be a
mighty test
of adaptive inference control ;-O

ben g




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
 However, the point I took issue with was your claim that a stupid person 
 could be taught to effectively do science ... or (your later modification) 
 evaluation of scientific results.
 At the time I originally took exception to your claim, I had not read the 
 earlier portion of the thread, and I still haven't; so I still do not know 
 why you made the claim in the first place.

In brief -- You've agreed that even a stupid person is a general intelligence. 
 By do science, I (originally and still) meant the amalgamation that is 
probably best expressed as a combination of critical thinking and/or the 
scientific method.  My point was a combination of both a) to be a general 
intelligence, you really must have a domain model and the rudiments of critical 
thinking/scientific methodology in order to be able to competently/effectively 
update it and b) if you're a general intelligence, even if you don't need it, 
you should be able to be taught the rudiments of critical thinking/scientific 
methodology.  

Are those points that you would agree with?  (A serious question -- and, in 
particular, if you don't agree, I'd be very interested in why since I'm trying 
to arrive at a reasonable set of distinctions that define a general 
intelligence).

In typical list fashion, rather than asking what I meant (or, in your case, 
even having the courtesy to read what came before -- so that you might have 
*some* chance of understanding what I was trying to get at -- in case my 
immediate/proximate phrasing was as awkward as I'll freely admit that it was 
;-), it effectively turned into an argument past each other when your immediate 
concept/interpretation of *science = advanced statistical interpretation* hit 
the blindingly obvious shoals of it's not easy teaching stupid people 
complicated things (I mean -- seriously, dude -- do you *really* think that I'm 
going to be that far off base?  And, if not, why disrupt the conversation so 
badly by coming in in such a fashion?).

(And I have to say -- As list owner, it would be helpful if you would set a 
good example of reading threads and trying to understand what people meant 
rather than immediately coming in and flinging insults and accusations of 
ignorance e.g.  This is obviously spoken by someone who has never . . . . ).

So . . . . can you agree with the claim as phrased above?  (i.e. What were we 
disagreeing on again? ;-)

Oh, and the original point was part of a discussion about the necessary and 
sufficient pre-requisites for general intelligence so it made sense to 
(awkwardly :-) say that a domain model and the rudiments of critical 
thinking/scientific methodology are a (major but not complete) part of that.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 8:51 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark W wrote:


What were we disagreeing on again?


  This conversation has drifted into interesting issues in the philosophy of 
science, most of which you and I seem to substantially agree on.

  However, the point I took issue with was your claim that a stupid person 
could be taught to effectively do science ... or (your later modification) 
evaluation of scientific results.

  At the time I originally took exception to your claim, I had not read the 
earlier portion of the thread, and I still haven't; so I still do not know why 
you made the claim in the first place.

  ben






Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Ben Goertzel
I don't agree at all.

The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in

Not everything that is a necessary capability of a completed human-level,
roughly human-like AGI, is a sensible first step toward a human-level,
roughly human-like AGI

I'm not saying that making a system that's able to learn chess is a **bad**
idea.   I am saying that I suspect it's not the best path to AGI.

I'm slightly more attracted to the General Gameplaying (GGP) Competition
than to a narrow-focus on chess

http://games.stanford.edu/

but not so much to that either...

I look at it this way.  I have a basic understanding of how a roughly
human-like AGI mind (with virtual embodiment and language facility) might
progress from the preschool level up through the university level, by
analogy to human cognitive development.

On the other hand, I do not have a very good understanding at all of how a
radically non-human-like AGI mind would progress from learn to play chess
level to the university level, or to the level of GGP, or robust
mathematical theorem-proving, etc.  If you have a good understanding of this
I'd love to hear it.

-- Ben G







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





RE: AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Derek Zahn
Matthias Heger:

 
 If chess is so easy because it is completely described, complete information 
 about 
 state available, fully deterministic etc. then the more important it is that 
 your AGI 
 can learn such an easy task before you try something more difficult.
 
Chess is not easy.  Becoming good at chess is something that most humans 
never accomplish and none accomplish without years of training in background 
material.  The question is whether chess is representative  of the domains we 
want AGIs to master.  I think a case could be made either way.
 
I don't want to be discouraging -- any concrete demonstration of AGI ideas is 
of great interest, even in formal toy domains.
 




Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Ben Goertzel
In brief -- You've agreed that even a stupid person is a general
 intelligence.  By do science, I (originally and still) meant the
 amalgamation that is probably best expressed as a combination of critical
 thinking and/or the scientific method.  My point was a combination of both
 a) to be a general intelligence, you really must have a domain model and the
 rudiments of critical thinking/scientific methodology in order to be able to
 competently/effectively update it and b) if you're a general intelligence,
 even if you don't need it, you should be able to be taught the rudiments of
 critical thinking/scientific methodology.

 Are those points that you would agree with?



The rudiments, yes.

But the rudiments are not enough to perform effectively by accepted
standards ... e.g. they are not enough to avoid getting fired from your job
as a scientist... unless it's a government job ;-)

ben





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 It doesn't, because **I see no evidence that humans can
 understand the semantics of formal system in X in any sense that
 a digital computer program cannot**

I just argued that humans can't understand the totality of any formal system X 
due to Godel's Incompleteness Theorem but the rest of this is worth addressing 
. . . . 

 Whatever this mysterious understanding is that you believe you
 possess, **it cannot be communicated to me in language or
 mathematics**.  Because any series of symbols you give me, could
 equally well be produced by some being without this mysterious
 understanding.

Excellent!  Except for the fact that the probability of the being *continuing* 
to emit those symbols without this mysterious understanding rapidly 
approaches zero.  So I'm going to argue that understanding *can* effectively be 
communicated/determined.  Arguing otherwise is effectively arguing for 
vanishingly small probabilities in infinities (and why I hate most arguments 
involving AIXI as proving *anything* except absolute limits c.f. Matt Mahoney 
and compression = intelligence).

I'm going to continue arguing that understanding exactly equates to having a 
competent domain model and being able to communicate about it (i.e. that there 
is no mystery about understanding -- other than not understanding it ;-).

 Can you describe any possible finite set of finite-precision observations
 that could provide evidence in favor of the hypothesis that you possess
 this posited understanding, and against the hypothesis that you are
 something equivalent to a digital computer?

 I think you cannot.

But I would argue that this is because a digital computer can have 
understanding (and must and will in order to be an AGI).

 So, your belief in this posited understanding has nothing to do with 
 science, it's
 basically a kind of religious faith, it seems to me... '-)

If you're assuming that humans have it and computers can't, then I have to 
strenuously agree.  There is no data (that I am aware of) to support this 
conclusion so it's pure faith, not science.






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I don't want to diss the personal value of logically inconsistent thoughts.  
 But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and yet 
not have scientific or engineering value.

I can sort of understand science if you're interpreting science looking for the 
final correct/optimal value but engineering generally goes for either good 
enough or the best of the currently known available options and anything 
that really/truly has personal value would seem to have engineering value.







AW: [agi] A huge amount of math now in standard first-order predicate logic format!

2008-10-22 Thread Dr. Matthias Heger
Very useful link. Thanks.

 

-Matthias

 



Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser

I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.


Or maybe another way to rephrase it is to combine it with another thread . . . .


Any individual piece of science is understandable/teachable to (or my 
original point -- verifiable or able to be validated by) any general 
intelligence, but the totality of science combined with the world is far too 
large to . . . . (which is effectively Ben's point) 







Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

(1) We humans understand the semantics of formal system X.


No.  This is the root of your problem.  For example, replace formal system 
X with XML.  Saying that We humans understand the semantics of XML 
certainly doesn't work and why I would argue that natural language 
understanding is AGI-complete (i.e. by the time all the RDF, OWL, and other 
ontology work is completed -- you'll have an AGI).  Any formal system can 
always be extended *within its defined syntax* to have any meaning.  That 
is the essence of Godel's Incompleteness Theorem.


It's also sort of the basis for my argument with Dr. Matthias Heger. 
Semantics are never finished except when your model of the world is finished 
(including all possibilities and infinitudes) so language understanding 
can't be simple and complete.


Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 
and figure out how to use our world model/knowledge to translate English to 
this disambiguated subset -- and then we can build from there.  (or maybe 
this makes Heger's argument for him . . . .  ;-)
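
As a toy illustration of what such a disambiguated subset could look like (the 
sentence templates and predicate names below are invented for the example, not 
an actual proposal):

import re

TEMPLATES = [
    (re.compile(r"^every (\w+) is a (\w+)$"), "forall X: {0}(X) -> {1}(X)"),
    (re.compile(r"^(\w+) is a (\w+)$"),       "{1}({0})"),
    (re.compile(r"^(\w+) likes (\w+)$"),      "likes({0}, {1})"),
]

def parse(sentence):
    """Translate one restricted-English sentence into a logical form."""
    s = sentence.strip().lower().rstrip(".")
    for pattern, form in TEMPLATES:
        m = pattern.match(s)
        if m:
            return form.format(*m.groups())
    raise ValueError("sentence is outside the restricted subset: " + sentence)

print(parse("Socrates is a man."))      # man(socrates)
print(parse("Every man is a mortal."))  # forall X: man(X) -> mortal(X)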







Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

   I don't want to diss the personal value of logically inconsistent
 thoughts.  But I doubt their scientific and engineering value.
 I doesn't seem to make sense that something would have personal value and
 then not have scientific or engineering value.


Come by the house, we'll drop some acid together and you'll be convinced ;-)





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Well, if you are a computable system, and if by think you mean represent 
 accurately and internally then you can only think that odd thought via 
 being logically inconsistent... ;-)

True -- but why are we assuming *internally*?  Drop that assumption as Charles 
clearly did and there is no problem.

It's like infrastructure . . . . I don't have to know all the details of 
something to use it under normal circumstances though I frequently need to know 
the details if I'm doing something odd with it or looking for extreme 
performance and I definitely need to know the details if I'm 
diagnosing/fixing/debugging it -- but I can always learn them as I go . . . . 


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 11:26 PM
  Subject: Re: [agi] constructivist issues



  Well, if you are a computable system, and if by think you mean represent 
accurately and internally then you can only think that odd thought via being 
logically inconsistent... ;-)




  On Tue, Oct 21, 2008 at 11:23 PM, charles griffiths [EMAIL PROTECTED] wrote:

  I disagree, and believe that I can think X: This is a thought (T) 
that is way too complex for me to ever have.

  Obviously, I can't think T and then think X, but I might represent T 
as a combination of myself plus a notebook or some other external media. Even 
if I only observe part of T at once, I might appreciate that it is one thought 
and believe (perhaps in error) that I could never think it.

  I might even observe T in action, if T is the result of billions of 
measurements, comparisons and calculations in a computer program.

  Isn't it just like thinking This is an image that is way too 
detailed for me to ever see?

  Charles Griffiths

  --- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:

From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM



I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can 
certainly be quite useful.  I'd rather use differential calculus to do 
calculations, than do everything using finite differences.

It's just that, from a science perspective, these mathematical 
infinities have to be considered finite formal constructs ... they don't exist 
except in this way ...

I'm not going to claim the pragmatist perspective is the only 
subjectively meaningful one.  But so far as I can tell it's the only useful one 
for science and engineering...

To take a totally different angle, consider the thought X = This 
is a thought that is way too complex for me to ever have

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it 
symbolically and formally.  I can reason about it and empathize with it by 
analogy to A thought that is way too complex for my three-year-old past-self 
to have ever had , and so forth.

But it seems I can't ever really think X, except by being logically 
inconsistent within that same thought ... this is the Godel limitation applied 
to my own mind...

I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

-- Ben G




On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED] 
wrote:

  Ben,

  How accurate would it be to describe you as a finitist or
  ultrafinitist? I ask because your view about restricting 
quantifiers
  seems to reject even the infinities normally allowed by
  constructivists.

  --Abram







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be 
first overcome   - Dr Samuel Johnson









  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson

[agi] Fun with first-order inference in OpenCog ...

2008-10-22 Thread Ben Goertzel
http://brainwave.opencog.org/


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I disagree, and believe that I can think X: This is a thought (T) that is 
 way too complex for me to ever have.
 Obviously, I can't think T and then think X, but I might represent T as a 
 combination of myself plus a notebook or some other external media. Even if 
 I only observe part of T at once, I might appreciate that it is one thought 
 and believe (perhaps in error) that I could never think it.
 I might even observe T in action, if T is the result of billions of 
 measurements, comparisons and calculations in a computer program.
 Isn't it just like thinking This is an image that is way too detailed for 
 me to ever see?

Excellent!  This is precisely how I feel about intelligence . . . .  (and why 
we *can* understand it even if we can't hold the totality of it -- or fully 
predict it -- sort of like the weather :-)






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 You have not convinced me that you can do anything a computer can't do.
 And, using language or math, you never will -- because any finite set of 
 symbols
 you can utter, could also be uttered by some computational system.
 -- Ben G

Can we pin this somewhere?

(Maybe on Penrose?  ;-)




Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel

 The problem is to gradually improve overall causal model of
 environment (and its application for control), including language and
 dynamics of the world. Better model allows more detailed experience,
 and so through having a better inbuilt model of an aspect of
 environment, such as language, it's possible to communicate richer
 description of other aspects of environment. But it's not obvious that
 bandwidth of experience is the bottleneck here.


No, but nor is it obvious that this *isn't* one of the major bottlenecks...


 It's probably just
 limitations of the cognitive algorithm that simply can't efficiently
 improve its model, and so feeding it more experience through tricks
 like this is like trying to get a hundredfold speedup in the
 O(log(log(n))) algorithm by feeding it more hardware.


Hard to say...

Remember, we humans have a load of evolved inductive bias for
understanding human language ... AGI's don't ...  so using Lojban
to talk to an AGI could be a way to partly make up for this deficit in
inductive bias...


 It should be
 possible to get a proof-of-concept level results about efficiency
 without resorting to Cycs and Lojbans, and after that they'll turn out
 to be irrelevant.


Cyc and Lojban are not comparable; one is a knowledge base, the other
is a language.

Cyc-L and Lojban are more closely comparable, though still very different
because Lojban allows for more ambiguity (as well as Cyc-L level precision,
depending on speaker's choice) ... and of course Lojban is intended for
interactive conversation rather than knowledge entry

ben g





AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
Ben wrote:

The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in

 

These narrow, closed, deterministic domains are *subsets* of what AGI is
intended to do and what humans can do. Chess can be learned by young
children.  



Not everything that is a necessary capability of a completed human-level,
roughly human-like AGI, is a sensible first step toward a human-level,
roughly human-like AGI

 

This is surely true.  But let's say someone wants to develop a car. Doesn't
it make sense to first develop and test its essential parts before putting
everything together and going on the road? 

I think chess is a good testing area because in the domain of chess there
are too many situations to consider them all. This is a very typical and
very important problem of human environments as well. On the other hand,
there are patterns in chess which can be learned and which make life less
complex. This is the second analogy to human environments. Therefore the
domain of chess is not so different: it contains an important subset of
typical problems for human-level AI.

And if you want to solve the complex problem of building AGI, then you cannot
avoid solving every single one of its subproblems. 

If your system sees no patterns in chess, then I would doubt whether it is
really suitable for AGI.

 


I'm not saying that making a system that's able to learn chess is a **bad**
idea.   I am saying that I suspect it's not the best path to AGI.

 

Ok.




I'm slightly more attracted to the General Game Playing (GGP) Competition
than to a narrow focus on chess

 http://games.stanford.edu/

but not so much to that either...

I look at it this way.  I have a basic understanding of how a roughly
human-like AGI mind (with virtual embodiment and language facility) might
progress from the preschool level up through the university level, by
analogy to human cognitive development.

On the other hand, I do not have a very good understanding at all of how a
radically non-human-like AGI mind would progress from learn to play chess
level to the university level, or to the level of GGP, or robust
mathematical theorem-proving, etc.  If you have a good understanding of this
I'd love to hear it.

 

Ok. I do not say that your approach is wrong. In fact I think it is very
interesting and ambitious. But just as you think that my approach is not the best
one, I think that your approach is not the best one.  Probably the
discussion could be endless. And probably you have already invested too much
effort in your approach to really consider changing it. I hope
you are right, because I would be very happy to see the first AGI soon,
regardless of who builds it and which concept is used.

-Matthias










Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 IMHO that is an almost hopeless approach, ambiguity is too integral to 
 English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big words 
and never use small words and/or you use a specific phrase as a word.  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.
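
A toy sketch of what I mean, in Python (the sense lists here are invented just 
for illustration, not taken from my actual word lists): each ambiguous 
preposition gets rewritten as one of a handful of unambiguous words/phrases.

# Hypothetical sense table -- invented for illustration only.
PREPOSITION_SENSES = {
    "with": ["by_means_of", "accompanied_by", "possessing"],
    "for":  ["intended_for", "in_exchange_for", "during"],
}

def disambiguate(preposition, sense):
    # Rewrite an ambiguous preposition as one unambiguous word/phrase.
    return PREPOSITION_SENSES[preposition][sense]

print(disambiguate("with", 0))   # "ate with a fork" -> "ate by_means_of a fork"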

The problem is that most previous subsets (Simplified English, Basic English) 
actually *favored* the small tremendously over-used/ambiguous words (because 
you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

 If you want to take this sort of approach, you'd better start with Lojban 
 instead  Learning Lojban is a pain but far less pain than you'll have 
 trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can come 
up with an unambiguous English word or very short phrase for each Lojban word.  
If you can do it, my approach will work and will have the advantage that the 
output can be read by anyone (i.e. it's the equivalent of me having done it in 
Lojban and then added a Lojban - English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English-subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

  If you want to take this sort of approach, you'd better start with Lojban 
instead  Learning Lojban is a pain but far less pain than you'll have 
trying to make a disambiguated subset of English.

  ben g 









Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(joke)

What?  You don't love me any more?  

/thread
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues



  (joke)


  On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel [EMAIL PROTECTED] wrote:




On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

   I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

  It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

Come by the house, we'll drop some acid together and you'll be convinced ;-)
 





  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein









Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Come by the house, we'll drop some acid together and you'll be convinced ;-)

Been there, done that.  Just because some logically inconsistent thoughts have 
no value doesn't mean that all logically inconsistent thoughts have no value.

Not to mention the fact that hallucinogens, if not the subsequently warped 
thoughts, do have the serious value of raising your mental Boltzmann 
temperature.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues





  On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

 I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

  Come by the house, we'll drop some acid together and you'll be convinced ;-)
   








Re: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Matt Mahoney
--- On Tue, 10/21/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 Sorry, but this was no proof that a natural language
 understanding system is
 necessarily able to solve the equation x*3 = y for
 arbitrary y.
 
 1) You have not shown that a language understanding system
 must necessarily(!) have made statistical experiences on the
 equation x*3 =y.

A language model is a probability distribution P over text of human origin. If 
you can compute P(x) for given text string x, then you can pass the Turing test 
because for any question Q and answer A you can compute P(A|Q) = P(QA)/P(Q) 
using the same distribution that a human would use to answer the question. This 
includes any math questions that the average human could answer.
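
To make the arithmetic concrete, here is a toy sketch in Python (a crude 
character-bigram model trained on a made-up corpus stands in for P; it is only 
meant to illustrate the identity P(A|Q) = P(QA)/P(Q), computed in log space):

import math
from collections import Counter

corpus = "what is half of twelve ? six . what is six times two ? twelve . "
bigrams  = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab    = len(set(corpus))

def log_p(text):
    # log P(text) under an add-one smoothed character bigram model,
    # ignoring the first character's marginal probability.
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
               for a, b in zip(text, text[1:]))

q, a = "what is half of twelve ? ", "six . "
print(math.exp(log_p(q + a) - log_p(q)))   # P(A|Q) = P(QA) / P(Q)

A real language model needs far more than bigram statistics to put high 
probability on correct answers; the sketch only shows how P(A|Q) falls out of 
being able to compute P(x).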

 2) you give only a few examples. For a proof of the claim,
 you have to prove it for every(!) y.

You originally allowed *any* y. To quote your earlier email:

  For instance, I doubt that anyone can prove that
  any system which understands natural language is
  necessarily able to solve
  the simple equation x *3 = y for a given y.

Anyway I did the experiment for y = 12. You can try the experiment for other 
values of y if you wish. Let me know what happens.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but
 you have not shown that
 3a) that a language understanding system necessarily(!) has
 this rules
 3b) that a language understanding system necessarily(!) can
 apply such rules

It must have the rules and apply them to pass the Turing test.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 7:47 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 The problem is to gradually improve overall causal model of
 environment (and its application for control), including language and
 dynamics of the world. Better model allows more detailed experience,
 and so through having a better inbuilt model of an aspect of
 environment, such as language, it's possible to communicate richer
 description of other aspects of environment. But it's not obvious that
 bandwidth of experience is the bottleneck here.

 No, but nor is it obvious that this *isn't* one of the major bottlenecks...


My intuition is that it's very easy to steadily increase the bandwidth of
experience: the more you know, the more you understand. If you start
from simple sensors/actuators (or even chess or Go), progress is
gradual and open-ended.



 It's probably just
 limitations of the cognitive algorithm that simply can't efficiently
 improve its model, and so feeding it more experience through tricks
 like this is like trying to get a hundredfold speedup in the
 O(log(log(n))) algorithm by feeding it more hardware.

 Hard to say...

 Remember, we humans have a load of evolved inductive bias for
 understanding human language ... AGIs don't ... so using Lojban
 to talk to an AGI could be a way to partly make up for this deficit in
 inductive bias...


Any language at all is a way of increasing experiential bandwidth
about the environment. If bandwidth isn't essential, bootstrapping this
process through a language is equally irrelevant. At some point,
however inefficiently, a language can be learned if the system allows
open-ended learning.

This is a question of not doing premature optimization of a program
that is not even designed yet, let alone implemented and
profiled.


 It should be
 possible to get a proof-of-concept level results about efficiency
 without resorting to Cycs and Lojbans, and after that they'll turn out
 to be irrelevant.

 Cyc and Lojban are not comparable: one is a knowledge base, the other
 is a language

 Cyc-L and Lojban are more closely comparable, though still very different
 because Lojban allows for more ambiguity (as well as Cyc-L level precision,
 depending on speaker's choice) ... and of course Lojban is intended for
 interactive conversation rather than knowledge entry


(as tools towards improving bandwidth of experience, they do the same thing)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] constructivist issues

2008-10-22 Thread Abram Demski
Too many responses for me to comment on everything! So, sorry to those
I don't address...

Ben,

When I claim a mathematical entity exists, I'm saying loosely that
meaningful statements can be made using it. So, I think meaning is
more basic. I mentioned already what my current definition of meaning
is: a statement is meaningful if it is associated with a computable
rule of deduction that it can use to operate on other (meaningful)
statements. This is in contrast to positivist-style definitions of
meaning, that would instead require a computable test of truth and/or
falsehood.

So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a statement if we not only understand it, but proceed to
apply its deductive procedure.

There is of course some basic level of meaningful statements, such as
sensory observations, so that this is a working recursive definition
of meaning and truth.
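
To make that concrete, here is a purely illustrative Python sketch of the
definition (the raven example is invented; the point is only the shape of the
recursion):

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Statement:
    text: str
    # Procedural meaning: a computable rule mapping another statement to the
    # further statements this one licenses (possibly none).
    deduce: Callable[["Statement"], List["Statement"]]

def observation(text):
    # Base level: a sensory observation licenses no further deductions by itself.
    return Statement(text, lambda other: [])

def all_ravens_are_black():
    # A universal statement: from "x is a raven" it deduces "x is black".
    def rule(other):
        if other.text.endswith(" is a raven"):
            subject = other.text[: -len(" is a raven")]
            return [observation(subject + " is black")]
        return []
    return Statement("all ravens are black", rule)

s = all_ravens_are_black()
print([d.text for d in s.deduce(observation("Huginn is a raven"))])   # ['Huginn is black']

The base level here is the observation constructor; everything above it gets
its meaning only from the computable rule it carries.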

By this definition of meaning, any statement in the arithmetical
hierarchy is meaningful (because each statement can be represented by
computable consequences on other statements in the arithmetical
hierarchy). I think some hyperarithmetical truths are captured as
well. I am more doubtful about it capturing anything beyond the first
level of the analytic hierarchy, and general set-theoretic discourse
seems far beyond its reach. Regardless, the definition of meaning
makes a very large number of uncomputable truths nonetheless
meaningful.

Russel,

I think both Ben and I would approximately agree with everything you
said, but that doesn't change our disagreeing with each other :).

Mark,

Good call... I shouldn't be talking like I think it is terrifically
unlikely that some more-intelligent alien species would find humans
mathematically crude. What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.

--Abram




Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
This is the standard Lojban dictionary

http://jbovlaste.lojban.org/

I am not so worried about word meanings, they can always be handled via
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by
using rarer, less ambiguous words

Prepositions are more worrisome, however, I suppose they can be handled in a
similar way, e.g. by defining an ontology of preposition meanings like
with_1, with_2, with_3, etc.

In fact we had someone spend a couple months integrating existing resources
into a preposition-meaning ontology like this a while back ... the so-called
PrepositionWordNet ... or as it eventually came to be called the LARDict or
LogicalArgumentRelationshipDictionary ...

I think it would be feasible to tweak RelEx to recognize these sorts of
subscripts, and in this way to recognize a highly controlled English that
would be unproblematic to map semantically...

We would then say e.g.

I ate dinner with_2 my fork

I live in_2 Maryland

I have lived_6 for_3 41 years

(where I suppress all _1's, so that e.g. ate means ate_1)

Because, RelEx already happily parses the syntax of all simple sentences, so
the only real hassle to deal with is disambiguation.   We could use similar
hacking for reference resolution, temporal sequencing, etc.

The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 urinated
in_3 my yard.

I think this would be a relatively pain-free way to communicate with an AI
that lacks the common sense to carry out disambiguation and reference
resolution reliably.   Also, the log of communication would provide a nice
training DB for it to use in studying disambiguation.
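
As a rough illustration of how little machinery the reading side needs (this is
just a toy sketch, not RelEx code, and it only handles numeric subscripts), a
controlled-English sentence can be read into (word, sense) pairs like so:

import re

TOKEN = re.compile(r"([A-Za-z]+)(?:_(\d+))?")

def read_controlled_english(sentence):
    # Unsubscripted words default to sense 1, so "ate" means "ate_1".
    return [(word.lower(), int(sense) if sense else 1)
            for word, sense in TOKEN.findall(sentence)]

print(read_controlled_english("I ate dinner with_2 my fork"))
# [('i', 1), ('ate', 1), ('dinner', 1), ('with', 2), ('my', 1), ('fork', 1)]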

-- Ben G


On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

   IMHO that is an almost hopeless approach, ambiguity is too integral to
 English or any natural language ... e.g preposition ambiguity
 Actually, I've been making pretty good progress.  You just always use big
 words and never use small words and/or you use a specific phrase as a
 word.  Ambiguous prepositions just disambiguate to one of
 three/four/five/more possible unambiguous words/phrases.

 The problem is that most previous subsets (Simplified English, Basic
 English) actually *favored* the small tremendously over-used/ambiguous words
 (because you got so much more bang for the buck with them).

 Try only using big unambiguous words and see if you still have the same
 opinion.

  If you want to take this sort of approach, you'd better start with
 Lojban instead  Learning Lojban is a pain but far less pain than you'll
 have trying to make a disambiguated subset of English.

 My first reaction is . . . . Take a Lojban dictionary and see if you can
 come up with an unambiguous English word or very short phrase for each
 Lojban word.  If you can do it, my approach will work and will have the
 advantage that the output can be read by anyone (i.e. it's the equivalent of
 me having done it in Lojban and then added a Lojban - English translation
 on the end) though the input is still *very* problematical (thus the need
 for a semantically-driven English-subset translator).  If you can't do it,
 then my approach won't work.

 Can you do it?  Why or why not?  If you can, do you still believe that my
 approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does*
 attempt to come up with an unambiguous English word or very short phrase for
 each Lojban word.  :-)

 Actually, hmm . . . . a Lojban dictionary would probably help me focus my
 efforts a bit better and highlight things that I may have missed . . . . do
 you have a preferred dictionary or resource?  (Google has too many for me to
 do a decent perusal quickly)



 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Wednesday, October 22, 2008 11:11 AM
 *Subject:* Re: [agi] constructivist issues





 Personally, rather than starting with NLP, I think that we're going to
 need to start with a formal language that is a disambiguated subset of
 English



 IMHO that is an almost hopeless approach, ambiguity is too integral to
 English or any natural language ... e.g preposition ambiguity

 If you want to take this sort of approach, you'd better start with Lojban
 instead  Learning Lojban is a pain but far less pain than you'll have
 trying to make a disambiguated subset of English.

 ben g





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able 

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel

 So, a statement is meaningful if it has procedural deductive meaning.
 We *understand* a statement if we are capable of carrying out the
 corresponding deductive procedure. A statement is *true* if carrying
 out that deductive procedure only produces more true statements. We
 *believe* a statement if we not only understand it, but proceed to
 apply its deductive procedure.


OK, then according to your definition, Godel's Theorem says that if humans
are computable there are some things that we cannot understand ... just
as, for any computer program, there are some things it can't understand.

It just happens that according to your definition, a computer system can
understand some fabulously uncomputable entities.  But there's no
contradiction
there.

Just like a human can, a digital theorem prover can understand some
uncomputable entities in the sense you specify...

ben g





Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Ben Goertzel

 
 Not everything that is a necessary capability of a completed human-level,
 roughly human-like AGI, is a sensible first step toward a human-level,
 roughly human-like AGI

 

 This is surely true.  But let's say someone wants to develop a car. Doesn't
 it makes sense first to develop and test its essential parts before I put
 everything together and go to the road?


Yes, and we are of course doing that


 I think chess is a good testing area


I strongly disagree...



 If your system sees no patterns in chess, then I would doubt whether it is
 really suitable for AGI.




I strongly suspect that OpenCog ... once more of the NM tools are ported to
it (e.g. the completion of the backward chainer port) ... could learn to
play chess legally but not very well.   To get it to play really well would
probably require either a lot of specialized hacking with inference control,
or a broader AGI approach going beyond the chess domain... or a lot more
advancement of the learning mechanisms (along lines already specified in the
OCP design)  To me, teaching OpenCog to play chess poorly would prove
almost nothing.  And getting it to play chess well via tailoring the
inference control mechanisms would prove little that's relevant to AGI,
though it would be cool.



 Ok. I do not say that your approach is wrong. In fact I think it is very
 interesting and ambitious. But just as you think that my approach is not the best
 one, I think that your approach is not the best one.  Probably the
 discussion could be endless. And probably you have already invested too much
 effort in your approach to really consider changing it. I hope
 you are right, because I would be very happy to see the first AGI soon,
 regardless of who builds it and which concept is used.

I would change my approach if I thought there were a better one.  But you
haven't convinced me, just as I haven't convinced you ;-)

Anyway, to take your approach I would not need to change my AGI design at
all: OCP could be pursued in the domain of learning to play chess.  I just
don't think that's the best choice.

BTW, if I were going to pursue a board game I'd choose Go not chess ... at
least it hasn't been solved by narrow-AI very well yet ... so a really good
OpenCog-based Go program would have more sex appeal ... there has not been a
Deep Blue of Go

My son is a good Go player so maybe I'll talk him into trying this one day
;-)

ben g





Re: [agi] constructivist issues

2008-10-22 Thread Abram Demski
Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED] wrote:
 It looks like all this disambiguation by moving to a more formal
 language is about sweeping the problem under the rug, removing the
 need for uncertain reasoning from surface levels of syntax and
 semantics, to remember about it 10 years later, retouch the most
 annoying holes with simple statistical techniques, and continue as
 before.

 That's an excellent criticism but not the intent.

 Godel's Incompleteness Theorem means that you will be forever building . . .
 .

 All that disambiguation does is provides a solid, commonly-agreed upon
 foundation to build from.

 English and all natural languages are *HARD*.  They are not optimal for
 simple understanding particularly given the realms we are currently in and
 ambiguity makes things even worse.

 Languages have so many ambiguities because of the way that they (and
 concepts) develop.  You see something new, you grab the nearest analogy and
 word/label and then modify it to fit.  That's why you then later need the
 much longer words and very specific scientific terms and names.

 Simple language is what you need to build the more specific complex
 language.  Having an unambiguous constructed language is simply a template
 or mold that you can use as scaffolding while you develop NLU.  Children
 start out very unambiguous and concrete and so should we.

 (And I don't believe in statistical techniques unless you have the resources
 of Google or AIXI)








Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.


I like the phrase "logically complete".

The way that I like to think about it is that we have the necessary seed of 
whatever intelligence/competence is, a seed that can be logically extended to cover 
all circumstances.


We may not have the personal time or resources to do so but given infinite 
time and resources there is no block on the path from what we have to 
getting there.


Note, however, that it is my understanding that a number of people on this 
list do not agree with this statement (feel free to chime in with your 
reasons why, folks).



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:20 PM
Subject: Re: [agi] constructivist issues



Too many responses for me to comment on everything! So, sorry to those
I don't address...

Ben,

When I claim a mathematical entity exists, I'm saying loosely that
meaningful statements can be made using it. So, I think meaning is
more basic. I mentioned already what my current definition of meaning
is: a statement is meaningful if it is associated with a computable
rule of deduction that it can use to operate on other (meaningful)
statements. This is in contrast to positivist-style definitions of
meaning, that would instead require a computable test of truth and/or
falsehood.

So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a statement if we not only understand it, but proceed to
apply its deductive procedure.

There is of course some basic level of meaningful statements, such as
sensory observations, so that this is a working recursive definition
of meaning and truth.

By this definition of meaning, any statement in the arithmetical
hierarchy is meaningful (because each statement can be represented by
computable consequences on other statements in the arithmetical
hierarchy). I think some hyperarithmetical truths are captured as
well. I am more doubtful about it capturing anything beyond the first
level of the analytic hierarchy, and general set-theoretic discourse
seems far beyond its reach. Regardless, the definition of meaning
makes a very large number of uncomputable truths nonetheless
meaningful.

Russel,

I think both Ben and I would approximately agree with everything you
said, but that doesn't change our disagreeing with each other :).

Mark,

Good call... I shouldn't be talking like I think it is terrifically
unlikely that some more-intelligent alien species would find humans
mathematically crude. What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.

--Abram










AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Dr. Matthias Heger
You make the implicit assumption that a natural language understanding
system will pass the Turing test. Can you prove this?

Furthermore, it is just an assumption that the ability to have and to apply
the rules is really necessary to pass the Turing test.

For these two reasons, you still haven't shown 3a and 3b.

By the way:
To pass the Turing test, a system must convince 30% of the people.
Today there is already a system which can convince 25%:

http://www.sciencedaily.com/releases/2008/10/081013112148.htm

-Matthias


 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but
 you have not shown that
 3a) that a language understanding system necessarily(!) has
 this rules
 3b) that a language understanding system necessarily(!) can
 apply such rules

It must have the rules and apply them to pass the Turing test.

-- Matt Mahoney, [EMAIL PROTECTED]







Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I think this would be a relatively pain-free way to communicate with an AI 
 that lacks the common sense to carry out disambiguation and reference 
 resolution reliably.   Also, the log of communication would provide a nice 
 training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently.  If 
I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with Simplified 
English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled in a 
similar way, e.g. by defining an ontology of preposition meanings like with_1, 
with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing resources 
into a preposition-meaning ontology like this a while back ... the so-called 
PrepositionWordNet ... or as it eventually came to be called the LARDict or 
LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, so 
the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 urinated 
in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an AI 
that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

 IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big 
words and never use small words and/or you use a specific phrase as a word.  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

 If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can 
come up with an unambiguous English word or very short phrase for each Lojban 
word.  If you can do it, my approach will work and will have the advantage that 
the output can be read by anyone (i.e. it's the equivalent of me having done it 
in Lojban and then added a Lojban - English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English-subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to 
need to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g 

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
All theorems in the same formal system are equivalent anyways ;-)

On Wed, Oct 22, 2008 at 1:03 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 What, then, do you make of my definition? Do you think deductive
 consequence is insufficient for meaningfulness?

 I am not sure exactly where you draw the line as to what is really
 meaningful (as in finite collections of finite statements about
 finite-precision measurements) and what is only indirectly meaningful
 by its usefulness (as in differential calculus). Perhaps any universal
 statements are only meaningful by usefulness?

 Also, it seems like when you say Godel's Incompleteness, you mean
 Tarski's Undefinability? (Can't let the theorems be misused!)

 About the theorem prover; yes, absolutely, so long as the mathematical
 entity is understandable by the definition I gave. Unfortunately, I
 still have some work to do, because as far as I can tell that
 definition does not explain how uncountable sets are meaningful...
 (maybe it does and I am just missing something...)

 --Abram

 On Wed, Oct 22, 2008 at 12:30 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
 
  So, a statement is meaningful if it has procedural deductive meaning.
  We *understand* a statement if we are capable of carrying out the
  corresponding deductive procedure. A statement is *true* if carrying
  out that deductive procedure only produces more true statements. We
  *believe* a statement if we not only understand it, but proceed to
  apply its deductive procedure.
 
  OK, then according to your definition, Godel's Theorem says that if
 humans
  are computable there are some things that we cannot understand ... just
  as, for any computer program, there are some things it can't understand.
 
  It just happens that according to your definition, a computer system can
  understand some fabulously uncomputable entities.  But there's no
  contradiction
  there.
 
  Just like a human can, a digital theorem prover can understand some
  uncomputable entities in the sense you specify...
 
  ben g
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
Also, I don't prefer to define meaning the way you do ... so clarifying
issues with your definition is your problem, not mine!!



On Wed, Oct 22, 2008 at 1:03 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 What, then, do you make of my definition? Do you think deductive
 consequence is insufficient for meaningfulness?

 I am not sure exactly where you draw the line as to what is really
 meaningful (as in finite collections of finite statements about
 finite-precision measurements) and what is only indirectly meaningful
 by its usefulness (as in differential calculus). Perhaps any universal
 statements are only meaningful by usefulness?

 Also, it seems like when you say Godel's Incompleteness, you mean
 Tarski's Undefinability? (Can't let the theorems be misused!)

 About the theorem prover; yes, absolutely, so long as the mathematical
 entity is understandable by the definition I gave. Unfortunately, I
 still have some work to do, because as far as I can tell that
 definition does not explain how uncountable sets are meaningful...
 (maybe it does and I am just missing something...)

 --Abram

 On Wed, Oct 22, 2008 at 12:30 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
 
  So, a statement is meaningful if it has procedural deductive meaning.
  We *understand* a statement if we are capable of carrying out the
  corresponding deductive procedure. A statement is *true* if carrying
  out that deductive procedure only produces more true statements. We
  *believe* a statement if we not only understand it, but proceed to
  apply its deductive procedure.
 
  OK, then according to your definition, Godel's Theorem says that if
 humans
  are computable there are some things that we cannot understand ... just
  as, for any computer program, there are some things it can't understand.
 
  It just happens that according to your definition, a computer system can
  understand some fabulously uncomputable entities.  But there's no
  contradiction
  there.
 
  Just like a human can, a digital theorem prover can understand some
  uncomputable entities in the sense you specify...
 
  ben g
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Douglas Hofstadter's newest book "I Am A Strange Loop" (currently available 
from Amazon for $7.99 - 
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) has 
an excellent chapter showing Godel in syntax and semantics.  I highly 
recommend it.


The upshot is that while it is easily possible to define a complete formal 
system of syntax, that formal system can always be used to convey something 
(some semantics) that is (are) outside/beyond the system -- OR, to 
paraphrase -- meaning is always incomplete because it can always be added to 
even inside a formal system of syntax.


This is why I contend that language translation ends up being AGI-complete 
(although bounded subsets clearly don't need to be -- the question is 
whether you get a usable/useful subset more easily with or without first 
creating a seed AGI).


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED] wrote:

It looks like all this disambiguation by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . 
. .

.

All that disambiguation does is provides a solid, commonly-agreed upon
foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for
simple understanding particularly given the realms we are currently in 
and

ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and
concepts) develop.  You see something new, you grab the nearest analogy 
and

word/label and then modify it to fit.  That's why you then later need the
much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex
language.  Having an unambiguous constructed language is simply a 
template

or mold that you can use as scaffolding while you develop NLU.  Children
start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the 
resources

of Google or AIXI)















RE: [agi] Who is smart enough to answer this question?

2008-10-22 Thread Ed Porter
Vlad,

 

Thanks for your below reply to my prior email of Tue 10/21/2008 7:08 PM

 

I agreed with most of your reply.  There are only two major issues upon
which I wanted further confirmation, clarification, or comment.

 

 

 

1. WHY C(N,S) IS DIVIDED BY T(N,S,O) TO FORM A LOWER BOUND FOR A(N,S,O)

 

You have stated that C(N,S) / T(N,S,O) is a lower bound for the value
A(N,S,O), where A is the number of sets of length S (which I will also refer
to as assemblies) that can be formed from N nodes, where none of the A
assemblies overlap the population of any other of such assemblies by an
overlap of O or larger.

 

I want to see if my understanding of this is correct.  I understand what
C(N,S) is.  Thanks to your explanations I think I understand what T(N,S,O)
is.  And as is explained below under heading 2, I think I understand why
T(N,S,O) is likely to over count the number of assemblies that impermissibly
overlap the population of any allowed assembly that is counted as part of A.

 

But until a few minutes ago I didn't understand the mathematical basis for 

 

A = C(N,S) / T(N,S,O)

 

I understood why T(N,S,O) could be subtracted from C(N,S), but not why it
should be its divisor.

 

Now I think I do.  PLEASE CONFIRM IF MY NEW UNDERSTANDING IS CORRECT.  

 

For purposes of simplicity, until I state otherwise let us assume there is
no multiple counting of excluded assemblies in the calculation of T(N,S,O),
and, thus, that T(N,S,O) is exact.

 

As I understand your argument you are saying every time you increase the
count of allowable assemblies by 1, you increase the count of unallowable
assemblies --- i.e., those having an impermissible overlap --- by T.

 

Thus if you had A allowable assemblies, you would have A x T unallowable
assemblies and, thus,

 

A + A x T = C(N,S)

 

This says all the allowable and unallowable sets of length S would equal the
total number of different possible sets of length S that can be made from N
elements.

 

If one solves this for A one gets

 

A(1 + T) = C(N,S)

 

A = C(N,S) / (1 + T)

 

And since T is normally much, much larger than 1, we can forget the 1 to
give your formula

 

A = C(N,S) / T

 

Now let's take into account the fact that it appears T(N,S,O) is normally
larger than the number of actual sets excluded by the addition of each
allowable set.  That makes the A computed from the above equation smaller than
it should be, and thus changes the equation to 

A >= C(N,S) / T

 

Which is equivalent to saying C(N,S) / T is a lower bound for A, just as
you have been saying.

 

IS THIS EXPLANATION CORRECT?
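
Here is a quick numeric sketch of this reasoning in Python (purely
illustrative, using your formula for T(N,S,O) from heading 2 below):

from math import comb

def T(N, S, O):
    # (Over)count of size-S sets sharing an overlap of O or more elements
    # with one given allowed assembly of size S.
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

def lower_bound_A(N, S, O):
    # A + A*T = C(N,S)  ==>  A = C(N,S) / (1 + T), which is roughly C(N,S) / T
    return comb(N, S) // (1 + T(N, S, O))

N, S, O = 100, 10, 3
print(comb(N, S), T(N, S, O), lower_bound_A(N, S, O))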

 

 

 

2. THE SOURCE OF THE OVER COUNTING ASSOCIATED WITH T(S,N,O)

 

I think I understand why there would be over counting in the formula
T(N,S,O) used in your formula A = C(N,S) / T(N,S,O). It appears to result
from the fact that --- when you calculate T for a given allowable assembly
of length S using the formula:

 

T(N,S,O) = SUM FROM X = O TO S OF C(S,X)*C(N-S,S-X)

 

to estimate the number T of possible assemblies that have impermissible
overlap with the given allowable assembly --- it would appear that some
assemblies counted as excluded by an iteration of T with a smaller value of
X would also be counted as excluded in other iterations having a larger
value of X.  This is because all the overlapping sub-combinations C(S,X)
that would occur for a smaller value of X, would also occur as part of one
or more of the sub-combinations C(S,X) that would occur for a larger value
of X.  Thus, T would create a number that is larger than the actual number
of assemblies that would have impermissible overlap with a given allowable
assembly.  

 

IS THIS CORRECT?

IS THERE ANY OTHER SOURCE OF OVERCOUNTING?

 

I thought there might also be over counting because it appears --- as is
stated under heading 1 --- that in the formula A = C(N,S) / T(N,S,O), T is
implicitly calculated for each allowable assembly in A, and I thought there
might be overlap between the count T of excluded assemblies made for
different allowable assemblies.  

 

But it now appears to me that since none of the allowable assemblies share
any overlaps X of length O to S with any other allowable set, it would
appear that there would be no overlap between any of the T(N,S,O) assemblies
counted as being unallowable for any first allowed assembly and those
calculated as unallowable for any second allowed assembly.  That is, none of
the C(S,X) sub-combinations, with X >= O, that could be made from any first
allowable assembly of length S, would be shared with any other allowable
assemblies, meaning that none of the assemblies excluded by the calculation
of T for one allowable assembly, could be included in the calculation of T
for another allowable assembly.  

 

IS THIS CORRECT?

 

=

 

Finally, I have to ask if you came up with the equation A = C(N,S) / T
yourself, or if you got it from some other source (and if so which source).
I AM VERY THANKFUL IF YOU FOUND IT FROM ANOTHER 

Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Well, I am confident my approach with subscripts to handle disambiguation 
 and reference resolution would work, in conjunction with the existing 
 link-parser/RelEx framework...
 If anyone wants to implement it, it seems like just some hacking with the 
 open-source Java RelEx code...

Like what I called a "semantically-driven English-subset translator"?  

Oh, I'm pretty confident that it will work as well . . . . after the La Brea tar 
pit of implementations . . . . (exactly how little semantic-related coding do 
you think will be necessary? ;-)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 1:06 PM
  Subject: Re: [OpenCog] Re: [agi] constructivist issues



  Well, I am confident my approach with subscripts to handle disambiguation and 
reference resolution would work, in conjunction with the existing 
link-parser/RelEx framework...

  If anyone wants to implement it, it seems like just some hacking with the 
open-source Java RelEx code...

  ben g


  On Wed, Oct 22, 2008 at 12:59 PM, Mark Waser [EMAIL PROTECTED] wrote:

 I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently. 
 If I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with 
Simplified English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled 
in a similar way, e.g. by defining an ontology of preposition meanings like 
with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because RelEx already happily parses the syntax of all simple sentences, 
the only real hassle to deal with is disambiguation.  We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.
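
  As a minimal sketch of that convention (Python; the regex and function name 
are my own illustration, not RelEx code), a subscripted sentence could be 
split into (word, sense) pairs, with the suppressed subscript defaulting to 
sense 1:

import re

# Tokenize controlled English of the form "word_sense"; a bare word
# defaults to sense "1", matching the suppressed _1 convention above.
TOKEN = re.compile(r"([A-Za-z']+)(?:_(\w+))?")

def parse_controlled(sentence):
    return [(word, sense or "1") for word, sense in TOKEN.findall(sentence)]

print(parse_controlled("I ate dinner with_2 my fork"))
# [('I', '1'), ('ate', '1'), ('dinner', '1'), ('with', '2'), ('my', '1'), ('fork', '1')]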

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

 IMHO that is an almost hopeless approach, ambiguity is too integral 
to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use 
big words and never use small words and/or you use a specific phrase as a 
word.  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  
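
A minimal sketch of that idea (Python; the particular prepositions, sense 
labels, and replacement phrases below are invented for illustration, not an 
actual word list):

# Each ambiguous preposition maps to a few unambiguous replacements;
# the writer of the controlled subset picks one instead of the bare word.
PREPOSITION_SENSES = {
    "with": ["by_means_of", "accompanied_by", "having_the_attribute"],
    "in":   ["located_inside", "within_the_timespan_of"],
    "for":  ["intended_to_benefit", "for_a_duration_of"],
}

def candidate_replacements(preposition):
    # Return the unambiguous choices a writer must pick from.
    return PREPOSITION_SENSES.get(preposition, [preposition])

print(candidate_replacements("with"))
# ['by_means_of', 'accompanied_by', 'having_the_attribute']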

 If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you 
can come up with an unambiguous English word or very short phrase for each 
Lojban word.  If you can do it, my approach will work and will have the 
advantage that the output can be read by anyone (i.e. it's the equivalent 

AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
 

 

It depends on what to play chess poorly means. No one would expect a
general AGI architecture to outperform specialized chess programs given the
same computational resources. I think you could convince a lot of people if
you demonstrated that your approach, which is obviously completely different
from brute-force chess, can learn chess to a moderate level, say that of an
average 10-year-old human chess player.

 

At least when your OpenCog roadmap is between the artificial child and
artificial adult phases, your system should necessarily be able to learn
chess without any special hacking of hidden chess knowledge. 

 

BTW, computer Go is already not so bad:

 

 
http://www.engadget.com/2008/08/15/supercomputer-huygens-beats-go-professional-no-one-is-safe/

 

- Matthias

 

Ben wrote:


I strongly suspect that OpenCog ... once more of the NM tools are ported to
it (e.g. the completion of the backward chainer port) ... could learn to
play chess legally but not very well.   To get it to play really well would
probably require either a lot of specialized hacking with inference control,
or a broader AGI approach going beyond the chess domain... or a lot more
advancement of the learning mechanisms (along lines already specified in the
OCP design)  To me, teaching OpenCog to play chess poorly would prove
almost nothing.  And getting it to play chess well via tailoring the
inference control mechanisms would prove little that's relevant to AGI,
though it would be cool.

 

Ok, I do not say that your approach is wrong. In fact I think it is very
interesting and ambitious. But just as you think my approach is not the best
one, I think your approach is not the best one. The discussion could probably
be endless. And you have probably already invested too much effort in your
approach to seriously consider changing it. I hope you are right, because I
would be very happy to see the first AGI soon, regardless of who builds it
and which concept is used.

I would change my approach if I thought there were a better one.  But you
haven't convinced me, just as I haven't convinced you ;-)

Anyway, to take your approach I would not need to change my AGI design at
all: OCP could be pursued in the domain of learning to play chess.  I just
don't think that's the best choice.

BTW, if I were going to pursue a board game I'd choose Go not chess ... at
least it hasn't been solved by narrow-AI very well yet ... so a really good
OpenCog-based Go program would have more sex appeal ... there has not been a
Deep Blue of Go

My son is a good Go player so maybe I'll talk him into trying this one day
;-)

ben g
 

 



Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
A couple of distinctions that I think would be really helpful for this 
discussion . . . . 

There is a profound difference between learning to play chess legally and 
learning to play chess well.

There is an equally profound difference between discovering how to play chess 
well and being taught to play chess well.

Personally, I think that a minimal AGI should be able to be taught to play 
chess reasonably well (i.e. about how well an average human would play after 
being taught the rules and playing a reasonable number of games with 
hints/pointers/tutoring provided) at about the same rate as a human when given 
the same assistance as that human.

Given that grandmasters don't learn solely from chess-only examples without 
help or without analogies and strategies from other domains, I don't see why an 
AGI should be forced to operate under those constraints.  Being taught is much 
faster and easier than discovering on your own.  Translating an analogy or 
transferring a strategy from another domain is much faster than discovering 
something new or developing something from scratch.  Why are we crippling our 
AGI in the name of simplicity?

(And Go is obviously the same)





Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Ben Goertzel

  Mathematics, though, is interesting in other ways.  I don't believe that
 much of mathematics involves the logical transformations performed in
 proof steps.  A system that invents new fields of mathematics, new terms,
 new mathematical ideas -- that is truly interesting.  Inference control is
 boring, but inventing mathematical induction, complex numbers, or ring
 theory -- THAT is AGI-worthy.

 Is this different from generic concept formulation and explanation (just in
 a slightly different domain)?


No system can make those kinds of inventions without sophisticated inference
control.  Concept creation of course is required also, though.

-- Ben





Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Matt Mahoney
--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 You make the implicit assumption that a natural language
 understanding system will pass the turing test. Can you prove this?

If you accept that a language model is a probability distribution over text, 
then I have already proved something stronger. A language model exactly 
duplicates the distribution of answers that a human would give. The output is 
indistinguishable by any test. In fact a judge would have some uncertainty 
about other people's language models. A judge could be expected to attribute 
some errors in the model to normal human variation.
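
Put symbolically (my gloss on the claim above, not Matt's notation): if for
every prompt x and reply y

\[
P_{\text{model}}(y \mid x) \;=\; P_{\text{human}}(y \mid x),
\]

then the induced distributions over whole transcripts coincide, so no judge,
whatever statistic they compute from the transcript, can distinguish the two
better than chance.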

 Furthermore,  it is just an assumption that the ability to
 have and to apply
 the rules are really necessary to pass the turing test.
 
 For these two reasons, you still haven't shown 3a and
 3b.

I suppose you are right. Instead of encoding mathematical rules as a grammar, 
with enough training data you can just code all possible instances that are 
likely to be encountered. For example, instead of a grammar rule to encode the 
commutative law of addition,

  a + b = b + a   (so that, e.g., 5 + 3 = 3 + 5)

a model with a much larger training data set could just encode instances with 
no generalization:

  12 + 7 = 7 + 12
  92 + 0.5 = 0.5 + 92
  etc.
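
A toy contrast between the two encodings (Python; the memorized pairs and
function names are illustrative only):

# Rule-based: one general schema covers every instance.
def commutes_by_rule(a, b):
    return a + b == b + a                 # instance of the schema a + b = b + a

# Instance-based: a finite table of memorized equalities, no generalization.
MEMORIZED = {("12 + 7", "7 + 12"), ("92 + 0.5", "0.5 + 92")}

def commutes_by_lookup(lhs, rhs):
    return (lhs, rhs) in MEMORIZED        # fails on anything never seen before

print(commutes_by_rule(5, 3))                 # True for any pair of numbers
print(commutes_by_lookup("5 + 3", "3 + 5"))   # False: this instance was never memorized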

I believe this is how Google gets away with brute-force n-gram statistics 
instead of more sophisticated grammars. Its language model is probably 10^5 
times larger than a human model (10^14 bits vs 10^9 bits). Shannon observed in 
1949 that random strings generated by n-gram models of English (where n is the 
number of either letters or words) look like natural language up to length 2n. 
For a typical human-sized model (1 GB text), n is about 3 words. To model 
strings longer than 6 words we would need more sophisticated grammar rules. 
Google can model 5-grams (see 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
 ), so it is able to generate and recognize (thus appear to understand) 
sentences up to about 10 words. 
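
A minimal word-level n-gram sketch of the kind of model being described
(Python; the toy corpus and names are mine, and this is nothing like Google's
actual infrastructure):

import random
from collections import defaultdict

def train_ngrams(words, n=3):
    # Map each (n-1)-word context to the words observed to follow it.
    table = defaultdict(list)
    for i in range(len(words) - n + 1):
        table[tuple(words[i:i + n - 1])].append(words[i + n - 1])
    return table

def generate(table, seed, max_words=12):
    out = list(seed)
    for _ in range(max_words):
        followers = table.get(tuple(out[-len(seed):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish".split()
model = train_ngrams(corpus, n=3)
print(generate(model, seed=("the", "cat")))
# Locally fluent continuations; coherence fades beyond roughly 2n words,
# as in Shannon's observation above.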

 By the way:
 The turing test must convince 30% of the people.
 Today there is a system which can already convince 25%
 
 http://www.sciencedaily.com/releases/2008/10/081013112148.htm

It would be interesting to see a version of the Turing test where the human 
confederate, machine, and judge all have access to a computer with an internet 
connection. I wonder if this intelligence augmentation would make the test 
easier or harder to pass?

 
 -Matthias
 
 
  3) you apply rules such as 5 * 7 = 35 => 35 / 7 = 5, but you have not
  shown that
  3a) that a language understanding system necessarily(!) has these rules
  3b) that a language understanding system necessarily(!) can apply such rules
 
 It must have the rules and apply them to pass the Turing
 test.
 
 -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] constructivist issues

2008-10-22 Thread Abram Demski
Mark,

I own and have read the book -- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel, Escher, Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest the chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness theorem.

It is important to distinguish between two different types of incompleteness.

1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
semantics is used. If a logic's provably-true statements don't match
up to its semantically-true statements, it is incomplete.
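
For reference, the standard statement (my paraphrase of the textbook result,
not Abram's wording): for any theory T that is consistent, recursively
axiomatizable, and strong enough to interpret basic arithmetic, there is a
sentence G_T with

\[
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T ,
\]

yet G_T is true in the standard model of arithmetic -- i.e. the provably-true
sentences form a proper subset of the semantically-true ones, which is exactly
the mismatch described above.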

However, it seems like all you need is type 1 incompleteness for what
you are saying. Nobody claims that there is a complete, well-defined
semantics for natural language against which we could measure the
provably-true (whatever THAT would mean).

So, Godel's theorem is way overkill here in my opinion.

--Abram

On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Most of what I was thinking of and referring to is in Chapter 10, Gödel's
 Quintessential Strange Loop (pages 125-145 in my version), but I would
 suggest that you really need to read the shorter Chapter 9, Pattern and
 Provability (pages 113-122), first.

 I actually had them conflated into a single chapter in my memory.

 I think that you'll enjoy them tremendously.

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, October 22, 2008 4:19 PM
 Subject: Re: [agi] constructivist issues


 Mark,

 Chapter number please?

 --Abram

 On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Douglas Hofstadter's newest book I Am A Strange Loop (currently available
 from Amazon for $7.99 -
 http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM)
 has
 an excellent chapter showing Godel in syntax and semantics.  I highly
 recommend it.

 The upshot is that while it is easily possible to define a complete
 formal
 system of syntax, that formal system can always be used to convey
 something
 (some semantics) that is (are) outside/beyond the system -- OR, to
 paraphrase -- meaning is always incomplete because it can always be added
 to
 even inside a formal system of syntax.

 This is why I contend that language translation ends up being
 AGI-complete
 (although bounded subsets clearly don't need to be -- the question is
 whether you get a usable/useful subset more easily with or without first
 creating a seed AGI).

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, October 22, 2008 12:38 PM
 Subject: Re: [agi] constructivist issues


 Mark,

 The way you invoke Godel's Theorem is strange to me... perhaps you
 have explained your argument more fully elsewhere, but as it stands I
 do not see your reasoning.

 --Abram

 On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED]
 wrote:

 It looks like all this disambiguation by moving to a more formal
 language is about sweeping the problem under the rug, removing the
 need for uncertain reasoning from surface levels of syntax and
 semantics, to remember about it 10 years later, retouch the most
 annoying holes with simple statistical techniques, and continue as
 before.

 That's an excellent criticism but not the intent.

 Godel's Incompleteness Theorem means that you will be forever building . . . .

 All that disambiguation does is provides a solid, commonly-agreed upon
 foundation to build from.

 English and all natural languages are *HARD*.  They are not optimal for
 simple understanding particularly given the realms we are currently in
 and
 ambiguity makes things even worse.

 Languages have so many ambiguities because of the way that they (and
 concepts) develop.  You see something new, you grab the nearest analogy
 and
 word/label and then modify it to fit.  That's why you then later need
 the
 much longer words and very specific scientific terms and names.

 Simple language is what you need to build the more specific complex
 language.  Having an unambiguous constructed language is simply a
 template
 or mold that you can use as scaffolding while you develop NLU. Children
 start out very unambiguous and concrete and so should we.

 (And I don't believe in statistical techniques unless you have the
 resources
 of Google or AIXI)






 

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-22 Thread Ben Goertzel
[Usual disclaimer: this is not the approach I'm taking, but I don't find it
stupid]

The idea is that by teaching an AI in a minimally-ambiguous language, one
can build up its commonsense understanding such that it can then deal with
the ambiguities of natural language better, using this understanding...

Just because Cyc failed doesn't mean teaching a system using Lojban would
necessarily fail.  Lojban is a lot more interesting than Cyc-L because it
can tractably be used by people to informally chat with AI's, just as can a
natural language...

For instance, one could chat in Lojban with an embodied AI system, and it
would then get strong symbol groundings for its Lojban ;-)

ben g

On Wed, Oct 22, 2008 at 9:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Why would anyone use a simplified or formalized English (with regular
 grammar and no ambiguities) as a path to natural language understanding?
 Formal language processing has nothing to do with natural language
 processing other than sharing a common lexicon that makes them appear
 superficially similar.

 - Natural language can be learned from examples. Formal language can not.
 - Formal language has an exact grammar and semantics. Natural language does
 not.
 - Formal language must be parsed before it can be understood. Natural
 language must be understood before it can be parsed.
 - Formal language is designed to be processed efficiently on a fast,
 reliable, sequential computer that neither makes nor tolerates errors,
 between systems that have identical, fixed language models. Natural language
 evolved to be processed efficiently by a slow, unreliable, massively
 parallel computer with enormous memory in a noisy environment between
 systems that have different but adaptive language models.

 So how does yet another formal language processing system help us
 understand natural language? This route has been a dead end for 50 years, in
 spite of the ability to always make some initial progress before getting
 stuck.

 -- Matt Mahoney, [EMAIL PROTECTED]


Re: Lojban (was Re: [agi] constructivist issues)

2008-10-22 Thread Trent Waddington
On Thu, Oct 23, 2008 at 11:23 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 So how does yet another formal language processing system help us understand
 natural language? This route has been a dead end for 50 years, in spite of
 the ability to always make some initial progress before getting stuck.

Although I mostly agree with you, I do often think that humans
understand formal languages very differently to, say, compilers (if
they can be said to understand them at all) and I think it is
interesting to study how one might build an AGI system that
understands formal languages the way humans do.  I have no idea
whether it is easier to do this with formal languages than it is to do
this with natural languages.

Trent

