Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Solomonoff Induction is not well-defined because it is either incomputable
or absurdly irrelevant.  This is where the communication breaks down.  I
have no idea why you would make a remark like that.  It is interesting that
you are an incremental-progress guy.



On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,


 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space merit mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full program
 space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


 A polynomial-time SAT algorithm would certainly be a major breakthrough for AI
 and computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.
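
 As a point of reference (an illustrative sketch, not anything proposed in
 this thread): the backtracking DPLL-style search that real SAT solvers
 refine is worst-case exponential in the number of variables, which is why a
 polynomial-time algorithm would be such a breakthrough. The clause encoding
 below (lists of signed integers) is assumed for the example.

 # A formula is a list of clauses; a clause is a list of nonzero ints:
 # k means "variable k is true", -k means "variable k is false".
 def dpll(clauses, assignment=None):
     if assignment is None:
         assignment = {}
     simplified = []
     for clause in clauses:
         # Drop clauses already satisfied under the partial assignment.
         if any(assignment.get(abs(l)) == (l > 0) for l in clause):
             continue
         # Prune literals already falsified by the partial assignment.
         rest = [l for l in clause if abs(l) not in assignment]
         if not rest:
             return None  # empty clause: contradiction, backtrack
         simplified.append(rest)
     if not simplified:
         return assignment  # every clause satisfied
     var = abs(simplified[0][0])  # branch on an unassigned variable
     for value in (True, False):
         result = dpll(simplified, {**assignment, var: value})
         if result is not None:
             return result
     return None

 # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
 print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # e.g. {1: True, 3: True, 2: False}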

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.com wrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. "It approximates
 Solomonoff induction" appears to be a good _explanation_ for the success of
 many systems.
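
 A concrete toy of the two-part MDL scheme described above (my illustration;
 the per-parameter bit cost and the Gaussian residual code are assumed, not
 Abram's construction): choose the polynomial degree minimizing
 L(model) + L(data | model).

 import numpy as np

 def mdl_degree(xs, ys, max_degree=8, bits_per_param=16):
     # Two-part code: model cost grows with degree, data cost shrinks
     # as the fit improves; MDL picks the degree minimizing the sum.
     n = len(xs)
     best_total, best_d = None, None
     for d in range(max_degree + 1):
         coeffs = np.polyfit(xs, ys, d)
         rss = float(np.sum((np.polyval(coeffs, xs) - ys) ** 2))
         model_bits = bits_per_param * (d + 1)               # L(model)
         data_bits = 0.5 * n * np.log2(max(rss, 1e-12) / n)  # L(data | model)
         if best_total is None or model_bits + data_bits < best_total:
             best_total, best_d = model_bits + data_bits, d
     return best_d

 rng = np.random.default_rng(0)
 xs = np.linspace(-1.0, 1.0, 50)
 ys = 3 * xs**2 - xs + rng.normal(0.0, 0.1, size=50)  # noisy quadratic
 print(mdl_degree(xs, ys))  # typically selects degree 2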

 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space merit mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing you
 have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore, I
 was just agreeing with what you said, and pointing out that the statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






[agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread deepakjnath
I wanted to know if there is any benchmark test that can really convince a
majority of today's AGIers that a System is true AGI?

Is there some real prize like the XPrize for AGI or AI in general?

thanks,
Deepak





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Panu Horsmalahti


Have you heard about the Turing test?

- Panu Horsmalahti





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread deepakjnath
Yes, but is there a competition like the XPrize or something that we can
work towards?





-- 
cheers,
Deepak





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
not really.







Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Abram Demski
Jim,

I think you are using a different definition of well-defined :). I am
saying Solomonoff induction is totally well-defined as a mathematical
concept. You are saying it isn't well-defined as a computational entity.
These are both essentially true.
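
For reference, the standard formulation (not quoted from the thread): with U a
universal prefix machine, the Solomonoff prior weights each program by two to
the minus its length,

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where the sum ranges over programs p whose output begins with the string x, and
sequence prediction uses the ratio M(xb)/M(x). Summing over the full program
space is exactly what makes M well-defined mathematically yet incomputable:
deciding which programs halt with output extending x is undecidable.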

Why you might insist that program-space is not well-defined, on the other
hand, I do not know.

--Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic




Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread A. T. Murray
Deepak wrote on Sun, 18 Jul 2010:

 I wanted to know if there is any benchmark test 
 that can really convince a majority of today's AGIers 
 that a System is true AGI?

Obvious AGI functionality is the default test for AGI.

http://www.scn.org/~mentifex/AiMind.html 
is an incipient AGI with slowly accreting
AGI functionality and with easy accessibility
due to its running in the MSIE browser.


 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak

As others on the AGI list have pointed out, 
there may not yet be such an AGI Prize, but 
it would be easy to create one and announce it in

http://groups.google.com/group/comp.programming.contests

on Usenet. Meanwhile, in other A(G)I news, someone is
creating an AI Cookbook in wiki format, with e.g.

http://aicookbook.com/wiki/AiMind

as a stub added yesterday by

Yours Truly,

ATM/Mentifex




[agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
I realised that what is needed is a *joint* definition *and*  range of tests of 
AGI.

Benjamin Johnston has submitted one valid test - the toy box problem. (See 
archives).

I have submitted another still simpler valid test - build a rock wall from 
the rocks given (or fill an earth hole with rocks).

However, I see that there are no valid definitions of AGI that explain what AGI 
is generally, and why these tests are indeed AGI. Google - there are v. few 
defs. of AGI or Strong AI, period.

The most common - "AGI is human-level intelligence" - is an embarrassing 
non-starter - what distinguishes human intelligence? No explanation offered.

The other two are also inadequate if not as bad: Ben's "solves a variety of 
complex problems in a variety of complex environments". Nope, so does a 
multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's - 
something to do with insufficient knowledge and resources...
"Insufficient" is open to narrow AI interpretations and reducible to 
mathematically calculable probabilities or uncertainties. That doesn't 
distinguish AGI from narrow AI.

The one thing we should all be able to agree on (but who can be sure?) is that:

** an AGI is a general intelligence system, capable of independent learning**

i.e. capable of independently learning new activities/skills with minimal 
guidance or even, ideally, with zero guidance (as humans and animals are) - and 
thus acquiring a general, all-round range of intelligence.

This is an essential AGI goal -  the capacity to keep entering and mastering 
new domains of both mental and physical skills WITHOUT being specially 
programmed each time - that crucially distinguishes it from narrow AI's, which 
have to be individually programmed anew for each new task. Ben's AGI dog 
exemplified this in a v simple way -  the dog is supposed to be able to learn 
to fetch a ball, with only minimal instructions, as real dogs do - they can 
learn a whole variety of new skills with minimal instruction.  But I am 
confident Ben's dog can't actually do this.

However, the independent learning def., while focussing on the distinctive AGI 
goal, still is not detailed enough by itself.

It requires further identification of the **cognitive operations** which 
distinguish AGI,  and wh. are exemplified by the above tests.

[I'll stop there for interruptions/comments & continue another time].

 P.S. Deepakjnath,

It is vital to realise that the overwhelming majority of AGI-ers do not *want* 
an AGI test - Ben has never gone near one, and is merely typical in this 
respect. I'd put almost all AGI-ers here in the same league as the US banks, 
who only want mark-to-fantasy rather than mark-to-market tests of their assets.




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
I said: does talk about "the full program space" merit mentioning?
Solomonoff Induction is not "totally well-defined as a mathematical
concept," as you said it was.
In both of these instances you used qualifications of excess: "totally
well-defined" and "full." It would be like me saying that because your
thesis is wrong in a few ways, your thesis is "totally wrong in full concept
space," or something like that.
Jim Bromer







Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
So if I have a system that is close to AGI, I have no way of really knowing
it, right?

Even if I believe that my system is a true AGI, there is no way of convincing
the others irrefutably that this system is indeed an AGI and not just an
advanced AI system.

I have read the toy box problem and the rock wall problem, but I am sure not
many people will be convinced.

I wanted to know if there is any consensus on a general problem which
can be solved, and only solved, by a true AGI. Without such a test bench, how
will we know if we are moving closer to or further from our quest? There is no
map.

Deepak







-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
If you can't convince someone, clearly something is wrong with it. I don't
think a test is the right way to do this, which is why I haven't commented
much. When you understand how to create AGI, it will be obvious that it is
AGI or that it is what you intend it to be. You'll then understand how what
you have built fits into the bigger scheme of things. There is no clear point
at which you can say something is AGI versus not AGI. Intelligence is a
very subjective thing that really depends on your goals. Someone will always
say it is not good enough. But if it really works, people will quickly
realize it based on results.

What you want is to develop a system that can learn about the world or its
environment in a general way so that it can solve arbitrary problems, be
able to plan in general ways, act in general ways and perform the types of
goals you want it to perform.

Dave






Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Matt Mahoney
http://www.loebner.net/Prizef/loebner-prize.html

 -- Matt Mahoney, matmaho...@yahoo.com







---
agi
Archives: 

Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Please explain/expound freely why you're not convinced - and indicate what 
you expect - and I'll reply - but it may not be till tomorrow.

Re your last point, there def. is no consensus on a general problem/test OR a 
def. of AGI.  

One flaw in your expectations seems to be a desire for a single test -  almost 
by definition, there is no such thing as 

a) a single test - i.e. there should be at least a dual or serial test - having 
passed any given test, like the rock/toy test, the AGI must be presented with a 
new adjacent test for wh. it has had no preparation, like say building with 
cushions or sand bags or packing with fruit (and neither the rock nor the toy 
test states that clearly)

b) one kind of test - this is an AGI, so it should be clear that if it can pass 
one kind of test, it has the basic potential to go on to many different kinds, 
and it doesn't really matter which kind of test you start with - that is partly 
the function of having a good definition of AGI.






Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
Let me clarify. As you all know, there are some things computers are good at
doing and some things that humans can do but a computer cannot.

One of the tests that I was thinking about recently is to show two movies
to the AGI. Both movies will have the same story, but one would be a totally
different remake of the film, probably in a different language and setting.
If the AGI is able to understand the sub-plot and say that the story line is
similar in the two movies, then it could be a good test for AGI structure.

The ability of a system to understand its environment and underlying sub-plots
is an important requirement of AGI.

Deepak
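
For a sense of how far current tooling is from that test, a toy baseline (my
illustration; the plot strings are invented): bag-of-words cosine similarity
between two one-sentence plot summaries. It measures only lexical overlap and
would score near zero across languages - exactly the gap the proposed test
probes.

from collections import Counter
import math
import re

def bag(text):
    # Lowercased word counts; a crude stand-in for plot understanding.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

plot_a = "a poor young dancer wins the contest but loses his love"
plot_b = "an impoverished dancer triumphs in the contest yet loses her lover"
print(round(cosine(bag(plot_a), bag(plot_b)), 3))  # modest lexical overlap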


Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
Deepak,

I think you would be much better off focusing on something more practical.
Understanding a movie and all the myriad things going on, their
significance, etc... that's AI-complete. There is no way you are going to
get there without a hell of a lot of steps in between. So you might as well
focus on the steps required to get there. Such a test is so complicated
that you cannot even start, except by looking for simpler test cases and goals.


My approach to testing AGI has been to define what AGI must accomplish,
which I have done in the following steps:
1) understand the environment
2) understand one's own actions and how they affect the environment
3) understand language
4) learn goals from other people through language
5) perform planning and attempt to achieve goals
6) other miscellaneous requirements.

Each step must be accomplished in a general way. By general, I mean that it
can solve many, many problems with the same programming.

Each step must be done in order because each step requires previous steps to
proceed. So, to me, the most important place to start is general environment
understanding.

Then, now that you know where to start, you pick more specific goals and
test cases. How do you develop and test general environment understanding?
What is a simple test case you can develop on? What are the fundamental
problems and principles involved? What is required to solve these problems?

Those are the sorts of tests you should be considering. But that only comes
after you decide what AGI requires and the steps required. Maybe you'll agree
with me, maybe you won't. So, that's how I would recommend going about it.

Dave


Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Ian Parker
In my view the main obstacle to AGI is the understanding of Natural
Language. If we have NL comprehension we have the basis for doing a whole
host of marvellous things.

There is the Turing test. A good question to ask is "What is the difference
between laying concrete at 50C and fighting Israel?" Google translated "wsT
jw AlmErkp" (وسط جو المعركة) as "central air battle". Correct is "the
climatic environmental battle", or a freer translation would be "the
battle against climate and environment". In Turing competitions no one ever
asks the questions that really would tell AGI apart from a brand X
chatterbox.

http://sites.google.com/site/aitranslationproject/Home/formalmethods

We can I think say that anything which can carry out the program of my blog
would be well on its way. AGI will also be the link between NL and
formal mathematics. Let me take yet another example.

http://sites.google.com/site/aitranslationproject/deepknowled

Google translated it as "4 times the temperature". Ponder this: you have in
fact 3 chances to get this right.

1) درجة means "degree". GT has not translated this word. In this context it
means "power".

2) If you search for "Stefan Boltzmann" or "Black Body", Google gives you the
correct law.

3) The translation is obviously mathematically incorrect from the
dimensional standpoint (see the law written out below).
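
For reference, the law in question (standard physics, not quoted from the
post) is the Stefan-Boltzmann law: a black body radiates power per unit area

    j = \sigma T^4,   \sigma \approx 5.67 \times 10^{-8} \ \mathrm{W\,m^{-2}\,K^{-4}}

so the correct reading is the fourth *power* of the temperature. "4 times the
temperature" has the wrong dimensions: W m^-2 cannot equal K.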

These three things in fact represent different aspects of knowledge. In AGI they
all have to be present.

The other interesting point is that there are programs in existence now that
will address the last two questions. A translator that produces OWL solves
(2).

If we match up AGI to Mizar we can put dimensions into the proof engine.

There are a great many things on the Web which will solve specific problems.
NL is *THE* problem since it will allow navigation between the different
programs on the Web.

MOLTO, BTW, does have its mathematical parts even though it is primarily
billed as a translator.


  - Ian Parker







Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
Ian,

Although most people see natural language as one of the most important parts
of AGI, if you think about it carefully, you'll realize that solving natural
language could be done with sufficient knowledge of the world and sufficient
ability to learn this knowledge automatically. That's why I don't consider
natural language a problem we can focus on until we solve the knowledge
problem... which is what I'm focusing on.

Dave

2010/7/18 Ian Parker ianpark...@gmail.com

 In my view the main obstacle to AGI is the understanding of Natural
 Language. If we have NL comprehension we have the basis for doing a whole
 host of marvellous things.

 There is the Turing test. A good question to ask is What is the difference
 between laying concrete at 50C and fighting Israel. Google translated wsT
 jw AlmErkp or وسط جو المعركة  as central air battle. Correct is the
 climatic environmental battle or a more free translation would be the
 battle against climate and environment. In Turing competitions no one ever
 asks the questions that really would tell AGI apart from a brand X
 chatterbox.

 http://sites.google.com/site/aitranslationproject/Home/formalmethods

 http://sites.google.com/site/aitranslationproject/Home/formalmethodsWe
 can I think say that anything which can carry out the program of my blog
 would be well on its way. AGI will also be the link between NL and
 formal mathematics. Let me take yet another example.

 http://sites.google.com/site/aitranslationproject/deepknowled

 Google translated it as 4 times the temperature. Ponder this, you have in
 fact 3 chances to get this right.

 1)  درجة means degree. GT has not translated this word. In this context it
 means power.

 2) If you search for "Stefan Boltzmann" or "Black Body", Google gives you
 the correct law.

 3) The translation is obviously mathematically incorrect from the
 dimensional standpoint.

 These three things in fact represent different aspects of knowledge. In AGI
 they all have to be present.
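
A minimal sketch, in Python, of the dimensional check in (3), assuming units
are tracked as exponent vectors; the Quantity class and the unit encoding are
invented for illustration, not part of any translator discussed here. The
point is that the "4 times the temperature" reading fails the
Stefan-Boltzmann dimension test mechanically:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quantity:
        value: float
        dims: tuple  # unit exponents over (kg, m, s, K)

        def __mul__(self, other):
            return Quantity(self.value * other.value,
                            tuple(a + b for a, b in zip(self.dims, other.dims)))

        def __pow__(self, n):
            return Quantity(self.value ** n, tuple(a * n for a in self.dims))

    DIMENSIONLESS = (0, 0, 0, 0)
    WATTS_PER_M2 = (1, 0, -3, 0)                  # W m^-2 reduces to kg s^-3

    SIGMA = Quantity(5.670e-8, (1, 0, -3, -4))    # Stefan-Boltzmann constant, W m^-2 K^-4
    T = Quantity(300.0, (0, 0, 0, 1))             # a temperature, in kelvin

    correct = SIGMA * T ** 4                              # "temperature to the fourth power"
    garbled = SIGMA * (Quantity(4.0, DIMENSIONLESS) * T)  # "4 times the temperature"

    print(correct.dims == WATTS_PER_M2)   # True: consistent with a radiant flux
    print(garbled.dims == WATTS_PER_M2)   # False: reject this translation

Any system that carries dimensions through the parse, as the Mizar suggestion
below would, gets this rejection for free.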

 The other interesting point is that there are programs in existence now
 that will address the last two questions. A translator that produces OWL
 solves (2).

 If we match up AGI to Mizar we can put dimensions into the proof engine.

 There are a great many things on the Web which will solve specific
 problems. NL is *THE* problem since it will allow navigation between the
 different programs on the Web.

 MOLTO, BTW, does have its mathematical parts even though it is primarily
 billed as a translator.


   - Ian Parker







Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Jeez, no AI program can understand *two* consecutive *sentences* in a text -
can understand any text, period - can understand language, period. And you want
an AGI that can understand a *story*. You don't seem to understand that that
requires, cognitively, a fabulous, massively evolved, highly educated, hugely
complex set of powers.

No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

You don't seem to realise that we can't take the smallest AGI *step* yet - and
you're fantasising about a superevolved AGI globetrotter.

That's why Benjamin & I tried to focus on v. v. simple tests - & they're still
way too complex, & they (or comparable tests) will have to be refined down
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

I recommend looking at Packbots and other military robots and hospital robots 
and the like, and asking how we can free them from their human masters and give 
them the very simplest of capacities to rove and handle the world independently 
- like handling and travelling on rocks. 

Anyone dreaming of computers or robots that can follow Gone with the Wind or
become a (real) child scientist in the foreseeable future, pace Ben, has no
realistic understanding of what is involved.

From: deepakjnath 
Sent: Sunday, July 18, 2010 9:04 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Let me clarify. As you all know, there are some things computers are good at
doing and some things that humans can do but a computer cannot.

One of the tests that I was thinking about recently is to show two movies to
the AGI. Both movies would have the same story, but one would be a totally
different remake of the film, probably in a different language and setting. If
the AGI is able to understand the subplot and say that the storyline is
similar in the two movies, then that could be a good test for AGI.

The ability of a system to understand its environment and underlying subplots
is an important requirement of AGI.

Deepak


On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Please explain/expound freely why you're not convinced - and indicate what
you expect - and I'll reply - but it may not be till tomorrow.

  Re your last point, there def. is no consensus on a general problem/test OR a 
def. of AGI.  

  One flaw in your expectations seems to be a desire for a single test -
almost by definition, there is no such thing as

  a) a single test - i.e. there should be at least a dual or serial test -
having passed any given test, like the rock/toy test, the AGI must be presented
with a new adjacent test for wh. it has had no preparation, like say
building with cushions or sandbags or packing with fruit (and neither the
rock nor the toy test states that clearly)

  b) one kind of test - this is an AGI, so it should be clear that if it can
pass one kind of test, it has the basic potential to go on to many different
kinds, and it doesn't really matter which kind of test you start with - that is
partly the function of having a good definition of AGI.


  From: deepakjnath 
  Sent: Sunday, July 18, 2010 8:03 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  So if I have a system that is close to AGI, I have no way of really knowing
it, right?

  Even if I believe that my system is a true AGI, there is no way of convincing
the others irrefutably that this system is indeed an AGI and not just an
advanced AI system.

  I have read the toy box problem and the rock wall problem, but I am sure not
many people will be convinced.

  I wanted to know if there is any consensus on a general problem which can be
solved, and only solved, by a true AGI. Without such a test bench, how will we
know if we are moving closer to or further from our goal? There is no map.

  Deepak




  On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

I realised that what is needed is a *joint* definition *and* range of
tests of AGI.

Benjamin Johnston has submitted one valid test - the toy box problem. (See
archives.)

I have submitted another, still simpler, valid test - build a rock wall from
the rocks given (or fill an earth hole with rocks).

However, I see that there are no valid definitions of AGI that explain what
AGI is generally, and why these tests are indeed tests of AGI. Google it -
there are v. few defs. of AGI or Strong AI, period.

The most common - "AGI is human-level intelligence" - is an embarrassing
non-starter - what distinguishes human intelligence? No explanation offered.

The other two are also inadequate, if not as bad. Ben's - "solves a variety of
complex problems in a variety of complex environments" - nope: so does a
multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -

Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
Oh, I wanted to add one thing that I've learned recently. The core problem
of AGI is to come up with hypotheses (hopefully the right hypothesis, or one
that is good enough, is included) and then determine whether a hypothesis is
1) acceptable and 2) better than other hypotheses. In addition, you have to
have a way to decide *when* to look for better hypotheses, because you can't
just always be looking at all possible hypotheses.

So, with that in mind, the reason that natural language can only be very
roughly approximated without a lot more knowledge is that there isn't
sufficient knowledge to say that one hypothesis is better than another in
the vast majority of cases. The AI doesn't have sufficient *reason* to think
that the right hypothesis is better than the others. The only way to give it
that sufficient reason is to give it sufficient knowledge.
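
A toy sketch, in Python, of the select-and-revisit loop described above; the
Hypothesis type, the support-counting score, and the thresholds are invented
stand-ins for whatever acceptability and comparison criteria a real system
would use:

    from dataclasses import dataclass, field

    @dataclass
    class Hypothesis:
        name: str
        explains: set = field(default_factory=set)  # observations it accounts for

    def score(h, knowledge):
        # Support = number of known facts the hypothesis explains.
        return len(h.explains & knowledge)

    def acceptable(h, knowledge, threshold=2):
        # Test 1: is the hypothesis acceptable at all?
        return score(h, knowledge) >= threshold

    def best(candidates, knowledge):
        # Test 2: is it better than the other hypotheses?
        viable = [h for h in candidates if acceptable(h, knowledge)]
        return max(viable, key=lambda h: score(h, knowledge), default=None)

    def should_search_again(current, runner_up, knowledge, margin=1):
        # Decide *when* to look for better hypotheses: only when the leader
        # is not clearly ahead, rather than re-enumerating everything each step.
        if current is None:
            return True
        lead = score(current, knowledge) - (score(runner_up, knowledge) if runner_up else 0)
        return lead < margin

    # Example: with little knowledge, candidates tie and the search never settles.
    knowledge = {"fact1", "fact2"}
    h1 = Hypothesis("h1", {"fact1", "fact2"})
    h2 = Hypothesis("h2", {"fact1", "fact2", "fact3"})
    winner = best([h1, h2], knowledge)
    print(should_search_again(winner, h2, knowledge))  # True: both score 2

With sparse knowledge most candidates tie on score and should_search_again
keeps firing - which is exactly the natural-language failure mode described
above.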

Dave









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Abram,
I was going to drop the discussion, but then I thought I figured out why you
kept trying to paper over the difference. Of course, our personal
disagreement is trivial; it isn't that important. But the problem with
Solomonoff Induction is that not only is the output hopelessly tangled and
seriously infinite, but the input is as well. The definition of "all
possible programs", like the definition of "all possible mathematical
functions", is not a proper mathematical problem that can be comprehended in
an analytical way. I think that is the part you haven't totally figured out
yet (if you will excuse the pun). "Total program space" does not represent
a comprehensible computational concept. When you try to find a way to work out
feasible computable examples, it is not enough to limit the output string
space; you HAVE to limit the program space in the same way. That second
limitation makes the entire concept of "total program space" much too
weak for our purposes. You seem to know this at an intuitive operational
level, but it seems to me that you haven't truly grasped the implications.

I say that Solomonoff Induction is computational, but I have to use a trick
to justify that remark. I think the trick may be acceptable, but I am not
sure. But the possibility that the concept of "all possible programs"
might be computational doesn't mean that it is a sound mathematical
concept. This underlies the reason that I intuitively came to the
conclusion that Solomonoff Induction was transfinite. However, I wasn't
able to prove it, because the hypothetical concept of "all possible program
space" is so pretentious that it does not lend itself to mathematical
analysis.

I just wanted to point this detail out because your implied view that you
agreed with me but "total program space" was mathematically well-defined
did not make any sense.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Matt Mahoney
Jim Bromer wrote:
 The definition of "all possible programs", like the definition of "all
 possible mathematical functions", is not a proper mathematical problem that
 can be comprehended in an analytical way.

Finding just the shortest program is close enough, because it dominates the
probability. Or which step in the proof of Theorem 1.7.2
in http://www.vetta.org/documents/disSol.pdf do you disagree with?
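
For reference, the standard statement behind "dominates the probability" -
textbook Solomonoff/Levin material added here for context, not a claim from
the thread - in LaTeX notation:

    M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-|p|} \;\ge\; 2^{-K(x)},
    \qquad
    K(x) \;=\; \min\{\, |p| : U(p)=x* \,\}

Here U(p) = x* means program p outputs a string beginning with x. The
shortest program contributes 2^{-K(x)} by itself, and the coding theorem
(-\log m(x) = K(x) + O(1) in the discrete case) says that single term
accounts for the whole sum up to a multiplicative constant - which is the
sense in which it "dominates".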

You have been saying that you think Solomonoff induction is wrong, but offering 
no argument except your own intuition. So why should we care?

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Steve Richfield
Deepak,

An intermediate step is the reverse Turing test (RTT), wherein people or
teams of people attempt to emulate an AGI. I suspect that from such a
competition would come a better idea as to what to expect from an AGI.

I have attempted in the past to drum up interest in an RTT, but so far, no
one seems interested.

Do you want to play a game?!

Steve

On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:

 I wanted to know if there is any benchmark test that can really convince
 the majority of today's AGIers that a system is a true AGI?

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak






Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Colin Hales

Try this one ...
http://www.bentham.org/open/toaij/openaccess2.htm
If the test subject can be a scientist, it is an AGI.
cheers
colin


Steve Richfield wrote:

Deepak,

An intermediate step is the reverse Turing test (RTT), wherein 
people or teams of people attempt to emulate an AGI. I suspect that 
from such a competition would come a better idea as to what to expect 
from an AGI.


I have attempted in the past to drum up interest in an RTT, but so far, 
no one seems interested.


Do you want to play a game?!

Steve

On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:


I wanted to know if there is any benchmark test that can really
convince the majority of today's AGIers that a system is a true AGI?

Is there some real prize like the XPrize for AGI or AI in general?

thanks,
Deepak








Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Abram Demski
Jim,

I'm still not sure what your point even is, which is probably why my
responses seem so strange to you. It still seems to me as if you are jumping
back and forth between different positions, like I said at the start of this
discussion.

You didn't answer why you think program space does not represent a
comprehensible concept. (I will drop the "full" if it helps...)

My only conclusion can be that you are (at least implicitly) rejecting some
classical mathematical principles and using your own very different notion
of which proofs are valid, which concepts are well-defined, et cetera.

(Or perhaps you just don't have a background in the formal theory of
computation?)

Also, not sure what difference you mean to say I'm papering over.

Perhaps it *is* best that we drop it, since neither one of us is getting
through to the other; but, I am genuinely trying to figure out what you are
saying...

--Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Michael Swan

Numbers combined together are a form of language that can form every
other language.

And...

If you insist on using a natural language, why don't you use the
language most natural to computers - i.e. code (which translates directly
to numbers - machine language...)?

Code is better because you can automatically test and then observe to see if
your new code combination works. It's also more pedantic and doesn't
allow ambiguity.
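
A tiny sketch, in Python, of that test-then-observe property, assuming
candidate code fragments arrive as strings and are scored against
input/output examples; the candidates and examples are invented for
illustration:

    # Code is directly testable: run each machine-generated fragment
    # against examples and keep only the ones that behave as required.

    CANDIDATES = [
        "lambda x: x + x",
        "lambda x: x * x",
        "lambda x: x ** 3",
    ]

    EXAMPLES = [(2, 4), (3, 9), (4, 16)]  # target behaviour: squaring

    def passes(src, examples):
        try:
            f = eval(src)  # acceptable in a toy; a real system would sandbox this
            return all(f(x) == y for x, y in examples)
        except Exception:
            return False

    survivors = [src for src in CANDIDATES if passes(src, EXAMPLES)]
    print(survivors)  # ['lambda x: x * x'] - no ambiguity about what works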






