Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath
Dave,

I agree completely with your point about having a general unifying system that
solves a simple problem. That system, when scaled, should be able to solve
all the other problems you were talking about.

How will we recognize the solution when we get it? I believe it will be
elegant and simple, and will address many problems rather than just one.

I disagree with breaking the problem up and looking at it step by step. That is
how we solve any problem logically. Millions of years of evolution have gone
into perfecting our minds and optimizing the brain. So following the normal
engineering way of breaking a big problem into small manageable problems
and working on them may take a long time, because when we optimize locally we
may find that globally the system is not optimized, and vice versa.

My approach is to look at the whole problem and find a simple solution
that will be the answer to many problems. We should use the superior processing
power of the subconscious to find a solution, the same way artists make their
creations.

I am in no way discounting the enormity of the challenge. But different
approaches are valid, and it would be arrogant to say that my approach is
superior to another. The more radically different approaches there are, the
better the chances of finding a solution.

Cheers,
Deepak






On Mon, Jul 19, 2010 at 1:43 AM, David Jones davidher...@gmail.com wrote:

 Deepak,

 I think you would be much better off focusing on something more practical.
 Understanding a movie and all the myriad things going on, their
 significance, etc... that's AI-complete. There is no way you are going to
 get there without a hell of a lot of steps in between. So, you might as well
 focus on the steps required to get there. Such a test is so complicated
 that you cannot even start, except to look for simpler test cases and goals.


 My approach to testing AGI has been to define what AGI must accomplish,
 which I have done in the following steps:
 1) understand the environment
 2) understand one's own actions and how they affect the environment
 3) understand language
 4) learn goals from other people through language
 5) perform planning and attempt to achieve goals
 6) other miscellaneous requirements.

 Each step must be accomplished in a general way. By general, I mean that it
 can solve many, many problems with the same programming.

 Each step must be done in order, because each step requires the previous
 steps to proceed. So, to me, the most important place to start is general
 environment understanding.

 Then, now that you know where to start, you pick more specific goals and
 test cases. How do you develop and test general environment understanding?
 What is a simple test case you can develop on? What are the fundamental
 problems and principles involved? What is required to solve these problems?

 Those are the sorts of tests you should be considering. But that only comes
 after you decide what AGI requires and the steps required. Maybe you'll agree
 with me, maybe you won't. So, that's how I would recommend going about it.

 Dave

 On Sun, Jul 18, 2010 at 4:04 PM, deepakjnath deepakjn...@gmail.com wrote:

 Let me clarify. As you all know, there are some things computers are good at
 doing and some things that humans can do but a computer cannot.

 One test that I have been thinking about recently is to show two movies to
 the AGI. Both movies would have the same story, but one would be a totally
 different remake of the other, probably in a different language and setting.
 If the AGI is able to understand the subplots and say that the storyline is
 similar in the two movies, then it could be a good test for AGI.

 The ability of a system to understand its environment and its underlying
 subplots is an important requirement of AGI.

 Deepak
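
 (As an evaluation protocol, the proposed test amounts to something like the
 following minimal sketch - Python, purely illustrative; summarize_plot and
 same_story are hypothetical interfaces standing in for capabilities no
 current system has:

     def cross_remake_test(agi, movie_a, movie_b):
         # The system watches two remakes of the same story, shot in
         # different languages and settings, and must judge whether
         # the underlying plots match.
         plot_a = agi.summarize_plot(movie_a)   # hypothetical interface
         plot_b = agi.summarize_plot(movie_b)   # hypothetical interface
         return agi.same_story(plot_a, plot_b)  # True for genuine remakes

 Passing would presuppose the environment- and language-understanding steps
 listed above, which is exactly why Dave argues it cannot be a starting
 point.)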

 On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Please explain/expound freely why you're not convinced - and indicate
 what you expect - and I'll reply, but it may not be till tomorrow.

 Re your last point, there definitely is no consensus on a general problem/test
 OR a definition of AGI.

 One flaw in your expectations seems to be a desire for a single test -
 almost by definition, there is no such thing as

 a) a single test - i.e. there should be at least a dual or serial test -
 having passed any given test, like the rock/toy test, the AGI must be
 presented with a new adjacent test for which it has had no preparation,
 like say building with cushions or sandbags, or packing with fruit (and
 neither the rock nor the toy test states that clearly);

 b) one kind of test - this is an AGI, so it should be clear that if it
 can pass one kind of test, it has the basic potential to go on to many
 different kinds, and it doesn't really matter which kind of test you start
 with - that is partly the function of having a good definition of AGI.



Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-19 Thread Ian Parker
What is the difference between laying concrete at 50°C and fighting
Israel? That is my question - my 2 pennyworth. Other people can elaborate.

If that question can be answered, you can have an automated advisor in B&Q.
Suppose I want to know about the characteristics of concrete. Of course, one
thing you could do is go to B&Q and ask them what they would be looking for
in an avatar.


  - Ian Parker

On 19 July 2010 02:43, Colin Hales c.ha...@pgrad.unimelb.edu.au wrote:

  Try this one ...
 http://www.bentham.org/open/toaij/openaccess2.htm
 If the test subject can be a scientist, it is an AGI.
 cheers
 colin


 Steve Richfield wrote:

 Deepak,

 An intermediate step is the reverse Turing test (RTT), wherein people or
 teams of people attempt to emulate an AGI. I suspect that from such a
 competition would come a better idea as to what to expect from an AGI.

 I have attempted in the past to drum up interest in an RTT, but so far, no
 one seems interested.

 Do you want to play a game?!

 Steve
 

 On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:

 I wanted to know if there is any benchmark test that can really convince
 the majority of today's AGIers that a system is a true AGI?

 Is there some real prize, like the XPrize, for AGI or AI in general?

 thanks,
 Deepak


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
No, Dave & I vaguely agree here that you have to start simple. To think of
movies is massively confused - rather like saying: when we have created an
entire new electric supply system for cars, we will have solved the problem of
replacing gasoline. First you have to focus just on inventing a radically
cheaper battery, before you consider the possibly hundreds to thousands of
associated inventions and innovations involved in creating a major new supply
system.

Here it would be much simpler to focus on understanding a single photographic
scene - or a real, directly-viewed scene - of objects, rather than the many
thousands involved in a movie.

In terms of language, it would be simpler to focus on understanding just two
consecutive sentences of a text or section of dialogue - or even, as I've
already suggested, just the flexible combinations of two words - rather than
the hundreds of lines and many thousands of words involved in a movie or play
script.

And even this is probably all too evolved, for humans only came to use formal
representations of the world very recently in evolution.

The general point - a massively important one - is that AGI-ers cannot
continue to think of AGI in terms of massively complex and evolved intelligent
systems, as you are doing. You have to start with the simplest possible systems
and gradually evolve them. Anything else is a defiance of all the laws of
technology - and will see AGI continuing to go absolutely nowhere.

From: deepakjnath 
Sent: Monday, July 19, 2010 5:19 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Exactly my point. So if I show a demo of an AGI system that can see two movies
and understand that their plots are the same, even though they are two
entirely different movies, you would agree that we have created a true AGI.

Yes, there are always a lot of things we need to do before we reach that level.
It's just good to know the destination, so that we will know it when it arrives.





On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Jeez, no AI program can understand *two* consecutive *sentences* in a text -
can understand any text, period - can understand language, period. And you want
an AGI that can understand a *story*. You don't seem to understand that this
requires, cognitively, a fabulous, massively evolved, highly educated, hugely
complex set of powers.

  No AI can understand a photograph of a scene, period - a crowd scene, a house
by the river. Programs are hard put to recognize any objects other than those
in very standard positions. And you want an AGI that can understand a *movie*.

  You don't seem to realise that we can't take the smallest AGI *step* yet -
and you're fantasising about a superevolved AGI globetrotter.

  That's why Benjamin & I tried to focus on very, very simple tests - they're
still way too complex & they (or comparable tests) will have to be refined down
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

  I recommend looking at Packbots and other military robots and hospital robots
and the like, and asking how we can free them from their human masters and give
them the very simplest of capacities to rove and handle the world independently
- like handling and travelling on rocks.

  Anyone dreaming of computers or robots that can follow Gone with the Wind,
or become a child (real) scientist in the foreseeable future, pace Ben, has no
realistic understanding of what is involved.


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
Abram,
I feel a responsibility to make an effort to explain myself when someone
doesn't understand what I am saying, but once I have gone over the material
sufficiently, if the person is still arguing with me about it, I will just
say that I have already explained myself in the previous messages. For
example, if you can point to some authoritative source outside the
Solomonoff-Kolmogorov crowd that agrees that full program space, as it
pertains to definitions like "all possible programs" or my example
of "all possible mathematical functions," represents a comprehensible
concept that is open to mathematical analysis, then tell me about it. We use
concepts like "the set containing sets that are not members of themselves"
as a philosophical tool that can lead to the discovery of errors in our
assumptions, and in this way such contradictions are of tremendous value.
The ability to use critical skills to find flaws in one's own presumptions
is critical to comprehension, and if that kind of critical thinking has
been turned off for some reason, then the consequences will be predictable.
I think compression is a useful field, but the idea of universal induction,
aka Solomonoff Induction, is garbage science. It was a good effort on
Solomonoff's part, but it didn't work, and it is time to move on, as the
majority of theorists have.
Jim Bromer
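
(For reference, the object under dispute has a standard textbook formulation:
the algorithmic prior over a universal prefix machine U - the usual
definition, not notation taken from this thread:

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

where the sum ranges over every program p whose output begins with the string
x, and |p| is the length of p in bits. The disagreement below is over whether
the domain of that sum, "all possible programs," is a mathematically tractable
object at all.)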

On Sun, Jul 18, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'm still not sure what your point even is, which is probably why my
 responses seem so strange to you. It still seems to me as if you are jumping
 back and forth between different positions, like I said at the start of this
 discussion.

 You didn't answer why you think "program space" does not represent a
 comprehensible concept. (I will drop the "full" if it helps...)

 My only conclusion can be that you are (at least implicitly) rejecting some
 classical mathematical principles and using your own very different notion
 of which proofs are valid, which concepts are well-defined, et cetera.

 (Or perhaps you just don't have a background in the formal theory of
 computation?)

 Also, I'm not sure what difference you mean to say I'm papering over.

 Perhaps it *is* best that we drop it, since neither one of us is getting
 through to the other; but I am genuinely trying to figure out what you are
 saying...

 --Abram

   On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.com wrote:

   Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference. Of course, our personal
 disagreement is trivial; it isn't that important. But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well. The definition of "all
 possible programs," like the definition of "all possible mathematical
 functions," is not a proper mathematical problem that can be comprehended in
 an analytical way. I think that is the part you haven't totally figured out
 yet (if you will excuse the pun). "Total program space" does not represent
 a comprehensible computational concept. When you try to find a way to work
 out feasible computable examples, it is not enough to limit the output
 string space; you HAVE to limit the program space in the same way. That
 second limitation makes the entire concept of "total program space" much too
 weak for our purposes. You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.

 I say that Solomonoff Induction is computational, but I have to use a trick
 to justify that remark. I think the trick may be acceptable, but I am not
 sure. But the possibility that the concept of "all possible programs"
 might be computational doesn't mean that it is a sound mathematical
 concept. This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite. However, I wasn't
 able to prove it, because the hypothetical concept of "all possible program
 space" is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out, because your implied view that you
 agreed with me but "total program space" was mathematically well-defined
 did not make any sense.
 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-19 Thread Mike Tintner
Ian: "Suppose I want to know about the characteristics of concrete."

You seem to think you can know about an object without ever having seen it or
physically interacted with it? As long as you have a set of words for the
world, you need never have actually experienced or been in the world?

You can fight Israel and lay concrete merely by manipulating words?







Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath
‘The intuitive mind is a sacred gift and the rational mind is a faithful
servant. We have created a society that honours the servant and has
forgotten the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap
in consciousness, call it intuition or what you will, and the solution comes
to you and you don’t know how or why.’

— Albert Einstein

We are talking here like programmers who need to build a new system: just
divide the problem, solve the pieces one by one, arrange them, and voila. We
are missing something fundamental here. That, I believe, has to come as a
stroke of genius to someone.

thanks,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Non-reply.

Name one industry or section of technology that began with, say, the invention
of the car, skipping all the many thousands of stages from the invention of the
wheel. What you and others are proposing is far, far more outrageous.

It won't require one stroke of genius but a million strokes of genius in one -
a stroke of divinity. More fantasy AGI.



Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy


 However, I see that there are no valid definitions of AGI that explain what
 AGI is generally, and why these tests are indeed AGI. Google - there are v.
 few defs. of AGI or Strong AI, period.

I like Fogel's idea that intelligence is "the ability to solve the problem
of how to solve problems in new and changing environments." I don't think
Fogel's method accomplishes this, but the goal he expresses seems to be the
goal of AGI as I understand it.

Rob





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Whaddya mean by "solve the problem of how to solve problems"? Develop a
universal approach to solving any problem? Or find a method of solving a class
of problems? Or what?







Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Well, solving ANY problem is a little too strong. This is AGI, not AGH
(artificial godhead), though AGH could be an unintended consequence ;). So
I would rephrase "solving any problem" as being able to come up with
reasonable approaches and strategies to any problem (just as humans are able
to do).







Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Fogel originally used the phrase to argue that evolutionary computation
makes sense as a cognitive architecture for a general-purpose AI problem
solver.









Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
OK, so you're saying: AGI is solving problems where you have to *devise* a
method of solution/solving the problem (and is that devising in effect, or
actually/formally?)

vs

narrow AI, which is where you *apply* a pre-existing method of solution/solving
the problem?

And are you happy with:

AGI is about devising *one-off* methods of problemsolving (that only apply to
the individual problem, and cannot be re-used - at least not in their totality)

vs

narrow AI is about applying pre-existing *general* methods of problemsolving
(applicable to whole classes of problems)?









Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
 And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply
 to the individual problem, and cannot be re-used - at least not in their
 totality)

Yes, exactly - isn't that what people do? Also, I think that being able to
recognize where past solutions can be generalized and where past solutions
can be varied and reused is a detail of how intelligence works that is
likely to be universal.

 vs

 narrow AI is about applying pre-existing *general* methods of
 problemsolving (applicable to whole classes of problems)?









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I checked the term "program space" and found a few authors who have used it,
but it seems to be an ad-hoc definition that is not widely used. It seems to
be an amalgamation of the term "sample space" with "the set of all programs,"
or something like that. Of course, the simple comprehension of the idea of
"all possible programs" is different from the pretense that all possible
programs could be comprehended through some kind of strategy of evaluating
all those programs. It would be like confusing a domain from mathematics
with a value or a possibly evaluable variable (one that can be assigned a
value from the domain). These type distinctions are necessary for logical
thinking about these things. The same kind of reasoning goes for Russell's
Paradox. While I can (with some thought) comprehend the definition and
understand the paradox, I cannot comprehend the set itself; that is, I
cannot comprehend the evaluation of the set. Such a thing doesn't make any
sense. It is odd that the set of all evaluable functions (or all programs)
is an inherent paradox when you try to think of it in terms of an
evaluable function (as if writing a program that produces all possible
programs were feasible). The oddness is due to the fact that there is
nothing that obviously leads to a paradox, and it is not easy to prove it
is a paradox (because it lacks the required definition). The only reason we
can give for the seeming paradox is that it is wrong to confuse the domain
of a mathematical definition with a value or values from the domain. While
this barrier can be transcended in some very special cases, it very
obviously cannot be ignored in the general case.
Jim Bromer
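
(The flavor of trouble described here can be made concrete with the classic
diagonal argument that no computable enumeration lists exactly the total
computable functions. A minimal, runnable toy version in Python - the finite
list of functions is illustrative only:

    # Take any list of total functions; the diagonal function differs
    # from each listed function at that function's own index, so it can
    # never itself appear in the list.
    funcs = [lambda n: 0, lambda n: n, lambda n: n * n]

    def diag(i):
        return funcs[i](i) + 1

    for i, f in enumerate(funcs):
        assert diag(i) != f(i)  # diag escapes every listed function

The same construction escapes any computable enumeration of total computable
functions, so no such enumeration exists. Enumerating all program *strings*
avoids this, but only by admitting programs that never halt, which is where
the evaluation problem described above comes back in.)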





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Yes, that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to
be equated with a "general method") - and are programmable (a program is a
method of solution)

AGI (and in fact all creative) problems do NOT have *a method of solution* (in
the general sense) - rather, a one-off *way of solving the problem* has to be
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There
is no (general) method of solving either the toy-box or the build-a-rock-wall
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative
solution, and apply the lessons to future problems - and indeed, with practice,
should improve at solving any given kind of AGI/creative problem. But you can
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of
*complete, step-by-step* methods of solution.








Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Matt Mahoney
Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.

I think a computer could do that.

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
The issue isn't what a computer can do. The issue is how you structure the
computer's - or any agent's - thinking about a problem. Programs/Turing
machines are only one way of structuring thinking/problemsolving - by, among
other things, giving the computer a method/process of solution. There is an
alternative way of structuring a computer's thinking, which includes, among
other things, not giving it a method/process of solution, but making it,
rather than a human programmer, do the real problemsolving. More of that
another time.







Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I made a remark about confusing a domain with its values that was wrong. What
I should have said is that you cannot just treat a domain of functions or of
programs as if it were a domain of numbers or values and expect its members
to act in ways that are familiar from the study of numbers.

Of course you can use any of the members of a domain of numbers or numerical
variables in evaluation methods, but when you try that with a domain of
functions, programs, or algorithms, you have to expect that you may get some
odd results.

I believe that since programs can be represented by strings, the Solomonoff
Induction of programs can be seen to be computable, because you can just
iterate through every possible program string. I believe the same thing
could be said of all possible Universal Turing Machines. If these two
statements are true, then I believe the program is both computable and
will create the situation of Cantor's diagonal argument. I believe that the
infinite sequences of Cantor's argument can be constructed by an infinite
computable program, and since the program can also act on the infinite
memory that Solomonoff Induction needs, Cantor's diagonal sequence can also
be constructed by a program. Since Solomonoff Induction is defined so that
it will use every possible program, this situation cannot be avoided.

Thus, Solomonoff Induction would be both computable and would produce
uncountable infinities of strings. When combined with the problem of
ordering the resulting strings in order to show how the functions might
approach stable limits for each probability - since you cannot a priori
determine the ordering of the programs that you would need for the
computation of these stable limiting probabilities - you would be confronted
with the higher-order infinity of all possible combinations of orderings of
the transfinite strings that the program would hypothetically produce.

Therefore, Solomonoff Induction is either incomputable, or else it cannot be
proven capable of avoiding the production of transfinite strings whose
ordering is so confused that they would be totally useless for any kind of
prediction of a string based on a given prefix, as is claimed. The system is
not any kind of ideal, but rather *a confused theoretical notion*.

I might be wrong. Or I might be right.

Jim Bromer
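
(The "iterate through every possible program string" step is itself
uncontroversial and easy to sketch. A minimal illustration in Python -
run_program is a hypothetical bounded interpreter, not anything specified in
this thread:

    from itertools import count, product

    def all_program_strings():
        # Enumerate every finite bit string in length order:
        # '', '0', '1', '00', '01', '10', '11', '000', ...
        for n in count(0):
            for bits in product('01', repeat=n):
                yield ''.join(bits)

    def dovetail(run_program, stages):
        # Dovetailing: at stage s, run each of the first s+1 strings
        # for s steps, so every (program, step-budget) pair is
        # eventually reached even though some programs never halt.
        programs = []
        gen = all_program_strings()
        for s in range(stages):
            programs.append(next(gen))
            for p in programs:
                run_program(p, s)  # hypothetical bounded interpreter

    # e.g. dovetail(lambda p, s: None, 10) exercises the schedule.

The enumeration is computable; the trouble described above enters afterwards,
because deciding which of these strings denote halting programs is
undecidable - which is why Solomonoff's mixture is standardly described as
lower-semicomputable rather than computable.)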


