Non-reply.

Name one industry or sector of technology that began with, say, the invention of the car, skipping all the many thousands of stages from the invention of the wheel. What you and others are proposing is far, far more outrageous.

It won't require one stroke of genius but a million strokes in one - a stroke of divinity. More fantasy AGI.


From: deepakjnath 
Sent: Monday, July 19, 2010 12:00 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


‘The intuitive mind is a sacred gift and the rational  mind is a faithful 
servant. We have created a society that honours the servant and has forgotten 
the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap in 
consciousness, call it intuition or what you will, and the solution comes to 
you and you don’t know how or why.’

— Albert Einstein

We are talking here like programmers who need to build a new system: just divide the problem, solve each piece one by one, arrange the pieces and voila. We are missing something fundamental here. That, I believe, has to come as a stroke of genius to someone.

thanks,
Deepak





On Mon, Jul 19, 2010 at 4:10 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

  No, Dave & I vaguely agree here that you have to start simple. To think of movies is massively confused - rather like saying: "when we have created an entire new electric supply system for cars, we will have solved the problem of replacing gasoline" - first you have to focus just on inventing a radically cheaper battery, before you consider the possibly hundreds to thousands of associated inventions and innovations involved in creating a major new supply system.

  Here it would be much simpler to focus on understanding a single photographic scene - or a real, directly viewed scene - of objects, rather than the many thousands of scenes involved in a movie.

  In terms of language, it would be simpler to focus on understanding just two consecutive sentences of a text or section of dialogue - or even, as I've already suggested, just the flexible combinations of two words - rather than the hundreds of lines and many thousands of words involved in a movie or play script.

  And even this is probably all too evolved, for humans only came to use formal representations of the world very recently in evolution.

  The general point - a massively important one - is that AGI-ers cannot continue to think of AGI in terms of massively complex and evolved intelligent systems, as you are doing. You have to start with the simplest possible systems and gradually evolve them. Anything else is a defiance of all the laws of technology - and will see AGI continuing to go absolutely nowhere.

  From: deepakjnath 
  Sent: Monday, July 19, 2010 5:19 AM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Exactly my point. So if I show a demo of an AGI system that can watch two movies and understand that the plots of the movies are the same even though they are two entirely different movies, you would agree that we have created a true AGI.

  Yes, there are always a lot of things we need to do before we reach that level. It's just good to know the destination so that we will know it when we arrive.





  On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

    Jeez, no AI program can understand *two* consecutive *sentences* in a text - can understand any text, period - can understand language, period. And you want an AGI that can understand a *story*. You don't seem to understand that this requires, cognitively, a fabulous, massively evolved, highly educated, hugely complex set of powers.

    No AI can understand a photograph of a scene, period - a crowd scene, a house by the river. Programs are hard put to recognize any objects other than those in very standard positions. And you want an AGI that can understand a *movie*.

    You don't seem to realise that we can't take the smallest AGI *step* yet - and you're fantasising about a superevolved AGI globetrotter.

    That's why Benjamin & I tried to focus on very, very simple tests - & they're still way too complex, & they (or comparable tests) will have to be refined down considerably for anyone who is interested in practical vs. sci-fi fantasy AGI.

    I recommend looking at PackBots and other military robots, hospital robots and the like, and asking how we can free them from their human masters and give them the very simplest of capacities to rove and handle the world independently - like handling and travelling on rocks.

    Anyone dreaming of computers or robots that can follow "Gone with the Wind" or become a (real) child scientist in the foreseeable future, pace Ben, has no realistic understanding of what is involved.

    From: deepakjnath 
    Sent: Sunday, July 18, 2010 9:04 PM
    To: agi 
    Subject: Re: [agi] Of definitions and tests of AGI


    Let me clarify. As you all know, there are some things computers are good at doing and some things that humans can do but a computer cannot.

    One of the tests I was thinking about recently is to show two movies to the AGI. Both movies would have the same story, but one would be a totally different remake of the other, probably in a different language and setting. If the AGI is able to understand the subplot and say that the storyline is similar in the two movies, then it could be a good test for AGI.

    The ability of a system to understand its environment and underlying subplots is an important requirement of AGI.

    Deepak


    On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

      Please explain/expound freely why you're not "convinced" - and indicate what you expect - and I'll reply, but it may not be till tomorrow.

      Re your last point, there definitely is no consensus on a general problem/test OR a definition of AGI.

      One flaw in your expectations seems to be a desire for a single test - almost by definition, there is no such thing as

      a) a single test - i.e. there should be at least a dual or serial test - having passed any given test, like the rock or toy test, the AGI must be presented with a new "adjacent" test for which it has had no preparation, like, say, building with cushions or sandbags, or packing with fruit (and neither the rock nor the toy test states that clearly).

      b) one kind of test - this is an AGI, so it should be clear that if it can pass one kind of test, it has the basic potential to go on to many different kinds, and it doesn't really matter which kind of test you start with - that is partly the function of having a good definition of AGI.


      From: deepakjnath 
      Sent: Sunday, July 18, 2010 8:03 PM
      To: agi 
      Subject: Re: [agi] Of definitions and tests of AGI


      So if I have a system that is close to AGI, I have no way of really knowing it, right?

      Even if I believe that my system is a true AGI, there is no way of convincing others irrefutably that this system is indeed an AGI and not just an advanced AI system.

      I have read the toy box problem and the rock wall problem, but I am sure not many people will be convinced by them.

      I wanted to know whether there is any consensus on a general problem which can be solved, and only solved, by a true AGI. Without such a test bench, how will we know if we are moving closer to or further from our goal? There is no map.

      Deepak




      On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

        I realised that what is needed is a *joint* definition *and* range of tests of AGI.

        Benjamin Johnston has submitted one valid test - the toy box problem (see archives).

        I have submitted another, still simpler, valid test - build a rock wall from the rocks given (or fill an earth hole with rocks).

        However, I see that there are no valid definitions of AGI that explain what AGI is generally, and why these tests are indeed tests of AGI. Google it - there are very few definitions of AGI or Strong AI, period.

        The most common - that AGI is human-level intelligence - is an embarrassing non-starter: what distinguishes human intelligence? No explanation is offered.

        The other two are also inadequate, if not as bad. Ben's: "solves a variety of complex problems in a variety of complex environments". Nope, so does a multitasking narrow AI; complexity does not distinguish AGI. Ditto Pei's - something to do with "insufficient knowledge and resources..." "Insufficient" is open to narrow AI interpretations and is reducible to mathematically calculable probabilities or uncertainties. That doesn't distinguish AGI from narrow AI.

        The one thing we should all be able to agree on (but who can be sure?) 
is that:

        **an AGI is a general intelligence system, capable of independent learning**

        i.e. capable of independently learning new activities/skills with minimal guidance or even, ideally, with zero guidance (as humans and animals are) - and thus acquiring a "general", "all-round" range of intelligence.

        This is an essential AGI goal - the capacity to keep entering and mastering new domains of both mental and physical skills WITHOUT being specially programmed each time - and it crucially distinguishes an AGI from narrow AIs, which have to be individually programmed anew for each new task. Ben's AGI dog exemplified this in a very simple way - the dog is supposed to be able to learn to fetch a ball, with only minimal instructions, as real dogs do - they can learn a whole variety of new skills with minimal instruction. But I am confident Ben's dog can't actually do this.

        However, the independent-learning definition, while focusing on the distinctive AGI goal, is still not detailed enough by itself.

        It requires further identification of the **cognitive operations** which distinguish AGI, and which are exemplified by the above tests.

        [I'll stop there for interruptions/comments & continue another time].

         P.S. Deepakjnath,

        It is vital to realise that the overwhelming majority of AGI-ers do not *want* an AGI test - Ben has never gone near one, and is merely typical in this respect. I'd put almost all AGI-ers here in the same league as the US banks, who only want mark-to-fantasy rather than mark-to-market tests of their assets.




      -- 
      cheers,
      Deepak





    -- 
    cheers,
    Deepak





  -- 
  cheers,
  Deepak





-- 
cheers,
Deepak
