Mike Tintner <[email protected]> wrote:
The [2nd] basic principle of AGI testing is v. simple
The principle is: does this robot have "generalizability"? Can it
automatically generalize whatever capacity it has been designed with?
Crudely: can it "take off"?
- then it's AGI if it can automatically go on to handle an endless
diversity of objects without any additional programming.
--------------------------------

No program and no person "can automatically go on to handle an endless
diversity..."  What you mean is that the program has to demonstrate that it
can handle different kinds of things, including things that it was not
specifically programmed to handle.

Everyone understands this.  That is what we mean by genuine learning.

Yes, an AGI program has to demonstrate that it can handle new ideas or new
situations.  However, given that we acknowledge we cannot write programs
that can deal with very much complexity, this is very different from saying
that it can deal with an endless variety of ideas or situations.  Some
ideas or situations depend on successfully dealing with numerous
complications.  This simple fact is what complicates the problem so
seriously.  From a human point of view, learning as one progresses through
school, or through any learning experience, seems like a process of simply
increasing complexity.  But the evidence we have from AI experiments is
that the complexity of human learning grows much more steeply, and ever
more steeply the further you get into advanced subject matter.  That is why
cutting-edge technological achievement is so difficult before a
revolutionary scientific advance in the field is found.

My attempt to show that the program could deal with ambiguity and
referential polymorphs was a way to go beyond the Turing Test, given that
our programs cannot deal with an endlessness of possibilities.

The program has to be able to learn about new things.
In order to demonstrate that the program can do this, other sympathetic
programmers would challenge the programmer to show that his program can
learn about something similar to his demo which he did not anticipate
beforehand.  If his program works, he can be challenged with other
variations.  Knowing that he might have some trouble adapting his program
to other modalities, he would be given time to show that his ideas can work
with other IO contexts.
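The challenge protocol above amounts to a held-out evaluation: demo on some
tasks, then score on unanticipated variants.  A toy sketch, purely
illustrative (the learner, task format, and function names are all
hypothetical, not from any real system):

```python
# Hypothetical sketch of the challenge protocol: demo on some tasks,
# then score the learner on variants it never saw during the demo.

def evaluate_generalization(learner, demo_tasks, challenge_tasks):
    """Let the learner see demo_tasks, then score it on challenge_tasks."""
    for task in demo_tasks:
        learner.learn(task)
    passed = sum(1 for task in challenge_tasks if learner.solve(task))
    return passed / len(challenge_tasks)

class EchoLearner:
    """Toy stand-in: memorizes a character-level mapping from examples."""
    def __init__(self):
        self.rules = {}

    def learn(self, task):
        inp, out = task
        # Naive "learning": remember which character maps to which.
        for a, b in zip(inp, out):
            self.rules[a] = b

    def solve(self, task):
        inp, out = task
        return "".join(self.rules.get(c, c) for c in inp) == out
```

The toy learner passes challenges that recombine what it memorized
(`"cab" -> "CAB"` after seeing `"abc" -> "ABC"`) but fails on genuinely new
material (`"dog" -> "DOG"`), which is exactly Jim's point: apparent
generalization often extends only as far as what was already encoded.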

Finally, in order to differentiate his program from a novel kind of
programming environment, he would have to show that his program can do some
thinking for itself.  By avoiding the kind of user environment where the
user could program the computer to recognize simplistic categories, and
variables or references belonging to those categories, the program would
have to demonstrate that it could work with ambiguity and polymorphous
references.
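For a sense of the kind of probe this implies: the same word must resolve to
different referents depending on context.  A minimal sketch, assuming a toy
keyword-overlap heuristic (the sense inventory and function names are
hypothetical, for illustration only — a real test would of course forbid
hand-listing the categories in advance):

```python
# Hypothetical probe: resolve an ambiguous word against its context.
# The sense inventory here is hand-built purely to illustrate the test.

SENSES = {
    "bank": {
        "river": {"water", "shore", "fishing"},
        "finance": {"money", "loan", "deposit"},
    },
}

def resolve(word, context_words):
    """Pick the sense whose cue words overlap most with the context."""
    senses = SENSES.get(word)
    if not senses:
        return None
    return max(senses, key=lambda s: len(senses[s] & set(context_words)))
```

A program that only ever maps each word to one fixed category fails this
probe; one that genuinely weighs context can pass it.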

Jim


On Sat, Jun 9, 2012 at 6:25 AM, Mike Tintner <[email protected]> wrote:

> Jim: what would constitute a real empirical test ?
>
> The [2nd] basic principle of AGI testing is v. simple - and a particular
> test doesn't have to be defined, though suggestions like I and Benjamin
> made are always helpful.
>
> The principle is:   does this robot have "generalizability"? Can it
> automatically generalize whatever capacity it has been designed with?
> Crudely: can it "take off"?
>
> So if you have a robot that is focussed to begin with on nothing else but
> handling - a handling/manipulative robot - then it's AGI if it can
> automatically go on to handle an endless diversity of objects without any
> additional programming. If it starts by handling small rocks, then it
> should automatically be able to grasp bricks, bottles, small pyramids,
> ropes etc  and whatever surprise objects are presented to it, (within
> reasonable boundaries). As with humans and infants, this will be by a
> process of trial and error, which may include failures but will include
> success after success.
>
> Ditto if you have a robot that can locomote on one terrain, then it's AGI
> if it can automatically go on to handle new kinds of terrain - if it starts
> with stony ground, it should be able to go on to, say, rocky ground, grassy
> ground, sandy ground, waterbeds etc. - an endless range of new terrains.
>
> The same principle would apply "in theory" to a language AGI - if it can
> talk about navigating one terrain, can it go on to discuss an endless range
> of new terrains?
>
> I say, "in theory" here because the idea of a language AGI in any
> foreseeable future is farcical - and anyone contemplating it hasn't got
> much of a clue about the conceptual nature of language.
>
> The endless generalization of a faculty and particular activity is what
> distinguishes humans and animals - we do go on to handle an endless range
> of new objects and navigate an endless range of new terrains - and talk
> to an endless range of new personalities with new philosophies, attitudes,
> vocabularies, accents etc.  Our capacity to do this is the basis of our
> acquiring new skills/activities. Our capacity to handle ever new objects,
> for example, is basic to handling ever new rackets/bats and successively
> learning tennis/table tennis/baseball/cricket/hockey et al.
>
> This basic principle is, I think, not something that anyone here could or
> would argue with. Obviously an AGI must have generalizability. But I doubt
> whether a single project is aiming directly/immediately for a *testable*
> version of it. I can virtually guarantee that Ben and Boris et al aren't.
>
> The 1st principle of AGI testing is also simple and is inseparable from
> the 2nd  - but will be more controversial.
>
> It is creativity. An AGI must be able to create a given course of action
> WITHOUT having been specifically programmed for it. It must be able to
> handle new object after new object, new terrain after new terrain WITHOUT
> any programming for those specific objects.
>
> So you should be able to tell your AGI in one form or other - "pick up
> that object" - and it will both design and effect the necessary course of
> action, with no human programming input.
>
> This again is absolutely fundamental to how all humans and animals pursue
> courses of action - we can take "briefs"/brief instructions and flesh out
> the appropriate course of action.  It is also fundamental to Ben's "dog
> fetch ball" test of old. (As I said, Ben's first intuitions are often good
> ones. In reality, a dog who fetches a ball always has to create
> the necessary course of action in a somewhat unfamiliar field. But the
> actual version of a dog fetching a ball implemented by Ben had nothing to
> do with AGI).
>
> Generalizability and creativity (creating a course of action without
> specific programming) - those are the fundamental, intertwined, clearly
> testable principles of AGI.
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
