Jim,

 

"Endless variety" does not mean I am handling them all, or that I have
actually handled them all. It merely means that, when presented with one case
selected from an endless variety of cases, I know I can handle that case. I
know I can handle any case out of an endless variety. Here is an example of
handling an endless variety: numbers. Give me any number you want, any number
at all, even if you travel around the Earth writing a digit every millimeter,
and I can always find the next one by adding 1 to your number. 
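The point can be sketched in a few lines of Python, whose integers have arbitrary precision: one finite rule, adding 1, covers every member of the endless collection.

```python
def successor(n: int) -> int:
    """One finite rule that handles an endless variety of inputs."""
    return n + 1

# Even a 10,000-digit number is just another case from the endless variety.
huge = int("9" * 10_000)
assert successor(huge) == huge + 1
assert len(str(successor(huge))) == 10_001  # 1 followed by 10,000 zeros
```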

 

A much more advanced example is emergent inference (EI). There is an
infinite variety of cases (I call them causets) that can arise. To each one
there corresponds exactly one algorithm, which defines the response
behavior. There are infinitely many of these algorithms, but EI maps each
case to its corresponding algorithm, one to one. That doesn't mean you
actually *know* the solutions to all problems. But it does mean that, if and
when you are presented with one of them, EI will find the solution for you. 
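Purely as a hypothetical toy, not the actual EI system, the one-to-one case-to-algorithm idea can be sketched in Python: derive the response procedure from the structure of the case itself, rather than looking it up in any pre-enumerated table. The causal-set encoding and the replay-in-causal-order "algorithm" below are illustrative assumptions only.

```python
from graphlib import TopologicalSorter
from typing import Callable

# Hypothetical encoding: a "case" (causet) maps each event to the set
# of events that must causally precede it.
Case = dict[str, set[str]]

def algorithm_for(case: Case) -> Callable[[], list[str]]:
    """Map one case to one response procedure: here, replaying the
    events in an order consistent with the causal relations."""
    order = list(TopologicalSorter(case).static_order())
    return lambda: order

# Countably many distinct cases are possible, and none is enumerated in
# advance, yet each case presented gets its corresponding algorithm.
case = {"open_door": set(), "look": {"open_door"}, "act": {"look"}}
respond = algorithm_for(case)
print(respond())  # ['open_door', 'look', 'act']
```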

 

See the difference? No, it is not a figure of speech. 

 

Sergio

 

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Saturday, June 09, 2012 4:27 PM
To: AGI
Subject: Re: [agi] The 2 Tests of AGI - generalizability & creativity

 

Sergio said:

Any person with a brain, and any program which has EI, can automatically
handle an endless variety.

 

Sergio,

This is a figure of speech.  Nothing with a brain or any program can
automatically handle an endless variety of...

 

Suppose we take this as a given: "a robot that is able to grasp any object."
Do you sincerely believe that an assertion like this is a reasonable
starting point?  Do you think a robot that tried to grasp a star might run
into a problem?

 

Similarly, there would be other, more reachable objects which the robot could
not grasp.  Like water.  This exception might lead us to ask what it means
to grasp something.  Could we define "to grasp" with a little more leeway
than the normal definition, stretching the concept so that we could test
whether the robot is able to "grasp" water?  But why should we?  Isn't the
exaggeration precisely that, instead of saying the robot has the ability "to
grasp objects," we say the robot has the ability "to grasp any object"?
Wouldn't a more constrained assessment be the better observation?

 

We use figures of speech because we have to.  But we have the opportunity,
once we have learned to be a little more wary of exaggerated assumptions
and absurd possibilities, to qualify what we are saying once in a while so
that we do not sound like we are making exaggerations all the time.

 

No person is able to automatically deal with an endless variety of
...whatever... because it is not a definable thing.  I am not saying that
language like this is unacceptable; I am just saying that you are not
saying anything I haven't been saying all along, and you are not saying
it very well.

 

The use of exaggerated possibilities (automatically handling an endless
variety) as a starting point for an empirical test is evidence of a poorly
thought out systematization of the situation.  An empirical test has to
start with something that is relatively more definable.  "The ability to
handle new situations" is not much better but at least it is based on a less
infinite definition.  It says it all and it uses terms that are familiar to
people like school teachers.  It does not add some absurd mathematicity and
it is not something that would be easily distorted by an exaggerator.  But
did we agree that it had to be able to automatically handle any situation,
just like a human being?  Really, are you so insensitive as to believe that
all human beings are able to "handle" any situation?  Exaggerated starting
points and goal points are just the wastewaters of sophistry.

 

Because we need to be able to measure progress (in some way, not necessarily
a literal measurement), we need to be able to design tests that show the
simplest ability to learn something new; but then we want to show that it is
a truly general learning ability, because we do have experience of getting
stuck in a backwater of AI.

 

Jim Bromer

 


 

On Sat, Jun 9, 2012 at 3:17 PM, Sergio Pissanetzky <[email protected]>
wrote:

Jim,

 

Any person with a brain, and any program which has EI, can automatically
handle an endless variety. The handling is not in actually computing all the
possibilities, but in the ABILITY to handle any one possibility that happens
to arise out of an infinite, but countable, assortment. Emergent inference
has that ability. Here is why. EI maps directly from any element of an
infinite - but countable - collection of causal sets, obtained directly from
sensory organs or sensors, to the corresponding algorithm, out of an
infinite - but countable - collection of structured algorithms. That is
exactly how people function. You don't know what is behind the door until
you open it. But then, after you have opened it, you learn, and your brain
uses that one particular causal set that you just learned to build the
corresponding behavior. 

 

Any robot with EI has already demonstrated that it can handle different
kinds of things. I'll put it even more strongly; let's see if it catches on.
EI does not, repeat, does not have a program at all! I do not write programs
for EI! Yet EI can handle whatever I throw at it. 

 

Sergio

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Saturday, June 09, 2012 9:21 AM
To: AGI
Subject: Re: [agi] The 2 Tests of AGI - generalizability & creativity

 

Mike Tintner <[email protected]> wrote:

The [2nd] basic principle of AGI testing is v. simple 

The principle is: does this robot have "generalizability"? Can it
automatically generalize whatever capacity it has been designed with?
Crudely: can it "take off"?  

- then it's AGI if it can automatically go on to handle an endless diversity
of objects without any additional programming.

--------------------------------

 

No program and no person, "can automatically go on to handle an endless
diversity..."  What you mean is that the program has to demonstrate that it
can handle different kinds of things including things that it was not
specifically programmed to handle.

 

Everyone understands this.  That is what we mean by genuine learning.

 

Yes, an AGI program has to demonstrate that it can handle new ideas or new
situations.  However, given that we acknowledge we cannot write programs
that can deal with very much complexity, this is very different from saying
that it can deal with an endless variety of ideas or situations.  Some ideas
or situations are dependent on successfully dealing with numerous
complications.  This simple fact is what complicates the problem so
seriously.  From a human point of view, the degree of learning as one
progresses through school, or through any learning experience, seems like a
process of simply increasing complexity.  But the evidence we have from AI
experiments is that the complexity of the progress of human learning is
much steeper, and increasingly steeper, the more you get into advanced
subject matters.  That is why cutting-edge technological achievement is so
difficult before a revolutionary scientific advancement in the field is
found.

 

My attempt to show that the program could deal with ambiguity and
referential polymorphs was a way to go beyond the Turing Test, given the
fact that our programs cannot deal with an endlessness of possibilities.

 

The program has to be able to learn about new things.

In order to demonstrate that the program can do this, other sympathetic
programmers will challenge the programmer to demonstrate that his program
can learn about something which is similar to his demo but which he did not
anticipate beforehand.  If his program works, then he can be challenged with
other variations.  Knowing that he might have some trouble adapting his
program to other modalities, he would be given time to show that his ideas
can work with other IO contexts.

 

Finally, in order to differentiate his program from a novel kind of
programming environment, he would have to show that his program can do some
thinking for itself.  By avoiding the kind of user environment where the
user could program the computer to recognize simplistic categories, and
variables or references that belonged to those categories, the program would
have to demonstrate that it could work with ambiguity and polymorphous
references.

 

Jim


 

On Sat, Jun 9, 2012 at 6:25 AM, Mike Tintner <[email protected]>
wrote:

Jim: what would constitute a real empirical test ?

 

The [2nd] basic principle of AGI testing is v. simple - and a particular
test doesn't have to be defined, though suggestions like I and Benjamin made
are always helpful.

 

The principle is:   does this robot have "generalizability"? Can it
automatically generalize whatever capacity it has been designed with?
Crudely: can it "take off"?  

 

So if you have a robot that is focussed to begin with on nothing else but
handling - a handling/manipulative robot - then it's AGI if it can
automatically go on to handle an endless diversity of objects without any
additional programming. If it starts by handling small rocks, then it should
automatically be able to grasp bricks, bottles, small pyramids, ropes etc.
and whatever surprise objects are presented to it (within reasonable
boundaries). As with humans and infants, this will be by a process of trial
and error, which may include failures but will include success after success.

 

Ditto if you have a robot that can locomote on one terrain, then it's AGI if
it can automatically go on to handle new kinds of terrain - if it starts
with stony ground, it should be able to go on to, say, rocky ground, grassy
ground, sandy ground, waterbeds etc. - an endless range of new terrains.

 

The same principle would apply "in theory" to a language AGI - if it can
talk about navigating one terrain, can it go on to discuss an endless range
of new terrains?

 

I say, "in theory" here because the idea of a language AGI in any
foreseeable future is farcical - and anyone contemplating it hasn't got much
of a clue about the conceptual nature of language.

 

The endless generalization of a faculty and particular activity is what
distinguishes humans and animals - we do go on to handle an endless range
of new objects and navigate an endless range of new terrains, and talk to
an endless range of new personalities with new philosophies, attitudes,
vocabularies, accents etc.  Our capacity to do this is the basis of our
acquiring new skills/activities. Our capacity to handle ever new objects,
for example, is basic to handling ever new rackets/bats and successively
learning tennis/table tennis/baseball/cricket/hockey et al.

 

This basic principle is, I think, not something that anyone here could or
would argue with. Obviously an AGI must have generalizability. But I doubt
whether a single project is aiming directly/immediately for a *testable*
version of it. I can virtually guarantee that Ben and Boris et al aren't.

 

The 1st principle of AGI testing is also simple and is inseparable from the
2nd  - but will be more controversial.

 

It is creativity. An AGI must be able to create a given course of action
WITHOUT having been specifically programmed for it. It must be able to
handle new object after new object, new terrain after new terrain, WITHOUT
any programming for those specific objects. 

 

So you should be able to tell your AGI in one form or other - "pick up that
object" - and it will both design and effect the necessary course of action,
with no human programming input.

 

This again is absolutely fundamental to how all humans and animals pursue
courses of action - we can take "briefs"/brief instructions and flesh out
the appropriate course of action.  It is also fundamental to Ben's "dog
fetch ball" test of old. (As I said, Ben's first intuitions are often good
ones. In reality, a dog who fetches a ball always has to create the
necessary course of action in a somewhat unfamiliar field. But the actual
version of a dog fetching a ball implemented by Ben had nothing to do with
AGI).

 

Generalizability and creativity (creating a course of action without
specific programming) - those are the fundamental, intertwined, **clearly
testable** principles of AGI.  



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
