Because we would be testing primitive programs, the tests would have to be
defined by the programmers themselves.  Other AGI programmers might then
comment sympathetically, suggesting how the -test- could be made a little
more sophisticated for that programmer's program. The program would have
to learn.  The learning could take place through direct instruction, but
the program would also have to be able to figure some things out for
itself.  The learning method would have to go beyond basic filling-in of
variable types or basic numerical computation.
So, steering in between two objects by using the average of the distances
to them would not in itself be good enough (although it could be part of
something more sophisticated).  Similarly, learning to implement a
conclusion based on categorization of objects into a predefined type would
not be good enough (though again it could be part of something more
sophisticated).  A slightly more elusive rule is that the categorization
type, and the categorization of an object into that type, cannot simply be
defined while the program is running.  What I am getting at is that merely
shifting the predefined category (or the predefined numerical
relationships) so that it becomes dynamically definable (as the program is
running) is not good enough either.  The program has to be able to figure
some things out for itself, at a level appropriate to its sophistication.
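To make the "not good enough" baseline concrete, here is a minimal sketch (the function name and the midpoint rule are my own illustrative assumptions, not from the post) of steering between two objects by averaging their positions. Everything is hard-coded, so nothing is learned — which is exactly the objection above.

```python
# Hypothetical sketch of the insufficient baseline: steer toward the
# midpoint of two 2-D obstacle positions.  The rule is fixed in advance,
# so the program figures nothing out for itself.

def steer_between(obstacle_a, obstacle_b):
    """Return the point midway between two 2-D obstacle positions."""
    ax, ay = obstacle_a
    bx, by = obstacle_b
    return ((ax + bx) / 2.0, (ay + by) / 2.0)

print(steer_between((0.0, 0.0), (4.0, 2.0)))  # midpoint: (2.0, 1.0)
```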

I want to make these tests simple enough that I can get myself to try
something.  So the programmers will be able to define very simple tests
for their programs.  These may work on a very small number of objects and
very easy problems.  But how do we demonstrate generality?  By challenging
the programmer to show that the program can also work with a slightly
different set of objects and problems, or even with different IO
modalities.

For instance, I want to write a text-based program that is supposed to be
an implementation of AGI.  However, I acknowledge that it will have to be
very simple and very primitive. I would want the program to learn to use
categorical reasoning.  But rather than defining all the categories
beforehand, or writing a program that can then be programmed (through
text) as it is running to associate specific words with specific
categories (predefined or dynamically created), it must be able to learn
to make -some- meaningful associations through trial and error.
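A minimal sketch of what trial-and-error association might look like (the category names, the word "sparrow", and the right/wrong feedback signal are all my own invented illustrations, not part of the proposal): the program guesses a category for a word, is told whether the guess worked, and keeps simple counts.

```python
import random
from collections import defaultdict

# Hypothetical sketch: guess a category for a word, get right/wrong
# feedback, and keep per-word counts.  Only the list of available
# categories is predefined; the word-to-category associations are not.

class TrialAndErrorAssociator:
    def __init__(self, categories):
        self.categories = categories
        # counts[word][category]: confirmations minus failures for that guess
        self.counts = defaultdict(lambda: defaultdict(int))

    def guess(self, word):
        scores = self.counts[word]
        if max(scores.values(), default=0) > 0:
            return max(scores, key=scores.get)   # exploit a confirmed guess
        return random.choice(self.categories)    # explore while ignorant

    def feedback(self, word, category, correct):
        self.counts[word][category] += 1 if correct else -1

random.seed(0)  # deterministic demonstration
learner = TrialAndErrorAssociator(["animal", "tool", "place"])
# Teach it by trial and error that "sparrow" is an animal.
for _ in range(30):
    g = learner.guess("sparrow")
    learner.feedback("sparrow", g, correct=(g == "animal"))
print(learner.guess("sparrow"))  # should converge to "animal"
```

The point of the sketch is only that the association emerges from feedback rather than from a programmer filling in a lookup table.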

Now I am in some pretty subtle territory here, but the simplest way to do
this is to allow the program to make mistakes and to allow for some
possible ambiguity in the categorization and association.  An AGI program
is not a programming language compiler.  If your compiler were allowed to
make creative assumptions and guesses, and to treat the relationships
between variables and variable types as prone to creative ambiguity, would
you be very happy with it?  No, you would not be.  I contend that this is
exactly what an AGI program must be if it is not to be merely a novel
programming program.  If it can think creatively, it will therefore be
prone to imaginative associations and to ambiguity.  (I don't know what
the abstract term for creative ambiguity is.  Maybe creative
polymorphing?)  On the other hand, the program must also be working to
disambiguate based on experiential reasoning.
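One way to picture this pairing of tolerated ambiguity with experiential disambiguation (again a hypothetical sketch; the word "bank" and its candidate categories are my own examples): let a word hold several candidate categories at once, and let experience prune the readings that fail.

```python
from collections import defaultdict

# Hypothetical sketch: a word may carry several candidate categories at
# once (the tolerated "creative ambiguity" above), while experience
# gradually prunes the candidates that do not work out.

class AmbiguousLexicon:
    def __init__(self):
        self.candidates = defaultdict(set)   # word -> set of live categories

    def associate(self, word, category):
        self.candidates[word].add(category)  # tolerate overlapping guesses

    def experience(self, word, category, worked):
        if not worked:
            self.candidates[word].discard(category)  # prune what failed

    def meanings(self, word):
        return sorted(self.candidates[word])

lex = AmbiguousLexicon()
# "bank" is allowed to stay ambiguous at first...
for cat in ("riverside", "finance", "aircraft-maneuver"):
    lex.associate("bank", cat)
# ...and then experience rules one reading out.
lex.experience("bank", "aircraft-maneuver", worked=False)
print(lex.meanings("bank"))  # ['finance', 'riverside']
```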

So maybe this is my own definition, and you other guys would not want to
use the qualifications that I used.  But what I am getting at is that we
have to make sure that our AGI programs, which can learn, are not just
programming programs.  The program has to do some thinking for itself, and
it has to be able to learn.  Furthermore, since our efforts are going to
be pretty primitive, we have to define simple tests to demonstrate what we
are doing.  Then, in order to demonstrate generality, other sympathetic
programmers will be able to challenge the presenter with slightly
different kinds of tasks, to see whether the program can learn with
slightly different problems.  Finally, if the program is successful, the
programmer might be challenged to show that his basic ideas can work with
different IO modalities.

Jim Bromer

On Fri, Jun 8, 2012 at 9:42 PM, Ben Goertzel <[email protected]> wrote:
>
>
> Tests of incremental progress are often theory-dependent though...
>
> The Wright Brothers' wind tunnel tests were meaningful to them, but not
> to skeptics who doubted the Wright Bros.' (partly rigorous, partly
> intuitive) understanding of aerodynamic theory
>
> Our current work using inference to integrate opportunistic
> structure-building into navigation seems relevant to us (the OpenCog
> team) based on our own theoretical understanding, but may seem
> meaningless to others with different theoretical understandings...
>
> There is no unified approach to testing proto-AGI applications yet,
> because there is no single, widely-accepted theory of AGI yet. Yet there
> are many different theories, being used to motivate many different
> approaches. You may not like any of these theories, which is your
> prerogative ;)
>
> A number of us wrote a paper on this for the most recent AI Magazine,
>
> http://aaai.org/ojs/index.php/aimagazine/article/view/2322
>
> I gave a talk briefly summarizing some points from the paper at
> AGI-11@Google; the video is here:
>
> http://www.youtube.com/watch?v=5OoYOjOEy6A
>
> -- Ben G
>
>
>
>
> On Fri, Jun 8, 2012 at 9:33 PM, Jim Bromer <[email protected]> wrote:
>>
>> On Fri, Jun 8, 2012 at 10:28 AM, Mike Tintner <[email protected]>
>> wrote:
>> If you...wish to be Don Quixotes who refuse to subject their
>> ideas/systems to any real, empirical test (a la Don Q), that is your
>> right.
>>
>> But the point I made is a good one, & needs insisting on... A proper
>> technological field should have an insistence on empirical testing.
>> Yours signally *doesn't*.
>> ------------------------------------
>> Well, this is reasonable, and we have all explained that we don't yet
>> know how to make true AGI programs capable of human-like learning that
>> would be efficient enough to watch. On the other hand, most of us can
>> point to some actions of computer programs that certainly look like the
>> work of human-like mental capabilities.
>> So what would constitute a real empirical test, given the fact that we
>> are still at the dawn of the development of true AGI?
>> Jim Bromer
>
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>


