We all know that our projects do not work at human-level capacity.  So
how can you test the essential characteristics of a program when you only
have a limited 'capacity' to try it out on?  This is the essential question
of testing during development.  Saying that if an algorithm works then it
works, and if it doesn't then it needs more work, is not an adequate
test of whether the essential quality of an AGI is achievable using
your ideas.  There is no easy answer to this question, but I can at
least try to start answering it.

Suppose that someone demonstrated that his numerical algorithm, which used
averaging and weighting, was able to learn to speed up, slow down, and steer
a remote control car toward different goals based on some kind of numerical
feedback.  Once the program showed that it could
control the car adequately for each learned trip, how would the programmer
show, given the constraints of his computational resources, that the
essential characteristics of the program were truly AGI?  He would, for
example, have to show that the learning could be used in planning new
trips.  But then he would have to show that his program could work with
other kinds of problems, including problems that used different IO
modalities.  How does a purely numerical program solve word-based problems,
for instance?  If the programmer thinks it could be done, then he would be
required to start showing that his program had adequate generality to
work on that kind of problem.
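To make the example concrete, here is a minimal sketch of the kind of numerical learner described above; every name and the trivial plant model are my own assumptions, not the hypothetical programmer's actual algorithm.

```python
class WeightedAverageController:
    """Learns a throttle setting from scalar feedback by weighted averaging.

    A stand-in (hypothetical) for an 'averaging and weighting' algorithm:
    it nudges its output toward whatever reduces the numerical error.
    """

    def __init__(self, learning_rate=0.2):
        self.throttle = 0.0            # current control output
        self.learning_rate = learning_rate

    def update(self, error):
        """Move the throttle a weighted step in the direction of the error."""
        self.throttle += self.learning_rate * error
        return self.throttle


def simulate(target_speed, steps=50):
    """Toy plant: the car's speed simply equals the throttle setting."""
    ctl = WeightedAverageController()
    speed = 0.0
    for _ in range(steps):
        speed = ctl.update(target_speed - speed)
    return speed
```

A learner like this converges on any one speed target, which is exactly the point: succeeding at the car task says nothing yet about planning new trips or about word-based problems.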

While many people say their program would be able to work with different
kinds of modalities (with different kinds of problems), the scientific proof
is making it do so.  It is not enough to say that you are creating the
program to do exactly that, when that is the very claim being
questioned.  Can't you guys get that?  To say "yeah, we already thought
of that" is pure nonsense.  What I am questioning here is not whether
you guys get this on a superficial level, but whether you guys get
that claiming to have already thought of a general, untried theory
does not stand in for an adequate testing methodology.

To say "we already know that" is a little like saying that we already know
the program would have to be just about capable of thinking like a human
being to demonstrate true AGI.  Well, so what?  Of course you already know
that, you [more colorful language deleted].  If, for instance, you have a
carefully worked out algorithm which you claim could show the
essence of AI generality, then what do you have to test the untried
algorithm with?  The claim that you have it all worked out implies that
you could get the coding done in a few months.  The belief that your
carefully worked out method is going to work without substantial
development is delusional.  If you have it all worked out but cannot test
it because it will take a year of development, then what could you do to
begin testing it now?  If you seriously think that you have it all figured
out (except for the tweaking), then you should be able to contrive all
sorts of small tests that will show almost immediately whether your ideas
would work or whether they would need a lot more work.  But it would have
to be done in a way that shows the potential to work with only a little
complexity.  Did you get what I just said even before I said it?
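What such "small tests" might look like can be sketched in a few lines; the names and the trivial averaging rule below are placeholders I made up, standing in for whatever the real algorithm would be.

```python
def learns_constant(learner_step, target, steps=100, tol=1e-2):
    """Cheapest possible check: does repeated feedback drive the
    learner's output toward one fixed target value?"""
    output = 0.0
    for _ in range(steps):
        output = learner_step(output, target - output)
    return abs(output - target) < tol


def smoke_test(learner_step):
    """Run the tiny tests first.  Any failure here means the idea needs
    a lot more work, not a year of development and then tweaking."""
    return all(learns_constant(learner_step, t) for t in (-1.0, 0.0, 3.5))


# Example: a trivial weighted-step rule standing in for the real method.
step = lambda output, error: output + 0.3 * error
```

The point is not that passing such tests proves generality; it is that they can fail almost immediately, at almost no cost, before the complexity of the full program is built.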
Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424