Jim,

My point was as follows.  Suppose one is trying to achieve a certain
complex goal, G.

Suppose that everyone in a certain community agrees on a test that would
validate "achievement of G."

Even so, the community may not be able to agree on a test to validate
"having proceeded 50% of the way to G."  Given two partial achievements of
G, they may not agree on which constitutes greater progress.

If G is "race 1000 meters along a track", then probably everyone involved
can agree that once someone has raced 500 meters along that track, they are
halfway there...

But suppose G is "make a robot that can graduate from MIT."

Then, consider

-- one group has made a robot that can walk around the MIT campus, sit in
the desks there, and generally carry out the physical movements required to
go to MIT.  It can also deal with some of the physical aspects of social
interaction -- staying out of the way of human students in the hallway,
looking at the professor when he's talking, etc.  But it doesn't understand
what the professors are saying.  (call this A)

-- another group has made a program that can pass the exams for a number of
MIT classes, when fed the exams in a structured XML format (call this B)

How close is A to goal G?  How close is B to goal G?  Is group A almost to
the end-goal G, or have they just dealt with trivial, mechanical parts of
the problem?  Is group B almost to the end-goal G, or have they just dealt
with some parts of the problem in such an artificial way that it doesn't
genuinely constitute progress toward G?

My point is, the assessment of the distance between A and G, and the
distance between B and G, depends on one's theory of AGI...

For those who think human-level AGI is mostly about embodied interaction
with the world, A is almost to G, and B doesn't really constitute much
progress toward G.

For those who think human-level AGI is mostly about rational, symbolic,
linguistic thinking, B is almost to G, and A doesn't really constitute much
progress...

This exemplifies what I mean when I say that measurement of incremental
progress toward human-level AGI is theory-dependent...

Even if different people agree on the end goal G, they may disagree on
which aspects of G are most difficult or most critical, and hence they may
differ radically in the degree of success they attribute to various
partial achievements of the end goal (like A and B) ...
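The theory-dependence here can be caricatured in a few lines of code. This is only an illustrative sketch: the subgoal names and the numeric weights are my own hypothetical inventions, not anything agreed on in this thread. The point is just that the same two achievements get opposite rankings depending on which theory's weights you plug in:

```python
def progress(achievements, weights):
    """Weighted fraction of goal G judged complete, according to one theory.

    achievements: dict mapping subgoal name -> bool (demonstrated or not)
    weights:      dict mapping subgoal name -> importance under some theory
    """
    total = sum(weights.values())
    done = sum(weights[k] for k, v in achievements.items() if v)
    return done / total

# What each group has demonstrated (hypothetical carving of G into subgoals):
robot_a = {"embodied_interaction": True, "symbolic_reasoning": False}
program_b = {"embodied_interaction": False, "symbolic_reasoning": True}

# Two theories of AGI assign different weights to the same subgoals:
embodiment_theory = {"embodied_interaction": 0.9, "symbolic_reasoning": 0.1}
symbolic_theory = {"embodied_interaction": 0.1, "symbolic_reasoning": 0.9}

print(progress(robot_a, embodiment_theory))   # 0.9 -- A looks nearly done
print(progress(robot_a, symbolic_theory))     # 0.1 -- A looks barely started
print(progress(program_b, symbolic_theory))   # 0.9 -- B looks nearly done
```

The numbers are meaningless in themselves; what matters is that "distance to G" is a function of the weights, and the weights come from the theory.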

-- Ben G

On Tue, Jan 1, 2013 at 12:53 PM, Jim Bromer <[email protected]> wrote:

> On Mon, Dec 31, 2012 at 11:18 AM, Ben Goertzel <[email protected]> wrote:
> That's the problem with partial validation and incremental evidence --
> its interpretation is highly theory-dependent...
>
>
> The interpretation is theory-dependent since you are validating or
> examining evidence of the theory.  That is trivially true.  If you are
> suggesting that you can only use the theory to test or examine the evidence
> then that is just plain wrong. Even using the theories and methods that you
> used in the programming, by sometimes using critical attacks against your
> own theories you can find flaws and weaknesses. And you can use other kinds
> of theories to examine the nature of your theories and programs.
>
> If you want me to comment on your comments less often, just let me know.
> Jim Bromer
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
