It's more than obvious:

So far there is only one meaningful measure and example of "intelligence"
and "general intelligence", and that is what we want to create: an entity
that is functionally comparable to a human, and better.

There's one way that one can estimate how far a particular person (with
their particular history/behavioral records, cognitive
capacity/capabilities/talents/skills) is:

- from graduating from MIT (one must specify what exactly is meant by
half-way: half the age from 0, or half of the curriculum),
- from saying sentences with three words,
- from running or jumping,
- from using the past tense (in a particular language, with particular
mistakes),
- from playing a tune on a piano,
- from asking a question about so-and-so,
- from giving a particular typical answer to a particular typical question,
with her particular personal history/experience in particular environments,
particular interactions with other people, particular vocabulary, etc.

It's done by comparing to a model human who has gone through it, or to
averaged model humans.

That's the only meaningful and justified/natural way to measure how far
another, non-human system is from human-level AGI, and the only meaningful
"human-level".

No additional justification or "consensus" is required, and nobody can
question it - well, they can, but that would be like questioning the thing
you call "intelligence" (which yet "nobody knows what it is") in yourself
or in your child.

If a system can't deal even with a baby's cognitive business, and it
requires some kind of snobbish "community" to agree on whether some work is
progress or not, then apparently it's not real progress.

Artificial "consensus" is not progress; it's politics, a vanity fair, a way
to persuade other people who don't have a clue that this is something
"scientific" and that there is progress - some numbers are now bigger than
they were.

As for real results and real progress - they will be obvious even to a dog;
the "numbers" will become infinitely bigger than they were before.



> From: Ben Goertzel <[email protected]>
> To: [email protected]
> Subject: Re: [agi] Partial Validation and Incremental Evidence Does Not
> Have to be Theory-Circular
> Date: Tue, 1 Jan 2013 14:15:32 -0500
>
> Jim,
> My point was as follows. Suppose one is trying to achieve a certain
> complex goal, G.
> Suppose that everyone in a certain community agrees on a test that would
> validate "achievement of G."
> Even so, the community may not be able to agree on a test to validate
> "having proceeded 50% of the way to G." Given two partial achievements of
> G, they may not agree on which constitutes greater progress.
> If G is "race 1000 meters along a track", then probably everyone involved
> can agree that once someone has raced 500 meters along that track, they are
> halfway there...
> But suppose G is "make a robot that can graduate from MIT."
> Then, consider
> -- one group has made a robot that can walk around the MIT campus, sit in
> the desks there, and generally carry out the physical movements required to
> go to MIT. It can also deal with some of the physical aspects of social
> interaction -- staying out of the way of human students in the hallway,
> looking at the professor when he's talking, etc. But it doesn't
> understand what the professors are saying. (call this A)
> -- another group has made a program that can pass the exams for a number
> of MIT classes, when fed the exams in a structured XML format (call this B)
> How close is A to goal G? How close is B to goal G? Is group A almost
> to the end-goal G, or have they just dealt with trivial, mechanical parts
> of the problem? Is group B almost to the end-goal G, or have they just
> dealt with some parts of the problem in such an artificial way that it
> doesn't genuinely constitute progress toward G?
> My point is, the assessment of the distance between A and G, and the
> distance between B and G, depends on one's theory of AGI...
> For those who think human-level AGI is mostly about embodied interaction
> with the world, A is almost to G, and B doesn't really constitute much
> progress toward G.
> For those who think human-level AGI is mostly about rational, symbolic,
> linguistic thinking, B is almost to G, and A doesn't really constitute much
> progress...
> This exemplifies what I mean when I say that measurement of incremental
> progress toward human-level AGI is theory-dependent...
> Even if different people agree on the end goal G, they may disagree on
> which aspects of G are most difficult or most critical, and hence they may
> differ radically in the amount of successfulness they attribute to various
> partial achievements of the end goal (like A and B) ...
> -- Ben G
> On Tue, Jan 1, 2013 at 12:53 PM, Jim Bromer <[email protected]> wrote:
> On Mon, Dec 31, 2012 at 11:18 AM, Ben Goertzel <[email protected]> wrote:
> That's the problem with partial validation and incremental evidence --
> its interpretation is highly theory-dependent...
>
>
> The interpretation is theory-dependent since you are validating or
> examining evidence of the theory. That is trivially true. If you are
> suggesting that you can only use the theory to test or examine the
> evidence, then that is just plain wrong. Even using the theories and
> methods that you used in the programming, by sometimes using critical
> attacks against your own theories you can find flaws and weaknesses. And
> you can use other kinds of theories to examine the nature of your theories
> and programs.
>
> If you want me to comment on your comments less often, just let me know.
>

Jim Bromer



-- 
Todor "Tosh" Arnaudov

Twenkid Research: http://research.twenkid.com

Self-Improving General Intelligence Conference:
http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html

Todor Arnaudov's Researches Blog:
http://artificial-mind.blogspot.com



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
