I have to agree with Ben that this is only valid for "blank slate" style
AGIs, which are just a bunch of learning algorithms cobbled together.
However, as Matt implied, most structured programming approaches would
yield quick and efficient results on such tests, e.g. calculators:
Qalculate is open source and supports a wide variety of advanced
mathematical concepts.

My generic test for AGI at the "animal intelligence" level is that the
system can have and satisfy desires. Most contemporary computers are
capable of this to an extent: even if the desires are typically
stimulated by external input, they still satisfy them by running
programs or computations.

Similarly, Roombas can tell when they need to "feed" (are low on
batteries) and then search out a food source, i.e. a charging station.
Instincts, which rule the world of animals, can in many ways be
considered pre-programmed responses with some host variation: adult
foxes, for example, are pre-programmed to be wary of humans, unless
selectively bred to the point of domestication.
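To make the analogy concrete, here is a toy sketch (purely illustrative, not real Roomba firmware; all names are invented) of an "instinct" as a pre-programmed threshold response, with a little per-unit variation standing in for host variation:

```python
# Hypothetical sketch: a hard-wired "desire" that fires on a threshold.
import random

class Robot:
    def __init__(self, low_battery_threshold=0.2):
        # "Host variation": each unit's threshold differs slightly,
        # like individual variation on top of a shared instinct.
        self.threshold = low_battery_threshold + random.uniform(-0.05, 0.05)
        self.battery = 1.0

    def step(self):
        """One tick of the control loop: act, then check instincts."""
        self.battery -= 0.1              # normal activity drains the battery
        if self.battery < self.threshold:
            return self.seek_charger()   # the pre-programmed "desire" fires
        return "cleaning"

    def seek_charger(self):
        self.battery = 1.0               # pretend we found the dock and recharged
        return "charging"

robot = Robot()
actions = [robot.step() for _ in range(12)]
print(actions)  # mostly "cleaning", with "charging" whenever the battery runs low
```

The robot never "decides" anything; the response is wired in, which is the sense in which such machines sit at the instinct level rather than the choice-making level discussed below.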

Anyway, it's safe to say that our computers are on a level with small
domesticated animals, or perhaps lizards, albeit more useful. Similar
to how a cat may meow when it wants more food, a laptop or cell phone
may beep to indicate a low battery.

Low-level human intelligence is distinct from animal intelligence in
that it can make its own choices about what it does, or how it
satisfies its needs.

I remember skimming some sci-fi short stories about the future, which
to my disappointment were mostly very sexualized, likely due to the
American male target audience. However, what I found interesting was a
story where a man had his "wife-bot" upgraded by some university
students to make it more "human", after which it chose to get a
portable power supply and leave the f**ker; he was disappointed to
learn that the upgrade had voided his warranty.

Similarly, a human-level AGI should have all the capacities of its
domestic-animal predecessors (computers/phones), while also having the
ability to choose how it lives and whom it works for.





On Wed, Dec 26, 2012 at 4:11 PM, Todor Arnaudov <[email protected]> wrote:

>
> > I agree that existing tests for human child intelligence are
> > basically fine for AGIs being built and taught via a
> > human-development-related path...
>
>
> Yes, I'm talking about developmental psychology, language acquisition
> etc., it's not a dumb a/b/c test taking...
>
> > However, I think such tests are useful as qualitative guides for
> > folks working on AGI. I think if you start interpreting them as
> > rigorous progress metrics, somebody is going to be able to hack a
> > system to pass those early childhood tests without any capability
> > of extending further on to adult intelligence....
>
> I'm talking about SIGI (Self-improving ...); it's a sensori-motor AGI
> that lives and interacts with the environment, teachers and other
> agents (any others, as generally as a corresponding human child would
> be capable of, at least); it's not a test-taking hack.
>
> Such a SIGI has no way to learn other than from raw sensory data
> (initially). After some time it can learn to acquire data quickly via
> more abstract forms of sensing connected directly to the higher
> levels, but it must have all the initial sensory interfaces required
> to do the human-like activities which a human would and could do if
> he entered its virtual world, or if the machine is fed with data from
> the real world and interacts with it.
>
> In order to be qualified as "3 years old", the same system must be
> capable of passing the requirements for all lower ages (kindergarten
> textbook programs are divided into weeks, and each week has different
> subjects; for earlier ages one can play the role of a mother/parent).
>
> The SIGI must start from a position similar to a human's (with some
> relaxations possible), e.g. at 0 years old it shouldn't be capable of
> speaking sentences, writing, counting from 1 to 20, etc.
>
> .... Todor "Tosh" Arnaudov ....
>
> .... Twenkid Research:  http://research.twenkid.com
>
> .... Self-Improving General Intelligence Conference:
> http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html
>
> .... Todor Arnaudov's Researches Blog: http://artificial-mind.blogspot.com
>
>
>    From: Ben Goertzel <[email protected]>
>    To: [email protected]
>    Subject: Re: [agi] Re: A test for less narrow artificial intelligence
>    Date: Wed, 26 Dec 2012 11:44:30 -0500
>
>
>    I agree that existing tests for human child intelligence are basically
>    fine for AGIs being built and taught via a human-development-related 
> path...
>
>    However, I think such tests are useful as qualitative guides for folks
>    working on AGI. I think if you start interpreting them as rigorous
>    progress metrics, somebody is going to be able to hack a system to pass
>    those early childhood tests without any capability of extending further on
>    to adult intelligence....
>
>    -- Ben G
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
