I think an AGI test should fundamentally be a test of learning ability. When
there's a specified domain in which the system should demonstrate its
competency (like 'chatting' or 'playing Go'), it's likely easier to write a
narrow solution. Unless the system is already an RSI AI, the resulting
competency depends too much on the quirks of the given domain, and it's
unclear how improvements in general learning ability translate into
competency.

I see such a test along the lines of feeding the system a stream of frame-like
representations, after which it should be able to fill in the blanks in
incomplete representations based on analogies. This is general enough to be
AGI-complete, and simple enough to test existing narrow AI systems.
Depending on the supplied data, it can be taken out of reach of algorithms
that are too biased towards their narrow domain. Frame-like representations
make it possible to construct tasks of different complexity according to
human intuition, and likewise to test their feasibility. The input stream
shouldn't be too cluttered (it shouldn't include things like the Cyc
database, Wikipedia, etc.), and it should assume zero prior knowledge.
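The fill-in-the-blanks idea above could be sketched roughly as follows. This is a minimal toy illustration, not anything from the post: the frame encoding (slot-to-value dicts), the `AnalogyFiller` class, and the overlap-counting similarity measure are all my own assumptions about how such a test harness might look.

```python
# Toy sketch of the proposed test: the system observes a stream of
# complete frames, then must fill blanked (None) slots in new frames
# by analogy with what it has seen. All names here are hypothetical.

Frame = dict  # slot name -> value

def overlap(a: Frame, b: Frame) -> int:
    """Count slots on which two frames agree (a crude analogy measure)."""
    return sum(1 for k, v in a.items() if v is not None and b.get(k) == v)

class AnalogyFiller:
    def __init__(self):
        self.memory: list[Frame] = []

    def observe(self, frame: Frame) -> None:
        """Consume one complete frame from the input stream."""
        self.memory.append(frame)

    def fill(self, partial: Frame) -> Frame:
        """Fill None slots by copying from the most analogous stored frame."""
        completed = dict(partial)
        for slot, value in partial.items():
            if value is None:
                candidates = [f for f in self.memory if slot in f]
                if candidates:
                    best = max(candidates, key=lambda f: overlap(partial, f))
                    completed[slot] = best[slot]
        return completed

filler = AnalogyFiller()
filler.observe({"agent": "bird", "action": "fly", "medium": "air"})
filler.observe({"agent": "fish", "action": "swim", "medium": "water"})
print(filler.fill({"agent": "fish", "action": None, "medium": "water"}))
```

A real test would of course score a candidate system's completions against held-out slot values over many such frames; the point is only that the task format is simple enough that even a lookup baseline like this can be run against it.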

-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
