Yes, of course an AGI will make mistakes - and sometimes fail - in adapting. I say that very explicitly.

But your other point also skirts the problem, which is that the AGI must first identify what it needs to adapt to before it can start googling or asking for advice.

I think we need to focus the problem better. I gave, I think, a good example: how is a system going to build a wall if the only materials it knows how to build a wall with - bricks - are unavailable?

You might care to focus on the submersible problem. Let's say the submersible finds it cannot rise - it just won't go upwards. I'm not very mechanical, so you guys can perhaps flesh it out. Something is preventing it from rising, but all the obvious things are functioning OK. Define what it knows, A-X, define what the mysterious problem is, Z, and then you have a true AGI problem: how does it generalize from A-X, or any other knowledge, to Z (a very different domain)? Z, for example, might be a squid (unless it already knows about squids).
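To make that framing concrete, here is a minimal sketch (in Python, purely hypothetical, deliberately containing no solution, with every name illustrative) of how the scenario might be written down: the knowns A-X, the anomaly Z, and the open question of getting from one to the other.

    # Hypothetical problem specification only; every name here is
    # illustrative and the "solution" is deliberately left open.

    known_domains = {         # "A-X": what the submersible's AGI already knows
        "ballast_control":  "pump water in or out of the tanks to change buoyancy",
        "thruster_control": "vertical and horizontal thrusters, all reporting OK",
        "self_diagnostics": "every monitored subsystem checks out as functional",
        # ... further known skills, B through X ...
    }

    anomaly_z = (             # "Z": the mysterious problem, outside every known domain
        "The craft will not rise, even though every known cause of failure "
        "has been checked and ruled out; the real cause (say, a squid holding "
        "it down) lies in a domain the system has no model of."
    )

    # The true AGI question, left open on purpose: by what principle does the
    # system generalize from known_domains (A-X) to form and test hypotheses
    # about anomaly_z (Z)?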

A good deal of imagination has to go into just defining AGI problems; you have to spend a lot of time on it.

Andi:
One thing I would expect from an AGI is that at least it would be able
to Google for something that might talk about how to do whatever it
needs and to have available library references on the subject.  Being
able to follow and interpret written instructions takes a lot of
intelligence in itself.  And a lot of times there are important
conventions about how to do certain things and it is a bad idea to
just do things completely your own way.

Of course, we do expect an intelligent agent to often be able to
figure things out on its own.  But you have to remember that when you
allow for this, there will sometimes be mistakes.  It is a necessary
consequence of trying something new that it won't always work.  And I
am afraid that people have an unrealistic expectation that the AGI
will be able to do something new without ever getting it wrong.  I'm
expecting there will be a lot of pain and disappointment when it
doesn't work this way.  Because it can't work that way.  An AGI
working in unknown territory will have to make mistakes.
Andi


Quoting Mike Tintner <[EMAIL PROTECTED]>:

Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF.  And
your FIRST and only response to the problem you set was to say: "I'll
get someone to tell it what to do."

In other words, you simply avoided the problem and thought only of
cheating. What a solution, or the merest idea for a solution, must do is
tell me how that intelligence will start adapting by itself - how it will
generalize from its existing skills to cross over domains.

Then, as my answer indicated, it may well have to seek some
instructions and advice - almost certainly if it wants to acquire a
whole new major skill, as we do, by taking courses etc.

But a general intelligence should be able to adapt to some unfamiliar
situations entirely by itself - like perhaps your submersible
situation. There is no guarantee that it will succeed in any given
situation (as there isn't with us), but you should be able to demonstrate its
power to adapt sometimes.

In a sense, you should be appalled with yourself that you didn't try to
tackle the problem - to produce a "cross-over" idea. But since
literally no one else in the field of AGI has the slightest
"cross-over" idea - i.e. is actually tackling the problem of AGI, - and
the whole culture is one of avoiding the problem, it's to be expected.
(You disagree - show me one, just one, cross-over idea anywhere.
Everyone will give you a v. detailed,impressive timetable for how long
it'll take them to produce such an idea, they just will never produce
one. Frankly, they're too scared).


Mike Tintner <[EMAIL PROTECTED]> wrote:

You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by which it
will extend its skills. It's those principles of extension/generalization
that are the be-all and end-all (and NOT, btw, as you suggest, any helpful
info that the robot will receive - that, sir, is cheating - it has to work
these things out for itself, although perhaps it could *ask* for info).


Why is that cheating? Would you never give instructions to a child
about what to do? Taking instructions is something that all
intelligences need to be able to do, but it should be kept to a
minimum. I'm not saying it should take instructions unquestioningly
either; ideally it should figure out whether the instructions you give
are of any use to it.

Will Pearson





