How intelligent would any human be if they couldn't be taught by other humans?

Could a human ever learn to speak entirely on their own?  In the few
real-life cases where a child has grown up isolated from other people,
the person was left permanently impaired and never became a normally
functioning human being.

If humans can't become human without the help of other humans, why should
this be a criterion for AGI?

David Clark

PS I am not suggesting that explicitly programming 100% of an AGI is either
doable or desirable, but surely some degree of detailed teaching must be a
requirement for everyone on this list who dreams of creating an AGI, no?

> -----Original Message-----
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
> Sent: March-02-08 5:36 AM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Thought experiment on informationally limited
> systems
> 
> Jeez, Will, the point of Artificial General Intelligence is that it can
> start adapting to an unfamiliar situation and domain BY ITSELF.  And your
> FIRST and only response to the problem you set was to say: "I'll get
> someone to tell it what to do."
> 
> IOW you simply avoided the problem and thought only of cheating. What a
> solution, or merest idea for a solution, must do is tell me how that
> intelligence will start adapting by itself - will generalize from its
> existing skills to cross over domains.
> 
> Then, as my answer indicated, it may well have to seek some instructions
> and advice - especially and almost certainly if it wants to acquire a
> whole new major skill, as we do, by taking courses etc.
> 
> But a general intelligence should be able to adapt to some unfamiliar
> situations entirely by itself - like perhaps your submersible situation.
> No guarantee that it will succeed in any given situation (as there isn't
> with us), but you should be able to demonstrate its power to adapt
> sometimes.
> 
> In a sense, you should be appalled with yourself that you didn't try to
> tackle the problem - to produce a "cross-over" idea. But since literally
> no one else in the field of AGI has the slightest "cross-over" idea -
> i.e. is actually tackling the problem of AGI - and the whole culture is
> one of avoiding the problem, it's to be expected. (You disagree? Show me
> one, just one, cross-over idea anywhere. Everyone will give you a v.
> detailed, impressive timetable for how long it'll take them to produce
> such an idea; they just will never produce one. Frankly, they're too
> scared.)
> 
> 
> Mike Tintner <[EMAIL PROTECTED]> wrote:
> >
> >>  You must first define its existing skills, then define the new
> >>  challenge with some degree of precision - then explain the
> >>  principles by which it will extend its skills. It's those principles
> >>  of extension/generalization that are the be-all and end-all (and
> >>  NOT, btw, as you suggest, any helpful info that the robot will
> >>  receive - that, sir, is cheating - it has to work these things out
> >>  for itself - although perhaps it could *ask* for info).
> >>
> >
> > Why is that cheating? Would you never give instructions to a child
> > about what to do? Taking instructions is something that all
> > intelligences need to be able to do, but the need for them should be
> > minimised. I'm not saying it should take instructions unquestioningly
> > either; ideally it should figure out whether the instructions you give
> > are of any use to it.
> >
> >  Will Pearson
> >
> >
