On 11/5/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Monika Krishan <[EMAIL PROTECTED]> wrote:
>
> > Hi All,
> >
> > I'm new to the list, so I'm not sure if these issues have already been
> > raised.
> >
> > 1. Do you think AGIs will eventually reach a point in their evolution
> > when "self improvement" might come to mean attempting to "solve
> > previously solved problems with fewer resources"?
>
> I think that optimization is consistent with the evolutionarily driven
> goal of becoming more intelligent.  But this can also be accomplished by
> building and stealing more computing power and forming alliances.  I
> expect that competing AGIs will use all of these approaches.
>
> > - "Fewer resources" might mean deliberately increasing constraints or
> > reducing computing power.
> > - This "point" in evolution might be reached (for example) when the
> > AGI environment (including the humans/hybrids in it) becomes highly
> > predictable.
> > - This type of "improvement" would be analogous to, say, being able to
> > navigate based on hearing alone (e.g., in the case of the visually
> > impaired), or running a marathon (as opposed to driving the same
> > distance), or being able to paint with one's toes.
>
> I guess the question is: what purpose does challenging oneself play?
> How does climbing mountains or going to the moon help humans survive?
> Experimentation is an essential component of intelligence, so I believe
> it will survive in AGI.


Thank you.

"Experimentation" is a good way of putting it. The motivation behind my
questions was the possibility that AGI might come full circle and attempt to
emulate human intelligence (HI) in the process of continually improving
itself.

It is conceivable that one day most problems will have been solved. The
next step might then be to find more efficient solutions, and eventually
perhaps more creative ones, which might include solving problems "the way
humans do", with all their constraints. Humans would be part of the AGI's
environment, so it seems that its knowledge would include some
representation of humans and their capabilities.

There has been discussion regarding the use of AGI to augment HI. Can
this augmentation be achieved without determining what HI is capable of?
For instance, one wouldn't consider a basic square root calculator
something that augments HI because, well, humans can do this easily
enough.

Perhaps the task of fully understanding what HI can achieve will be taken
up by AGIs!
-Monika

-----
This list is sponsored by AGIRI: http://www.agiri.org/email