--- Monika Krishan <[EMAIL PROTECTED]> wrote:

> Hi All,
> 
> I'm new to the list, so I'm not sure whether these issues have already been
> raised.
> 
> 1. Do you think AGIs will eventually reach a point in their evolution when
> "self improvement" might come to mean attempting to "solve previously solved
> problems with fewer resources"?

I think that optimization is consistent with the evolutionarily driven goal of
becoming more intelligent.  But that goal can also be pursued by building or
stealing more computing power and by forming alliances.  I expect that competing
AGIs will use all of these approaches.

> - "Fewer resources" might mean deliberately increasing constraints or
> reducing computing power
> - This "point" in evolution might be reached (for example) when the AGI
> environment (including the humans/hybrids in it) becomes highly
> predictable.
> - This type of "improvement" would be analogous to, say, being able to
> navigate by hearing alone (e.g., in the case of the visually impaired),
> running a marathon (as opposed to driving the same distance), or being
> able to paint with one's toes.

I guess the question is what purpose challenging oneself serves.  How does
climbing mountains or going to the moon help humans survive?  Experimentation
is an essential component of intelligence, so I believe it will survive in
AGI.


-- Matt Mahoney, [EMAIL PROTECTED]
