I'm in agreement, PM.

But supposing we're wrong and AGI does attain "godlike powers" -- I feel so
silly just saying that phrase -- I think JC has missed something
fundamental: The most interesting things in life are not those that come
from outside, but those that come from within. Being creative and exploring
your own potential is *far* more interesting than wandering around looking
at every copy of a pattern you've already understood. When I get bored, I
don't go outside and stare at trees. (Though this can be wonderfully
refreshing, it's more appropriate for when I need to relax than when I'm
bored.) Instead, I pick up my guitar, or open an IDE, or grab a blank piece
of paper. We ourselves are the most complex structures in the universe
(afaik) and so we are the place to look for new and interesting things to
emerge. And when I don't feel like creating, I go online and see what other
people have created since I last checked. The universe will always be
interesting as long as we're around to make it so.

On Fri, Nov 2, 2012 at 1:23 AM, Piaget Modeler <[email protected]> wrote:

>  I think people on this thread overestimate the potential capabilities of
> AGIs, particularly the ones we can build in our lifetime with
> current technologies.
>
> To ascribe omnipotence or omniscience to an AGI is a grave mistake.
> Also, I think we underestimate just how much information
> an AGI has to process in order to arrive at new and useful conclusions,
> especially in real time.  If an AGI performs better than a
> human, which we all expect it to, that will be a significant milestone,
> because humans filter a tremendous amount of information
> in order to make their mistakes (and hopefully learn from them).
>
> Why would we expect so much more from an AGI?
>
> I'm coming from a Developmental AGI perspective, where we attempt to build
> infants, not demigods.
>
> ~PM
>
>
>  ------------------------------
> Date: Wed, 31 Oct 2012 19:05:12 -0700
> From: [email protected]
> To: [email protected]
> Subject: Re: [agi] not so superintelligent questions ...
>
>
> what if the superintelligent agi system finishes learning everything our
> particular universe offers to be learned within a day? do you suggest that
> exploring the cosmos for 10^100 years will be very interesting at this
> point? this seems like exploring telephone books to me - you fully
> understand the concept of a telephone book and all there is left to explore
> are billions of telephone number entries (or billions of planets and
> galaxies).
>
> "wow, this particular nebula is more interesting than the 5000 others i
> have seen today and the trillion i have simulated so far"?
>
> i really do hope that there is a lot more to the cosmos, life, etc. than i
> can perceive today.
>
> -- just another camel
>
>
> On 11/01/2012 07:29 PM, Patrick McKown wrote:
>
>
> Maybe biological life can only know purpose within finite
> cycles.
>
>
> *From:* Piaget Modeler <[email protected]>
> *To:* AGI <[email protected]>
> *Sent:* Friday, October 19, 2012 2:13 AM
> *Subject:* RE: [agi] not so superintelligent questions ...
>
>
> The AGIs could explore space, a la Star Trek: manufacture and launch
> repeater communication satellites.
>
> So much to learn there. We haven't even begun.
>
>
>  ------------------------------
>
>  1) Will all sufficiently intelligent AGI agents ultimately share the
> same behaviour everywhere in our universe, just as zero intelligence
> behaves the same everywhere?
>
>
> 2) What could be the incentive for such a superintelligent AGI to stay
> "alive", if joy is defined as a reduction of entropy or an increase in
> unity (as Ben suggests)?
>
>
>
>
>  <http://www.listbox.com/>
>   *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/19859314-980d9bb4> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com/>
>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
