I think people on this thread overestimate the potential capabilities of AGIs,
particularly the ones we can build in our lifetimes with current technologies.
Ascribing omnipotence or omniscience to an AGI is a grave mistake. I also think
we underestimate just how much information an AGI has to process in order to
arrive at new and useful conclusions, especially in real time. If an AGI
performs better than a human, as we all expect it to, that will be a
significant milestone, because humans filter a tremendous amount of information
just to make their mistakes (and hopefully learn from them).
Why would we expect so much more from an AGI?
I'm coming from a developmental AGI perspective, where we attempt to build
infants, not demi-gods.
~PM


Date: Wed, 31 Oct 2012 19:05:12 -0700
From: [email protected]
To: [email protected]
Subject: Re: [agi] not so superintelligent questions ...

What if the superintelligent AGI system finishes learning everything our
particular universe offers to be learned within a day? Do you suggest that
exploring the cosmos for 10^100 years will be very interesting at that point?
This seems like exploring telephone books to me: you fully understand the
concept of a telephone book, and all that is left to explore are billions of
telephone-number entries (or billions of planets and galaxies).

"Wow, this particular nebula is more interesting than the 5000 others I have
seen today and the trillion I have simulated so far"?

I really do hope that there is a lot more to the cosmos, life, etc. than I can
perceive today.

-- just another camel

On 11/01/2012 07:29 PM, Patrick McKown wrote:

Maybe biological life can only know purpose as confined within finite cycles.

From: Piaget Modeler <[email protected]>
To: AGI <[email protected]>
Sent: Friday, October 19, 2012 2:13 AM
Subject: RE: [agi] not so superintelligent questions ...

The AGIs could explore space--a la Star Trek. Manufacture and launch repeater
communication satellites,

So much to learn there. We haven't even begun.

1) Will all sufficiently intelligent AGI agents ultimately share the same
behaviour everywhere in our universe, just as zero intelligence behaves the
same everywhere?

2) What could be the incentive for such a superintelligent AGI to stay
"alive", if joy is defined as a reduction of entropy or an increase in unity
(as Ben suggests)?

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
