Perhaps the real question that Andi was asking was whether Andi should give up.
Now there is a simple question that Andi can ask himself to verify this theory
that he was projecting: have you been thinking about giving up?

Jim Bromer
 From: [email protected]
To: [email protected]
Subject: [agi] Tilting at Windmills
Date: Fri, 19 Apr 2013 09:58:03 -0700




I would agree that Google is a behemoth with resources.  So is IBM.
I personally think HAL = Siri + Watson.
So, what is the rationale for me to research AGI?  My answer is to learn more 
about myself and others.  The seminal question that started me on this Piaget 
Modeler adventure was "What is a mental model, and how do we form them?"  Pure 
and simple.  I've learned a lot in a very short time, and for me that is the 
reward.
It's personal. Even if they build an AGI, that's nice.  But I'm sure there are 
many ways to skin a cat, and perhaps there is more than one way to build an 
AGI.  There are, after all, many ways to fly (e.g., like birds, like aircraft), 
and we have not yet ventured very far into some of them (e.g., sonic levitation, 
magnetic levitation). 
The Wright Brothers didn't give up. Why should any one of us? Why should Jim 
Bromer, or Arthur Murray, or Ben Goertzel?  We shouldn't. 
Cheers,
~PM
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
From: [email protected]
Date: Fri, 19 Apr 2013 09:37:09 -0500
To: [email protected]

Jim, at this point I was not trying to address your theories specifically.  I 
was just expressing exasperation at the lack of technical detail here right 
now.  Details are important, and they need to be specific and precise in order 
for work to be done.  Yeah, sorry about getting picky about use of terms, but 
when you study this stuff academically, what can perhaps seem like simple 
mistakes of terminology comes across as, I guess I would say, "unprofessional".  
I suppose this group has always been like this, but what I'm trying to say 
right now is: it's fine to be negative, but give clear reasons and don't just 
shout and be emotional.
Sorry for not being more constructive.  I think I had some more detailed points 
I could have addressed, but they were piling up to the point of no longer 
seeming to be of much value.
So, to throw something somewhat more positive out there: I just looked at the 
website of the people working at Google Research.  They've got literally tons 
of people in areas like machine perception, AI, machine learning, and machine 
translation.  It gives me the feeling that the people are there, and with 
enough plugging away, they will eventually get to AGI as just a natural 
progression.  Of course, I think the field and the tools they use have some 
missing bits, but that's just me.  You all can tilt at all the windmills y'all 
want, but they have money and talent at a level we can't approach.

andi
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com