Hi Ben,

Clearly you're not averse to philosophy, and as a result, your designs for 
OpenCog and presumably Novamente are well thought out and robust. Regardless of 
whether everyone agrees with the philosophy behind the design, it's obvious 
that it's well considered and that the design is consistent with it.

I'm happy to be wrong about my perception. Maybe it's just that I see folks who 
are striving for AGI but whose designs are informed by a carefully considered 
philosophy that I simply don't agree with. Yet, I still feel there's truth to 
what I wrote earlier.

T

--- On Tue, 8/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
I think your assertion is not correct...

Personally, I published a book in 2006 called "The Hidden Pattern", which is largely preoccupied with philosophy of mind.

Anecdotally, I have found many AGI researchers to be deeply interested in (and knowledgeable about) philosophy of mind.

On the other hand, the majority of **narrow AI** researchers probably see themselves largely as computer scientists, and are more interested in the mathematical, algorithmic, and engineering aspects of AI than in the philosophical aspects.

Another point is that the style of scientific journals and conferences in AI does not lend itself to philosophical discourse, so the publication record may minimize the role of philosophy more than is reflective of actual practice.

One of the interesting intellectual developments of the last few decades is the way cognitive science, AI, and philosophy of mind have developed synergetically...

-- Ben G

On Tue, Aug 5, 2008 at 12:03 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:

OK, this brings up something that I'd like to pose to the list as a whole. I realize this will be a somewhat antagonistic question - my intent here is not to offend (or to single anyone out), especially since I could be wrong.

But my impression is that, with some exceptions, AI researchers in general don't want to touch philosophy. That astounds me, because of all the possible domains of engineering, AI research has to be the one of greatest philosophical consequence. Trying to build AI without doing philosophy, to me, is like trying to build a rocket ship without doing math.

I believe there are a few reasons for this. One, philosophy is hard and very often boring. Two, there is a bias against philosophers who don't build things, as being somehow irrelevant. And three, subjecting your own ideas to the philosophical scrutiny of others is threatening. There's a kind of honor in testing your ideas by building them, so one can save some face in the event of failure (it was an unsuccessful experiment). But a philosophical rejection that demonstrates through careful logic the infeasibility of your design before you even build it - well, that just makes you feel stupid.

I invite those of you who feel this is unfair to correct my perceptions.

Terren



--- On Tue, 8/5/08, John G. Rose <[EMAIL PROTECTED]> wrote:

> > Searle's Chinese Room argument is one of those things that makes me
> > wonder if I'm living in the same (real or virtual) reality as everyone
> > else. Everyone seems to take it very seriously, but to me, it seems like
> > a transparently meaningless argument.
>
> I think that the Chinese Room argument is an AI philosophical anachronistic
> meme that is embedded in the AI community and promulgated by monotonous
> drone-like repetitivity. Whenever I hear it I'm like let me go read up on
> that for the n'th time, and after reading I'm like WTF are they talking
> about!?!? Is that one of the grand philosophical hang-ups in AI thinking?
>
> I wish I had a mega-meme expulsion cannon and could expunge that mental
> knot of twisted AI arteriosclerosis.
>
> John

>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first 
overcome." - Dr Samuel Johnson
  
    
      