Hi Terren,

When I worked at Cycorp on the Cyc Knowledge Base, I believe they employed 
over 20 PhD philosophers at the peak to edit Cyc concepts and relationships. 
Every day I heard conversations such as "Would a geyser of Dr. Pepper still be 
Dr. Pepper, or is it only a conveniently contained soft drink?", "What are 
the essential concepts of time, and the relationships between them?", "How do 
we perform deductive inference when modal operators are allowed?", "In some 
possible world, ...", and so forth. The field of knowledge representation, 
which is narrow AI, certainly benefits from applied philosophy.
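As a toy illustration only (this is a hypothetical sketch, not Cyc's actual CycL representation or inference engine), knowledge representation of the kind described above can be thought of as a set of assertions plus simple deduction rules, e.g. computing the transitive closure of "isa":

```python
# Toy knowledge base: each fact is a (predicate, arg1, arg2) triple.
# Hypothetical sketch for illustration -- not Cyc's actual machinery.

facts = {
    ("isa", "DrPepper", "SoftDrink"),
    ("isa", "SoftDrink", "Beverage"),
    ("isa", "Beverage", "Liquid"),
}

def isa_closure(facts):
    """Derive transitive 'isa' facts by naive forward chaining."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (p1, a, b) in list(derived):
            for (p2, c, d) in list(derived):
                if p1 == "isa" and p2 == "isa" and b == c:
                    if ("isa", a, d) not in derived:
                        derived.add(("isa", a, d))
                        changed = True
    return derived

kb = isa_closure(facts)
print(("isa", "DrPepper", "Liquid") in kb)  # True: deduced, not asserted
```

Real systems like Cyc go far beyond this (contexts, modal operators, non-monotonic reasoning), which is exactly where the philosophers earn their keep.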

-Steve

Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



----- Original Message ----
From: Terren Suydam <[EMAIL PROTECTED]>
To: [email protected]
Sent: Tuesday, August 5, 2008 11:03:27 AM
Subject: [agi] aversion to philosophy


OK, this brings up something that I'd like to pose to the list as a whole. I 
realize this will be a somewhat antagonistic question - my intent here is not 
to offend (or to single anyone out), especially since I could be wrong.

But my impression is that, with some exceptions, AI researchers in general 
don't want to touch philosophy. And that astounds me, because of all the 
possible domains of engineering, AI research has to be the one with the most 
philosophical consequence. Trying to build AI without doing philosophy, to me, 
is like trying to build a rocket ship without doing math.

I believe there are a few reasons why this is. One, philosophy is hard and 
very often boring. Two, there is a bias against philosophers who don't build 
things, as if that made them somehow irrelevant. And three, subjecting your own 
ideas to the philosophical scrutiny of others is threatening. There's a kind of 
honor in testing your ideas by building them, so one can save some face in the 
event of failure (it was an unsuccessful experiment). But a philosophical 
rejection that demonstrates through careful logic the infeasibility of your 
design before you even build it - well, that just makes you feel stupid.

I invite those of you who feel like this is unfair to correct my perceptions.

Terren

--- On Tue, 8/5/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> > Searle's Chinese Room argument is one of those things that makes me
> > wonder if I'm living in the same (real or virtual) reality as everyone
> > else. Everyone seems to take it very seriously, but to me, it seems
> > like a transparently meaningless argument.
> 
> I think that the Chinese Room argument is an AI philosophical
> anachronistic meme that is embedded in the AI community and promulgated
> by monotonous drone-like repetition. Whenever I hear it I'm like let me
> go read up on that for the n'th time, and after reading I'm like WTF are
> they talking about!?!? Is that one of the grand philosophical hang-ups
> in AI thinking?
> 
> I wish I had a mega-meme expulsion cannon and could expunge that mental
> knot of twisted AI arteriosclerosis.
> 
> John
> 
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com


      



