On Monday 15 October 2007 01:57:18 pm, Richard Loosemore wrote:
> AI programmers, in their haste to get something working, often simply 
> write some code and then label certain symbols as if they are 
> meaningful, when in fact they are just symbols-with-labels.

This is quite true, but I think it is a lot closer to McDermott's critique 
("Artificial Intelligence Meets Natural Stupidity") than to Harnad's.

Harnad shares the typical epistemologist's assumption that for a symbol to 
have meaning, it must have an "aboutness", i.e. it must refer to something in 
some external (although perhaps imaginary) world. I happen to think that 
Solomonoff's inductive formulation of AI more or less demolished this 
particular set of (often unstated) philosophical assumptions, which were, 
after all, responsible for three millennia of spectacularly unproductive 
pontification.
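
For anyone who hasn't seen it, the usual statement of Solomonoff's 
universal prior (one standard form; notation varies across presentations) 
is:

    % One standard form of the Solomonoff prior: U is a prefix-free
    % universal Turing machine, and the sum ranges over programs p
    % whose output begins with the string x.
    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

    % Prediction is then just conditioning:
    P(y \mid x) = \frac{M(xy)}{M(x)}

The prior is incomputable and can only be approximated, but the point 
stands: prediction falls out of program length alone, and nowhere in the 
formalism does a symbol have to "refer" to anything.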

Josh

