This has been a great thread!

> I think a simulated, grounded, embodied approach is the one exception to the otherwise correct Chinese Room (CR) argument. It is the keyhole through which we must pass to achieve strong AI.
Actually, if you read Searle's original paper, I think you will find that he would agree with you. He is *not* arguing against the possibility of strong AI in general (he makes repeated references to humans as machines) but only against strong AI in machines where "the operation of the machine is defined solely in terms of computational processes over formally defined elements" -- which was the state of the art in AI when he made the argument, unlike today, when a number of systems do not require axiomatic reasoning over formally defined elements. There is also the trick that the Chinese Room is assumed/programmed to be 100% omniscient/correct in its required domain.
The distinction that is not clarified in most arguments is that Searle's Chinese Room is exactly analogous to an old-style expert system: it is ungrounded and unchanging, doing nothing but pre-programmed symbol manipulation. Most important to Searle and his arguments, though, is the fact that the *intention* of the Chinese Room is merely to manipulate symbols according to the pre-defined rules, not to "understand" Chinese.
The critical point that most people miss -- and what is really important for this list (and why people shouldn't blindly dismiss Searle) -- is that it is *intentionality* that defines "understanding". If a system has goals/intentions and its actions are modified by the external world (i.e. it is grounded), then the extent to which its actions are *effectively* modified (as judged in relation to its intentions) is the extent to which it "understands". The most important feature of an AGI is that it has goals and that it modifies its behavior (and learns) in order to reach them. The Chinese Room is incapable of these behaviors since it has no desires.
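To make the contrast concrete, here is a deliberately tiny sketch (all names and numbers are hypothetical, chosen only for illustration): a "Chinese Room" style responder applies a fixed rule book forever, while a minimal goal-driven agent adjusts its own behavior using an error signal fed back from the world it acts in.

```python
# Pre-programmed symbol manipulation: the rule book never changes,
# and the responder has no goal to measure itself against.
FIXED_RULES = {"ni hao": "hello"}

def chinese_room(symbol):
    """Ungrounded: output depends only on the pre-defined rule book."""
    return FIXED_RULES.get(symbol, "?")

class GoalDrivenAgent:
    """Grounded: holds a goal and nudges its action toward it
    based on an error signal coming back from the environment."""
    def __init__(self, goal):
        self.goal = goal
        self.action = 0.0

    def step(self, environment):
        error = environment(self.action) - self.goal  # feedback from the world
        self.action -= 0.5 * error                    # adjust behavior toward the goal
        return abs(error)

# A trivial environment that simply echoes the action back;
# the agent's intention is to produce the value 10.
env = lambda a: a
agent = GoalDrivenAgent(goal=10.0)
errors = [agent.step(env) for _ in range(20)]

# The room's replies are frozen; the agent's error shrinks over time.
print(chinese_room("ni hao"), chinese_room("zai jian"))
print(errors[0] > errors[-1])
```

The point of the sketch is only the asymmetry: judged against an intention, the agent's effectiveness improves with experience, which is the sense of "understanding" argued for above; the rule-following room has nothing to improve against.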
Where Searle is normally misconstrued is when people either don't understand what he means by "formally defined elements" or don't understand how and why his argument is limited to them. Unless you are omniscient, the world is not made up of formally defined elements. Incomplete information (not to mention incorrect information) prevents true axiomatic and formal reasoning. If you want to get really pedantic, you could argue that our *interface to*/*sensing of* the world can be broken down into formally defined elements, but lack of omniscience/complete information (i.e. seemingly different results under what appear to be the same circumstances) means that neither we nor a true AGI can be simple symbol manipulators -- and we can't improve unless we have an intention to improve and measure against.
(Side comment: this is also the basis for my disagreement with the claim that compression is the same as AGI. The compression/decompression algorithm must effectively be omniscient before it shows truly effective behavior, and then it is merely a symbol processor over the omniscient knowledge -- not an AGI.)
Thus, in reality, Searle's argument is not that strong AI is impossible but that strong AI requires intentionality, which his example Chinese Room does not display.
> Unfortunately, I have to take a break from the list (why are people cheering??).
No cheering at all. This was a very nice change of pace.
Mark
----- Original Message -----
From: "Terren Suydam" <[EMAIL PROTECTED]>
To: <[email protected]> Sent: Wednesday, August 06, 2008 2:50 PM Subject: Re: [agi] Groundless reasoning --> Chinese Room
Abram,

I think a simulated, grounded, embodied approach is the one exception to the otherwise correct Chinese Room (CR) argument. It is the keyhole through which we must pass to achieve strong AI.

The Novamente example I gave may qualify as such an exception (although the hybrid nature of grounded and ungrounded knowledge used in the design is a question mark for me), and does not invalidate the arguments against ungrounded approaches.

The CR argument works for ungrounded approaches, because without grounding, the symbols to be manipulated have no meaning, except within an external context that is totally independent of and inaccessible to the processing engine.

I believe for this to be further constructive, you have to show either 1) how an ungrounded symbolic approach does not apply to the CR argument, or 2) why, specifically, the argument fails to show that ungrounded approaches cannot achieve comprehension.

Unfortunately, I have to take a break from the list (why are people cheering??). I will answer any further posts addressed to me in due time, but I have other commitments for the time being.

Terren

--- On Wed, 8/6/08, Abram Demski <[EMAIL PROTECTED]> wrote:

From: Abram Demski <[EMAIL PROTECTED]>
Subject: Re: [agi] Groundless reasoning --> Chinese Room
To: [email protected]
Date: Wednesday, August 6, 2008, 2:32 PM

Terren,

I agree that the emergence response shows where the flaw is in the Chinese Room argument. The argument fails, because although understanding is not in the person or the instructions or the physical room as a whole, it emerges from the system. That being the case, how can the argument show that ungrounded AI does not work? I am not arguing for ungrounded AI, I just don't think the Chinese Room argument is a good argument against it.
You say yourself that understanding could occur within the original Chinese Room experiment: "For instance, we may be simulating an agent that has senses and can effect actions, and has some kind of dynamic memory and cognitive architecture that allows it to process a kind of ongoing experience. The Novamente design is one that could presumably lead to grounded understanding [...]" Therefore, the Chinese Room argument is not an example of symbolic AI failing, by your own argument. Maybe it could be fixed or extended to argue against symbolic AI. However, it does not do so by itself, and in my opinion it would be clearer to come up with a different argument rather than fixing that one.

-Abram Demski

On Wed, Aug 6, 2008 at 1:44 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Hi Abram,
>
> Sorry, your message did slip through the cracks. I intended to respond earlier... here goes.
>
> --- On Wed, 8/6/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>> I explained somewhat in my first reply to this thread. Basically, as I understand you, you are saying that the original Chinese Room does not have understanding, but if we modify the argument to connect it up to a robot with adequate senses, it could have understanding (if the human inside could work fast enough to show it). But, if I am willing to grant that such a robot has understanding (despite the human controller having no understanding of the data being manipulated), then I may very well be willing to grant that the original Chinese Room has understanding (as I am willing to grant).
>
> This is the crux of the emergence response to the Chinese Room. The question boils down to: how is it possible for the robot to have understanding but the processor not to?
>
> The insight necessary to see that this is possible, is that there are multiple and utterly independent levels of description going on.
> On the local level you have the processor, blindly following instructions, manipulating data, and so on. At the global level, a simulation is going on (whether it's a physical or virtual robot). In other words, a simulated reality has *emerged* as a consequence of executing a sophisticated program. The agent being simulated and the virtual environment it is simulated in are obeying a set of rules that are totally orthogonal to the set of rules at the local level.
>
> There is nothing in particular about what the processor is doing at that local level of description that facilitates understanding at the global level (not unlike studying the behavior of individual neurons to try and understand the global 'mind' phenomenon). Understanding, rather, is a consequence of the global-level description of the emergent entities and how they interact with one another, and cannot be understood strictly in terms of the local level. It's irreducible.
>
> For instance, we may be simulating an agent that has senses and can effect actions, and has some kind of dynamic memory and cognitive architecture that allows it to process a kind of ongoing experience. The Novamente design is one that could presumably lead to grounded understanding (I assume, anyway, based on BG's assertions that it's similar enough to OpenCog), because the design enables (simulated) experience, and the ability to structure new knowledge based ultimately on what it experiences (i.e. grounding). You can argue that definition/mechanism of understanding, and grounding, and so on, but that's a separate argument. The key point is that it is a mistake to in any way attribute the global phenomenon of understanding by the emergent agent to the local processor that is just blindly executing instructions.
>
>> I do distrust some philosophy, but other issues I think are very important. For example, I am very interested in the foundations of mathematics.
>>
>> -Abram
>
> Skepticism of the content of philosophy is certainly justified, but skepticism of the need for proficiency in it is not, if you're an AI researcher.
>
> Terren
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
> -------------------------------------------
