Re: [agi] Groundless reasoning -- Chinese Room

2008-08-21 Thread Valentina Poletti
On 8/8/08, Mark Waser [EMAIL PROTECTED] wrote: The person believes his decisions are now guided by free will, but truly they are still guided by the book: if the book gives him the wrong meaning of a word, he will make a mistake when answering a Chinese speaker. The translations are guided

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-19 Thread Jim Bromer
On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson [EMAIL PROTECTED] wrote: Well, one point where we disagree is on whether truth can actually be known by anything. I don't think this is possible. So to me that which is called truth is just something with a VERY high probability, and which is

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-18 Thread Charles Hixson
This is probably quibbling over a definition, but: Jim Bromer wrote: On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson [EMAIL PROTECTED] wrote: Jim Bromer wrote: As far as I can tell, the idea of making statistical calculations about what we don't know is only relevant for three

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-17 Thread Jim Bromer
On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson [EMAIL PROTECTED] wrote: Jim Bromer wrote: As far as I can tell, the idea of making statistical calculations about what we don't know is only relevant for three conditions. The accuracy of the calculation is not significant. The evaluation is near

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-09 Thread Jim Bromer
In most situations this is further limited because one CAN'T know all of the consequences. So one makes probability calculations weighting things not only by probability of occurrence, but also by importance. So different individuals disagree not only on the definition of best, but also on
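The weighting described here (scoring each possible consequence of an action by its probability of occurrence times its importance, then choosing the action with the best total) can be sketched as a toy expected-value calculation. The action names and numbers below are invented for illustration, not taken from the thread.

```python
# Toy sketch of probability-times-importance weighting over uncertain
# consequences. Actions, outcomes, and scores here are illustrative only.

def expected_value(consequences):
    """consequences: list of (probability, importance) pairs for one action."""
    return sum(p * importance for p, importance in consequences)

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, importance)."""
    return max(actions, key=lambda name: expected_value(actions[name]))

actions = {
    "act_now": [(0.7, 5.0), (0.3, -10.0)],  # likely modest gain, risky downside
    "wait":    [(0.9, 1.0), (0.1, -1.0)],   # safe, small payoff
}
print(best_action(actions))
```

Note that "importance" here stands in for a utility scale the thread leaves undefined, which is exactly where the posters say individuals disagree.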

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-09 Thread Charles Hixson
Jim Bromer wrote: In most situations this is further limited because one CAN'T know all of the consequences. So one makes probability calculations weighting things not only by probability of occurrence, but also by importance. So different individuals disagree not only on the definition of

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-09 Thread Jim Bromer
On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson [EMAIL PROTECTED] wrote: Could you define opinion in an operational manner, i.e. in such a way that it was specified whether a particular structure in a database satisfied that or not? Or a particular logical operation? This part of your question

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Valentina Poletti
Let me ask about a special case of this argument. Suppose now the book that the guy in the room holds is a Chinese-teaching book for English speakers. The guy can read it for as long as he wishes, and can consult it in order to give the answers to the Chinese speakers interacting with him. In

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Mark Waser
is that the person can choose what to answer (as opposed to the Chinese Room where responses are dictated by the input and no choice is involved). - Original Message - From: Valentina Poletti To: agi@v2.listbox.com Sent: Friday, August 08, 2008 6:18 AM Subject: Re: [agi] Groundless

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Jim Bromer
Valentina Poletti [EMAIL PROTECTED] wrote: Suppose now the book that the guy in the room holds is a Chinese-teaching book for English speakers. The guy can read it for as long as he wishes, and can consult it in order to give the answers to the Chinese speakers interacting with him. Great

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Jim Bromer
On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson [EMAIL PROTECTED] wrote: At this point I think it relevant to bring in an assertion from Larry Niven (Protector): Paraphrase: When you understand all the consequences of an act, then you don't have free will. You must choose the best decision.

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Charles Hixson
Jim Bromer wrote: On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson [EMAIL PROTECTED] wrote: At this point I think it relevant to bring in an assertion from Larry Niven (Protector): Paraphrase: When you understand all the consequences of an act, then you don't have free will. You must choose

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread wannabe
Argh! Are you all making the mistake I think you are making? Searle is using a technical term in philosophy--intentionality. It is different from the common use of intending as in aiming to do something or intention as a goal. (Here's a wiki: http://en.wikipedia.org/wiki/Intentionality). The

Re: [agi] Groundless reasoning

2008-08-07 Thread Jiri Jelinek
On Wed, Aug 6, 2008 at 8:25 PM, Brad Paulsen [EMAIL PROTECTED] wrote: Jiri, I'd really like to hear more about your approach. Sounds bang-on! Have you written a paper (or worked from papers written by others) to which you could point us? Brad, I'm not aware of anyone else using or

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Valentina Poletti
Terren: Substituting an actual human invalidates the experiment, because then you are bringing something in that can actually do semantics. The point of the argument is to show how merely manipulating symbols (i.e. the syntactical domain) is not a demonstration of understanding, no matter what the

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Valentina Poletti
yep.. isn't it amazing how long a thread is becoming based on an experiment that has no significance? On 8/6/08, Steve Richfield [EMAIL PROTECTED] wrote: Back to reason, This entire thread is yet another example that once you accept a bad assumption, you can then prove ANY absurd

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Eric Burton
Seriously, I'm only venturing a personal opinion but I've never even especially cared for Chinese Rooms. On 8/7/08, Valentina Poletti [EMAIL PROTECTED] wrote: yep.. isn't it amazing how long a thread is becoming based on an experiment that has no significance? On 8/6/08, Steve Richfield

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Jim Bromer
On Wed, Aug 6, 2008 at 8:33 PM, Mark Waser [EMAIL PROTECTED] wrote: I was not dismissing the argument and certainly not making a presumption of human dominion over understanding. Quite the opposite in fact. I'm not quite sure why you believe that I did. Could you tell me which of my

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mark Waser
agree completely. Mark - Original Message - From: [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, August 07, 2008 2:57 AM Subject: Re: [agi] Groundless reasoning -- Chinese Room Argh! Are you all making the mistake I think you are making? Searle is using a technical

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mark Waser
to do. You have free will because you will do what (your nature makes) you wish to do. Mark - Original Message - From: Jim Bromer [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, August 07, 2008 8:55 AM Subject: Re: [agi] Groundless reasoning -- Chinese Room On Wed

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Jim Bromer
On Thu, Aug 7, 2008 at 9:42 AM, Mark Waser [EMAIL PROTECTED] wrote: Hi Jim, The apparent paradox can be reduced to the never ending deterministic vs free will argument. Again, I agree but I don't believe that determinism vs. free will is really a paradox (heresy!). You are deterministic

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mark Waser
grounding emergent. They are pretty close to one and the same. - Original Message - From: Jim Bromer [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, August 07, 2008 11:14 AM Subject: Re: [agi] Groundless reasoning -- Chinese Room On Thu, Aug 7, 2008 at 9:42 AM, Mark Waser

Re: [agi] Groundless reasoning

2008-08-07 Thread James Ratcliff
Further explanation needed more below inline. --- On Mon, 8/4/08, Harry Chesley [EMAIL PROTECTED] wrote: From: Harry Chesley [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning To: agi@v2.listbox.com Date: Monday, August 4, 2008, 9:16 PM Vladimir Nesov wrote: It's too fuzzy

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread James Ratcliff
Back on the problem of understanding more below ___ James Ratcliff - http://falazar.com Looking for something... --- On Wed, 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote: From: Terren Suydam [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread James Ratcliff
PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, August 06, 2008 7:32 PM Subject: Re: [agi] Groundless reasoning -- Chinese Room On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser [EMAIL PROTECTED] wrote: This has been a great thread! Actually, if you read Searle's original paper, I think that you

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mark Waser
will not be thrown by minor differences since they understand that space around their known solutions as well as the exact solutions. - Original Message - From: James Ratcliff To: agi@v2.listbox.com Sent: Thursday, August 07, 2008 2:13 PM Subject: Re: [agi] Groundless reasoning -- Chinese

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread James Ratcliff
: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Thursday, August 7, 2008, 2:31 PM Please Answer: Now how can we really say how this is different from human understanding? I receive a question, I rack my brain for stored facts, if relevant, and any

FWIW: Re: [agi] Groundless reasoning

2008-08-07 Thread Charles Hixson
Brad Paulsen wrote: ... Nope. Wrong again. At least you're consistent. That line actually comes from a Cheech and Chong skit (or a movie -- can't remember which at the moment) where the guys are trying to get information by posing as cops. At least I think that's the setup. When the

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Charles Hixson
Jim Bromer wrote: ... I mostly agree with your point of view, and I am not actually saying that your technical statements are wrong. I am trying to explain that there is something more to be learned. The apparent paradox can be reduced to the never ending deterministic vs free will argument.

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mike Tintner
Charles: When you understand all the consequences of an act, then you don't have free will. Just so. And the number of decisions/actions that you take, where you understand all the consequences - all the rewards, risks, costs, and opportunity costs of not just the actions, but how and how long

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread James Ratcliff
] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Thursday, August 7, 2008, 2:49 PM :-)  The Chinese Room can't pass the Turing test for exactly the reason you mention.   Well in the Chinese Room case I think the book of instructions is infinitely large

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Mark Waser
the distinction that you need. - Original Message - From: James Ratcliff To: agi@v2.listbox.com Sent: Thursday, August 07, 2008 4:27 PM Subject: Re: [agi] Groundless reasoning -- Chinese Room With the Chinese Room, we aren't doing any reasoning really, just looking up answers
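The "just looking up answers" reading of the room can be made concrete as a pure lookup table: a responder that matches input strings to output strings, with no representation of meaning anywhere in the system. The rule table below is a tiny invented stand-in for Searle's (in principle enormous) book of instructions.

```python
# Toy "Chinese Room" as a bare lookup table. The rules are invented for
# illustration; nothing in this program knows what any symbol denotes.

RULES = {
    "你好": "你好！",          # a greeting in, a greeting out
    "你会说中文吗？": "会。",   # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    # The "person in the room" only pattern-matches against the book.
    return RULES.get(symbols, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好"))
```

A real conversation would defeat a finite table like this almost immediately, which is one reason posters above doubt such a room could pass a Turing test at all.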

Re: [agi] Groundless reasoning

2008-08-07 Thread Harry Chesley
James Ratcliff wrote: Every AGI but the truly most simple AI must run in a simulated environment of some sort. Not necessarily, but in most cases yes. To give a counter example, a human scholar reads Plato and publishes an analysis of what he has read. There is no interaction with the

Re: FWIW: Re: [agi] Groundless reasoning

2008-08-07 Thread Brad Paulsen
Charles, Well, that's what gets me up in the morning. I learn something new every day! FWIW, I don't believe the Pink Floyd reference is appropriate since I don't *think* they included the signature word: stinkin'. The We don't need no.. part is there, though. ;-) As educational as this

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
My view is that the problem with the Chinese Room argument is precisely the manner in which it uses the word 'understanding'. It is implied that in this context this word refers to mutual human experience. Understanding has another meaning, namely the emergent process some of you described, which

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
Then again, I was just thinking.. wouldn't it be wonderful if instead of learning everything from scratch since the day we are born, we were born with all the knowledge all human beings had acquired until that moment? If somehow that was implanted in our DNA? Of course that is not feasible.. but

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
Cargo Cult AI. (http://en.wikipedia.org/wiki/Cargo_cult) Terren --- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
Ok, I really don't see how it proves that then. In my view, the book could be replaced with a Chinese-English translator and the same exact outcome will be given. Both are using their static knowledge for this process, not experience. On 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote: Hi

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
By translator I meant human translator, btw. What this experiment does suggest is that linguistic abilities require energy (the book alone would do nothing), and that they are independent of humanness (the machine could do it), whether they involve 'understanding' or not. On 8/6/08, Valentina

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
But you're assuming that a chinese-english translator is possible to achieve without understanding. Language translation requires understanding of semantics, not just manipulation of syntax. That's why NLP has been out of reach, and will be, until we get agents that can actually do semantics.

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
the global result of the manipulation is (e.g., a chess move). --- On Wed, 8/6/08, Valentina Poletti [EMAIL PROTECTED] wrote: From: Valentina Poletti [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Wednesday, August 6, 2008, 11:27 AM by translator

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley
Terren Suydam wrote: Harry, --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote: I'll take a stab at both of these... The Chinese Room to me simply states that understanding cannot be decomposed into sub-understanding pieces. I don't see it as addressing grounding, unless you

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Abram Demski
[EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 9:49 PM Terren, I agree. Searle's responses are inadequate, and the whole thought experiment fails to prove his point. I

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
Hi Abram, Sorry, your message did slip through the cracks. I intended to respond earlier... here goes. --- On Wed, 8/6/08, Abram Demski [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: I explained somewhat in my first reply to this thread. Basically, as I understand you, you are saying

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
Harry, --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote: But it's a preaching to the choir argument: Is there anything more to the argument than the intuition that automatic manipulation cannot create understanding? I think it can, though I have yet to show it. The burden is on

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Abram Demski
Terren, I agree that the emergence response shows where the flaw is in the Chinese Room argument. The argument fails, because although understanding is not in the person or the instructions or the physical room as a whole, it emerges from the system. That being the case, how can the argument show

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam
] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Wednesday, August 6, 2008, 2:32 PM Terren, I agree that the emergence response shows where the flaw is in the Chinese Room argument. The argument fails, because

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Steve Richfield
Back to reason, This entire thread is yet another example that once you accept a bad assumption, you can then prove ANY absurd proposition. I see no reason to believe that a Chinese Room is possible, and therefore I have no problem rejecting all arguments regarding the absurd conclusions that

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley
Terren Suydam wrote: Harry, --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote: But it's a preaching to the choir argument: Is there anything more to the argument than the intuition that automatic manipulation cannot create understanding? I think it can, though I have yet to show

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley
Terren Suydam wrote: Unfortunately, I have to take a break from the list (why are people cheering??). No cheering here. Actually, I'd like to say thanks to everyone. This thread has been very interesting. I realize that much of it is old hat and boring to some of you, but it's been useful

RE: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread David Clark
: Terren Suydam [mailto:[EMAIL PROTECTED] Sent: August-06-08 8:24 AM To: agi@v2.listbox.com Subject: Re: [agi] Groundless reasoning -- Chinese Room Harry, --- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote: I'll take a stab at both of these... The Chinese Room to me simply states

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Mark Waser
- Original Message - From: Terren Suydam [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, August 06, 2008 2:50 PM Subject: Re: [agi] Groundless reasoning -- Chinese Room Abram, I think a simulated, grounded, embodied approach is the one exception to the otherwise correct

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Mark Waser
But it's a preaching to the choir argument: Is there anything more to the argument than the intuition that automatic manipulation cannot create understanding? I think it can, though I have yet to show it. Searle answers that exact question in his paper by saying Because the formal symbol

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Mark Waser
linkage to the world). - Original Message - From: David Clark [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, August 06, 2008 3:57 PM Subject: RE: [agi] Groundless reasoning -- Chinese Room I got the following quote from Wikipedia under understanding. According

Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen
Terren Suydam wrote: Brad, I'm not entirely certain this was directed to me, since it seems to be a response to both things I said and things Mike Tintner said. My comments below, where (hopefully) appropriate. --- On Mon, 8/4/08, Brad Paulsen [EMAIL PROTECTED] wrote: Ah, excuse me.

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Jim Bromer
On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser [EMAIL PROTECTED] wrote: This has been a great thread! Actually, if you read Searle's original paper, I think that you will find that he... is *not* meaning to argue against the possibility of strong AI (since he makes repeated references to human as

Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen
Ben Goertzel wrote: Well, having an intuitive understanding of human language will be useful for an AGI even if its architecture is profoundly nonhumanlike. And, human language is intended to be interpreted based on social, spatiotemporal experience. So the easiest way to make an AGI

Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen
Jiri, I'd really like to hear more about your approach. Sounds bang-on! Have you written a paper (or worked from papers written by others) to which you could point us? Cheers, Brad Jiri Jelinek wrote: Ben, My perspective on grounding is partially summarized here

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Mark Waser
: [agi] Groundless reasoning -- Chinese Room On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser [EMAIL PROTECTED] wrote: This has been a great thread! Actually, if you read Searle's original paper, I think that you will find that he... is *not* meaning to argue against the possibility of strong AI (since he

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Harry Chesley
Mark Waser wrote: The critical point that most people miss -- and what is really important for this list (and why people shouldn't blindly dismiss Searle) is that it is *intentionality* that defines understanding. If a system has goals/intentions and its actions are modified by the

Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen
Mike Tintner wrote: Brad: We don't need no stinkin' grounding. Your intention, I take it, is partly humorous. You are self-consciously assuming the persona of an angry child/adolescent. How do I know that without being grounded in real world conversations? How can you understand the prosody

Re: [agi] Groundless reasoning

2008-08-05 Thread Mike Tintner
Brad: We don't need no stinkin' grounding. Your intention, I take it, is partly humorous. You are self-consciously assuming the persona of an angry child/adolescent. How do I know that without being grounded in real world conversations? How can you understand the prosody of language generally

Re: [agi] Groundless reasoning

2008-08-05 Thread Jim Bromer
On Tue, Aug 5, 2008 at 12:42 AM, Jiri Jelinek [EMAIL PROTECTED] wrote: teaching through submitted stories [initially written in a formal language] = a solution I'm trying to implement when I (once a while) get to my AGI development. Stories (and the formal language) provide important

RE: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread John G. Rose
From: Harry Chesley [mailto:[EMAIL PROTECTED] Searle's Chinese Room argument is one of those things that makes me wonder if I'm living in the same (real or virtual) reality as everyone else. Everyone seems to take it very seriously, but to me, it seems like a transparently meaningless

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Terren Suydam
The Chinese Room argument counters only the assertion that the computational mechanism that manipulates symbols is capable of understanding. But in more sophisticated approaches to AGI, the computational mechanism is not the agent, it's merely a platform. Take the OpenCog design. See in

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Abram Demski
Terren, You and I could agree. But the Chinese Room, as a thought experiment, is supposed to refute that. The reply you are giving is very similar to the Systems Reply: http://plato.stanford.edu/entries/chinese-room/#4.1 Searle's response to the Systems Reply is simple: in principle, the man

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Terren Suydam
problem, that Searle does not adequately address the role of emergence. Terren --- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 6:07 PM

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Eric Burton
does not adequately address the role of emergence. Terren --- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 6:07 PM Terren, You and I

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Abram Demski
of emergence. Terren --- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 6:07 PM Terren, You and I could agree. But the Chinese Room

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Terren Suydam
, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 9:49 PM Terren, I agree. Searle's responses are inadequate, and the whole thought experiment fails

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Harry Chesley
Cargo Cult AI. (http://en.wikipedia.org/wiki/Cargo_cult) Terren --- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote: From: Abram Demski [EMAIL PROTECTED] Subject: Re: [agi] Groundless reasoning -- Chinese Room To: agi@v2.listbox.com Date: Tuesday, August 5, 2008, 9:49 PM Terren, I agree

[agi] Groundless reasoning

2008-08-04 Thread Harry Chesley
As I've come out of the closet over the list tone issues, I guess I should post something AI-related as well -- at least that will make me net neutral between relevant and irrelevant postings. :-) One of the classic current AI issues is grounding, the argument being that a dictionary cannot

Re: [agi] Groundless reasoning

2008-08-04 Thread Jim Bromer
Harry Chesley [EMAIL PROTECTED] wrote: One of the classic current AI issues is grounding, the argument being that a dictionary cannot be complete because it is only self-referential, and *has* to be grounded at some point to be truly meaningful. This argument is used to claim that abstract AI
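The self-referentiality Chesley describes can be illustrated with a toy dictionary: treat each definition as a list of words, mark some terms as externally grounded, and check which words ever bottom out in a grounded term. The words, definitions, and the choice of grounded set below are all invented for illustration.

```python
# Toy illustration of dictionary circularity: words defined only in terms
# of each other never "bottom out", no matter how complete the dictionary is.

def ungrounded_words(dictionary, grounded):
    """Return words whose definitions never reach a grounded term."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            # A word becomes known once every word in its definition is known.
            if word not in known and all(w in known for w in definition):
                known.add(word)
                changed = True
    return set(dictionary) - known

dictionary = {
    "big":   ["large"],
    "large": ["big"],            # circular pair: never grounded
    "red":   ["color", "fire"],  # bottoms out in grounded (sensory) terms
}
print(ungrounded_words(dictionary, grounded={"color", "fire"}))
```

With an empty grounded set, every word in such a dictionary stays ungrounded, which is the pro-grounding side of this thread in miniature; the anti-grounding side argues this formal circularity need not block useful reasoning.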

Re: [agi] Groundless reasoning

2008-08-04 Thread Vladimir Nesov
On Mon, Aug 4, 2008 at 10:55 PM, Harry Chesley [EMAIL PROTECTED] wrote: As I've come out of the closet over the list tone issues, I guess I should post something AI-related as well -- at least that will make me net neutral between relevant and irrelevant postings. :-) One of the classic

Re: [agi] Groundless reasoning

2008-08-04 Thread Terren Suydam
Harry, Count me in the camp that views grounding as the essential problem of traditional AI approaches, at least as it relates to AGI. An embodied AI [*], in which the only informational inputs to the AI come via so-called sensory modalities, is the only way I can see for an AI to arrive at

Re: [agi] Groundless reasoning

2008-08-04 Thread Mike Tintner
Harry: I have never bought this line of reasoning. It seems to me that meaning is a layered thing, and that you can do perfectly good reasoning at one (or two or three) levels in the layering, without having to go all the way down. And if that layering turns out to be circular (as it is in a

Re: [agi] Groundless reasoning

2008-08-04 Thread Pei Wang
This topic has been discussed in this list for several times. A previous post of mine can be found at http://www.listbox.com/member/archive/303/2007/10/sort/time_rev/page/13/entry/22 Pei On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote: As I've come out of the closet over

Re: [agi] Groundless reasoning

2008-08-04 Thread Abram Demski
Harry, In what way do you think your approach is not grounded? --Abram On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote: As I've come out of the closet over the list tone issues, I guess I should post something AI-related as well -- at least that will make me net neutral

Re: [agi] Groundless reasoning

2008-08-04 Thread Jiri Jelinek
On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote: the argument being that a dictionary cannot be complete because it is only self-referential, and *has* to be grounded at some point to be truly meaningful. This argument is used to claim that abstract AI can never

Re: [agi] Groundless reasoning

2008-08-04 Thread Matt Mahoney
- Original Message From: Ben Goertzel [EMAIL PROTECTED] My perspective on grounding is partially summarized here www.goertzel.org/papers/PostEmbodiedAI_June7.htm I agree that AGI should ideally have multiple sources of knowledge as you describe: explicitly taught, learned from

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-04 Thread Harry Chesley
Terren Suydam wrote: ... Without an internal sense of meaning, symbols passed to the AI are simply arbitrary data to be manipulated. John Searle's Chinese Room (see Wikipedia) argument effectively shows why manipulation of ungrounded symbols is nothing but raw computation with no

Re: [agi] Groundless reasoning

2008-08-04 Thread Harry Chesley
Vladimir Nesov wrote: It's too fuzzy an argument. You're right, of course. I'm not being precise, and though I'll try to improve on that here, I probably still won't be. But here's my attempt: There are essentially three types of grounding: embodiment, hierarchy base nodes, and

Re: [agi] Groundless reasoning

2008-08-04 Thread Brad Paulsen
Terren Suydam wrote: I don't know, how do you do it? :-] A human baby that grows up with virtual reality hardware surgically implanted (never to experience anything but a virtual reality) will have the same issues, right? There is no difference in principle between real reality and virtual

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-04 Thread Terren Suydam
Hi Harry, All the Chinese Room argument shows, if you accept the arguments, is that approaches to AI in which symbols are *given*, cannot manifest understanding (aka an internal sense of meaning) from the perspective of the AI. By given, I mean simply that symbols are incorporated into the

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel
Well, having an intuitive understanding of human language will be useful for an AGI even if its architecture is profoundly nonhumanlike. And, human language is intended to be interpreted based on social, spatiotemporal experience. So the easiest way to make an AGI grok human language is very

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-04 Thread Eric Burton
The Chinese Room concept became more palatable to me when I started putting the emphasis on nese and not on room. /Chinese/ Room, not Chinese /Room/. I don't know why this is. I think it changes the implied meaning from a room where Chinese happens to be spoken, to a room for the

Re: [agi] Groundless reasoning

2008-08-04 Thread Jiri Jelinek
On Tue, Aug 5, 2008 at 12:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote: The problem is that writing stories in a formal language, with enough nuance and volume to really contain the needed commonsense info, would require a Cyc-scale effort at formalized story entry. While possible in principle,

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel
When do you think Novamente will be ready to go out and effectively learn from (/interract with) environments not fully controlled by the dev team? I wish I could say tomorrow, but realistically it looks like it's gonna be 2009 ... hopefully earlier rather than later in the year but I'm not