On 8/8/08, Mark Waser [EMAIL PROTECTED] wrote:
The person believes his decisions are now guided by free will, but
in truth they are still guided by the book: if the book gives him the wrong
meaning of a word, he will make a mistake when answering a Chinese speaker
The translations are guided
On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
Well, one point where we disagree is on whether truth can actually be
known by anything. I don't think this is possible. So to me that which is
called truth is just something with a VERY high probability, and which is
This is probably quibbling over a definition, but:
On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
Jim Bromer wrote:
As far as I can tell, the idea of making statistical calculations about
what we don't know is only relevant for three conditions.
The accuracy of the calculation is not significant.
The evaluation is near
In most situations this is further limited because one CAN'T know all of the
consequences. So one makes probability calculations weighting things not
only by probability of occurrence, but also by importance. So different
individuals disagree not only on the definition of best, but also on
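To make the weighting concrete, here is a minimal Python sketch of the
calculation described above; the action names, probabilities, and
importance weights are hypothetical, invented for illustration only:

# Score each action by summing probability * importance over its known
# consequences, then choose the action with the highest score.
actions = {
    "act_a": [(0.7, 2.0), (0.3, -5.0)],  # (probability, importance) pairs
    "act_b": [(0.9, 1.0), (0.1, -1.0)],
}

def score(consequences):
    return sum(p * w for p, w in consequences)

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # "act_b": 0.9*1.0 + 0.1*(-1.0) = 0.8 beats 1.4 - 1.5 = -0.1

Two people who assign different importance weights will pick different
actions from the same probabilities, which is the disagreement described
above.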
On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
Could you define "opinion" in an operational manner, i.e. in such a way that
one could specify whether a particular structure in a database satisfied it
or not? Or a particular logical operation?
This part of your question
Let me ask about a special case of this argument.
Suppose now the book that the guy in the room holds is a Chinese-teaching
book for English speakers. The guy can read it for as long as he wishes, and
can consult it in order to give the answers to the Chinese speakers
interacting with him.
In
is that the person can choose what to answer (as
opposed to the Chinese Room where responses are dictated by the input and no
choice is involved).
- Original Message -
From: Valentina Poletti
To: agi@v2.listbox.com
Sent: Friday, August 08, 2008 6:18 AM
Subject: Re: [agi] Groundless
Valentina Poletti [EMAIL PROTECTED] wrote:
Suppose now the book that the guy in the room holds is a Chinese-teaching
book for English speakers. The guy can read it for as long as he wishes, and
can consult it in order to give the answers to the Chinese speakers
interacting with him.
Great
On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
At this point I think it relevant to bring in an assertion from Larry Niven
(Protector):
Paraphrase: When you understand all the consequences of an act, then you
don't have free will. You must choose the best decision.
Argh! Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--intentionality. It is different
from the common use of "intending" as in aiming to do something or
"intention" as a goal. (Here's a wiki: http://en.wikipedia.org/wiki/Intentionality).
The
On Wed, Aug 6, 2008 at 8:25 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
Jiri,
I'd really like to hear more about your approach. Sounds bang-on! Have you
written a paper (or worked from papers written by others) to which you could
point us?
Brad,
I'm not aware of anyone else using or
Terren: Substituting an actual human invalidates the experiment, because
then you are bringing something in that can actually do semantics. The point
of the argument is to show how merely manipulating symbols (i.e. the
syntactical domain) is not a demonstration of understanding, no matter what
the
yep.. isn't it amazing how long a thread is becoming based on an experiment
that has no significance?
On 8/6/08, Steve Richfield [EMAIL PROTECTED] wrote:
Back to reason,
This entire thread is yet another example that once you accept a bad
assumption, you can then prove ANY absurd
Seriously, I'm only venturing a personal opinion, but I've never
especially cared for Chinese Rooms.
On 8/7/08, Valentina Poletti [EMAIL PROTECTED] wrote:
On Wed, Aug 6, 2008 at 8:33 PM, Mark Waser [EMAIL PROTECTED] wrote:
I was not dismissing the argument and certainly not making a presumption of
human dominion over understanding. Quite the opposite in fact. I'm not
quite sure why you believe that I did. Could you tell me which of my
agree completely.
Mark
- Original Message -
From: [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 07, 2008 2:57 AM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
Argh! Are you all making the mistake I think you are making? Searle is
using a technical
to do. You have free will because you will
do what (your nature makes) you wish to do.
Mark
- Original Message -
From: Jim Bromer [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 07, 2008 8:55 AM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
On Wed
On Thu, Aug 7, 2008 at 9:42 AM, Mark Waser [EMAIL PROTECTED] wrote:
Hi Jim,
The apparent paradox can be
reduced to the never-ending determinism vs. free will argument.
Again, I agree but I don't believe that determinism vs. free will is really
a paradox (heresy!). You are deterministic
grounding
emergent. They are pretty close to one and the same.
- Original Message -
From: Jim Bromer [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 07, 2008 11:14 AM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
On Thu, Aug 7, 2008 at 9:42 AM, Mark Waser
Further explanation needed; more below, inline.
--- On Mon, 8/4/08, Harry Chesley [EMAIL PROTECTED] wrote:
From: Harry Chesley [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning
To: agi@v2.listbox.com
Date: Monday, August 4, 2008, 9:16 PM
Vladimir Nesov wrote:
It's too fuzzy
Back on the problem of understanding
more below
___
James Ratcliff - http://falazar.com
Looking for something...
--- On Wed, 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote:
From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning
To: agi@v2.listbox.com
Sent: Wednesday, August 06, 2008 7:32 PM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser [EMAIL PROTECTED]
wrote:
This has been a great thread!
Actually, if you read Searle's original paper, I think that you
will not be thrown by minor differences since they understand
that space around their known solutions as well as the exact solutions.
- Original Message -
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Thursday, August 07, 2008 2:13 PM
Subject: Re: [agi] Groundless reasoning -- Chinese
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Thursday, August 7, 2008, 2:31 PM
Please Answer:
Now how can we really say how this is different from human
understanding?
I receive
a question, I rack my brain for stored facts, if relevant, and
any
Brad Paulsen wrote:
... Nope. Wrong again. At least you're consistent. That line
actually comes from a Cheech and Chong skit (or a movie -- can't
remember which at the moment) where the guys are trying to get
information by posing as cops. At least I think that's the setup.
When the
Jim Bromer wrote:
...
I mostly agree with your point of view, and I am not actually saying
that your technical statements are wrong. I am trying to explain that
there is something more to be learned. The apparent paradox can be
reduced to the never-ending determinism vs. free will argument.
Charles: When you understand all the consequences of an act, then
you don't have free will.
Just so. And the number of decisions/actions that you take, where you
understand all the consequences - all the rewards, risks, costs, and
opportunity costs of not just the actions, but how and how long
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Thursday, August 7, 2008, 2:49 PM
:-) The Chinese Room can't pass the Turing
test for exactly the reason you mention.
Well, in the Chinese Room case I think the book of instructions is infinitely large
the distinction that you need.
- Original Message -
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Thursday, August 07, 2008 4:27 PM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
With the Chinese Room, we aren't doing any reasoning really, just
looking up answers
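To make the lookup point concrete, here is a toy Python sketch of the
room as a pure lookup table; the transliterated entries are hypothetical,
and Searle's book of instructions would of course be vastly larger:

# Toy Chinese Room: pure symbol matching, no semantics anywhere.
RULE_BOOK = {
    "ni hao ma": "wo hen hao, xiexie",        # "How are you?" / "Fine, thanks."
    "ni hui shuo zhongwen ma": "hui yidian",  # "Do you speak Chinese?" / "A little."
}

def room(symbols):
    # The room only matches character strings; it never inspects meaning.
    return RULE_BOOK.get(symbols, "dui bu qi, wo bu mingbai")

print(room("ni hao ma"))

Nothing in the function represents what any of the strings mean, which is
exactly the sense in which the room is 'just looking up answers'.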
James Ratcliff wrote:
Every AGI but the very simplest AI must run in a simulated
environment of some sort.
Not necessarily, but in most cases yes. To give a counterexample, a
human scholar reads Plato and publishes an analysis of what he has read.
There is no interaction with the
Charles,
Well, that's what gets me up in the morning. I learn something new every day!
FWIW, I don't believe the Pink Floyd reference is appropriate since I don't
*think* they included the signature word: "stinkin'". The "We don't need
no.." part is there, though. ;-)
As educational as this
My view is that the problem with the Chinese Room argument is precisely the
manner in which it uses the word 'understanding'. It is implied that in this
context this word refers to mutual human experience. Understanding has
another meaning, namely the emergent process some of you described, which
Then again, I was just thinking.. wouldn't it be wonderful if instead of
learning everything from scratch since the day we are born, we were born
with all the knowledge all human beings had acquired until that moment? If
somehow that was implanted in our DNA? Of course that is not feasible.. but
Cargo Cult AI. (http://en.wikipedia.org/wiki/Cargo_cult)
Terren
--- On Tue, 8/5/08, Abram Demski
[EMAIL PROTECTED] wrote:
From: Abram Demski [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning --
Chinese Room
To: agi@v2.listbox.com
Date: Tuesday, August 5, 2008
Ok, I really don't see how it proves that then. In my view, the book could
be replaced with a Chinese-English translator and the exact same outcome
would be given. Both are using their static knowledge for this process, not
experience.
On 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote:
Hi
By translator I meant human translator, btw. What this experiment does
suggest is that linguistic abilities require energy (the book alone would do
nothing), and that they are independent of humanness (the machine could do
it), whether they involve 'understanding' or not.
On 8/6/08, Valentina
But you're assuming that a Chinese-English translator is possible to achieve
without understanding. Language translation requires understanding of
semantics, not just manipulation of syntax. That's why NLP has been out of
reach, and will be, until we get agents that can actually do semantics.
the global result of the
manipulation is (e.g., a chess move).
--- On Wed, 8/6/08, Valentina Poletti [EMAIL PROTECTED] wrote:
From: Valentina Poletti [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Wednesday, August 6, 2008, 11:27 AM
by translator
Terren Suydam wrote:
Harry,
--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
I'll take a stab at both of these...
The Chinese Room to me simply states that understanding cannot be
decomposed into sub-understanding pieces. I don't see it as
addressing grounding, unless you
From: Abram Demski [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Tuesday, August 5, 2008, 9:49 PM
Terren,
I agree. Searle's responses are inadequate, and the
whole thought
experiment fails to prove his point. I
Hi Abram,
Sorry, your message did slip through the cracks. I intended to respond
earlier... here goes.
--- On Wed, 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
I explained somewhat in my first reply to this thread.
Basically, as I
understand you, you are saying
Harry,
--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
But it's a "preaching to the choir" argument: Is there
anything more to
the argument than the intuition that automatic manipulation
cannot
create understanding? I think it can, though I have yet to
show it.
The burden is on
Terren,
I agree that the emergence response shows where the flaw is in the
Chinese Room argument. The argument fails, because although
understanding is not in the person or the instructions or the physical
room as a whole, it emerges from the system. That being the case, how
can the argument show
Back to reason,
This entire thread is yet another example that once you accept a bad
assumption, you can then prove ANY absurd proposition. I see no reason to
believe that a Chinese Room is possible, and therefore I have no problem
rejecting all arguments regarding the absurd conclusions that
Terren Suydam wrote:
Unfortunately, I have to take a break from the list (why are people
cheering??).
No cheering here.
Actually, I'd like to say thanks to everyone. This thread has been very
interesting. I realize that much of it is old hat and boring to some of
you, but it's been useful
- Original Message -
From: Terren Suydam [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 06, 2008 2:50 PM
Subject: Re: [agi] Groundless reasoning -- Chinese Room
Abram,
I think a simulated, grounded, embodied approach is the one exception to
the otherwise correct
But it's a "preaching to the choir" argument: Is there anything more to the
argument than the intuition that automatic manipulation cannot create
understanding? I think it can, though I have yet to show it.
Searle answers that exact question in his paper by saying Because the
formal symbol
linkage to the world).
- Original Message -
From: David Clark [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 06, 2008 3:57 PM
Subject: RE: [agi] Groundless reasoning -- Chinese Room
I got the following quote from Wikipedia under "understanding".
According
Terren Suydam wrote:
Brad,
I'm not entirely certain this was directed to me, since it seems to be a
response to both things I said and things Mike Tintner said. My comments below,
where (hopefully) appropriate.
--- On Mon, 8/4/08, Brad Paulsen [EMAIL PROTECTED] wrote:
Ah, excuse me.
On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser [EMAIL PROTECTED] wrote:
This has been a great thread!
Actually, if you read Searle's original paper, I think that you will find
that he... is *not* meaning to argue against the
possibility of strong AI (since he makes repeated references to human as
Jiri Jelinek wrote:
Ben,
My perspective on grounding is partially summarized here
Mark Waser wrote:
The critical point that most people miss -- and what is really
important for this list (and why people shouldn't blindly dismiss
Searle) is that it is *intentionality* that defines understanding.
If a system has goals/intentions and its actions are modified by the
Mike Tintner wrote:
Brad: "We don't need no stinkin' grounding."
Your intention, I take it, is partly humorous. You are self-consciously
assuming the persona of an angry child/adolescent. How do I know that
without being grounded in real world conversations? How can you
understand the prosody
On Tue, Aug 5, 2008 at 12:42 AM, Jiri Jelinek [EMAIL PROTECTED] wrote:
teaching through submitted stories [initially written in a formal
language] = a solution I'm trying to implement when I (once in a while)
get to my AGI development. Stories (and the formal language) provide
important
From: Harry Chesley [mailto:[EMAIL PROTECTED]
Searle's Chinese Room argument is one of those things that makes me
wonder if I'm living in the same (real or virtual) reality as everyone
else. Everyone seems to take it very seriously, but to me, it seems like
a transparently meaningless
The Chinese Room argument counters only the assertion that the computational
mechanism that manipulates symbols is capable of understanding. But in more
sophisticated approaches to AGI, the computational mechanism is not the agent,
it's merely a platform.
Take the OpenCog design. See in
Terren,
You and I could agree. But the Chinese Room, as a thought experiment,
is supposed to refute that.
The reply you are giving is very similar to the Systems Reply:
http://plato.stanford.edu/entries/chinese-room/#4.1
Searle's response to the Systems Reply is simple: in principle, the
man
problem, that
Searle does not adequately address the role of emergence.
Terren
As I've come out of the closet over the list tone issues, I guess I
should post something AI-related as well -- at least that will make me
net neutral between relevant and irrelevant postings. :-)
One of the classic current AI issues is grounding, the argument being
that a dictionary cannot
Harry Chesley [EMAIL PROTECTED] wrote:
One of the classic current AI issues is grounding, the argument being that a
dictionary cannot be complete because it is only self-referential, and *has*
to be grounded at some point to be truly meaningful. This argument is used
to claim that abstract AI
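The circularity half of that claim can be shown with a toy Python sketch;
the three dictionary entries are hypothetical stand-ins for a real
dictionary's definitions:

# Toy self-referential dictionary: every definition is built from other
# dictionary words, so chasing definitions cycles instead of bottoming out
# in anything outside the dictionary.
DICTIONARY = {
    "big": ["large"],
    "large": ["great", "big"],
    "great": ["big"],
}

def expand(word, seen=None):
    seen = set() if seen is None else seen
    if word in seen:
        return "cycle at '%s', no grounding reached" % word
    seen.add(word)
    definitions = DICTIONARY.get(word)
    if not definitions:
        return "'%s' is undefined" % word
    return {word: [expand(w, seen) for w in definitions]}

print(expand("big"))

Every path through expand() ends in a cycle or an undefined word; nothing
ever bottoms out in the world, which is the grounding complaint in a
nutshell.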
Harry,
Count me in the camp that views grounding as the essential problem of
traditional AI approaches, at least as it relates to AGI. An embodied AI [*],
in which the only informational inputs to the AI come via so-called sensory
modalities, is the only way I can see for an AI to arrive at
Harry: I have never bought this line of reasoning. It seems to me that
meaning is a
layered thing, and that you can do perfectly good reasoning at one (or
two
or three) levels in the layering, without having to go all the way
down.
And if that layering turns out to be circular (as it is in a
This topic has been discussed on this list several times.
A previous post of mine can be found at
http://www.listbox.com/member/archive/303/2007/10/sort/time_rev/page/13/entry/22
Pei
Harry,
In what way do you think your approach is not grounded?
--Abram
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
My perspective on grounding is partially summarized here
www.goertzel.org/papers/PostEmbodiedAI_June7.htm
I agree that AGI should ideally have multiple sources of knowledge as you
describe: explicitly taught, learned from
Terren Suydam wrote:
...
Without an internal
sense of meaning, symbols passed to the AI are simply arbitrary data
to be manipulated. John Searle's Chinese Room (see Wikipedia)
argument effectively shows why manipulation of ungrounded symbols is
nothing but raw computation with no
Vladimir Nesov wrote:
It's too fuzzy an argument.
You're right, of course. I'm not being precise, and though I'll try to
improve on that here, I probably still won't be. But here's my attempt:
There are essentially three types of grounding: embodiment, hierarchy
base nodes, and
Terren Suydam wrote:
I don't know, how do you do it? :-] A human baby that grows up with virtual
reality hardware surgically implanted (never to experience anything but a
virtual reality) will have the same issues, right?
There is no difference in principle between real reality and virtual
Hi Harry,
All the Chinese Room argument shows, if you accept the arguments, is that
approaches to AI in which symbols are *given* cannot manifest understanding
(aka an internal sense of meaning) from the perspective of the AI. By given, I
mean simply that symbols are incorporated into the
Well, having an intuitive understanding of human language will be useful for
an AGI even if its architecture is profoundly nonhumanlike. And, human
language is intended to be interpreted based on social, spatiotemporal
experience. So the easiest way to make an AGI grok human language is very
The Chinese Room concept became more palatable to me when I started
putting the emphasis on "-nese" and not on "room": /Chinese/ Room, not
Chinese /Room/. I don't know why this is.
I think it changes the implied meaning from a room where Chinese
happens to be spoken to a room for the
On Tue, Aug 5, 2008 at 12:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem is that writing stories in a formal language, with enough nuance
and volume to really contain the needed commonsense info, would require a
Cyc-scale effort at formalized story entry. While possible in principle,
When do you think Novamente will be ready to go out and effectively
learn from (/interact with) environments not fully controlled by the
dev team?
I wish I could say tomorrow, but realistically it looks like it's gonna be
2009 ... hopefully earlier rather than later in the year but I'm not