Richard,

Thanks for taking the time to explain your position. I actually agree
with most of what you wrote, though I don't think it is inconsistent
with my point, namely that beliefs do need numerical truth values.

Let me explain briefly (I have to leave soon). In an AGI system (at
least in mine), a belief may have a complicated internal structure and
external relationships (syntax and semantics), as well as a numerical
truth value. The truth value does not summarize the information in the
syntax and semantics, but supplements it. When new beliefs are produced
from existing beliefs (call it inference or not), the truth value of
the conclusion needs to be calculated, though that is not the only
thing the system does: the syntax and semantics of the beliefs involved
all need to be taken into consideration, which is usually more
complicated than the numerical calculation of the truth value.
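
To make this concrete, here is a minimal sketch of the kind of thing I
mean (the class, the example terms, and the product rule below are only
placeholders for this message, not the actual representation or truth
function of my system):

    # Illustrative only: a belief carries structure (syntax/semantics)
    # plus a numerical truth value, and inference must handle both.
    class Belief:
        def __init__(self, subject, predicate, truth):
            self.subject = subject      # structural (syntactic) content
            self.predicate = predicate
            self.truth = truth          # numerical truth value in [0, 1]

    def deduce(b1, b2):
        # The structural check decides whether the rule applies at all;
        # the numerical step is small but always present.
        if b1.predicate != b2.subject:
            return None
        # Placeholder combination rule, not a real truth function.
        return Belief(b1.subject, b2.predicate, b1.truth * b2.truth)

    robin_bird = Belief("robin", "bird", 0.9)
    bird_flyer = Belief("bird", "flyer", 0.8)
    conclusion = deduce(robin_bird, bird_flyer)  # truth = 0.72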

In this sense, I agree that the numerical calculation is not
everything, or even the most important part of the process, but it is
still a necessary part of the process, which is what I want to stress.
Unlike what you said, this numerical calculation is not *sometimes*
carried out but is almost always going on, though it is not always the
crucial factor that determines the system's overall behavior.

The real alternative to numerical truth values is to stay with binary
logic, which would make things even worse.

In summary, to me the debate is not about "syntactic/semantic
manipulation vs. numerical calculation" (AGI needs both), but about
"binary truth value vs. numerical truth value" (AGI needs the latter,
while letting the former play a limited role).

By the way, "numerical truth value" doesn't necessarily mean
"probability", but that is a separate issue I'd rather not discuss
here.

Pei


On 8/3/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Thanks for the thoughtful responses, folks.  I have a few replies.

Pei Wang wrote:
> No matter how bad fuzzy logic is, it cannot be responsible for the
> past failures of AI --- fuzzy logic has never been popular in the AI
> community.

Oh, no doubt about it:  but fuzzy logic by itself was not the reason
that I made that general criticism of the AI community.  What I have a
problem with is the idea that routine assessment and storage of simple
probabilities, or numerical truth values, is what lies at the heart of
all the main "thinking processes" in a cognitive system.

I can make this point a little more precisely in the context of what you
say next:

Pei Wang wrote:
> I agree with Richard that attaching numbers to beliefs doesn't solve
> all AI problems, that is, it is not sufficient for AGI, but I strongly
> believe that it is necessary, that is, we do need numerical truth
> value, among other things.  The basic reason is not that it is more
> "accurate" than binary values, but because it provides a general way
> to compare beliefs supported by evidence from different sources and
> with different natures. Very often the system doesn't have the
> knowledge and resources to carry out the thinking process Richard
> suggested.

If you are saying that numerical truth values and probabilities
*sometimes* play a role, then I am agreeing with you.  They do sometimes
play a role.

But the idea I am challenging is the idea of using concepts (entities,
predicates, etc) that have local stored values for their "truth" or
"probability", and then having the *crucial* processes of intelligence
be a matter of combining these probabilities or truth values.  It is the
central role played by these numbers that I am disagreeing with.  It is
the way that these simple calculations are assumed to be the main
drivers of thinking and reasoning processes.

Can I illustrate with an analogy?

Imagine a molecular soup consisting of many different types of
long-chain protein molecules, in which (as we know from molecular
biology) the interactions between these molecules are crucially dependent
on the shapes of the molecules (as well as other factors, like the
presence of smaller, mediating chemicals).  So if molecules A and B
happen to fit with each other in some interesting way, then the
interaction of A and B can yield a highly specific product, whereas A
and C might not do anything with each other, because there is no fit.

The question is, would it make sense to model the interactions of these
molecules by storing simple numbers inside representations of them?  I
don't think many people would defend such an idea:  we know that the
interactions are extremely sensitive to the detailed structure of the
molecules themselves, and summarizing that complex structure in any kind
of simple "probability of interaction" (or some such) would simply not
be good enough.  In technical language, the system is extremely
nonlinear, and so any local parameter values that purported to encode
the interactivity of the molecules would not be compositional.

[You could attempt to make these "local parameter values" more complex,
the way that Fuzzy Logicians tried to improve on simple probabilities,
but I would suggest that if you tried to do this with the molecular
interaction situation, you would still be wasting your time:  that
paradigm for understanding their interaction is simply not appropriate,
no matter how many Ptolemaic Epicycles you might add to it].
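
To make the compositionality point concrete, here is a toy sketch (pure
invention for this email, not a model of chemistry; the shapes and the
fit rule are made up): whether two "molecules" react depends on whether
their structures fit together, and with many molecules the pattern of
who fits whom carries far more independent information than one number
stored with each molecule could encode.

    # Toy illustration: the interaction is a property of the *pair* of
    # structures, not of either structure alone.
    def interacts(shape_a, shape_b):
        # Two 'molecules' react only if one shape is the reverse
        # complement of the other -- a stand-in for structural fit.
        complement = {"x": "y", "y": "x"}
        return shape_b == "".join(complement[c] for c in reversed(shape_a))

    molecules = {"A": "xxy", "B": "xyy", "C": "yxx"}

    print(interacts(molecules["A"], molecules["B"]))  # True: A fits B
    print(interacts(molecules["A"], molecules["C"]))  # False: no fit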

Now, of course, the question becomes whether the analogy is valid:  does
a cognitive process resemble the interaction of complex molecules with
specific shape, so that when molecules encounter one another, the result
of their interaction is sensitive to those "shapes"?

Well, when I brought up the example of an AGI system (or a person)
trying to answer a question about whether 8 pigs in a suburban home
context could be considered "many pigs", I was arguing that what the
system tries to do is build a model of the hypothetical situation, and a
model of how to assess the applicability of a statement like "many
pigs", as well as sub-models of the question-asker, of the general
process of asking and answering questions that make little sense, and of
the appropriateness of giving a response that does not answer the
question but instead deflects the conversation in a humorous direction
.... and that out of the complex interaction of these many models comes
the resulting indirect and humorous reply.

My claim is that this is a far more likely characterization of what
happens in human thinking than the alternative idea that we first
compute (unconsciously) some probabilities and then combine them.  Does
it seem more likely that what is happening is the interaction of complex
models, or that everything is squeezed through the bottleneck of a
relatively simple probability calculation, and that content-dependent
model interaction is almost irrelevant?  I honestly think that anyone
who believes that all human cognition is dominated by such simple
calculation of probabilities or truth values would really be kidding
themselves, for the simple reason that we have good psychological
evidence that in all kinds of real-world thinking situations, people
seem to "fit ideas together" in order to come up with responses, and we
have very little evidence that they simply add probabilities.

I would have to take a good deal more time and space than is available
here to argue that empirical point, so I am appealing to people's
reasonable intuitions on this point.  I mean, isn't this one of THE big
complaints against psychologists, that they take situations that we
know, from our own mental experiences, involve complicated,
content-dependent interactions of thoughts, and they try to pretend that
they can throw away the complexity and the content-dependence and just
pretend that those processes can be modelled by simple parameters?  Even
the psychologists themselves acknowledge that this is what they do,
because they have no choice, and that the reality is more complex.

The "specific shape" of molecules in my analogy corresponds to the
content dependence of the models that are constructed in situation.  The
model for "8 pigs" combined with "in a suburban home context" gives rise
to a specific interaction that generates "farcical situation" model,
which in turn interacts with a model of "commentator describing this as
'many pigs'" to yield "commentator failing to understand that the word
'many' implies a comparison with a 'normal' number of pigs" [and so on].

The very same question asked about a farm context, instead of a suburban
one, would not go the same way because the interaction of the first two
models would give rise to something very different:  a simple chain of
models dealing with the size of hog farms.

The point is that all the important cognitive processes happening here
are extremely content dependent.  The whole direction of the train of
thought was dominated by model construction.

Was it dominated by some probability assessments?  That would be to
confuse something that may have been involved, in a trivial or strictly
subsidiary way, with a huge, nonlinear interaction of a bunch of complex
processes.

Finally (and because I am too tired tonight to reply in detail), I would
say that I am happy with the comments about some sub-cognitive processes
involving computation of probabilities .... I do not doubt that this
happens in some form or other.  But if my tennis arm knows where to
throw the ball without me knowing it, then this is a very low-level
phenomenon that is not really (or at least not necessarily) relevant to
the high-level case.

The overwhelming weight of empirical evidence, I suggest, is that real
general intelligence involves the interaction of structured entities,
and therefore the burden of proof is really in the opposing court.  I
don't think it is up to people of my persuasion to come up with
convincing evidence that AI researchers should change what they have
been doing:  I want those researchers to explain why they have made what
seems to be an assumption that already flies in the face of a lot of
empirical evidence!

I therefore ask the question:  what *evidence* is there that the
majority of cases of human cognition involve calculation of simple
probabilities and encoding of truth values (be they ever so complex as
Fuzzy logic, or probabilistic term logic, or whatever), and that the
apparently dominant role played by interacting, structured entities
models, in a word) is actually not what happens?  I don't mind if the
models hypothesis is wrong; I just want to know when anyone sat down and
figured out that it could not be valid.

Richard Loosemore


