Pei Wang wrote:
On 2/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
I fully accept that you don't care if the human mind does it that way,
because you want NARS to do it differently.  My question was at a higher
level.  If we knew for sure that the human mind was using something like
a formalized system (and not the messy nonlinear stuff I described),
then we could quite comfortably say "Hey, let's do the same, but simpler
and maybe even better."  My problem is, of course, that the human mind
may well not be doing it that way, and that if it is not, there may be a
good reason why it does it the messy, nonlinear way (namely, because all
formalizable, cleaner systems turn out to be incapable of getting up to
and staying at full, autonomous intelligence).

I see your point, though I don't think anyone has proved that the
human mind doesn't show any regularity in its management of beliefs
and concepts --- to me, it is the opposite. Intelligence is by no
means random or arbitrary. The fact that previous theories are not
flexible enough to capture its principles doesn't mean no theory
will.

What you have said here is very important, in the sense that you have given a succinct statement of a common misunderstanding of my claim, so that lets me come to the crux of the matter quite well....

I would never, ever claim that "the human mind doesn't show any regularity in its management of beliefs and concepts". Far from it! It is only the nature of those regularities that are of interest to me. They can be formalizable, or they can be partially complex. Ditto for the idea that I might be saying that intelligence is "random or arbitrary" ... I would never want to imply that.

By the way, a reasoning system is not necessarily clear and clean.
Actually the concepts in NARS' memory are quite fuzzy and messy,
though the system does follow a logic.

Again, a slight misunderstanding. It is difficult to find the right words, but when I say "clear and clean" I meant almost the same thing as "the system does follow a logic". I didn't phrase it that way because I want to encompass systems that are not strictly following a logic (like a hybrid, for example) but which are a million miles closer to a logic than the messy thing I described.


I have to curtail my engagement on this because I have to get busy this afternoon, but I will leave with one final thought, related to the question of whether humans are using any kind of logical formalism down at the low level.

Mike Oaksford and Nick Chater have an interesting perspective on the question, as evidenced by this:

Nick and I have explained a great deal of apparently irrational human reasoning performance as actually rational if you take a Bayesian probabilistic perspective. Our most recent book, about to come out with Oxford UP, is titled "Bayesian Rationality".

In other words, the best explanation for the well-known *irrationality* shown by humans in various cognitive psychology experiments is that they are, deep down, trying to apply Bayesian reasoning to the task, but that when they try to do this, they mess up!
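To make that concrete, here is a minimal sketch (my own illustration, not an example taken from Oaksford and Chater's book) of the classic base-rate problem, where untrained intuition famously diverges from the probabilistically correct answer given by Bayes' rule:

```python
# Hypothetical numbers for the classic medical-test problem:
# a rare condition, a fairly accurate test, and the question
# "given a positive result, how likely is the condition?"

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = (sensitivity * prior
                  + false_positive_rate * (1 - prior))
    return sensitivity * prior / p_positive

# 1% base rate, 90% sensitivity, 9% false-positive rate.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(f"P(condition | positive) = {p:.3f}")  # about 0.092
```

Most people's intuitive answer is close to the test's accuracy (around 0.9), while the Bayesian answer is under 0.1 because the condition is so rare. Whether that gap reflects a broken Bayesian engine or a rational answer to a differently-framed question is exactly the kind of issue Oaksford and Chater address.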

One interpretation: when the mind tries to rely too heavily on a routine, mindless application of a formalized system for dealing with the world, it shows its dumbest side. Then, when something else steps in (higher level structures that use other principles to dig the dumb Bayesian reasoning engine out of its mess), the system shows its smartest side.

At the very least, this is suggestive empirical evidence that, yes, there are interesting mechanisms at work down there, but that the bits that try to rely too heavily on simple formal-system approaches are not the ones that make the system intelligent.


Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
