Ben Goertzel wrote:

It **could** be that the only way a system can give rise to probabilistically sensible patterns of action-selection, given limited computational resources, is to do stuff internally that is based on nonlinear dynamics rather than probability theory.
But, I doubt it...
The human brain may work that way, but it is not the only (nor the ideal!) cognitive system...

Hmmm.... but what I wanted was to try to get some traction on why you would say this.

Your answer is only "I don't think so."

Your comment that the human brain "... is not the only (nor the ideal!) cognitive system" is a direct rejection of the idea that I was asking you to consider as a hypothesis.

I *know* you don't believe it to be true! ;-) What I was trying to do was to ask on what grounds you reject it.

It seems to me that there are some aspects of human cognition that are reasonably well modeled using a combination of probability theory with various other heuristics (such as "support theory"). Representation of, and reasoning about, declarative knowledge is an example. There is also increasing evidence for Bayesian inference being emergent from the dynamics of spiking neural nets with timing-dependent potentiation. I believe one can show that approximations to probabilistic term logic deduction also emerge from this sort of neural net model.
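
To give a flavor of the kind of result I mean, here is a toy sketch (not any particular published model, and with made-up numbers): a single postsynaptic neuron stands for a hidden "cause", and a simplified, weight-dependent STDP rule pushes each synaptic weight toward the log of the conditional probability of its input firing given that cause.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative numbers only): one postsynaptic neuron stands for a
# hidden "cause"; three presynaptic inputs are features that fire with the
# conditional probabilities below whenever the cause is present.
p_feature_given_cause = np.array([0.9, 0.6, 0.2])

w = np.zeros(len(p_feature_given_cause))  # synaptic weights, read as log-probabilities
eta = 0.01                                # learning rate

for _ in range(50_000):
    # One trial: the cause is present and the features fire stochastically.
    x = rng.random(len(w)) < p_feature_given_cause
    # Simplified weight-dependent STDP: when the post neuron fires, synapses
    # whose inputs were recently active are potentiated by exp(-w) - 1, and
    # the rest are depressed by a constant amount.
    w += eta * np.where(x, np.exp(-w) - 1.0, -1.0)

print("exp(w):", np.round(np.exp(w), 2))  # converges toward P(feature | cause)
print("target:", p_feature_given_cause)
```

The fixed point of this update is w_i = log P(feature_i fires | cause), so reading the weights as log-probabilities is the sense in which Bayesian quantities "emerge" from the plasticity dynamics in results of this general kind.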

On the other hand, there are other aspects of human cognition that seem poorly modeled using probability theory, for instance learning of motor procedures, attention allocation, creativity in general, and perceptual pattern recognition. It's not so much that these contradict probability theory, but rather that the simplest mechanisms for implementing/describing these processes seem to involve a lot of other tools in addition to probability theory.

These observations and others inclined me to try to work out an AGI design that combined

-- probability theory for reasoning on declarative knowledge
-- evolutionary learning (combined with probabilistic modeling) for procedure learning
-- hierarchical pattern mining for perceptual pattern recognition (defaulting to probabilistic reasoning and evolutionary learning for unusual hard cases)
-- simulated economics for attention allocation, using results of probabilistic reinforcement learning as data where available (see the sketch after this list)
-- statistical pattern mining based self-analysis to recognize emergent attractors in the overall system dynamics and embody these attractors as declarative knowledge
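
To make the attention-allocation item a bit more concrete, here is a toy sketch of the general "attentional economy" idea; the class names, parameters, and numbers are invented for illustration and are not the actual Novamente mechanism.

```python
import random

# Toy attentional economy: each knowledge item holds some short-term
# importance (STI) currency. Items that prove useful earn wages, every item
# pays rent back to a central pool, so total currency stays roughly constant
# and attention concentrates on the consistently useful items.

class Item:
    def __init__(self, name):
        self.name = name
        self.sti = 0.0

def run_economy(items, cycles=100, wage=10.0, rent_rate=0.05):
    bank = 1000.0  # central pool of attention currency
    for _ in range(cycles):
        # "Stimulus": pretend a couple of items proved useful this cycle
        # (in a real system this signal would come from inference,
        # perception, or reinforcement results).
        for it in random.sample(items, k=2):
            pay = min(wage, bank)
            it.sti += pay
            bank -= pay
        # Rent: every item returns a fraction of its STI to the bank.
        for it in items:
            rent = it.sti * rent_rate
            it.sti -= rent
            bank += rent
    return sorted(items, key=lambda it: it.sti, reverse=True)

items = [Item(f"concept_{i}") for i in range(10)]
for it in run_economy(items)[:3]:
    print(f"{it.name}: STI = {it.sti:.1f}")  # the current attentional focus
```

The point of the economic framing is that importance is a conserved, competed-for resource, so run-away growth in any one item's importance is automatically checked.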

In other words, I use explicit probability theory in Novamente where the brain seems to be doing something roughly probability-theory-like, and other tools elsewhere, loosely modeled on the sorts of tools the brain seems to be using in those places.

-- Ben

Well, it's time to drop the discussion, because everything you just said is completely consistent with both the truth and the negation of the hypothesis I stated.

Thus:

(a) As far as I know, only a small part of "...representation of, and reasoning about, declarative knowledge" has ever been modeled, in cognitive science, using "probability theory with various other heuristics (such as 'support theory')". I don't buy for one minute that probability theory plays a major role here, except in specialized, hand-picked cases.

(b) There are enormous gaps and inconsistencies in the models and frameworks that different cognitive scientists use to study the phenomena you list - you are citing subfields that don't even talk to each other, never mind have a consistent story to tell about how probability theory might play a role.

(c) I have big problems with the spiking neural net work that (among other things) seems to imply Bayesian mechanisms. It is not that the Bayesian mechanisms are not there (I would not be in the least surprised if they were -- see my last message to Pei); it is the next few steps in their scientific reasoning that bother me: these people seem to be implying (i) that behaviorism was a great idea after all (they are totally ignorant, it seems, of all the reasons why it was rejected), and (ii) that concepts have a localist representation, an idea flatly contradicted by fifty years of neurological case studies (they are back to grandmother cells again).

I have tried reading some of the papers on this stuff, and if you can find any high-quality examples where the cognitive science is at a non-baby level, *please* point me to them, because the ones I have seen start with a pathetic bit of handwaving towards cognitive science (at about the level of sophistication I would expect if all they had done was skim one Time Magazine article about "concepts"), followed by a ton of neuroscience and math, and then a wild conclusion about how they are finally tracking down the origins of consciousness.


So, sorry, but I am looking at the same data, and as far as I am concerned I see almost no evidence that probability theory plays a significant role in cognition at the concept level.

What that means, to go back to the original question, is that the possibility I raised is still completely open.


Richard Loosemore.




