Ben Goertzel wrote:


Hi,


1) Would anyone currently putting energy into the foundations-of-probability discussion be willing to say that this hypothetical human mechanism could *still* be meaningfully described in terms of a tractable probabilistic formalism (e.g., by transforming or approximating all the nasty nonlinearity I just introduced into a simpler, more analytic form, without losing anything)?

[My intuition on this question:  no way.]


My intuition on this issue is the complete opposite of yours.

I think that a system like the one you described above could likely be described, in terms of its *action-selections* and patterns therein, using probability theory.

This is why, in the theoretical hypotheses I proposed, I was talking only about probabilistic rules observed by an external agent M2 to govern a given agent M1's action-selections. I was not making any commitments that M1 has to explicitly use probability theory internally. I think that explicitly using probability theory internally is only one among many ways to wind up approximately obeying probability theory at the level of patterns in one's action-selections.
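To make that concrete, here is a minimal toy sketch (entirely hypothetical; the names, numbers, and mechanism are mine, purely for illustration): an M1 whose internals are pure deterministic nonlinear dynamics, with no probabilities anywhere in the mechanism, and an M2 who watches only the action stream and finds a stable distribution it can model probabilistically.

from collections import Counter

def m1_actions(n_steps, x0=0.123, r=3.99):
    """M1: selects among three actions by iterating the logistic map.
    No probability theory appears anywhere in this mechanism."""
    x, actions = x0, []
    for _ in range(n_steps):
        x = r * x * (1.0 - x)               # deterministic chaotic update
        actions.append(min(int(x * 3), 2))  # threshold the state into action 0, 1, or 2
    return actions

def m2_model(actions):
    """M2: an external observer who sees only the action stream and
    summarizes it as an empirical probability distribution."""
    counts = Counter(actions)
    return {a: counts[a] / len(actions) for a in sorted(counts)}

# M2's estimated distribution comes out essentially the same across long
# runs and different starting states, even though M1 never "uses"
# probabilities internally.
print(m2_model(m1_actions(100000)))
print(m2_model(m1_actions(100000, x0=0.456)))

The point of the sketch is just that the probabilistic description lives entirely at M2's level of description; nothing in M1's update rule mentions a probability.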

Point taken, but I was specifically *not* addressing any of the issues related to describing an M1 using some other M2 that employs probability theory. I was strictly confining myself to the question of whether M1 uses any kind of probability theory in its internal mechanisms. I thought that was obvious, but I guess not. I have no interest in the M2 arguments.



Also, I don't know why you contrast "analytic" with "nonlinear." Nonlinear equations are analytic constructs, just as surely as probabilistic equations. And probabilistic relationships can be nonlinear.

I did not mean the word "nonlinear" to be interpreted in the narrow sense (again, sorry, but I did not think I needed to explain that usage, given my known position on these issues). I meant generalized nonlinearity, which includes whole realms of "badly behaved" functions; and when I said "analytic" I meant that there are no *analytic* solutions to these sorts of beasts (the logistic map x[n+1] = r * x[n] * (1 - x[n]), for instance, has no known closed-form solution for most values of r in its chaotic regime).

I mean: if the concept units I described are really basing crucial decisions on such factors as those that came up in the example I gave, where the decision about raven-blackness depends on the appearance of a very particular representation of other concepts in the STM (to wit, the fact of the interlocutor having weird motives), isn't this a paradigm case of (generalized) nonlinearity?
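To pin down the kind of dependence I mean, here is a toy sketch (the names and numbers are purely illustrative, nothing like a real implementation): a concept unit whose verdict flips discontinuously depending on whether one quite specific pattern happens to be active in the STM.

def raven_blackness_verdict(stm):
    """Toy concept unit: confidence that 'ravens are black' is being
    asked and answered straightforwardly. The output is not a smooth
    function of graded evidence: one discrete pattern in STM gates
    the whole answer."""
    if "interlocutor_has_weird_motives" in stm:
        # Same utterance, now read as a trick question: the verdict
        # jumps discontinuously rather than shifting by degrees.
        return 0.1
    return 0.95

# Two STM states differing by a single discrete element produce a large
# jump in the output -- the 'generalized nonlinearity' at issue.
print(raven_blackness_verdict({"raven_mentioned", "blackness_query"}))
print(raven_blackness_verdict({"raven_mentioned", "blackness_query",
                               "interlocutor_has_weird_motives"}))

There is no graded parameter here for an analytic approximation to latch onto; the dependence is on the sheer presence or absence of a particular representation.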




2) Suppose that this really *is* the way the human cognitive system works, and that the reason it works this way is that evolution has figured out (pardon the teleology: you know what I mean) that any attempt to build systems that manipulate more tractable types of "concepts," using simpler types of reasoning formalisms that actually do allow things to be interpreted in a high-level way, simply does not work. In other words, such systems just do not get to be intelligent (for whatever reason... but probably because they can never learn those horribly vague, messy-looking concepts that don't fit very nicely into logical formalisms, but which are vital to the development of the system). My actual question, then: suppose it just happens not to be possible to do it any other way than with all the messy, nonlinear mechanisms described above: what, in that case, would be the use of trying to keep as close as you can to a formal, tractable approach to AGI, of the sort that would allow you to prove at least something about the way the not-quite-probabilities are handled?

It *could* be that the only way a system can give rise to probabilistically sensible patterns of action-selection, given limited computational resources, is to do stuff internally that is based on nonlinear dynamics rather than probability theory.

But I doubt it...

The human brain may work that way, but it is not the only (nor the ideal!) cognitive system...

Hmmm.... but what I wanted was to try to get some traction on why you would say this.

Your answer is only "I don't think so."

Your comment that the human brain "... is not the only (nor the ideal!) cognitive system" is a direct rejection of the idea that I was asking you to consider as a hypothesis.

I *know* you don't believe it to be true! ;-) What I was trying to do was to ask on what grounds you reject it.





Richard Loosemore.


