The only thing that troubles me about this discussion of the relevance of probability theory to AGI is the way it seems to be *partly* founded on an assumption that I, for one, cannot accept.

The assumption is that the underlying dynamics of things at the concept level (or "logical term" level, if "concept" is not to your liking) can be meaningfully described by things that look something like "probabilities."

Now, before you jump on me (Ben and Pei, at least!), I want to say that I do understand that some of you have explicitly tried to distance yourselves from a very naive form of that idea .... by saying that the numbers being manipulated down there do not represent probabilities as such (Pei?), or that a vector of numbers like [P, confidence in P, etc.] is a more viable way of handling things (Ben?) -- but what I am trying to do is ask a question about whether even *these* two positions might be too strong.

What do I mean by that?

Suppose, for the sake of argument, that the way the human mind deals with these things is like this:

(1) The "concept" entities are active computational things (not passive tokens manipulated by a reasoning engine), with at least some internal structure [I need this to make the argument simpler].

(2) These concepts use a vector of numbers to handle their interactions with other concepts, but (*crucially*) these numbers do not correspond to anything interpretable by us as "probability," "confidence," or the like. Suppose they are very badly behaved (nonlinear) functions of the things each concept sees going on around it.

(3) Suppose, further, that when concepts engage with one another to produce the episodes we call "thoughts," they use these vectors of numbers AND the actual, moment-by-moment configuration of the other concepts to which they are temporarily connected in short-term memory. In other words, the mere existence of a certain type of concept-cluster, in a nearby part of STM, at the right moment, can be the governing factor in what a given concept does.
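To make that concrete, here is a minimal toy sketch in Python of what assumptions (1)-(3) might look like. Every name in it (Concept, ShortTermMemory, the particular nonlinearity, the cluster tags) is invented purely for illustration -- this is a sketch of the hypothetical, not anyone's actual architecture:

import math
from dataclasses import dataclass, field

@dataclass
class Concept:
    """An active computational entity, per assumption (1)."""
    name: str
    # Assumption (2): an opaque state vector. These numbers are NOT
    # probabilities or confidences; they are badly behaved, nonlinear
    # functions of whatever the concept has seen around it.
    state: list = field(default_factory=lambda: [0.1, -0.2, 0.3])

    def observe(self, signal):
        # A deliberately messy nonlinear update: after a few of these,
        # the numbers admit no clean probabilistic reading.
        self.state = [math.tanh(s * signal + s * s) for s in self.state]

    def act(self, stm):
        # Assumption (3): what the concept does depends on its state
        # vector AND on which other concepts happen to be co-active in
        # STM at this moment. One illustrative rule:
        if stm.contains_cluster("psychologist-asking-question"):
            return "generate-counterexamples"
        return "answer-from-stored-state"

@dataclass
class ShortTermMemory:
    """A bag of currently active concept-cluster tags (illustrative)."""
    active: set = field(default_factory=set)

    def contains_cluster(self, tag):
        return tag in self.active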

[ASIDE. An example of this. The system is trying to answer the question "Are all ravens black?", but it does not just consult its collected data about ravens (partly represented by the vector of numbers inside the "raven" concept, which are vaguely related to the relevant probability). It also matters, quite crucially, that the STM contains a representation of the fact that the question is being asked by a psychologist: whereas the usual answer would be p(all ravens are black) = 1.0, this particular situation might be an attempt to get the subject to come up with the most bizarre possible counterexamples (a genetic mutant; a raven that just had an accident with a pot of white paint, etc.). In these circumstances, the numbers encoded inside concepts seem less relevant than the fact that a person of a particular type is uttering the question.]
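Continuing the toy sketch (same caveat: every name is hypothetical), the aside plays out like this -- the numbers stored inside the "raven" concept are identical in both cases, yet the behaviour flips because of what else is sitting in STM:

raven = Concept("raven")
for x in (0.3, -0.8, 1.1):        # arbitrary made-up "experience"
    raven.observe(x)

stm_plain = ShortTermMemory(active={"question:all-ravens-black"})
stm_lab = ShortTermMemory(active={"question:all-ravens-black",
                                  "psychologist-asking-question"})

print(raven.act(stm_plain))   # -> answer-from-stored-state
print(raven.act(stm_lab))     # -> generate-counterexamples

The point being: no amount of massaging raven.state into a probability recovers that second behaviour, because the governing factor lives outside the numbers entirely.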


Now, with these three assumptions in hand, here are my questions.

1) Would anyone currently putting energy into the foundations of probability discussion be willing to say that this hypothetical human mechanism could *still* be meaningfully described in terms of a tractable probabilistic formalism (by, e.g., transforming or approximating all the nasty nonlinearity I just introduced into a simpler, more analytic form, without losing anything)?

[My intuition on this question: no way.]

2) Suppose that this really *is* the way the human cognitive system works, and that the reason it works this way is that evolution has figured out (pardon the teleology: you know what I mean) that any attempt to build systems that manipulate more tractable types of "concepts," using simpler reasoning formalisms that actually do allow things to be interpreted in a high-level way, simply does not work. In other words, such systems just do not get to be intelligent (for whatever reason.... but probably because they can never learn those horribly vague, messy-looking concepts that don't fit very nicely into logical formalisms, but which are vital to the development of the system). My actual question, then: suppose it just happens not to be possible to do it any other way than with all the messy, nonlinear mechanisms described above. What, in that case, would be the use of trying to stay as close as possible to a formal, tractable approach to AGI, of the sort that would allow you to prove at least something about the way the not-quite-probabilities are handled?

Anyone who knows my theoretical position will recognize where I am coming from here, but I have at least made an effort to frame it in neutral terms.


Just doing my usual anarchic bit to bend the world to my unreasonable position, that's all ;-).

Richard Loosemore.




