I may have been guilty of writing too compactly again, so here is a quick reply.

What I was trying to say was:

(1) Suppose the human mind works this way, with the overall intelligence of the system being a (complex) consequence of the interaction of elements whose local structure (the messy use of non-interpretable parameters, for example) cannot be interpreted in any kind of formalizable way.

(2) Then suppose that the reason the human mind works this way is that there exist in the universe NO systems that are cleaner and more formalizable, with low-level parameters that can be interpreted in some way (as something like probabilities), which are also fully intelligent.

I was trying to ask whether you consider the above to be a possibility. And if not, what kinds of reasons, or intuitions (or just plain hopes, if it comes to that!) you would use to justify rejecting it.

I fully accept that you don't care if the human mind does it that way, because you want NARS to do it differently. My question was at a higher level. If we knew for sure that the human mind was using something like a formalized system (and not the messy nonlinear stuff I described), then we could quite comfortably say "Hey, let's do the same, but simpler and maybe even better." My problem is, of course, that the human mind may well not be doing it that way, and that if it is not, there may be a good reason why it does it the messy, nonlinear way (namely, because all formalizable, cleaner systems turn out to be incapable of getting up to and staying at full, autonomous intelligence).

Forgive me for going on about this: I am trying hard to ask a particular *type* of question, while trying not to cause misunderstandings about what type of question it is.


Richard Loosemore.



Pei Wang wrote:
Richard,

The assumption is that the underlying dynamics of things at the concept
level (or "logical term" level, if "concept" is not to your liking) can
be meaningfully described by things that look something like
"probabilities."

I never try to accurately duplicate the human mind. Instead, I just
want to build a system that follows the same principles as the human
mind when they are described in a general way, such as "adapting to
the environment and working with insufficient knowledge and
resources".

Suppose, for the sake of argument, that the way the human mind deals with
these things is like this:

   (1) The "concept" entities are active computational things (not
passive tokens manipulated by a reasoning engine), with at least some
internal structure [I need this to make the argument simpler];

I fully agree with the "active" part, and believe that is how concepts
are handled in NARS. However, I know some people will think NARS only
has "passive tokens manipulated by a reasoning engine". So the
question is: how can the two situations be distinguished clearly?
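[ASIDE.  One way to make the distinction concrete is a minimal Python
sketch.  All names below are invented for illustration; neither class is
real NARS code.  The point is only where the control logic lives:
outside the concepts, or inside them.

    # Passive style: concepts are inert tokens; an external engine
    # makes every decision by inspecting them.
    class ReasoningEngine:
        def derive(self, tokens, query):
            # all control logic lives here, outside the "concepts"
            return [t for t in tokens if t.startswith(query)]

    # Active style: each concept carries its own state and behavior;
    # the surrounding machinery only routes messages to it.
    class ActiveConcept:
        def __init__(self, name):
            self.name = name
            self.links = {}   # local associations the concept itself maintains

        def on_message(self, msg, stm):
            # the concept decides for itself, using its own state AND
            # whatever happens to be in short-term memory right now
            if msg in self.links:
                stm.append((self.name, msg, self.links[msg]))

One observable difference: in the passive style, removing the engine
leaves nothing that can act; in the active style, the behavior is
distributed across the concepts themselves.]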

   (2) These concepts use a vector of numbers to handle their
interactions with other concepts, but (*crucially*) these numbers do not
correspond to things that are interpretable by us as "probability" or
"confidence," or anything similar.  Suppose that they are very badly
behaved (nonlinear) functions of the things that each concept sees going
on around it.

For the human mind, I don't know and don't care too much about the
details. For NARS, some numerical measurement is necessary (though not
sufficient) for its implementation of the principle mentioned above.
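[ASIDE.  A toy rendering of assumption (2), in Python.  Everything here
is invented for illustration: the stored parameters are deliberately
meaningless, and the interaction strength is a non-monotonic, nonlinear
function of local activity, so no single stored number can be read off
as a "probability" or "confidence".

    import math

    class MessyConcept:
        def __init__(self, params):
            self.params = params   # raw numbers with no assigned interpretation

        def interaction_strength(self, neighbor_activity):
            # a badly behaved mixture: the same parameter can push the
            # output up or down depending on the surrounding activity
            a, b, c = self.params
            x = neighbor_activity
            return math.tanh(a * x) * math.sin(b * x * x + c)

    c = MessyConcept((0.7, 3.1, 0.2))
    print([round(c.interaction_strength(x / 10.0), 3) for x in range(5)])
]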

   (3) Suppose, further, that when concepts engage with one another to
produce the episodes that we call "thoughts," they use these vectors of
numbers AND also the actual, moment-by-moment configuration of the other
concepts to which they are temporarily connected in short-term
memory.  In other words, the mere existence of a certain type of
concept-cluster, in a nearby part of STM, at the right moment, can be
the governing factor in what a given concept does.

I have no problem with this.

[ASIDE.  An example of this.  The system is trying to answer the
question "Are all ravens black?", but it does not just look to its
collected data about ravens (partly represented by the vector of numbers
inside the "raven" concept, which are vaguely related to the relevant
probability); it also matters, quite crucially, that the STM contains a
representation of the fact that the question is being asked by a
psychologist, and that whereas the usual answer would be p(all ravens
are black) = 1.0, this particular situation might be an attempt to make
the subject come up with the most bizarre possible counterexamples (a
genetic mutant; a raven that just had an accident with a pot of white
paint; etc.).  In these circumstances, the numbers encoded inside
concepts seem less relevant than the fact of there being a person of a
particular type uttering the question.]

The numbers are not always the most important factors in decision
making, though I'd add that they are still necessary.
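[ASIDE.  The raven example, reduced to a toy Python function: the
number stored inside the "raven" concept says one thing, but the mere
presence of a certain item in STM governs the answer.  All names here
are invented for illustration, not taken from any real system.

    def answer_all_ravens_black(stm):
        p_black = 1.0   # the statistic encoded inside the "raven" concept
        if "questioner-is-psychologist" in stm:
            # context overrides the stored number: hunt for
            # bizarre counterexamples instead of reporting it
            return "mostly, but consider mutants and paint-covered ravens"
        return "yes" if p_black >= 0.99 else "uncertain"

    print(answer_all_ravens_black({"casual-conversation"}))
    print(answer_all_ravens_black({"questioner-is-psychologist"}))
]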

Now, with these three assumptions in hand, here are my questions.

1)  Would anyone currently putting energy into the
foundations-of-probability discussion be willing to say that this
hypothetical human mechanism could *still* be meaningfully described in
terms of a tractable probabilistic formalism (by, e.g., transforming or
approximating all the nasty nonlinearity I just introduced into a
simpler, more analytic form, without losing anything)?

[My intuition on this question:  no way.]

Agree.
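[ASIDE.  A toy numeric illustration of why "no way" is plausible; it is
not a proof.  If the effective strength of an interaction is gated by
hidden STM context, then the best context-free constant (the closest
thing to a single "probability") is a compromise that is wrong in every
context.  The numbers below are made up.

    # effective strengths of the same interaction under three hidden contexts
    effective = [0.05, 0.05, 0.95]

    best_constant = sum(effective) / len(effective)   # least-squares constant fit
    worst_error = max(abs(e - best_constant) for e in effective)
    print(best_constant, worst_error)   # 0.35, 0.6: the compromise misses badly
]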

2)  Suppose that this really *is* the way the human cognitive system
works, and that the reason it works this way is that evolution has
figured out (pardon the teleology: you know what I mean) that any
attempt to build systems that manipulate more tractable types of
"concepts," using simpler types of reasoning formalisms that actually do
allow things to be interpreted in a high-level way, simply does not
work.  In other words, such systems just do not get to be intelligent
(for whatever reason... but probably because they can never learn those
horribly vague, messy-looking concepts that don't fit very nicely into
logical formalisms, but which are vital to the development of the
system).  My actual question, then: suppose it just happens not to be
possible to do it any other way than with all the messy, nonlinear
mechanisms described above: what, in that case, would be the use of
trying to keep as close as you can to a formal, tractable approach to
AGI, of the sort that would allow you to prove at least something about
the way the not-quite-probabilities are handled?

Again, uncertainty representation and processing is only a very small
part of the design of an AGI, though I think it is a necessary part.

It is possible that the "horribly vague, messy-looking concepts" at a
deep level still follow some type of "logical formalism," though one
very different from mathematical logic and probability theory.
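[ASIDE.  For concreteness: the numerical measurement in NARS is not a
probability.  Per the published definitions, a truth value is a
(frequency, confidence) pair computed from evidence counts, with
frequency f = w+/w and confidence c = w/(w+k), where k is the
"evidential horizon" constant.  A minimal Python sketch, assuming the
usual default k = 1; the variable names are mine, not NARS code.

    def nars_truth(w_plus, w, k=1.0):
        # frequency = proportion of positive evidence; confidence grows
        # with total evidence but never reaches 1 (w/(w+k) < 1 for finite w)
        return w_plus / w, w / (w + k)

    def revise(ev1, ev2):
        # revision pools evidence from (assumed independent) sources by addition
        return ev1[0] + ev2[0], ev1[1] + ev2[1]

    seen = (9.0, 10.0)                  # 9 positive out of 10 pieces of evidence
    more = (4.0, 5.0)                   # further, independent evidence
    print(nars_truth(*seen))            # (0.9, 0.909...)
    print(nars_truth(*revise(seen, more)))
]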

Pei

Anyone who knows my theoretical position will recognize where I am
coming from here, but I have at least made an effort to frame it in
neutral terms.


Just doing my usual anarchic bit to bend the world to my unreasonable
position, that's all ;-).

Richard Loosemore.





-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

