Jef Allbright wrote:
Ben Goertzel wrote:

The relationship between rationality and goals is fairly subtle, and something I have been thinking about recently....

Ben, as you know, I admire and appreciate your thinking but have always
perceived an "inside-outness" with your approach (which we have
discussed before) in that your descriptions of mind always seem (to me)
to begin from a point of pre-existing awareness.  (I can think of
immediate specific objections to the preceding statement, but in the
interest of expediency in this low-bandwidth discussion medium, I would
ask that you suspend immediate objections and look for the general point
I am trying to make clear.)

It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very "narrow-AI" approach and destined to fail in general
application.  Why?  Because to conceive of a goal requires a perspective
outside of and encompassing the goal system.  We can speak in a valid
way about the goals of a system, or the goals of a person, but it is
always from a perspective outside of that system.

It seems to me that a better functional description is based on
"values", more specifically the eigenvectors and eigenvalues of a
highly multidimensional model *inside the agent*, which drive its
behavior in a very simple way: the agent acts to reduce the
difference between its internal model and perceived reality.  [The
hard part is how to evolve these recursively self-modifying patterns
of behavior without requiring natural evolutionary time scales.]
Goals thus emerge as third-party descriptions of behavior, or even as
post hoc internal explanations or rationalizations of the agent's own
behavior, but they don't merit the status of fundamental drivers of
that behavior.
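
Here is a crude toy sketch of the kind of thing I mean (the vectors
and step size are invented purely for illustration, not a real
proposal):

    import numpy as np

    # Toy illustration only -- the vectors and step size below are
    # invented for this sketch.  The agent holds an internal "value"
    # model and simply acts to shrink the gap between that model and
    # what it perceives.  No goal is represented anywhere, yet an
    # observer could describe it as "pursuing the goal" of reaching
    # the model state.

    rng = np.random.default_rng(0)

    internal_model = np.array([1.0, -0.5, 2.0])   # the agent's "values"
    perceived = rng.normal(size=3)                # current perception

    def act(perceived, model, step=0.1):
        """One action: a local gradient step on 0.5*||perceived - model||^2."""
        return perceived - step * (perceived - model)

    for _ in range(50):
        perceived = act(perceived, internal_model)

    print("remaining mismatch:", np.linalg.norm(perceived - internal_model))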

Does this make sense to you?  I've been saying this for years, but have
never gotten even a "huh?", let alone a "duh."  ;-)

- Jef

This is identical to one of the points I was making when talking about diffuse constraint-driven motivational systems, though we are phrasing it differently.

A system can be *relaxation* driven - it changes its state according to a large number of constraints that are always trying to do local gradient descent - in such a way that it looks approximately as if it were engaged in a kind of goal-seeking behavior.

Thus: a Boltzmann machine does not explicitly try to retrieve a previously stored associate of a pattern; it just relaxes its constraints until the pattern comes out.
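
A minimal sketch of that kind of relaxation (a Hopfield-style
network, the deterministic cousin of the Boltzmann machine; the
network size and corruption level are arbitrary choices here):
nothing in the update rule mentions retrieval, yet the stored pattern
falls out of the constraint relaxation.

    import numpy as np

    # Hopfield-style relaxation (the deterministic cousin of the
    # Boltzmann machine); network size and corruption level are
    # arbitrary choices for this sketch.  No unit "knows" about
    # retrieval; each one just flips to satisfy its local constraints.

    rng = np.random.default_rng(1)

    stored = np.sign(rng.normal(size=16))   # one stored +/-1 pattern
    W = np.outer(stored, stored)            # Hebbian weights
    np.fill_diagonal(W, 0.0)

    state = stored.copy()
    state[:5] *= -1                         # corrupt part of the pattern

    for _ in range(10):                     # asynchronous relaxation
        for i in rng.permutation(len(state)):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0

    print("overlap with stored pattern:", float(state @ stored) / len(state))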

If a system had several relaxation mechanisms working simultaneously, each of these might seem to be a "goal". I dislike that word, as I have said before, precisely because it has connotations of explicitness that I don't buy, and because there is something else that really is an explicit goal (I intend to get in the car and go home later today: this is a real "goal").
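
To make that terminology point concrete, here is a toy with two
relaxation mechanisms running at once (the anchor points and weights
are invented for the example): an observer might say the system has
two "goals", yet neither is ever achieved; the state just settles
where the two local pressures balance.

    import numpy as np

    # Two relaxation mechanisms acting at once; the anchor points and
    # weights are invented for this example.  An observer might call
    # them two "goals", but neither is ever satisfied -- the state
    # simply settles where the local pressures balance.

    anchor_a = np.array([0.0, 0.0])
    anchor_b = np.array([4.0, 2.0])

    state = np.array([10.0, -3.0])
    for _ in range(200):
        grad = (state - anchor_a) + 0.5 * (state - anchor_b)  # two local pressures
        state = state - 0.05 * grad

    print("settled state:", state)   # a compromise, not either "goal"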

Your point about people taking a perspective "outside" or "inside" the system is the same as saying that we should not interpret behavioral characteristics (in this case, movement towards "goals") as if they were directly represented inside the system by a mechanism that explicitly encodes the goal and explicitly tries to achieve it.

The early connectionists made this one of their big issues. (See the two PDP volumes for hundreds of repetitions of the same ideological statement).


Richard Loosemore.




