Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans) have a "part of the brain which generates the reward/punishment signal for operant conditioning."

This is behaviorism. I find myself completely at a loss to know where to start, if I have to explain what is wrong with behaviorism.

Call it what you want.  I am arguing that there are parts of the brain (e.g.
the nucleus accumbens) responsible for reinforcement learning, and
furthermore, that the synapses along the input paths to these regions are not
trainable.  I argue this has to be the case because an intelligent system
cannot be allowed to modify its motivational system.  Our most fundamental
models of intelligent agents require this (e.g. AIXI -- the reward signal is
computed by the environment).  You cannot turn off hunger or pain.  You cannot
control your emotions.  Since the synaptic weights cannot be altered by
training (classical or operant conditioning), they must be hardwired as
determined by your DNA.
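
[For readers unfamiliar with the AIXI framing Matt invokes, the claim that "the reward signal is computed by the environment" amounts to a strict agent/environment separation. The following is a minimal illustrative sketch only; the class and method names are hypothetical and do not come from any actual AIXI implementation.]

```python
import random

# Illustrative sketch of the AIXI-style separation: the environment, not
# the agent, computes the reward. The agent only ever sees (percept,
# reward) pairs; it has no handle on the reward function itself.

class Environment:
    def __init__(self):
        self._state = 0

    def step(self, action):
        # Reward is computed here, inside the environment. The agent
        # cannot reach in and rewrite this logic -- that is the sense in
        # which the motivational signal is external to the agent.
        self._state = (self._state + action) % 10
        reward = 1.0 if self._state == 0 else 0.0
        return self._state, reward

class Agent:
    def __init__(self, actions):
        self.actions = actions
        self.total_reward = 0.0

    def act(self):
        # A real AIXI agent would maximize expected future reward over a
        # mixture of all computable environments; a random policy stands
        # in for that here, purely for illustration.
        return random.choice(self.actions)

    def observe(self, percept, reward):
        self.total_reward += reward

env = Environment()
agent = Agent(actions=[1, 2, 5])
for _ in range(100):
    action = agent.act()
    percept, reward = env.step(action)
    agent.observe(percept, reward)
print(agent.total_reward)
```

[Whether this abstraction says anything about how brains actually implement motivation is, of course, exactly what is in dispute below.]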


Pei has already spoken eloquently on many of these questions.

Do you agree?  If not, what part of this argument do you disagree with?

That reward and punishment exist and result in learning in humans?

Reward and punishment at what level? Your use of behaviorist phrasing implies that you mean one particular interpretation of these terms, but there are others. If you mean the behaviorist interpretation, then the terms are so incoherent as mechanisms that there is no answer: there simply is nothing as crude as behaviorist-style "reward and punishment" going on. As an idea about mechanism it is bankrupt.

If you mean it at some other level, or if you mean the terms to be interpreted so generally that they could mean, for example, that there are mechanisms responsible for relaxation pressures that go in a particular direction, then of course they result in learning.


That there are neurons dedicated to computing reinforcement signals?

Similar answer to the previous. "Reinforcement signals" could mean just about anything, but if you mean it in the behaviorist sense, then there is no such thing as reinforcement learning going on. And to understand *that* statement (the one I just made) you need to understand a long story about why behaviorism is wrong.


That the human motivational system (by which I mean the logic of computing the
reinforcement signals from sensory input) is not trainable?

Now you are asking a question based on terms that (see above) are either ambiguous or incoherent.

If I back off from your interpretation of the motivational system, I can answer that the motivational system is probably a complicated entity, with many components, so some parts of it are trainable, and some others are not.


That the motivational system is completely specified by DNA?

This is a meaningless question. Do you mean *directly* specified by the DNA? Or do you mean that the DNA specifies a generator that builds the motivational system? Or that the DNA specifies a generator that eventually builds the motivational system, after morphing through several intermediate mechanisms? Are you allowing or excluding the interaction of the generators with the environment when they build the motivational system? In all except the first case, the DNA only specifies things indirectly, so the phrase "completely specified by DNA" is ambiguous at best.



That all human learning can be reduced to classical and operant conditioning?

Of course I am disputing this. This is the behaviorist idea that has been completely rejected by the cognitive science community since 1956.

If you are willing to bend the meaning of the terms "classical and operant conditioning" sufficiently far from their origins, you might be able to make the idea more plausible, but that kind of redefinition is a little silly, and I don't see you trying to do that.


That humans are animals that differ only in the ability to learn language?

Do I disagree with this?  Of course.  Humans are not molluscs that talk.

That models of goal seeking agents like AIXI are realistic models of
intelligence?

AIXI is not a model of a goal seeking agent, it is a mathematical abstraction of a goal seeking agent. Of course it has no value as a realistic model of intelligence.

Do you object to behavioralism because of their view that consciousness and
free will do not exist, except as beliefs?

I assume you mean "behaviorism". My objection to behaviorism has nothing to do with any of their claims about free will or consciousness. I happen to think that their opinions on such matters were generally incoherent (though not entirely so), but they were in good company on that one, so no matter.



Do you object to the assertion that the brain is a computer with finite memory
and speed?  That your life consists of running a program?  Is this wrong, or
just uncomfortable?

Well, I'm glad you ended on a lighter note. You are barking up the wrong tree there: I have no problems with those things. I am not one of those people who feel "uncomfortable" with the idea of being software, I am one of those who go around trying to explain to others how not uncomfortable it is. [Don't know how we ended up in this neck of the woods].





Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303