Hi Chetan,

Jeff has been asked many times about this, and usually also incorporates it 
into his talks. Essentially, emotion is generated (or controlled) in deeper, 
subcortical structures and via hormonal and neurotransmitter suffusions. The 
neocortex interacts with this "emotional weather" as just another sensory 
input; it's a kind of "internal sense".

So, for example, attention in the neocortex can be strongly coupled to 
emotions. In a situation which generates a high level of fear (such as 
a fight or a traffic accident), the neocortex is bathed in stress 
hormones and fed with large amounts of "red flashing lights" 
attention-directing innervation. This causes profound changes in our attention 
control, which we experience as "time slowing down" (in fact it's our neocortex 
speeding up!), tunnel vision, etc.

While NuPIC does not currently incorporate emotions, this is only because it is 
a model of just one layer of just one region of neocortex, with just the 
interfaces needed to feed in data and read out predictions.

A multi-layer, hierarchical CLA system with sensory-motor (behaviour) feedback 
structures would be the minimum basis for talking about emotions. At that 
point, you could simulate emotions both as bottom-up data (providing a measure 
of "happiness" as you mentioned, hunger or fear) and as top-down goal-seeking 
behaviour instructions (play a sequence of behaviour which has victory (or 
food, or escape) as an end pattern).
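To make the bottom-up side concrete, here is a toy sketch (plain Python, deliberately not the real NuPIC encoder API) of how a scalar "emotion" level such as happiness or fear could be turned into a sparse binary pattern, so that similar levels produce overlapping representations:

```python
def encode_scalar(value, minval=0.0, maxval=1.0, n=100, w=11):
    """Encode a scalar (e.g. a 'fear' or 'happiness' level) as a
    sparse binary vector: a contiguous block of w active bits whose
    position tracks the value, so nearby values share bits."""
    value = max(minval, min(maxval, value))
    # Leftmost position of the active block of bits
    start = int(round((value - minval) / (maxval - minval) * (n - w)))
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

low = encode_scalar(0.2)
high = encode_scalar(0.25)
# Nearby values overlap, giving the semantic similarity a CLA needs
overlap = sum(a & b for a, b in zip(low, high))
```

The key property is the overlap: small changes in the emotional signal give small changes in the input pattern, which is exactly what the CLA's learning depends on.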

While chess is a classic AI example, it's not ideal for discussing 
the CLA. It's essentially a combinatorial, hard-edged calculation 
problem. The only reason computers don't yet beat every human every time is 
the sheer size of the decision tree, which forces software to use 
heuristics and tricks to prune the problem. Given a fast and big 
enough computer, the machine will always win (or draw), simply by calculating 
all future board positions and forcing the result.
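As a concrete illustration of that brute-force view, here is a minimal minimax recursion over a toy hand-built game tree (the tree and scores are invented for illustration; real chess engines layer pruning and evaluation heuristics on top of exactly this calculation):

```python
def minimax(state, maximizing, children, score):
    """Exhaustive game-tree search: given enough compute, this visits
    every reachable position and plays perfectly."""
    moves = children(state)
    if not moves:
        return score(state)
    values = [minimax(m, not maximizing, children, score) for m in moves]
    return max(values) if maximizing else min(values)

# A toy 3-ply game: internal nodes list children, leaves hold scores.
tree = {
    'a': ['b', 'c'],
    'b': ['d', 'e'],
    'c': ['f', 'g'],
    'd': 3, 'e': 5, 'f': 2, 'g': 9,
}

def children(node):
    kids = tree[node]
    return kids if isinstance(kids, list) else []

def score(node):
    return tree[node]

best = minimax('a', True, children, score)  # max(min(3, 5), min(2, 9)) = 3
```

Note there is no learning or generalisation here at all: the answer comes purely from enumerating the tree, which is why the approach scales so badly with branching factor.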

In addition, the all-or-nothing nature of chess means that the "pattern" on the 
board does not change smoothly in semantic terms when one piece is moved, so there 
is no "momentum" in the change which pushes a CLA towards the next predicted 
change. Sure, a CLA can learn a lot of games, but it can't generalise, as 
there's not enough semantic similarity (in terms of victory or loss) between 
board snapshots.

In essence, the chess search space is very dense, very non-linear, 
discontinuous and very high-dimensional, requiring effective storage of every 
possible sequence of moves.
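A quick back-of-the-envelope calculation shows how fast that search space grows, assuming an average branching factor of roughly 35 legal moves per position (a commonly quoted estimate; the exact figure varies by game phase):

```python
# Rough size of the chess game tree at various depths
branching = 35
positions = {plies: branching ** plies for plies in (2, 4, 8)}
# Even 8 plies (4 moves per side) already yields trillions of lines,
# which is why exhaustive search needs heuristics to stay tractable.
```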

A better example than chess would be playing tennis, where the possible 
"moves" are the continuous movement of the player and continuous positioning of 
the racket. Feedforward inputs would be the positions and velocities of both 
players and rackets as well as the ball. Behaviour-feedback inputs would be "my 
advantage" at the highest levels in the hierarchy, driving the system to find 
(predict and execute) sequences which increase and maximise "advantage". 
Lower-level behaviour-feedback inputs would find, predict and execute sequences 
which "hit the ball" or "get it into the corner" and so on.
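A toy sketch of that two-level split might look like this (every name and rule here is an illustrative assumption, not anything the CLA or NuPIC actually implements; it only shows the shape of a hierarchy where a high-level goal signal drives low-level actions):

```python
def top_level_goal(my_score, opponent_score):
    """Top of the hierarchy: a single 'advantage' signal to maximise."""
    return my_score - opponent_score

def low_level_action(ball_x, racket_x):
    """Bottom of the hierarchy: a 'hit the ball' behaviour, moving the
    racket towards the (predicted) ball position."""
    if racket_x < ball_x:
        return 'move_right'
    if racket_x > ball_x:
        return 'move_left'
    return 'swing'

advantage = top_level_goal(my_score=2, opponent_score=1)
action = low_level_action(ball_x=5.0, racket_x=3.0)
```

In a real hierarchical system both levels would of course be learned sequence memories rather than hand-written rules, but the division of labour is the same: top-down goals, bottom-up sensory state.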

Clearly emotion is just one type of top-down (and simultaneously bottom-up or 
"feed-round" as I call it) control data in the human brain. Attention, sequence 
execution, cross-sensory perception (hearing these words as you read) and 
several more are all embodied in a yet-to-be modelled hierarchy with behaviour 
and motor feedback.

Regards,

Fergal Byrne




_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org