The idea of the brain treating emotion as just another sensory input makes a lot of sense (haha, no pun intended). When building goal-oriented AIs with the CLA, this could be a very natural way to reinforce goal-seeking behavior.
As per your comments about chess, I agree that it's a combinatorially challenging problem, and that computers beat humans at it by brute-force searching a larger space than humans can. However, humans can learn to play chess better than other humans, and that can't possibly come from brute-force search of the game tree alone. Humans learn patterns in the game and build a feel for which kinds of moves are good and which are bad (losing your queen is bad, cornering their king is good). At some point in the hierarchy, they must be representing these kinds of heuristics as neuronal activity and learning temporal and spatial patterns around them. Now, it's possible that this pattern matching works in conjunction with game-tree search, via multi-step prediction in our brains.

All I'm trying to say is that if the CLA is to perform similarly to the brain, it should be able to learn these patterns and heuristics the way the brain does, and eventually play better chess than an AI whose limited computation time lets it search only a fraction of the game tree. The whole point of the CLA is to compress large amounts of data into generalizable patterns, and I don't see why the same couldn't be done for chess.

Agreed, this is only possible with a hierarchical CLA that can learn patterns on patterns; without that, its performance would be severely limited, especially in a game like chess. But maybe until we have that, we can hard-code certain heuristics – such as "how many queens does player A have?" or "how many non-attacked squares exist around the king?" – and include them as part of the encoded input to the CLA region (rough sketch in the P.S. below). These are things a hierarchical CLA should be able to learn on its own, but we can encode them by hand for now to speed up learning.

- Chetan
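P.S. To make the hand-coded-heuristics idea a bit more concrete, here's a very rough sketch of the kind of thing I mean. It's plain Python with no NuPIC dependency; the board representation, the precomputed attacked-squares set, and all the bucket and bit-width numbers are made up for illustration, not a tested encoder.

def thermometer_encode(value, min_val, max_val, n_buckets, width):
    """Encode a scalar as `width` adjacent active bits whose position
    reflects the value, so nearby values share bits (SDR-style overlap)."""
    value = max(min_val, min(max_val, value))
    bucket = int(round((value - min_val) / float(max_val - min_val) * (n_buckets - 1)))
    bits = [0] * (n_buckets + width - 1)
    for i in range(width):
        bits[bucket + i] = 1
    return bits


def queen_count(board, player):
    """board: dict of square -> piece string, e.g. {'d1': 'wQ', 'e8': 'bK'}."""
    return sum(1 for piece in board.values() if piece == player + 'Q')


def safe_squares_around_king(board, player, attacked_squares):
    """Count empty, non-attacked squares adjacent to `player`'s king.
    `attacked_squares` is assumed to be precomputed by the game engine."""
    king_sq = next(sq for sq, piece in board.items() if piece == player + 'K')
    king_file, king_rank = king_sq[0], int(king_sq[1])
    count = 0
    for df in (-1, 0, 1):
        for dr in (-1, 0, 1):
            if df == 0 and dr == 0:
                continue
            f, r = chr(ord(king_file) + df), king_rank + dr
            sq = f + str(r)
            if ('a' <= f <= 'h' and 1 <= r <= 8
                    and sq not in board and sq not in attacked_squares):
                count += 1
    return count


def encode_heuristics(board, player, attacked_squares):
    """Concatenate the heuristic bits into one flat list."""
    return (thermometer_encode(queen_count(board, player), 0, 9, 10, 3) +
            thermometer_encode(safe_squares_around_king(board, player, attacked_squares),
                               0, 8, 9, 3))

The resulting bit list would just be appended to whatever raw board encoding is already being fed in, so the region sees the heuristics as a few more input bits.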
On Fri, Aug 2, 2013 at 2:21 AM, Fergal Byrne <fergal.by...@examsupport.ie> wrote:

> Hi Chetan,
>
> Jeff has been asked many times about this, and usually also incorporates it into his talks. Essentially, emotion is generated (or controlled) in deeper, subcortical structures and via hormonal and neurotransmitter suffusions. The neocortex interacts with this "emotional weather" as just another sensory input; it's a kind of "internal sense".
>
> So, for example, attention in the neocortex can be strongly coupled to emotions. Thus in a situation which generates a high level of fear (such as during a fight or a traffic accident), the neocortex is bathed in stress hormones and fed with large amounts of "red flashing lights" attention-directing innervation. This causes profound changes in our attention control, which we experience as "time slowing down" (in fact it's our neocortex speeding up!), tunnel vision, etc.
>
> While NuPIC does not currently incorporate emotions, this is only because it is a model of just one layer of just one region of neocortex, with just the interfaces needed to feed in data and read out predictions.
>
> A multi-layer, hierarchical CLA system with sensory-motor (behaviour) feedback structures would be the minimum basis for talking about emotions. At that point, you could simulate emotions both as bottom-up data (providing a measure of "happiness" as you mentioned, or of hunger or fear) and as top-down goal-seeking behaviour instructions (play a sequence of behaviour which has victory, or food, or escape, as an end pattern).
>
> While chess is a classic AI example, it's not optimal for use when talking about the CLA. It's essentially a combinatorial, hard-edged calculation problem. The only reason humans are not yet always beaten by computers is the sheer size of the decision tree, which forces software to use heuristics and tricks to reduce the size of the problem. Given a fast and big enough computer, it will always win (or draw), simply by calculating all future board positions and forcing the result.
>
> In addition, the all-or-nothing nature of chess means that the "pattern" on the board does not change smoothly in semantic terms when one piece is moved, so there is no "momentum" in the change which pushes a CLA towards the next predicted change. Sure, a CLA can learn a lot of games, but it can't generalise as there's not enough semantic similarity (in terms of victory or loss) between board snapshots.
>
> In essence, the chess search space is very dense, very non-linear, non-continuous and very high-dimensional, requiring effective storage of every possible sequence of moves.
>
> A better example would be playing tennis instead of chess, where the possible "moves" are continuous movement of the player and continuous positioning of a racket. Feedforward inputs would be the positions and velocities of both players and rackets as well as the ball. Behaviour-feedback inputs would be "my advantage" at the highest levels in the hierarchy, driving the system to find (predict and execute) sequences which increase and maximise "advantage". Lower-level behaviour-feedback inputs would find, predict and execute sequences which "hit the ball" or "get it into the corner" and so on.
>
> Clearly emotion is just one type of top-down (and simultaneously bottom-up, or "feed-round" as I call it) control data in the human brain. Attention, sequence execution, cross-sensory perception (hearing these words as you read) and several more are all embodied in a yet-to-be modelled hierarchy with behaviour and motor feedback.
>
> Regards,
>
> Fergal Byrne
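One more thought on the tennis example above: because the feedforward values there are continuous, even a simple scalar encoding gives nearby game states overlapping bits, which is exactly the semantic similarity that chess board snapshots lack. A rough sketch of what such an input record might look like (all field names and ranges are invented, and the top-down "my advantage" signal would be fed in separately at a higher level):

def thermometer_encode(value, min_val, max_val, n_buckets, width):
    """Same overlapping scalar encoding as in the chess sketch above."""
    value = max(min_val, min(max_val, value))
    bucket = int(round((value - min_val) / float(max_val - min_val) * (n_buckets - 1)))
    bits = [0] * (n_buckets + width - 1)
    for i in range(width):
        bits[bucket + i] = 1
    return bits


def encode_tennis_feedforward(state):
    """state: dict of continuous values, e.g.
    {'my_x': 3.2, 'my_y': 1.0, 'opp_x': -1.5, 'opp_y': 10.8,
     'ball_x': -2.5, 'ball_y': 4.1, 'ball_vx': 12.0, 'ball_vy': -3.0}."""
    ranges = {
        'my_x': (-6.0, 6.0), 'my_y': (0.0, 12.0),
        'opp_x': (-6.0, 6.0), 'opp_y': (0.0, 12.0),
        'ball_x': (-6.0, 6.0), 'ball_y': (0.0, 12.0),
        'ball_vx': (-40.0, 40.0), 'ball_vy': (-40.0, 40.0),
    }
    bits = []
    for name in sorted(ranges):
        lo, hi = ranges[name]
        bits += thermometer_encode(state[name], lo, hi, 20, 5)
    return bits

A small change in the ball's position produces an encoding that overlaps heavily with the previous one, which is the "momentum" Fergal is describing.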
_______________________________________________
nupic mailing list
nupic@lists.numenta.org
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org