In Tuesday's office hours, Jeff mentioned a potential approach to system
motivation based on entropy. That sent me back to an article Wissner-Gross
& Freer published in April, "Causal Entropic Forces," which got some press
for simulating adaptive behavior. It aligns closely with what Jeff
described, so I figured it was worth sharing.

For anyone who didn't catch it, here's the paper
<http://math.mit.edu/~freer/papers/PhysRevLett_110-168702.pdf>,
a video of the simulations <http://www.entropica.com/>, and a decent,
less-technical overview
<http://davidruescas.com/2013/04/22/causal-entropy-maximization-and-intelligence/>.

The gist: without any explicit goals or rewards, the authors simulated
agents that appeared to set and achieve goals, using a "simple" model in
which the agents move toward states that afford them the greatest future
entropy. Put another way, the agents looked intelligent because they kept
putting themselves in the positions that gave them the most options.

If you wanted to take a swing at implementing their model, the meat and
potatoes is the path integral in equation 11, though I think the trickiest
part may be parametrizing the path temperature. Then at each timestep your
agent runs Monte Carlo simulations over potential futures (Eron's "Imagine
That" hack is ahead of the game here).
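To make the Monte Carlo part concrete, here's a toy sketch in Python. Big caveat: this is my own crude stand-in, not the paper's formulation. It uses the entropy of the end-state distribution over random rollouts as a proxy for the causal path entropy, and it skips the path temperature entirely; the agent is a walker on a bounded 1-D grid, and the function names (`rollout_entropy`, `entropic_step`) and all parameters are mine, not the authors'.

```python
import random
from collections import Counter
from math import log

def rollout_entropy(pos, steps, n_samples, lo, hi, rng):
    """Estimate the entropy (in nats) of where the agent could end up
    after `steps` random moves from `pos` on the grid [lo, hi].
    Crude proxy for the causal path entropy in the paper."""
    ends = Counter()
    for _ in range(n_samples):
        p = pos
        for _ in range(steps):
            # Random walk with a "stay" option; clamp at the walls.
            p = min(hi, max(lo, p + rng.choice((-1, 0, 1))))
        ends[p] += 1
    total = sum(ends.values())
    return -sum((c / total) * log(c / total) for c in ends.values())

def entropic_step(pos, horizon=8, n_samples=500, lo=0, hi=20, rng=None):
    """Take the move whose resulting state has the highest estimated
    future entropy -- i.e., the state that keeps the most options open."""
    rng = rng or random.Random(0)
    best = max((-1, +1), key=lambda a: rollout_entropy(
        min(hi, max(lo, pos + a)), horizon, n_samples, lo, hi, rng))
    return min(hi, max(lo, pos + best))
```

The qualitative behavior matches the paper's intuition: an agent started against a wall steps toward the open middle of the grid, because states near a wall have fewer reachable futures.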

It could be a useful motivation engine within NuPIC, somewhere down the
line. It feels like it's a slightly higher abstraction than the current
neuron level, and it would definitely need to make use of the sensor-motor
pathways. Nothing that this community can't handle.

Tom
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
