Thanks, Tom,

Always great to get some good links to help understand these ideas.

Jeff and several of us were talking about this during the hackathon, too. I
have an idea that the CLA gives rise to a process that maximises the
structural (spatiotemporal and sensorimotor) information extracted from the
data stream and stored in the region. This information content can be
equated with the entropy alluded to in your email.
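
To put a rough number on that idea, here's a toy sketch (plain Python,
nothing CLA-specific; the symbol stream and the pairing step are invented
purely for illustration): the entropy of the raw stream measures how much
there is to learn from it, and once even its simplest temporal structure
has been captured, far less entropy per observation remains.

import math
from collections import Counter

def stream_entropy(symbols):
    """Shannon entropy in bits per observed symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Character by character, this repetitive stream carries ~1.81 bits each...
stream = "ABABABCDABABABCD" * 10
print(stream_entropy(stream))

# ...but model the simplest temporal structure (adjacent pairs) and only
# ~0.81 bits per pair remain, instead of the ~3.62 you'd expect for two
# independent characters.
pairs = [stream[i:i + 2] for i in range(0, len(stream), 2)]
print(stream_entropy(pairs))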

We were discussing the idea that the human brain (and the mammalian brain
generally) has a reward mechanism for learning about the world, because
learning is expensive but its long-term benefits outweigh the cost. Jeff
mentioned that young men are particularly risk-seeking (i.e. they take
risks in the pursuit of new experiences). This would indicate that
evolution considers the gathering of some kinds of experience to be worth
the potential death of the organism, which is quite something if you think
about it. This would add the "selfish brain" to the idea of the "selfish
gene", as well as explaining why people can be so reckless at times.

I often feel like learning is addictive, as if I'm receiving an injection
of some very pleasant narcotic when I am learning something new and
interesting, or when I come up with a solution to a difficult challenge.
I'm sure that this is the reward mechanism we were discussing.

Regards,

Fergal Byrne




On Sat, Nov 16, 2013 at 2:14 AM, Thomas Macrina <[email protected]> wrote:

> In the office hours on Tuesday, Jeff mentioned there being a potential
> approach for system motivation using entropy. That made me look back at an
> article that Wissner-Gross & Freer published in April called "Causal
> Entropic Forces" that got some press for simulating adaptive behavior. It
> very much aligns with what Jeff mentioned, so I figured it'd be worth
> sharing.
>
> For anyone who didn't catch it, here's the paper
> <http://math.mit.edu/~freer/papers/PhysRevLett_110-168702.pdf>, a video
> of the simulations <http://www.entropica.com/>, and a decent,
> less-technical overview
> <http://davidruescas.com/2013/04/22/causal-entropy-maximization-and-intelligence/>.
>
> The gist: without any guidance, the authors were able to simulate agents
> setting and achieving goals, just using a "simple" model that had the
> agents moving towards states that afforded them the greatest future
> entropy. Or better still, the agents appeared intelligent because they were
> putting themselves in positions that gave them the most options.
>
> If you wanted to take a swing at implementing their model, the meat and
> potatoes is in the path integration of equation 11, but I think the
> trickiest part could be parametrizing the path temperature. Then it sounds
> like you have your agent run Monte Carlo simulations on potential futures
> at each timestep (Eron's "Imagine That" hack is ahead of the game).
>
> It could be a useful motivation engine within NuPIC, somewhere down the
> line. It feels like it sits at a slightly higher level of abstraction than
> the current neuron level, and it would definitely need to make use of the
> sensor-motor pathways. Nothing that this community can't handle.
>
> Tom
>
>
>
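
The path integral in the paper is heavy machinery, but the Monte Carlo
idea Tom describes above can be caricatured in a few lines of Python.
This is only a toy sketch, not the paper's method: the one-dimensional
box, the horizon, the sample counts and the uniform random rollouts are
all assumptions made here for illustration, and there is no path
temperature. Each candidate move is scored by the entropy of where
sampled futures end up, and the particle-in-a-box behaviour from the
video falls out: the agent drifts towards the middle, where it keeps the
most options open.

import random
from collections import Counter
from math import log2

LO, HI = 0, 20               # a 1-D "box"

def rollout(pos, horizon):
    """One random future trajectory from `pos`; returns its final state."""
    for _ in range(horizon):
        pos = max(LO, min(HI, pos + random.choice((-1, 0, 1))))
    return pos

def future_entropy(pos, horizon=10, samples=300):
    """Estimated Shannon entropy (bits) of reachable end states from `pos`."""
    ends = Counter(rollout(pos, horizon) for _ in range(samples))
    return -sum((c / samples) * log2(c / samples) for c in ends.values())

def step(pos):
    """Take whichever neighbouring move keeps the most futures open."""
    moves = {max(LO, min(HI, pos + a)) for a in (-1, 0, 1)}
    scores = {nxt: future_entropy(nxt) for nxt in moves}
    return max(scores, key=scores.get)

pos = LO                     # start pinned against a wall
for _ in range(15):
    pos = step(pos)
print(pos)                   # tends to end up near the open middle

Replacing the blind random walk with CLA predictions of likely futures is
roughly where the sensor-motor pathways Tom mentions would come in.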


-- 

Fergal Byrne, Brenter IT

http://inbits.com - Better Living through Thoughtful Technology

e:[email protected] t:+353 83 4214179
Formerly of Adnet [email protected] http://www.adnet.ie
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
