> > From: Tim Tyler [mailto:[email protected]]
> >
> > "Alex Wissner-Gross: A new equation for intelligence"
> >
> >  - https://www.youtube.com/watch?v=ue2ZEmTJ_Xo

On Mon, Feb 10, 2014 at 6:46 PM, Piaget Modeler
<[email protected]> wrote:
>
> I found it too vague.

I did too, and the Entropica website wasn't any help. It just has the
same video clip you saw on TED. However, I did find a more detailed
explanation at http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf

Unfortunately, if you were looking for the holy grail of AI, you can
keep looking. It doesn't shortcut the uncomputability of intelligence
proven by Hutter's AIXI model. In the entropic model, the idea is that
the optimal action of an intelligent agent is the one that maximizes
future entropy. Of course entropy in the algorithmic information
sense is not computable, because Kolmogorov complexity is not computable.
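To make the principle concrete, here is a toy sketch of it (my own
illustration, not the paper's actual causal-entropic-force algorithm):
an agent in a small 1-D world picks the action whose sampled future
state distribution has the highest Shannon entropy. All names and the
toy world are my assumptions.

```python
import math
import random
from collections import Counter

ACTIONS = (-1, +1)

def step(s, a):
    """Toy world: a 1-D walk on states 0..10 with clamping walls."""
    return min(10, max(0, s + a))

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def future_entropy(state, action, horizon=10, samples=200, rng=random):
    """Estimate the entropy of the state distribution reached after
    taking `action` and then wandering randomly for `horizon` steps."""
    outcomes = Counter()
    for _ in range(samples):
        s = step(state, action)
        for _ in range(horizon - 1):
            s = step(s, rng.choice(ACTIONS))
        outcomes[s] += 1
    return shannon_entropy(outcomes)

def best_action(state):
    """Pick the action that maximizes estimated future entropy."""
    return max(ACTIONS, key=lambda a: future_entropy(state, a))
```

Near a wall, moving away from it tends to win, since it keeps more
future states reachable — which is the intuition behind the principle.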

However it might still be a useful principle, in the same way that
Occam's Razor is useful to machine learning. We do know that
computation requires energy. In particular, erasing (overwriting) a bit
of memory decreases the information theoretic entropy of a computer's
state by up to 1 bit, and by Landauer's principle this requires
dissipating at least kT ln 2 of heat into the environment, where T is
the temperature and k is Boltzmann's constant. So it looks to me like
the principle is to
choose the action that maximizes expected future computation.
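For a sense of scale, the Landauer bound is easy to evaluate (the
function name is mine; the constant is the exact 2019 SI value):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact by SI definition)

def landauer_limit(T):
    """Minimum heat dissipated to erase one bit at temperature T (kelvin)."""
    return k_B * T * math.log(2)

# At room temperature (300 K) this comes to about 2.9e-21 joules per bit,
# far below what current hardware actually dissipates per bit operation.
print(landauer_limit(300))
```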

--
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now