BTW, Wissner-Gross will be giving one of the keynotes at AGI-14 in
Quebec City in early August... I encourage y'all to come argue with
him in person!!!

I don't think he's found the holy grail of AGI, but I do think his
observations are interesting... I think causal path entropy (or
something like it) would sensibly be included as one of the high-level
goals of an AGI system...
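
To make that a little more concrete, here's a toy sketch of the kind of
thing I mean -- purely my own illustration, nothing to do with the actual
Entropica code, and every name in it is made up: an agent that samples
random rollouts from each candidate action and picks the action whose
futures have the highest empirical entropy, i.e. the action that keeps
the most futures open...

import random
from collections import Counter
from math import log

def step(state, action):
    # Toy 1-D world: positions 0..20, walls at both ends, noisy motion.
    noise = random.choice([-1, 0, 1])
    return max(0, min(20, state + action + noise))

def rollout(state, action, horizon):
    # Take `action`, then wander randomly for the rest of the horizon.
    s = step(state, action)
    for _ in range(horizon - 1):
        s = step(s, random.choice([-1, 1]))
    return s

def empirical_entropy(samples):
    # Plug-in entropy estimate of the sampled end-state distribution.
    n = len(samples)
    return -sum(c / n * log(c / n) for c in Counter(samples).values())

def entropic_action(state, horizon=8, n_samples=500):
    # Pick the action whose sampled futures are most spread out.
    def score(a):
        return empirical_entropy([rollout(state, a, horizon)
                                  for _ in range(n_samples)])
    return max([-1, 1], key=score)

print(entropic_action(2))  # near the left wall this usually picks +1

Obviously a real system would need a learned world-model rather than a
given simulator, and this would sit alongside other goals rather than
replace them... but it shows the flavor.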

ben

On Sat, Feb 22, 2014 at 4:20 AM, Bill Hibbard <[email protected]> wrote:
> Yes, the paper at:
> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
> is more detailed and quite interesting.
>
> An interesting project would be to investigate
> the relation between this paper and AIXI. The
> paper includes probabilities of future histories,
> for a system interacting with an environment, in
> a new definition of entropy, called causal path
> entropy.
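>
> For reference, the paper's definition is roughly (in its notation):
> for a macrostate X and time horizon tau,
>
>   S_c(X, tau) = -k_B * Integral over paths x(t) of
>                  Pr(x(t) | x(0)) ln Pr(x(t) | x(0)) Dx(t),
>
> i.e. the Shannon entropy, scaled by Boltzmann's constant, of the
> distribution over all paths the system could take during the next
> tau seconds. The paper then defines a "causal entropic force"
> F(X_0, tau) = T_c * grad_X S_c(X, tau) evaluated at X_0, which pushes
> the system toward states from which more future paths stay reachable.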
>
> Probabilities of future histories for a system
> interacting with an environment play a major role
> in the definition of intelligence in AIXI. It
> would be interesting to see how close the relation
> is between causal path entropy and AIXI.
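>
> (For comparison, AIXI's action choice is, roughly, the expectimax
> expression
>
>   a_t = argmax_{a_t} sum_{o_t r_t} ... max_{a_m} sum_{o_m r_m}
>         [r_t + ... + r_m] * sum over programs q with
>         U(q, a_1..a_m) = o_1 r_1 .. o_m r_m of 2^(-length(q)),
>
> so both formalisms weight distributions over possible future
> histories; AIXI weights them by a universal prior over environment
> programs and by reward, while causal path entropy weights them only
> by their physical path probabilities.)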
>
> Bill
>
>
> On Fri, 21 Feb 2014, Matt Mahoney wrote:
>
>>>> From: Tim Tyler [mailto:[email protected]]
>>>>
>>>> "Alex Wissner-Gross: A new equation for intelligence"
>>>>
>>>>  - https://www.youtube.com/watch?v=ue2ZEmTJ_Xo
>>
>>
>> On Mon, Feb 10, 2014 at 6:46 PM, Piaget Modeler
>> <[email protected]> wrote:
>>>
>>>
>>> I found it too vague.
>>
>>
>> I did too, and the Entropica website wasn't any help. It just has the
>> same video clip you saw on TED. However, I did find a more detailed
>> explanation at
>> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
>>
>> Unfortunately, if you were looking for the holy grail of AI, you can
>> keep looking. It doesn't sidestep the incomputability that Hutter's
>> AIXI model establishes for ideal intelligence. In the entropic model,
>> the idea is that the optimal action of an intelligent agent is the
>> one that maximizes future entropy. Of course, entropy in the
>> information-theoretic sense is not computable, because it depends on
>> Kolmogorov complexity.
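>>
>> (Concretely: under Solomonoff's universal distribution M, the ideal
>> code length -log M(x) matches the Kolmogorov complexity K(x) up to
>> an additive constant, and K is uncomputable, so any entropy defined
>> with respect to M inherits that uncomputability.)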
>>
>> However, it might still be a useful principle, in the same way that
>> Occam's Razor is useful to machine learning. We do know that
>> computation requires energy. In particular, writing a bit of memory
>> decreases the information-theoretic entropy of a computer's state by
>> up to 1 bit, and therefore requires an increase of at least k ln 2 in
>> the entropy of the environment, i.e. a heat dissipation of at least
>> kT ln 2, where T is the temperature and k is Boltzmann's constant
>> (Landauer's principle). So it looks to me like the principle is to
>> choose the action that maximizes expected future computation.
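>>
>> For scale, at room temperature (T = 300 K) that works out to
>> kT ln 2 = (1.38e-23 J/K)(300 K)(0.693) ~ 2.9e-21 J per bit,
>> about 0.018 eV.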
>>
>> --
>> -- Matt Mahoney, [email protected]
>>
>>


-- 
Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane". -- Capt.
James T. Kirk

