Just a note that this is an example of a physics-first approach to AGI.
Is there any science that HASN'T been used as a starting point for AGI?
There really should be a science of starting points in AI.  To me it
should be metaphysics, but that is just me, I realize....
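For anyone who wants the actual formula: the equation in the talk, as I
recall it, is F = T ∇ S_tau - intelligence modeled as a "causal entropic
force" pushing toward states that keep the most futures accessible within
some time horizon tau.  Here is a minimal toy sketch of that "maximize
future options" idea (my own illustration, not Entropica's actual code):
a grid agent that picks whichever move leaves the largest number of
distinct states reachable within tau steps.

```python
# Toy sketch of "maximize future options" on a small grid with walls.
# This is an illustrative stand-in, NOT Entropica: the agent scores each
# candidate move by how many distinct cells remain reachable within a
# horizon of tau steps, and takes the move with the highest score.

def reachable(start, walls, size, tau):
    """Breadth-first count of distinct cells reachable within tau steps."""
    frontier = {start}
    seen = {start}
    for _ in range(tau):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                c = (x + dx, y + dy)
                if (0 <= c[0] < size and 0 <= c[1] < size
                        and c not in walls and c not in seen):
                    nxt.add(c)
        seen |= nxt
        frontier = nxt
    return len(seen)

def best_move(pos, walls, size, tau=5):
    """Choose the legal move whose successor keeps the most futures open."""
    moves = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        c = (pos[0] + dx, pos[1] + dy)
        if 0 <= c[0] < size and 0 <= c[1] < size and c not in walls:
            moves.append((reachable(c, walls, size, tau), c))
    return max(moves)[1] if moves else pos

# Walls turn the x-direction into a dead end, so from the corner the
# agent steps toward open space instead.
walls = {(2, 0), (1, 1)}
print(best_move((0, 0), walls, size=5))  # -> (0, 1)
```

Note that Tim's objection below applies to this sketch too: the agent
avoids dead ends, but nothing in it cares about winning anything.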

On 2/9/14, John Rose <[email protected]> wrote:
> Intelligence has heat and heat dissipates.
>
>
>
> Did I miss anything?
>
>
>
> Oh and Entropica can play pong with itself :-)
>
>
>
> John
>
>
>
> From: Tim Tyler [mailto:[email protected]]
> Sent: Sunday, February 9, 2014 3:51 PM
> To: AGI
> Subject: [agi] A new equation for intelligence?
>
>
>
> Here's Alex Wissner-Gross's TED presentation - on the
> "maximizing options" theory of intelligence and "Entropica".
>
> "Alex Wissner-Gross: A new equation for intelligence"
>
>  - https://www.youtube.com/watch?v=ue2ZEmTJ_Xo
>
> It seems as though this is a redefinition of intelligence.  Intelligence,
> conventionally, involves a broad-spectrum ability to achieve goals.
> Keeping your options open seems more like a common instrumental
> value.
>
> Go and chess playing are not, in fact, about "keeping your options
> open". They are all about winning the game.  If that involves
> eliminating future options and bringing the game to an end, so be it.
>
> I'm actually a fan of the idea of a strong link between entropy
> generation and intelligence.  Rather ironically, I see the link as
> going the other way - at least most of the time. As intelligent
> systems evolved, they have gotten better and better at seeking out
> energy gradients and dissipating them. That's why we developed
> nuclear fission, for instance.
>
> Such behaviour doesn't "keep future options open" - rather it
> accelerates universal heat death.  Intelligent systems might
> *sometimes* conserve energy resources - in the way that Wissner-Gross
> suggests - but it is much more common for them to burn through
> them and convert them into offspring - and in those few cases
> where resources are conserved, it is *usually* to burn through
> them only *slightly* later on.
>
> The "keeping your options open" theory of intelligence seems
> silly to me. I'm concerned that the real intelligence-entropy
> link will be polluted by association with this daft idea.
> --
> __________
>  |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.
>
>


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
