His equation might capture some general property of intelligence. We are already aware of the equivalences between information theory and thermodynamics, digital physics, etc.
I like the idea I proposed a while ago on this list: derive a unit of intelligence from the classical physics of energy and thermodynamics, and name it the Goertzel Unit. So 1 Goertzel, or 1 Gtzl, would be some derivation of power: watts in, intelligence out. Does intelligence equal efficiency, and therefore have no units? But, for example, x watts would be able to produce a maximum theoretical amount of intelligence, in Gtzls, giving an operating efficiency in Gtzls per watt (Gtzl/W). Somebody help me out here.

john

From: Aaron Hosford [mailto:[email protected]]
Sent: Wednesday, March 5, 2014 9:41 AM
To: AGI
Subject: Re: [agi] A new equation for intelligence?

What is the difference between maximizing one's own future options and seeking power? And what use is power but the ability to accomplish your own ends in your own time? Go and chess playing are not, in fact, about "keeping your options open". They are all about winning the game. If that involves eliminating future options and bringing the game to an end, so be it.

This is exactly what has been bothering me about this equation since I first heard about it. I wrote a Reversi (a.k.a. Othello) game-playing engine -- back before I heard of Wissner-Gross or his ideas -- which initially operated on the principle of maximizing future options, and later on the principle of maximizing its own future options while closing off the opponent's. It worked quite nicely for the first half of the game, dominating the board, but failed to close in on the win. I had to modify the value function to migrate gradually from keeping options open to closing options favorably as game play continued (see the sketch below, after Ben's message). It is no use keeping options open if you aren't going to take advantage of them when the time comes. And knowing when to do that is a whole different dimension of intelligence.

On Tue, Mar 4, 2014 at 7:09 PM, Robert Levy <[email protected]> wrote:

There's an interesting, maybe humorous, quasi-paradox in the idea of settling on AWG's equation as the defining principle of intelligence. If you disagree, or find it unlikely that it is the kernel of intelligence yet still find it useful, you are applying it recursively: you keep it around as a potentially useful element to consider in various contexts, but keep your options open, looking for other powerful and elegant principles. On the other hand, someone who is very strongly convinced it is the ultimate principle should reflect on the possibility that this is not a very intelligent commitment to make, as it could introduce harmful path dependencies that block the discovery of more compelling insights into computational intelligence. The other extreme is never committing to any leads, which is unintelligent in a different way: it runs counter to any kind of useful curiosity (never pursuing one interest to the exclusion of others), and counter to the pragmatic sensibility of knowing when and where to apply effort to worthwhile pursuits.

On Fri, Feb 21, 2014 at 5:56 PM, Ben Goertzel <[email protected]> wrote:

BTW, Wissner-Gross will be giving one of the keynotes at AGI-14 in Quebec City in early August... I encourage y'all to come argue with him in person!!! I don't think he's found the holy grail of AGI, but I do think his observations are interesting... I think causal path entropy (or something like it) would sensibly be included as one of the high-level goals of an AGI system...

ben
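For concreteness, a minimal sketch of the phase-blended value function Aaron describes -- mobility early, material late -- follows. This is an illustrative reconstruction, not his actual engine: the board representation, the linear blending schedule, and the equal weighting of the two terms are all assumptions.

    # Phase-blended Reversi evaluation: reward option-keeping (mobility)
    # early in the game, and option-closing (disc count) late.
    # Boards are 8x8 lists of lists holding 'B', 'W', or None.

    DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
                  (0, -1),           (0, 1),
                  (1, -1),  (1, 0),  (1, 1)]

    def legal_moves(board, player, opponent):
        """Empty squares that flank at least one run of opponent discs
        terminated by one of `player`'s discs (the Reversi move rule)."""
        moves = set()
        for r in range(8):
            for c in range(8):
                if board[r][c] is not None:
                    continue
                for dr, dc in DIRECTIONS:
                    rr, cc, seen_opponent = r + dr, c + dc, False
                    while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == opponent:
                        rr, cc, seen_opponent = rr + dr, cc + dc, True
                    if seen_opponent and 0 <= rr < 8 and 0 <= cc < 8 \
                            and board[rr][cc] == player:
                        moves.add((r, c))
                        break
        return moves

    def evaluate(board, player, opponent):
        """Blend mobility and material by phase (0.0 = opening, 1.0 = full board)."""
        discs = sum(sq is not None for row in board for sq in row)
        phase = (discs - 4) / 60.0
        mobility = (len(legal_moves(board, player, opponent))
                    - len(legal_moves(board, opponent, player)))
        material = sum(row.count(player) - row.count(opponent) for row in board)
        return (1.0 - phase) * mobility + phase * material

    if __name__ == "__main__":
        board = [[None] * 8 for _ in range(8)]
        board[3][3], board[4][4] = 'W', 'W'   # standard opening position
        board[3][4], board[4][3] = 'B', 'B'
        print(evaluate(board, 'B', 'W'))      # 0.0 -- the opening is symmetric

The exact schedule matters less than the monotone shift from counting options to counting the quantity the game is actually scored on.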
On Sat, Feb 22, 2014 at 4:20 AM, Bill Hibbard <[email protected]> wrote:
> Yes, the paper at:
> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
> is more detailed and quite interesting.
>
> An interesting project would be to investigate
> the relation between this paper and AIXI. The
> paper includes probabilities of future histories,
> for a system interacting with an environment, in
> a new definition of entropy, called causal path
> entropy.
>
> Probabilities of future histories for a system
> interacting with an environment play a major role
> in the definition of intelligence in AIXI. It
> would be interesting to see how close the relation
> is between causal path entropy and AIXI.
>
> Bill
>
> On Fri, 21 Feb 2014, Matt Mahoney wrote:
>
>>>> From: Tim Tyler [mailto:[email protected]]
>>>>
>>>> "Alex Wissner-Gross: A new equation for intelligence"
>>>>
>>>> - https://www.youtube.com/watch?v=ue2ZEmTJ_Xo
>>
>> On Mon, Feb 10, 2014 at 6:46 PM, Piaget Modeler
>> <[email protected]> wrote:
>>>
>>> I found it too vague.
>>
>> I did too, and the Entropica website wasn't any help. It just has the
>> same video clip you saw on TED. However, I did find a more detailed
>> explanation at
>> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
>>
>> Unfortunately, if you were looking for the holy grail of AI, you can
>> keep looking. It doesn't shortcut the uncomputability of intelligence
>> proven by Hutter's AIXI model. In the entropic model, the idea is that
>> the optimal action of an intelligent agent is the one that maximizes
>> future entropy. Of course, entropy in the information-theoretic sense
>> is not computable, because it depends on Kolmogorov complexity.
>>
>> However, it might still be a useful principle, in the same way that
>> Occam's Razor is useful to machine learning. We do know that
>> computation requires energy. In particular, writing a bit of memory
>> decreases the information-theoretic entropy of a computer's state by
>> up to 1 bit, and therefore requires a corresponding increase in the
>> entropy of the environment of kT ln 2, where T is the temperature and k
>> is Boltzmann's constant. So it looks to me like the principle is to
>> choose the action that maximizes expected future computation.
>>
>> -- Matt Mahoney, [email protected]

-- Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane." -- Capt. James T. Kirk
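To put numbers on Matt's kT ln 2 bound -- and, loosely, on John's Gtzl/W question -- here is the arithmetic at room temperature. Treating "intelligence output" as ideal bit operations is an assumption made purely for illustration:

    import math

    k = 1.380649e-23          # Boltzmann's constant, J/K
    T = 300.0                 # room temperature, K

    # Landauer's limit: minimum energy dissipated per bit erased.
    energy_per_bit = k * T * math.log(2)    # ~2.87e-21 J
    bits_per_watt = 1.0 / energy_per_bit    # ~3.5e20 erasures/s per watt

    print(f"Energy per bit erased: {energy_per_bit:.3e} J")
    print(f"Ideal bit erasures per second per watt: {bits_per_watt:.3e}")

So a one-watt computer at 300 K could erase at most about 3.5 x 10^20 bits per second; any physically grounded Gtzl/W figure would be capped by something of this form, and real hardware falls many orders of magnitude short of it.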
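For reference, the quantity Ben, Bill, and Matt are discussing is defined in the linked PRL paper roughly as follows (reconstructed from the paper; consult the original for the precise conditions): the causal path entropy of a macrostate X over a time horizon tau, and the causal entropic force derived from it.

    % Causal path entropy of macrostate X over horizon \tau
    S_c(\mathbf{X}, \tau) = -k_B \int_{\mathbf{x}(t)}
        \Pr\bigl(\mathbf{x}(t) \mid \mathbf{x}(0)\bigr)
        \ln \Pr\bigl(\mathbf{x}(t) \mid \mathbf{x}(0)\bigr)
        \,\mathcal{D}\mathbf{x}(t)

    % Causal entropic force: a push toward states with the most diverse
    % reachable futures, with T_c a "causal path temperature" parameter
    \mathbf{F}(\mathbf{X}_0, \tau) =
        T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X} = \mathbf{X}_0}

This gives Bill's proposed comparison a concrete form: AIXI likewise sums over probabilities of future histories, but weights them by expected reward under a universal prior, whereas the causal entropic force weights futures only by their diversity.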
