Sometimes you have to talk these things out PM... using causal entropic
forces in complex environments, seeking goals using propositional logic.
Sometimes the complexity is only upper semi-computable, and you have to put
a lot of energy resources into it; otherwise the learning decelerates and
you hit that combinatorial explosion too soon.

 

John

 

From: Piaget Modeler [mailto:[email protected]] 
Sent: Wednesday, March 5, 2014 12:23 PM
To: AGI
Subject: RE: [agi] A new equation for intelligence?

 

Touché.  

 

On second thought, let's not touch that. 

Hands off that topic. 

~PM

  _____  

From: [email protected]
To: [email protected]
Subject: RE: [agi] A new equation for intelligence?
Date: Wed, 5 Mar 2014 12:07:51 -0500

Yes, being a wise man, he uses a raincoat with his causal entropic force.

 

John

 

From: Piaget Modeler [mailto:[email protected]] 
Sent: Wednesday, March 5, 2014 11:54 AM
To: AGI
Subject: RE: [agi] A new equation for intelligence?

 

Does this imply that Wissner-Gross is unmarried with no children? 

 

~PM

  _____  

Date: Wed, 5 Mar 2014 16:01:24 +0100
Subject: Re: [agi] A new equation for intelligence?
From: [email protected]
To: [email protected]

Consider the exploration vs. exploitation dilemma:

 

A wise man knows that he knows very little of nature, so he knows that he
has to keep exploring. At some point, he will realize that nature is simply
too big, so the best he can do is avoid getting stuck in his ignorance;
hence, he just tries to maximize the number of possible future options.

 

Something like that. 

:-)
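
(To make the metaphor concrete -- this is just my own toy sketch, nothing
from AWG's paper: an epsilon-greedy bandit agent that never stops exploring
because it assumes its value estimates cover only a sliver of the
environment.)

import random

# Toy sketch (hypothetical, not from the paper): epsilon-greedy on a k-armed
# bandit. A fixed exploration rate encodes the "wise man's" assumption that
# his estimates never cover more than a sliver of the environment.
def epsilon_greedy(true_means, steps=10000, epsilon=0.1):
    k = len(true_means)
    estimates = [0.0] * k  # running mean reward per arm
    counts = [0] * k
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(k)  # explore: keep future options alive
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / steps

print(epsilon_greedy([0.2, 0.5, 0.9]))  # approaches ~0.9, minus an exploration tax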

 

 

 

On Wed, Mar 5, 2014 at 3:41 PM, Aaron Hosford <[email protected]> wrote:

What is the difference between maximizing one's own future options and
seeking power? And what use is power, but the ability to accomplish your own
ends in your own time?

 

Go and chess playing are not, in fact, about "keeping your options
open". They are all about winning the game.  If that involves
eliminating future options and bringing the game to an end, so be it.

 

This is exactly what has been bothering me about this equation since I first
heard about it. I wrote a Reversi (a.k.a. Othello) game-playing engine --
back before I had heard of Wissner-Gross or his ideas -- which initially
operated on the principle of maximizing future options, and later on the
principle of maximizing its own future options while closing off the
opponent's. It worked quite nicely for the first half of the game,
dominating the board, but failed to close in on the win. I had to modify the
value function to migrate gradually from keeping options open to closing
options favorably as game play continued. It is no use keeping options open
if you aren't going to take advantage of them when the time comes. And
knowing when to do that is a whole different dimension of intelligence.
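
A minimal sketch of that kind of phase-dependent evaluation (illustrative
only, not the engine described above; legal_moves stands in for whatever
move generator you have):

# Minimal sketch of a phase-dependent Reversi evaluation (illustrative only).
# `legal_moves(board, player)` is an assumed move generator returning that
# player's legal moves; board is an 8x8 grid holding me/opponent/None.
def evaluate(board, me, opponent, legal_moves):
    my_moves = len(legal_moves(board, me))
    opp_moves = len(legal_moves(board, opponent))
    my_discs = sum(row.count(me) for row in board)
    opp_discs = sum(row.count(opponent) for row in board)

    phase = (my_discs + opp_discs) / 64.0  # 0.0 = opening, 1.0 = board full

    mobility = my_moves - opp_moves  # keep my options open, close off theirs
    material = my_discs - opp_discs  # convert options into a favorable close

    # Migrate smoothly from option-keeping early to option-closing late.
    return (1.0 - phase) * mobility + phase * material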

 

On Tue, Mar 4, 2014 at 7:09 PM, Robert Levy <[email protected]> wrote:

There's an interesting and maybe humorous quasi-paradox in the idea of
settling on AWG's equation as the defining principle of intelligence. If you
disagree, or find it unlikely that it is the kernel of intelligence but
nevertheless find it useful, you are applying it recursively: you keep it
around as a potentially useful element to consider in various contexts while
keeping your options open, looking for other powerful/elegant principles. On
the other hand, someone who is very strongly convinced it is the ultimate
principle should reflect on the possibility that this is not a very
intelligent commitment to make, since it could introduce harmful path
dependencies that block the discovery of more compelling insights into
computational intelligence. The other extreme, never committing to any
leads, is unintelligent in a different way: it runs counter to any kind of
useful curiosity (never pursuing an interest to the exclusion of others) and
to the pragmatic sense of knowing when and where to apply effort to
worthwhile pursuits.

 

On Fri, Feb 21, 2014 at 5:56 PM, Ben Goertzel <[email protected]> wrote:

BTW, Wissner-Gross will be giving one of the keynotes at AGI-14 in
Quebec City in early August... I encourage y'all to come argue with
him in person!!!

I don't think he's found the holy grail of AGI, but I do think his
observations are interesting... I think causal path entropy (or
something like it) would sensibly be included as one of the high-level
goals of an AGI system...

ben


On Sat, Feb 22, 2014 at 4:20 AM, Bill Hibbard <[email protected]> wrote:
> Yes, the paper at:
> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
> is more detailed and quite interesting.
>
> An interesting project would be to investigate
> the relation between this paper and AIXI. The
> paper includes probabilities of future histories,
> for a system interacting with an environment, in
> a new definition of entropy, called causal path
> entropy.
>
> Probabilities of future histories for a system
> interacting with an environment play a major role
> in the definition of intelligence in AIXI. It
> would be interesting to see how close the relation
> is between causal path entropy and AIXI.
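
(For reference, the AIXI-side object Bill is pointing at is, in Legg and
Hutter's notation, the universal intelligence measure

   Upsilon(pi) = sum over environments mu of 2^(-K(mu)) * V_mu^pi

where K(mu) is the Kolmogorov complexity of environment mu and V_mu^pi is
agent pi's expected total reward in mu -- the probabilities of future
histories enter through V_mu^pi.)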
>
> Bill
>
>
> On Fri, 21 Feb 2014, Matt Mahoney wrote:
>
>>>> From: Tim Tyler [mailto:[email protected]]
>>>>
>>>> "Alex Wissner-Gross: A new equation for intelligence"
>>>>
>>>>  - https://www.youtube.com/watch?v=ue2ZEmTJ_Xo
>>
>>
>> On Mon, Feb 10, 2014 at 6:46 PM, Piaget Modeler
>> <[email protected]> wrote:
>>>
>>>
>>> I found it too vague.
>>
>>
>> I did too, and the Entropica website wasn't any help. It just has the
>> same video clip you saw on TED. However, I did find a more detailed
>> explanation at
>> http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
>>
>> Unfortunately, if you were looking for the holy grail of AI, you can
>> keep looking. It doesn't shortcut the uncomputability of intelligence
>> proven by Hutter's AIXI model. In the entropic model, the idea is that
>> the optimal action of an intelligent agent is the one that maximizes
>> future entropy. Of course entropy in the information theoretic sense
>> is not computable because it depends on Kolmogorov complexity.
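
(For reference, the paper's definitions -- causal path entropy over a
horizon tau, and the force that drives the agent up its gradient:

   S_c(X, tau) = -k_B * Integral over paths x(t) of Pr(x(t)|x(0)) ln Pr(x(t)|x(0)) Dx(t)
   F(X_0, tau) = T_c * grad_X S_c(X, tau), evaluated at X = X_0

where T_c is a "causal path temperature" parameter setting the strength of
the force.)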
>>
>> However, it might still be a useful principle, in the same way that
>> Occam's Razor is useful to machine learning. We do know that
>> computation requires energy. In particular, writing a bit of memory
>> decreases the information theoretic entropy of a computer's state by
>> up to 1 bit, and therefore requires a corresponding increase in
>> entropy of the environment of kT ln 2 where T is the temperature and k
>> is Boltzmann's constant. So it looks to me like the principle is to
>> choose the action that maximizes expected future computation.
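
(A back-of-the-envelope check of that kT ln 2 figure -- my own sketch, not
from Matt's post:)

import math

# Landauer bound: erasing (writing) one bit dissipates at least kT ln 2
# of heat into the environment.
k = 1.380649e-23  # Boltzmann's constant, J/K
T = 300.0         # room temperature, K

energy_per_bit = k * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {energy_per_bit:.3e} J/bit")  # ~2.87e-21 J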
>>
>> --
>> -- Matt Mahoney, [email protected]



--

Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane". -- Capt.
James T. Kirk







 



