Ben,

Yes, the iteration you mention is important... so at each stage in its
life, the AGI is doing Occam's Razor according to the "programming
language" implicit in what it's learned so far...

"Stage of life" is kind of coarse, I meant a level of generalization, with great many present simultaneously & data passing though each in microseconds.

And it may be that the "base", the initial measure of simplicity used
at the start of this iteration, is not all that relevant to the end
result -- so long as the base is something reasonable, and not totally
out of synch with the environment, goals and architecture of the
system...

I'd say a measure of *complexity*; simplicity is a reduction thereof. And that measure should be consistent across all levels, to allow global resource allocation.

However, that doesn't eliminate the
theoretical/mathematical/philosophical question I posted in that blog
post...

I think it does: you measure complexity as data + syntax, & simplicity as the difference between initial & compressed complexity. Of course, there are also Schmidhuber's time (speed) & space (memory) components of complexity, but I deal with that by averaging the past productivity of each component. I then use those averages as a measure of "opportunity cost" for the data & syntax components of complexity.
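To make that bookkeeping concrete, here is a toy sketch in Python. It uses zlib purely as a stand-in compressor, and the function names (`complexity`, `simplicity`) are illustrative labels for the quantities described above, not from any actual system:

```python
# Toy sketch: complexity = data + syntax; simplicity = initial
# complexity minus compressed complexity. zlib is only a crude
# stand-in for a learned compressor / syntax.
import zlib

def complexity(data: bytes, syntax: bytes) -> int:
    # Total cost is the data itself plus the "syntax" needed to
    # interpret (decompress) it.
    return len(data) + len(syntax)

def simplicity(data: bytes, syntax: bytes = b"") -> int:
    # Simplicity as the reduction achieved by compression; any
    # learned syntax would count against the gain.
    compressed = zlib.compress(data)
    return complexity(data, b"") - complexity(compressed, syntax)

# A highly regular integer sequence (as raw bytes) compresses well,
# so its simplicity is large and positive.
seq = bytes(i % 8 for i in range(1000))
print(simplicity(seq))
```

A random sequence would score near zero (or negative, once syntax overhead is charged), which is the intended behavior of the measure.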

OpenCog already embodies its own choice regarding the base language for measuring
simplicity, the Atom language...

That's not basic enough. AtomSpace already assumes some initially known structure in the data (hypergraphs), while I think the most basic data should be simply a sequence of integers.

However, I strongly suspect we can succeed at building powerful AGI
without needing to have a good general theory of general intelligence
beforehand..

I think that's wishful thinking :). Well, except for slavishly copying the brain, but neither of us is doing that.


--------------------------------------------------
From: "Ben Goertzel" <[email protected]>
Sent: Monday, August 27, 2012 1:18 PM
To: "AGI" <[email protected]>
Subject: Re: [agi] Finding the "Right" Computational Model to Support Occam's Razor

Yes, the iteration you mention is important... so at each stage in its
life, the AGI is doing Occam's Razor according to the "programming
language" implicit in what it's learned so far...

This is a matter of learning "computational models / programming
languages" on the fly, that are specifically adapted to the
environment of a given intelligent system, and the internal structures
emergent within that system as it learns/grows/develops...

And it may be that the "base", the initial measure of simplicity used
at the start of this iteration, is not all that relevant to the end
result -- so long as the base is something reasonable, and not totally
out of synch with the environment, goals and architecture of the
system...

However, that doesn't eliminate the
theoretical/mathematical/philosophical question I posted in that blog
post...

Please note: I don't think that theoretical/mathematical/philosophical
question  needs to be solved in order to create AGI.  OpenCog already
embodies its own choice regarding the base language for measuring
simplicity, the Atom language...

I just think it's an interesting question, anyway...

And it may be relevant ultimately to creating a good general theory of
general intelligence, which we lack now...

However, I strongly suspect we can succeed at building powerful AGI
without needing to have a good general theory of general intelligence
beforehand..

ben


On Mon, Aug 27, 2012 at 12:37 PM, Boris Kazachenko <[email protected]> wrote:
Just some speculations about possible theoretical computer science I'd
do if I had the time ;p


http://multiverseaccordingtoben.blogspot.com/2012/08/finding-right-computational-model-to.html


Maybe you should spend your time more wisely :).

All this confusion results from using languages designed for irrelevant
tasks. For GI, the language must be generated by the compressive Occam's
Razor algorithm itself, with incremental syntax produced by its past
iterations. Notice the POV difference: you don't have some fixed & final
complexity for GI to reduce. Rather, you go through an indefinite number of
complexity accumulation / compression cycles. In my terms, that's a
current-level search / next-level evaluation cycle, iterated for as long as
you keep accumulating data.
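A minimal sketch of that accumulation / compression cycle, again with zlib as a crude stand-in compressor (the function name and stopping rule are illustrative assumptions, not a description of any real implementation):

```python
# Toy sketch: each level compresses the previous level's output,
# and the evaluation at the next level is simply whether the
# compression produced a gain. Real hierarchical generalization
# would re-encode with a learned syntax, not re-run zlib.
import zlib

def accumulate_compress(data: bytes, max_levels: int = 5) -> list[int]:
    sizes = [len(data)]              # complexity before any compression
    for _ in range(max_levels):
        data = zlib.compress(data)   # current-level compression
        sizes.append(len(data))      # next-level evaluation of the gain
        if sizes[-1] >= sizes[-2]:   # no further gain: stop adding levels
            break
    return sizes

# Redundant input shrinks sharply at the first level, then the
# cycle halts once no further compression is achievable.
print(accumulate_compress(bytes(1000)))
```

The point of the sketch is only the control flow: levels are added for as long as compression keeps paying, not fixed in advance.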


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-11ac2389
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



--
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche





