Kolmogorov complexity is always measured relative to an implicit descriptive language. If we order algorithms by their Kolmogorov complexity relative to one descriptive language, switching to a different language can completely reverse that ordering, or produce essentially any ordering. (The invariance theorem only guarantees that the complexities assigned by two universal languages differ by an additive constant, and that constant can be large enough to rearrange the ordering of any finite set of strings.) K-complexity is thus not measurable without specifying the descriptive language. (Compression algorithms take advantage of this by mapping the data into a new language, their output format, in which its description is much shorter.)
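As a loose illustration of the language-dependence point (using off-the-shelf compressors as stand-ins for descriptive languages; the particular strings and compressors are just for demonstration, and compressed size is only an upper bound on K-complexity, not the true value):

```python
import bz2
import hashlib
import zlib

def description_length(data: bytes, compressor) -> int:
    """Upper bound on the K-complexity of `data` relative to the
    'language' defined by the compressor's output format."""
    return len(compressor(data))

# A highly structured string: it has a short description in any
# reasonable language ("repeat 'abc' 1000 times").
structured = b"abc" * 1000

# A pseudorandom-looking string of the same length, built
# deterministically by chaining SHA-256, so it has little
# byte-level structure for a compressor to exploit.
chunks, h = [], b"seed"
while sum(len(c) for c in chunks) < 3000:
    h = hashlib.sha256(h).digest()
    chunks.append(h)
random_like = b"".join(chunks)[:3000]

for name, comp in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
    print(name,
          description_length(structured, comp),
          description_length(random_like, comp))
```

Both compressors shrink the structured string far below its raw 3000 bytes while leaving the pseudorandom one essentially uncompressed, and the two "languages" assign the same string different description lengths, which is exactly the dependence on the choice of language described above.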
What language are we talking about? C++? Lisp? The "language" of physical reality? Physical reality seems the most natural "language," but there are many different ways to encode the same behavior in a physical system (take quasiparticles, for example). Even if we could say that the human brain has found a local optimum in the space of descriptions available in physical reality's "language" (a dubious claim), there is nothing to say that a significantly more concise, globally optimal description isn't available. This, of course, says nothing about the difficulty of actually *finding* such an encoding. I think we're best off assuming we have no idea whatsoever how complex GI really is, or how hard it would be to match or one-up the one known (and poorly understood) example encoding we have, outside of direct simulation.

On Sun, Dec 9, 2012 at 8:32 AM, John G. Rose <[email protected]> wrote:

> > -----Original Message-----
> > From: Matt Mahoney [mailto:[email protected]]
> >
> > Here is a draft of a paper I am working on. I would appreciate any
> > comments. You might find the content somewhat controversial.
> >
>
> This is a big statement:
>
> "AI requires both a brain and a body. Therefore, we should expect its
> algorithmic (Kolmogorov) complexity to be similar to that of a human."
>
> Though the word "similar" leaves much open for interpretation.
>
> I don't know if anyone knows anywhere near what the minimal K-complexity
> is for running general intelligence. We know the approximate minimal
> known K-complexity for human intelligence, and that is from us.
>
> Some people think the minimal K-complexity for GI is quite a bit smaller
> than that of a human's. I would think it would have to be... But then
> there is the starting K-complexity, tabula rasa, at human birth, before
> it gets filled with information... similar to AGI, I suppose. Though some
> AGI designs imply general intelligence being achieved after it runs for a
> while, not at the get-go.
>
> John
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
