On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney <matmaho...@yahoo.com> wrote:
> Mike,
>
> Your own thought processes only seem mysterious because you can't predict
> what you will think without actually thinking it. It's not just a property
> of the human brain, but of all Turing machines. No program can non-trivially
> model itself. (By "model" I mean that P models Q if, for any input x, P can
> compute the output Q(x). By "non-trivial" I mean that P does something else
> besides just model Q; every program trivially models itself.) The proof is
> that for P to non-trivially model Q requires K(P) > K(Q), where K is
> Kolmogorov complexity, because P needs a description of Q plus whatever else
> it does to make it non-trivial. It is obviously not possible for K(P) > K(P).
>
Matt, please stop. I once constructed an explicit counterexample to this
pseudomathematical assertion of yours. You don't pay enough attention to
formal definitions: what "has a description" means here, and relative to
which reference TM each of these Kolmogorov complexities is measured.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
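[Editor's note: for readers following the exchange, the standard objection in this area can be illustrated with a quine. By Kleene's recursion theorem, a program can obtain a complete description of its own source "for free" through self-reference, so the step "P needs a description of Q plus something extra, hence K(P) > K(Q)" does not go through when Q = P. The sketch below is an illustration of that general point, not the specific counterexample mentioned in the thread; the names `own_source` and `something_else` are invented for the example.]

```python
# Quine-style self-reference: this program carries a template of its own
# source, reconstructs the complete program text, AND does unrelated work.
# The self-description is obtained by the quine trick, not by spending
# "K(P) extra bits on top of K(P)".
template = 'template = %r\n\ndef own_source():\n    return template %% template\n\ndef something_else(x):\n    return x + 1\n'

def own_source():
    # The classic quine move: substitute the template's own repr() into
    # itself, yielding a complete, runnable copy of this program.
    return template % template

def something_else(x):
    # The "something else besides just modeling itself": any unrelated work.
    return x + 1

if __name__ == "__main__":
    # Executing the emitted text reproduces the same behavior, fixed-point
    # style: the copy emits the same source and computes the same function.
    ns = {}
    exec(own_source(), ns)
    assert ns['own_source']() == own_source()
    assert ns['something_else'](41) == 42
    print("self-description reproduced exactly")
```

Whether this actually refutes the quoted argument turns on the formal definitions Vladimir points to (what "contains a description" means, and the choice of reference machine); the sketch only shows that naive counting of description lengths does not forbid a program from having full access to its own source.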