Hi Abram, I believe the key point of the paper is:
"...human minds develop through time over generations, they invent new concepts and techniques, which in turn allow previously resistant problems to be solved. There seems to be no upward bound whatsoever to this ascension."

This is captured in step 3 of the algorithm at the bottom of page 8. Step 3 requires human society to invent new concepts and techniques, and thereby to perform hypercomputation. I don't think a computable nonmonotonic logic really solves this problem. Nonmonotonic logics allow jumping to conclusions within a fixed axiomatization or conceptual structure; Bringsjord's paper, however, requires the development of entirely new conceptual structures as part of the problem-solving process itself.

My own interpretation of the work is that an individual person is no more powerful than a Turing machine (though this point isn't discussed in the paper), but that society as a whole is capable of hypercomputation because we can keep drawing on more resources to solve a problem: we build machines, we reproduce, we interact with and record our thoughts in the environment. Effectively, society as a whole becomes somewhat like a Zeus machine: faster and more complex with each moment. I think a Turing-computable AGI could fit within such a society (or a similar all-AGI society) and thereby achieve the same kind of hypercomputation.

(I should add that I'm not opposed to nonmonotonic logics, nor am I attempting to contradict your claim that they're a valid way to AGI; I'm merely commenting that I don't think the paper you present supports your ideas about nonmonotonic logics.)

-Ben

-----Original Message-----
From: Abram Demski [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 17 June 2008 7:34 AM
To: [email protected]
Subject: [agi] the uncomputable

I previously posted here claiming that the human mind (and therefore an ideal AGI) entertains uncomputable models, counter to the AIXI/Solomonoff model.
There was little enthusiasm about this idea. :) Anyway, I hope I'm not being too annoying if I try to argue the point once again. This paper also argues the point: http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf

The paper includes a study of the uncomputable "busy beaver" function up to x=6. The authors claim that their success at computing busy beaver values strongly suggests that humans can hypercompute. I believe the authors take this to imply that AGI cannot succeed on current hardware; I am not suggesting this. Instead, I offer a fairly concrete way to make deductions using a restricted class of uncomputable models, as an illustration of the idea (and as weak evidence that the general case can be embodied on computers).

The method is essentially nonmonotonic logic. Computable predicates can be represented in any normal way (first-order logic, lambda calculus, a standard programming language...). Computably enumerable predicates (such as the halting problem) are represented by a default assumption of "false", plus the computable method of enumerating the true cases. To reason about such a predicate, the system allocates however much time it can spare to trying to prove a case true; if at the end of that time it has not found a proof by the enumeration method, it considers the case false. (Of course, it could come back later and try harder, too.) Co-enumerable predicates are similarly assumed true until a counterexample is found.

Similar methodology can extend the class of uncomputables we can handle somewhat further. Consider the predicate "all Turing machines of class N halt", where N is a computably enumerable class. Neither the true cases nor the false cases of this predicate are computably enumerable. Nonetheless, we can characterize the predicate by assuming it is true until a counterexample is "found": a Turing machine that doesn't seem to halt when run as long as we can afford to run it.
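To make the default-false scheme for enumerable predicates concrete, here is a minimal sketch. Nothing in it comes from the message itself: the toy instruction set and all names are invented for illustration, and the "enumeration method" for the halting predicate is simply running the program for a bounded number of steps.

```python
# Sketch of nonmonotonic reasoning about an enumerable predicate,
# "program p halts": the default judgment is False, and running p is
# the enumeration procedure that can prove True. The toy interpreter
# and instruction set are invented purely for this illustration.

def halted_within(program, budget):
    """Interpret a toy program for at most `budget` steps.
    Instructions: ("halt",) stops; ("jmp", target) jumps.
    Returns True iff the program halted within the budget."""
    pc = 0
    for _ in range(budget):
        if pc >= len(program) or program[pc][0] == "halt":
            return True          # halted: predicate proved true
        if program[pc][0] == "jmp":
            pc = program[pc][1]
        else:
            pc += 1
    return False                 # no proof found within the budget

def believes_halts(program, budget=1000):
    """Nonmonotonic judgment: 'halts' defaults to False and is upgraded
    to True only if a bounded run finds a halting proof. A later call
    with a larger budget may revise the conclusion."""
    return halted_within(program, budget)

looper = [("jmp", 0)]   # toy program that runs forever
stopper = [("halt",)]   # toy program that halts immediately

print(believes_halts(stopper))  # True: proof found by enumeration
print(believes_halts(looper))   # False: default assumption stands
```

The key property is that the conclusion for `looper` is defeasible: a bigger budget never changes a True verdict, but a False verdict is only "false so far".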
If our best efforts (within time constraints) fail to find such a machine, then we stick with the default assumption "true". (A simplistic nonmonotonic logic can't quite handle this: at any stage of the search, we would have many Turing machines still at their default status of "non-halting", which would make the predicate seem always false; we need to admit only assumptions that have been "hardened" by trying to disprove them for some amount of time.)

This may sound "so easy that a Turing machine could do it". And it is, for set cutoffs. But the point is that an AGI that only considered computable models, such as AIXI, would never converge to the correct model in a world that contained anything uncomputable, whereas it seems a human could. (AIXI would find Turing machines that were ever closer to the right model, but could never see that there was a simple pattern behind these ever-larger machines.)

I hope that this makes my claim sound less extreme. Uncomputable models of the world are not really so hard to reason about, if we're comfortable with a logic that makes probably-true conclusions rather than definitely-true ones.

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
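The "hardened default" fix for the universal predicate described above can also be sketched. This is my own illustration, not anything from the message: each machine is modelled simply by the number of steps it takes to halt (or None for a machine that never halts), and a machine counts as a counterexample to "all machines in class N halt" only after it has survived a substantial hardening budget.

```python
# Sketch of the hardened-default judgment for "all Turing machines of
# class N halt". Modelling a machine by its halting step count (None =
# never halts) is a stand-in chosen for this sketch; all names are mine.

def all_halt(machines, hardening_budget):
    """Default the universal claim to True; withdraw it only on a
    *hardened* counterexample, i.e. a machine that still has not halted
    after hardening_budget steps of effort. Machines we have not yet
    tried hard to disprove do not count against the claim -- that is
    the fix to the naive nonmonotonic treatment."""
    for steps in machines:
        survived_hardening = steps is None or steps > hardening_budget
        if survived_hardening:
            return False  # hardened counterexample: belief revised
    return True           # default stands: "all halt"

print(all_halt([3, 10, 7], hardening_budget=100))    # True
print(all_halt([3, None, 7], hardening_budget=100))  # False
```

A larger hardening budget makes the judgment more reliable but never certain, which is exactly the probably-true-rather-than-definitely-true flavor of conclusion the message argues for.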
