Mike Tintner wrote:
> Ben wrote:
>> I didn't read that book, but I've read dozens of his papers ... it's cool stuff, but it does not convince me that engineering AGI is impossible ... however, when I debated this with Stu F2F, I'd say neither of us convinced the other ;-)
>
> Ben, his argument (like mine) is that AGI is *algorithmically* impossible (similarly, he argues only that our *present* mechanistic worldview is inadequate). I can't vouch for this, since he doesn't explicitly address AGI as distinct from the powers of algorithms, but I would be very surprised if he were arguing that AGI is impossible, period (no?). I would have thought he would argue something like this: just as we need a revolutionary new mechanistic worldview, so we need a revolutionary approach to AGI (and not just a few tweaks :) ).
I would go both further and not as far.
Math clearly states that to derive all the possible truths from a formal system as strong as number theory requires an infinite number of axioms. Each axiom is, in effect, a choice, and making infinitely many of them is clearly impossible. To me this implies (but does not prove) that there are an infinite number of possible futures descending from any precisely defined state. As such, no AGI will be able to solve this problem; it can't even make probability-based choices over all of them.
OTOH, given a few local biases to start with, and reasoning over a relatively short horizon from the current time, Bayesian predictions work pretty well, and don't require infinite resources.
It's my further suspicion that we are equipped with sets of "domain biases", and that at any one time one particular set is dominant. I see this as primarily a simplifying approach, one which reduces the amount of computation needed in any situation, allowing faster near-future predictions.
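To make the above concrete, here is a minimal sketch (my own illustration, not anything from the thread) of what "a few local biases plus short-horizon Bayesian prediction" could look like: each domain-bias set is just a Beta prior over how often an event occurs in that domain, and only the currently dominant set is consulted, so each prediction is a constant-time conjugate update rather than an integration over every domain. All names here (DomainBias, the "indoors"/"outdoors" domains) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "domain bias" is a Beta prior over how often an
# event occurs in that domain.  Using only the dominant domain's prior
# keeps each prediction O(1) instead of mixing over all domains.

@dataclass
class DomainBias:
    name: str
    alpha: float  # prior pseudo-counts for "event happens"
    beta: float   # prior pseudo-counts for "event doesn't happen"

    def update(self, happened: bool) -> None:
        # Standard Beta-Bernoulli conjugate update: finite work per datum.
        if happened:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def predict(self) -> float:
        # Posterior mean probability of the event on the next step.
        return self.alpha / (self.alpha + self.beta)

# Two domain-bias sets; "indoors" is currently the dominant one.
domains = {
    "indoors": DomainBias("indoors", alpha=8.0, beta=2.0),
    "outdoors": DomainBias("outdoors", alpha=2.0, beta=8.0),
}
dominant = domains["indoors"]

# A short run of observations updates only the dominant set.
for observation in [True, True, False, True]:
    dominant.update(observation)

print(round(dominant.predict(), 3))  # → 0.786
```

The point of the sketch is just the resource claim: nothing here requires infinite axioms or infinite computation, only a prior (a "local bias") and a bounded stream of evidence.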
So what we have is something less than totally general. Call it an A(g)I. It has a general mode that it can use when it's got plenty of time, but that's not what it uses in real time, and it's never run as a dominant mode, only as a moderately high-priority task. And the general mode tends to get stuck on insoluble (or just too complex) problems until it times out. Sometimes it saves the state and returns to it later, but sometimes a meta-heuristic says "Forget about it. That game's not worth the candle."
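A rough sketch of that "general mode" as described above: a search task run under a time budget that saves its state on timeout, with a separate meta-heuristic deciding whether the saved problem is still worth resuming. This is purely illustrative; the function names, the problem representation, and the threshold in the meta-heuristic are all made up for the example.

```python
import time

# Hypothetical sketch of the "general mode": an anytime task that runs
# under a time budget, saves its state on timeout, and consults a
# meta-heuristic ("is this game worth the candle?") before resuming.

def general_mode(problem, state=None, budget_s=0.05):
    """Try to extend a partial solution; return (answer, saved_state)."""
    state = state or {"step": 0}
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        state["step"] += 1
        solved_at = problem["solved_at"]
        if solved_at is not None and state["step"] >= solved_at:
            return state["step"], None  # solved: nothing to save
    return None, state                  # timed out: save state for later

def worth_the_candle(problem, state):
    # Meta-heuristic: keep a saved problem only while its expected value
    # still exceeds the effort already sunk into it (illustrative formula).
    return problem["value"] > state["step"] * problem["cost_per_step"]

# An insoluble problem: the general mode times out rather than dominating.
problem = {"solved_at": None, "value": 10.0, "cost_per_step": 1e-6}
answer, saved = general_mode(problem)
if answer is None and not worth_the_candle(problem, saved):
    saved = None  # "Forget about it. That game's not worth the candle."
print(answer is None)  # → True: the general mode gave up its time slice
```

The design point is the same as in the text: the general mode never blocks the system; it either finishes within its budget, checkpoints for later, or is abandoned outright by a cheap meta-level judgment.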
The problem comes when you take the G in AGI too seriously. There is no existence proof that such a thing can exist within finite space, time, and energy. But you should be able to get closer to it than evolution has gotten with people.
-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now
