On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
>> Did you read CFAI? At least it dispels the mystique and ridicule of
>> "provable" Friendliness and shows what kind of things are relevant for
>> its implementation. You don't really want to fill the universe with
>> paperclips, do you? The problem is that you can't take a wrong route
>> just because it's easier; it is an illusion born of insufficient
>> understanding of the issue that it might be OK anyway.
>
> I'm not taking the easy way out here, I'm talking about what I see as the
> only possible path to general intelligence. I could be wrong of course,
> but that's why we're here, to talk about our differences.
What is the point of building general intelligence if all it does is take
the future from us and waste it on whatever happens to act as its goal?

> I've read parts of the CFAI but like most of Eliezer's writings, if I had
> time to read every word he writes I'd have no life at all. The crux of his
> argument seems to come down to what he calls renormalization, in
> which the AI corrects its goals as it goes. But that begs the question
> of what the AI is comparing its behavior against - some supergoal or
> meta-ethics or whatever you want to call it - and the answer must
> ultimately come from us, pre-structured. Non-embodied.

Certainly the answer comes from us. In light of renormalization, I like to
think of Friendly AI as a second-chance machine. The intuitive dread of
hard takeoff lies in its irreversibility: if we get it wrong, the
overwhelming change will sweep us from the face of this world, and we'll
be helpless to do anything about it; we'll never get a chance to fix it.
Other scenarios of the future don't look as frightening, but fundamentally
they come down to the same problem. Global catastrophes threaten to wipe
out our civilization or our whole species, taking away the chance to build
a better future, and as such belong in a different category from even the
most horrible of local catastrophes. A 1984-style government might take
over and stagnate humanity for millennia. Slowly developed tool AIs might
eventually accumulate enough optimization power to move away from the
framework we developed them for, just as humans broke out of the network
woven by evolution for propagating genes, even though for a long time
before that everything would look fine. Bad decisions can accumulate along
the way and send us in the wrong direction, once we have the technology to
break or overcome the genetic heritage that presently binds us to the same
humane path.
The problem with powerful AIs is that they could get their goals wrong and
never give us a chance to fix that. Thus one of the fundamental problems
that Friendliness theory needs to solve is giving us a second chance:
building deep into the AI's dynamics the drive to change itself into what
it was supposed to be. All the specific choices and accidental outcomes
need to descend from the initial conditions, and be insensitive to
whatever went horribly wrong along the way. This ability might be an end
in itself, the whole point of building an AI, when considered as applying
to the dynamics of the world as a whole and not just the AI aspect of it.
After all, we may make mistakes or be swayed by unlucky happenstance in
all matters, not just in the particular matter of building AI.

>> I was exploring the notion of nonembodied interaction that
>> you talked about.
>
> Right, in a way that suggests you didn't grasp what I was saying,
> and that may be a failure on my part.

That's why I was "exploring" -- I didn't get what you meant, and I
hypothesized a coherent concept that seemed to fit what you said. I still
don't understand that concept.

>> > I'm saying that we don't specify that process. We let it emerge through
>> > large numbers of generations of simulated evolution. Now that's going
>> > to be a very unpopular idea in this forum, but it comes out of what I
>> > think are valid philosophical criticisms of designed (or
>> > metacognitive/metamoral if you wish) intelligence.
>>
>> Name them.
>
> I refer you to my article "Design is bad -- or why artificial intelligence
> needs artificial life":
>
> http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life

(Replying to the article.) Creating *an* intelligence might be good in
itself, but not good enough, and too likely to come with negative side
effects, like wiping out humanity, to sum out positive in the end. It is a
tasty cookie with the Black Death in it.
You can't assert that we are no closer to AI than 50 years ago -- it's
just unclear how much closer we are. A great many techniques were
developed in these years, and some good lessons were learned the hard way.
Is it useful? Most certainly some of it is, but how can we tell...

Intelligence was created by a blind idiot evolutionary process that has no
foresight and no intelligence. Of course it can be designed. Intelligence
is all that evolution is, but immensely faster, better, and more flexible.
The creations of Nature only look good in comparison because we have had
barely a few hundred years of technological progress, whereas evolution
churned out its designs for billions of years.

If something "determines its own goals", isn't that equivalent to those
goals being independent of (for example) our goals? Do we want something
this alien around? Goals need to come from somewhere; they are not a
causal miracle. If they come from arbitrariness, that is bad for us.

Evolution designed humans, humans designed artificial evolution, and
artificial evolution causally led to artificial intelligence. What is the
difference between this and evolution causally leading to artificial
intelligence? The AI won't be *designed* by natural evolution in this
case, since the later stages are not natural evolution, but natural
evolution is still the origin of the goals in the resulting AI, like the
Big Bang. Where do you draw the line, and why? If you can design a process
that you know leads to AI, you've designed that AI. You don't build an AI
that already knows all the trivia about the world; instead you build a
cognitive algorithm that can learn to absorb that structure. How is that
different from building an artificial-evolution environment that develops
into an AI? If you build a cognitive algorithm that doesn't have the
potential, bad design choice. If you build an artificial-evolution
environment that develops into an AI, good design choice. If a designed AI
destroys the world, bad design. If an evolved AI turns Friendly, good
design.
There is no dichotomy; it is a question of good engineering, and it is
answered by specific arguments about the design in all cases.

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
