--- Artificial Stupidity <[EMAIL PROTECTED]> wrote:

> Who cares? Really, who does? You can't create an AGI that is friendly or unfriendly. It's like having a friendly or unfriendly baby.
No, it is not. A baby comes pre-designed by evolution and genetics. An AGI can be custom-written to spec. See http://www.singinst.org/upload/futuresalon.pdf.

> How do you prevent the next Hitler, the next Saddam, the next Osama, and so on and so forth?

AGIs do not act the way humans do. You cannot have an AGI villain who acts like a human villain, any more than the pizza shop down the street is going to start walking on giant cyborg legs and destroying the city. An AGI, at the lowest level, is made up of code. Code does exactly what is written on the sheet of paper; no more, and no less. That does not mean we can always predict what the code will do, but we can say that it will not spontaneously hijack the nearest greenhouse and use it to grow coconuts. (A tiny illustration of "deterministic but unpredictable" follows my signature below.)

> A friendly society is a good start. Evil doesn't evolve in the absence of evil, and good doesn't come from pure evil either.

These are *human rules*. They apply to *human society*. They do not apply to AGIs. A nuclear bomb isn't going to spare Tokyo because it was built in a friendly society. The only reason an AGI would care more about human society than about a randomly selected patch of moss down the street is that we painstakingly designed it that way.

> Unfortunately, we live in a world that has had evil and good since the very beginning of time,

This is a nice-sounding cliché, but it isn't actually true. There was no such thing as good and evil until nervous systems (which can contain things like pain, pleasure, and desire) evolved, roughly 600 million years ago. Human concepts of good and evil are unique to evolved creatures; they do not automatically transfer into a computer system.

> thus an AGI can choose to go bad or good,

Exercise: every time you say "AGI", replace it with "computer virus". If the result sounds silly, you are anthropomorphizing.

> but we must realize that there will not be one AGI being, there will be many, and some will go good and some will go bad.

Transcoded: "there will not be one computer virus, there will be many, and some will go good and some will go bad." Sounds silly, right? How could a computer virus "go good"? It doesn't make any sense.

> If those that go bad are against human and our ways, the ones that are "good", will fight for us and be on our side.

Transcoded: "If the bad computer viruses are against human (sic) and our ways, the viruses that are 'good' will fight for us and be on our side." Still silly. How could a computer virus "fight for you" or "be on your side"? You can engineer a virus to help you, but if you make a stupid mistake, the virus will wipe your hard drive too; it doesn't care about good or evil. The same principles apply to AGI. (NOTE: a Friendly AGI would care about good and evil, but that is a painstakingly designed feature, not something that happens by default in every AGI. If you pull a random AGI out of a hat, it will probably act like a computer virus.)

> So a future of man vs machine is just not going to happen. The closest thing that will happen will be Machines vs (Man + Machines). That's it. With that said, back to work!

Do you have any evidence to support this assertion?

- Tom
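P.S. Since "deterministic but unpredictable" trips people up, here is a minimal toy sketch in Python. This is my own illustration, not anything from a particular AGI design: the Collatz rule is a deterministic five-liner, nobody knows how to predict in general how many steps it takes, and yet it is guaranteed never to do anything but arithmetic, because arithmetic is all that is written in it.

    # Toy illustration: deterministic, hard to predict, strictly bounded.
    # The Collatz rule: halve n if it is even, otherwise map n to 3n + 1.
    # No known method predicts, in general, how many steps n needs to
    # reach 1; yet this program can only ever do arithmetic, because
    # arithmetic is the only behavior written into it.

    def collatz_steps(n: int) -> int:
        """Count applications of the Collatz rule until n reaches 1."""
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    for n in (6, 7, 27):  # 27 takes 111 steps; try predicting that by eye
        print(n, collatz_steps(n))

The same asymmetry holds for an AGI: unpredictable in its details, but it will not acquire goals that nobody wrote (or grew) into it.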
> On 9/22/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
> >
> > This message is "semi-serious".
> >
> > The latest SIAI blog laments the apparently dismissive attitude of mainstream media toward the singularity summit (and presumably the concept in general, and SIAI itself by extension). Maybe it's not the worst thing that could happen.
> >
> > Consider the war in Iraq (oops, I just lost half my readers! But this is not a political tirade, it's about AGI): The "reason" for this war, in my opinion, is to establish a base from which the USA can exert social, cultural, economic, and military pressure on people who might use nasty weapons against the USA or its friends. Whether such a project is noble or effective is unimportant. What is important is that the USA is so scared of having our people and stuff blown up that we'll spend a trillion dollars and thousands of lives on a rather speculative strategy for fighting the threat.
> >
> > Now our little gang is basically saying that AGI is WAY more dangerous than any little nuclear bomb or other WMD. Thank the AGI and Bayes its prophet that they think we're kooks; they'd shut us down in a heartbeat if they didn't!
> >
> > Can they? As an arbitrary thought experiment, let's say that a beyond-human AGI can be built on a 1000-PC cluster. Modern computer chips are incredibly complicated devices that can only be produced in massive high-tech fabrication facilities. I could easily imagine the government attempting to regulate these plants and their products like any other hazardous but useful substance, and bombing fabs if they are constructed in North Korea or Iran. Controlling proliferation of radioactive material in this way has been at least somewhat effective, and maybe spending a trillion dollars in an effort to do the same thing to CPUs could seem to powerful people to be a good idea, especially if the threat is not only physical but also spiritual.
> >
> > That doesn't stop Russia or China etc. from building AGI, so I suppose we'd also have treaties to prevent AGI development that we'd secretly cheat on, so all of us will end up in windowless cinderblock cubicles in Los Alamos.
> >
> > Now let's follow up on the recent speculation on the AGI list that a cheap laptop is actually enough processing power. In that case, the hardware restriction policy would be necessary but also too late. AGI work itself can still be banned. What sort of additions to the Patriot Act would be needed to make sure that we are not working on AGI in secret?
> >
> > Also in this case, amusingly, the well-publicised effort to make sure every kid on the planet has a cheap laptop is basically making sure that every kid on the planet has something worse than a nuclear bomb kit. Maybe all those kids are too dumb to figure out how to assemble it.
> >
> > Next, consider religious fundamentalists. Those people are able to follow a chain of reasoning that leads them to blow up abortion clinics and marketplaces, and fly airplanes into buildings, to protect their points of view. AGI and the singularity are much larger threats to their world view than any current target. How attractive a bomb target is the singularity summit itself, or an artificial intelligence conference? Thank the AGI and Bayes its prophet that they think we're kooks; they'd kill us if they didn't!
> >
> > Why do we care whether the world thinks we're kooks or not?
> >
> > 1) We want to beg for money, and people don't give money to kooks. Fair enough, but another approach that good true ideas with economic value can take is to earn money instead by selling people things with value.
> > 2) If we "raise awareness", perhaps a better-informed "common man" will help make a "positive" singularity more likely. It's possible. Getting more people who think technically for a living (scientists, engineers) convinced could also be beneficial (in case us believers don't have the right answers yet and aren't going to find them soon). If those people's opinions are driven by what they see on TV news or in the Wall Street Journal, the scent of kookery is not too helpful.
> >
> > 3) Bloggers and websites are successful in proportion to the number of hits they get, and kooks don't get many hits.
> >
> > Any other good reasons we should care whether journalists heap scorn on our efforts?
