Jesus Freaking Christ, people! Does no one read anything I post here?
This article was among those on the Singularity Summit site that I posted in my last post about the world to come... This sort of speculation is really useless, as we really have no way of knowing exactly what type of intelligence will be built.

If it is built, modeled, and then raised based upon human models of intelligence, then it will no more want to kill us (as in ALL of us) than any particular person wants to kill ALL of us. (It may well want to kill one or two of us at some particular moment, as many humans have shown a propensity toward, but there are very few people who truly want to wipe out all mankind - they would have no one left to recognize them - vanity is a strong characteristic of the psychopathic.)

If it is a purely economic agent, then it will likely at some point see us as a competitor for its resources (unless we can be seen to be a resource as a living group). Then its decision to kill us will not be in the vein of Futurama's Bender and his "Kill all humans" - out of hatred or contempt for humanity - but rather the way a hyena will attempt to kill lion cubs because they compete for the limited resources of its own cubs... and vice versa.

This stuff is what I am studying to make a living at...

Matthew Bailey

--- In [email protected], razldazl <[EMAIL PROTECTED]> wrote:
>
> "Human Friendly" Artificial Intelligence Might Keep Us Around As Pets - If We're Lucky
> http://www.dailygalaxy.com/photos/uncategorized/2007/09/12/ai_artificial_intelligence_singular.jpg
> (Image is a hyperturing AI)
>
> Some pretty quirky ideas were posed at this year's Singularity Summit, including whether or not advanced AI would want to keep us for pets, or turn all organic matter into "computronium". One thing is for sure: the 600+ technocrats who attended the recent event at the Palace of Fine Arts left with their heads full of futuristic ideas (not that they weren't already).
>
> One prevalent theme seemed to be that in order to create truly independent AI, we have to first "raise" them like we would children.
>
> "The only pathway is the way we walked ourselves," argued Sam Adams, who headed IBM's Joshua Blue Project, which attempted to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself.
>
> Similarly, Novamente's Ben Goertzel is working to create self-improving AI avatars that can live on their own in virtual worlds like Second Life. They could exist as virtual kids or pets that humans on Second Life could teach and interact with. Eventually their virtual bodies could develop the "senses" that would allow them to explore and become socialized.
>
> But here's where it gets really weird: unlike real babies, AI "babies" have a potentially unlimited capacity for boosting their own level of intelligence. So what happens if an AI baby becomes super-smart, but has the emotional and moral stability of a small child who thinks the world should revolve around it?
>
> James Hughes pointed to the havoc currently carried out by the Storm worm. Storm has now infected over 50 million computers and has at its disposal more computing resources than 500 supercomputers.
> Perhaps more alarmingly, when Storm detects attempts to stop it, it automatically begins launching massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems.
>
> But the future forecast may be sunny rather than stormy, according to the founder of Adaptive A.I., Peter Voss. He outlined several advantages that super-smart AIs could offer humanity. For one thing, AIs could significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world, including the elimination of poverty in developing nations. Voss asked those at the conference to imagine what could happen when the AI equivalent of 100,000 Ph.D. scientists were working on life extension and anti-aging research 24/7. Voss believes that AIs could make us better people from a moral perspective. He imagines that each of us could have our own super-smart AI assistant to guide us in making smarter choices ourselves.
>
> One underlying theme of the conference was the age-old idea that with great risk comes great reward. Yes, AI could be misused or allowed to get out of hand, like most other technologies, but if handled correctly, the technology could potentially make the planet a better place. But it is important that we put measures into place that will ensure the AI of the future will be human friendly. But is that even possible?
>
> Computer scientist Stephen Omohundro doesn't think so. He said that self-improving AIs could easily become ultra-rational economic agents, basically examples of homo economicus. Such AIs would exhibit four drives: efficiency, self-preservation, acquisition, and creativity. The drive to acquire more resources means that AIs could be dangerously competitive with humans (as humans are towards each other) rather than helpful.
>
> Given these grave concerns about how future AIs might treat humans, should we be trying to create them in the first place? Former Sun Microsystems chief scientist Bill Joy declared that they are indeed too dangerous, and that we would be wise to relinquish the drive to create them altogether. But something tells me he wasn't preaching to the choir on that one.
>
> Charles Harper, senior vice president of the Templeton Foundation, suggested there was a "dilemma of power." The dilemma is that "our science and technology create new forms of power but our cultures and civilizations do not easily create parallel capacities of stewardship required to utilize newly created technological powers for benevolent uses and to restrain them from malevolent uses."
>
> However, not everyone is so pessimistic about our capacity to improve. Science writer Ronald Bailey, who wrote The Scientific and Moral Case for the Biotech Revolution, points out, "Actually the arc of modern history strongly suggests that Harper's claim is wrong. More people than ever are wealthier and more educated and freer. Despite the tremendous toll of the 20th century, even social levels of violence per capita have been decreasing. We have been doing something more right than wrong as our technical powers have burgeoned."
>
> Regardless of who's right, the likeliest outcome is that some sort of independent AI will eventually arise. Ray Kurzweil, who joined the Summit via video conferencing, predicted that AIs will come into existence before 2030.
> Peter Voss was even bolder in his declaration: "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
>
> I guess we won't know for certain until 2017. In the meantime, I'm sure most of us are happy to support the Singularity Institute's efforts to ensure friendly AI. If I'm going to be some computer's pet, I'll at least want it to treat me humanely.
>
> Posted by Rebecca Sato
> http://www.dailygalaxy.com/my_weblog/2007/09/human-friendly-.html#more
>
