Re: [agi] Playing with fire
It seems to me that a lot of the us-against-them-or-it flavor of this conversation is based on the assumption that both machine AI and human consciousness are fixed, static qualities/entities/factors. Let me put forth another scenario: that AI does not become "them," but rather joins in a symbiotic partnership with wetware intelligences (us) to become something else. I think this is a lot more likely than the scenario that pure computer processing achieves consciousness. Symbiosis is the basis for just about every major evolutionary advance in the history of life. Is it that hard to believe that a silicon/carbon symbiosis might constitute the next punctuated advance in evolution? Not that we're thinking small, here.

C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095

"What is true and what is not? Only God knows. And, maybe, America." --- Dr. Khaled M. Batarfi, Special to Arab News
"Just because something is obvious doesn't mean it's true." --- Esmerelda Weatherwax, witch of Lancre

----- Original Message -----
From: Brad Wyble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 5:47 PM
Subject: Re: [agi] Playing with fire

Extra credit: I've just read the Crichton novel PREY. Totally transparent movie-script, but a perfect textbook on how to screw up really badly. Basically the formula is "let the military finance it." The general public will see this inevitable movie, and we will be drawn towards the moral battle we are creating. In early times it was the "tribe over the hill" we feared. Communication has killed that. Now we have the "tribe from another planet" and the "tribe from the future" to fear, and our fears play out just as powerfully as at any time in our history. Note: I'm not arguing for or against AI here, just bringing to light some personal observations.

This particular situation is different from the others you describe (the tribe over the hill).
To accept the dangers of AI, one must first swallow racial pride and admit that we are not the top dogs in the universe. Few people are willing to do this, even among well-educated, science-minded engineers. I just tested this topic on my group of internet friends in a private forum with some twenty people. I was unable to convince a single person that this danger is real with a day's worth of intensive back-and-forth discussion. They assumed the typical "we can just control it" mentality that has always been prevalent. Notice that even in gloomy bad-AI stories such as Terminator and The Matrix, the humans always win in the end. This is what the mainstream will believe, because they want to believe it.

In other words, I don't think the public is going to care one iota about the dangers of AI. They'd prefer to focus their energy on banning truly harmless technologies, such as cloning. People fear clones because, as far as they are concerned, clones are people too, so we're dealing with an equal, and can lose. But AIs are just machines; they can be out-smarted or out-evolved, as far as the average person is concerned. The upside is that AI researchers won't have to fight to keep their research legal. The downside is that we're more likely to destroy ourselves.

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
Re: [agi] really cool
Are you using an IP phone on your office network? Or a DSL connection at home? Those would be less general, and less cool, than Ben's suggestion.

----- Original Message -----
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Tuesday, February 25, 2003 2:29 PM
Subject: RE: [agi] really cool

I suppose they match geographical information about the IP address with caller ID info?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Kevin
Sent: Tuesday, February 25, 2003 2:18 PM
To: [EMAIL PROTECTED]
Subject: [agi] really cool

Those guys at Google sure are imaginative! I was wowed by this application, and I'm still not sure how they are doing it. You call a number and say the search keywords you want to search on... it then displays the results in your browser. I have no idea how they are connecting my cell phone number with *my* computer. Try it out: http://labs1.google.com/gvs.html

Kevin
Re: [agi] AI Morality -- a hopeless quest
----- Original Message -----
From: Philip Sutton
To: [EMAIL PROTECTED]
Sent: Wednesday, February 12, 2003 2:55 PM
Subject: Re: [agi] AI Morality -- a hopeless quest

Brad,

Maybe what you said below is the key to friendly GAI:

"I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. We are all kept in check by our lack of power, the competition of our fellow humans, the laws of society, and the instructions of our peers. Remove a human from that support framework and you will have a human that will warp and shift over time. We are designed to exist in a social framework, and our fragile ethical code cannot function properly in a vacuum."

If we create a *community* of AGIs that have an ethics-oriented architecture and ethical training, then *they* might stand a chance of policing themselves. The situation is analogous to how we try (so far with not enough success, but with improving odds) to protect non-human species. Humans are the biggest threat to non-human species (well demonstrated), but there are more and more efforts being made by humans to stop that and to give other species a chance to survive and continue evolving. I think that we need to structure and train AGIs knowing that the same scenario could be played out in relation to us as has happened between us and less powerful life - but we have the advantages that:
- we've seen where WE went wrong
- we can shape the deep ethical structure of AGIs from the start with this meta issue in mind.

Cheers, Philip

I would second this, and note for the record the defense of the treatment of women in traditional societies. In places like Pakistan and Arabia, apologists defend the second-class status of women by saying that they are being "protected" by their male relatives. But without the ability to protect their own safety and status, such "protection" becomes honor killings and FGM.
The only guarantee of protection and rights is to give women the ability to protect themselves, and that's a tremendous cultural change. Especially when such cultural traditions are claimed to be mandated by God.

I guess the relevance here is that Philip has reached the core of this issue. There are no guarantees in this business, especially when trying to predict the behavior of complex adaptive entities with cognitive abilities that we are assuming will be greater than ours. Thus, the only safeguard is the classic one: division of power.
[agi] New Virus
[agi] Fw: New Virus
A virus has been passed on to me by a contact. My address book has in turn been affected. Since you are in my address book, there is a good chance you will find it in your computer too. I followed the directions below and eradicated the virus easily. The virus (called jdbgmgr.exe) is not detected by Norton or McAfee anti-virus systems. The virus sits quietly for 14 days before damaging the system. It is sent automatically by Messenger and by the address book, whether or not you sent e-mails to your contacts. Here is how you check for the virus and get rid of it.

1. Go to Start, then the Find or Search option.
2. In the files/folders option, type the name: jdbgmgr.exe
3. Be sure to search your C: drive and all the subfolders and any other drives you may have.
4. Click "Find Now".
5. The virus has a teddy bear icon with the name jdbgmgr.exe. DO NOT OPEN IT.
6. Go to Edit (on the menu bar) and choose "Select All" to highlight the file without opening it.
7. Now go to File (on the menu bar) and select Delete. It will then go to the Recycle Bin.

IF YOU FIND THE VIRUS, YOU MUST CONTACT ALL THE PEOPLE IN YOUR ADDRESS BOOK SO THEY CAN ERADICATE IT IN THEIR OWN ADDRESS BOOKS. To do this:
a) Open a new e-mail message.
b) Click on the icon of the address book next to the "To" field.
c) Highlight every name and add to "BCC".
d) Copy this message, enter a subject, and paste it into the e-mail.

Apologies to those of you who have had this message several times from different people.
[agi] Fw: Virus Hoax
Jdbgmgr.exe file hoax
Reported on: April 12, 2002
Last Updated on: November 21, 2002 10:24:24 AM

Symantec Security Response encourages you to ignore any messages regarding this hoax. It is harmless and is intended only to cause unwarranted concern.

Type: Hoax

This is a hoax that, like the SULFNBK.EXE Warning hoax, tries to persuade you to delete a legitimate Windows file from your computer. The file that the hoax refers to, Jdbgmgr.exe, is the Microsoft Debugger Registrar for Java. It may be installed when you install Windows.

NOTE: Recent versions of this hoax take advantage of the recent outbreak of the W32.bugbear@mm worm, and of the fact that the Jdbgmgr.exe file mentioned in the hoax has a bear icon. The actual W32.bugbear@mm worm file is an .exe file and does not have a bear icon. The Windows Jdbgmgr.exe file has a teddy bear icon, as described in the hoax.

----- Original Message -----
From: Wells Piers [EMAIL PROTECTED]
Sent: Thursday, February 06, 2003 11:28 AM
Subject: Virus Hoax

Disregard any emails regarding the jdbgmgr.exe virus. This is a hoax. Sorry for the inconvenience.

http://www.symantec.com/avcenter/venc/data/jdbgmgr.exe.file.hoax.html
Re: [agi] Jane
cosmodelia: Perhaps the most interesting recent (relatively) novels featuring AI, with a healthy dose of warning about the dangers of said tech, are the long, but definitely worth reading, four-book set by Dan Simmons: /Hyperion/, /The Fall of Hyperion/, /Endymion/, and /The Rise of Endymion/. I thought that these were very rich books, with new technology, interesting characters, thoughtful situations, and neat conflicts.

In classic SF, true AI is very rare, and when done, kind of a throwaway. The exception is robots, like Isaac Asimov's positronic brains, which probably count. The two short story collections /I, Robot/ and /The Rest of the Robots/ are worth reading, if only to help you understand what people who refer to the Three Laws of Robotics are talking about. For the most part, computers in SF are almost as primitive as those in the Star Trek shows, or, like those in the Vernor Vinge novels, updated, star-faring versions of whatever technology was hot the year the book was written (in Vinge's case, Internet newsgroups). I suppose that's one reason Arthur C. Clarke's HAL was so noticeable. Heinlein did come up with a talking skycar in /The Number of the Beast/, but that's from his later, practically unreadable, period.

I remember reading an out-of-print and little-regarded book from the fifties called "The Rocket Ship," which was a usual space-opera kind of story of a super agent and his trusty companion flying saucer. The character of the ship was very feminine, so this naturally kind of stuck in my 13-year-old memory. But I don't remember the author (not noted for anything else), and there's no reference in Amazon to anything like this.

The dangers of computer technology (including AI; ref. /The Matrix/) are treated much more often in movies and TV than in SF literature. That's because movies and TV treat all technology as dangerous. As I said, AI is rare, so that's all I can remember.
I stopped reading SF regularly twenty years ago, though, so others can no doubt recommend more.

----- Original Message -----
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Tuesday, January 21, 2003 9:18 AM
Subject: RE: [agi] Jane

Hey cosmodelic one,

As you read further in the series, you'll find that Jane didn't exactly just *emerge*; she was created -- although she did grow into something very different from her originally-created form. (Sorry to spoil an element of the plot for you ;)

But Jane is an interesting portrayal of an AI arising from a kind of "communicational brain". This concept is related, but not identical, to the idea of the "global brain"; see http://pespmc1.vub.ac.be/SUPORGLI.html. But the conjectured global brain is *composed of* communicational elements, whereas Jane is in a way parasitic off them...

One of the great things about Speaker for the Dead and its two sequels is the depth with which Card portrays the different psychologies and cognitive abilities of the different alien races (the pequeninos, the buggers, and Jane). Although Jane is clearly smarter than the others, the intelligences of the other three races are in a way incommensurable -- just *different from*, not better or worse than, each other. This is a lesson worth learning as we move toward creating digital intelligent beings: intelligence is multidimensional, not linearly scalable. This is true among humans, but far more true in a cross-species sense. Narrow AI is already teaching us this in a way, of course.

Of course, I think Card's novels are WAY off as futurology, in the sense that technology advances hardly at all over 3000 years in his universe.
The ansible (superluminal communication) and other tech is borrowed from the buggers, but humans don't invent much that is new and significant during 3000 years!! This works well for the story he wants to tell, but seems phenomenally unlikely...

-- Ben G

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of cosmodelia
Sent: Tuesday, January 21, 2003 12:34 AM
To: agi
Subject: [agi] Jane

I'm reading Speaker for the Dead by Orson Scott Card. I'm finding the AI character "Jane" interesting because Jane emerged; Jane was not created. It seems Card thinks AI will emerge as human intelligence emerged.

"Jane first found herself between the stars, her thoughts playing among the vibrations of the philotic strands of the ansible net. The computers of the Hundred Worlds were hands and feet, eyes and ears to her. She spoke
Re: [agi] The Next Wave
ULTIMATE KNOWLEDGE: Our AGI will come to know everything. Every single flap of every butterfly wing in all of history. If it has emotions like ours, it may become rather depressed and realize that it is all pointless. Maybe we will understand and agree with the AGI's explanation. What happens then? While I shudder at the enormity of the responsibility, I am in the process of forming committees to address the challenges of each category. For those of you who feel the burden of the future upon your shoulders, please let me know which committees you feel compelled to serve on.

Kevin Copple

P.S. I also need a name for the website, the foundation, and a good slogan. Any suggestions?

If you are willing to have me, this is the one I would like to serve on, because I have very strong views that such a thing isn't possible, and would like to debate that issue, among others.

----- Original Message -----
From: Kevin Copple [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, January 10, 2003 8:42 AM
Subject: [agi] The Next Wave
Re: [agi] Friendliness toward humans
It strikes me that what many of the messages refer to as ethical stances toward life, the earth, etc., are actually simply extensions of self-interest. In fact, ethical systems of cooperation are really, on a very simplistic level, ways of improving the lives of individuals. And this is not true because of strictures from on high, but for reasons of real-world self-interest. Thus, the Nash Equilibrium, and the results of the Tit-for-Tat game experiments, show that an individual's life is better in an environment where players cooperate. Being nice is smart, not just moral. Other experiments have shown that much hard-wired human and animal behavior is aimed at enforcing cooperation by punishing cheaters, and that cooperation has survival value! I reference here, quickly, Darwin's Blind Spot, by Frank Ryan, which argues that symbiotic cooperation is a major creative force in evolution and biodiversity.

Thus, simply giving AGI entities a deep understanding of game theory and the benefits of a cooperative society would have a far greater impact on their ability to interact productively with the human race than hard-wired instructions to follow the Three Laws that could some day be overwritten.

----- Original Message -----
From: Philip Sutton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 09, 2003 11:09 AM
Subject: [agi] Friendliness toward humans

In his last message Ben referred in passing to the issue of AGIs' long-term Friendliness toward humans. This brought to mind some of the discussion in December last year about training AGIs using simulation games that emulate aspects of the natural world.
I think that AGIs need to be not only friendly towards humans but towards other life as well (organic or not!). And I also think AGIs need to have a good understanding of the need to protect the life support systems for all life. As we aspire to a greater mind than current humans', it's worth looking at where human minds tend to be inadequate. I think humans lack an inbuilt capacity for the complex and long-running internal simulations that are probably necessary to have a deep understanding of ecological or more multifaceted sustainability issues. I think current humans have the capacity for ethics that are not exclusively anthropocentric, but we need to boost this ethic in actuality in the human community, and I think we need to make sure that AGIs develop this ethical stance too.

Cheers, Philip

Philip Sutton
Director, Strategy
Green Innovations Inc.
195 Wingrove Street
Fairfield (Melbourne) VIC 3078
AUSTRALIA
Tel & fax: +61 3 9486-4799
Email: [EMAIL PROTECTED]
http://www.green-innovations.asn.au/
Victorian Registered Association Number: A0026828M
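As a footnote to the game-theory point in this thread (that "being nice is smart" in repeated interactions), the Tit-for-Tat result can be sketched in a few lines of Python. This is an illustrative toy, not anything from the original messages; the payoff values are the standard ones from Axelrod's iterated prisoner's dilemma tournaments, and all the names are made up for the example.

```python
# A minimal iterated prisoner's dilemma: in repeated play, mutual
# cooperation outscores a cooperator being exploited by a defector.
# Payoffs are the standard Axelrod tournament values.

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3,  # reward for mutual cooperation
    ('C', 'D'): 0,  # sucker's payoff
    ('D', 'C'): 5,  # temptation to defect
    ('D', 'D'): 1,  # punishment for mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """The 'cheater' strategy: defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game and return (score_a, score_b)."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each side sees the other's past moves
        b = strategy_b(moves_a)
        moves_a.append(a)
        moves_b.append(b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): the defector wins head-to-head
```

Note the point the toy makes: the defector beats Tit-for-Tat in a head-to-head match (14 vs. 9), but a pair of cooperators (30 each) far outscores a pair of defectors (10 each over the same ten rounds), which is the sense in which cooperation pays.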
Re: [agi] Friendliness toward humans
Superior in intelligence doesn't necessarily mean superior in wisdom... there are plenty of examples of that in human history.

Intelligence in the wrong hands is the most dangerous thing... we are seeing that right now in our govt, IMO.

And just WHERE do you see evidence of intelligence?
Re: [agi] Friendliness toward humans
I still hold that *if* an AGI has a sense of self, without the concomitant wisdom needed, it *will* develop the destructive emotions...

I agree that it will develop SOME destructive emotions, and I think that any mind necessarily will develop SOME destructive emotions -- which it then will hopefully learn to master, on its path to maturity... But I think that an AGI mind doesn't need to have the same EXTENT of destructive emotions as the average human has, because of the lack of human evolutionary wiring...

Or, one could realize that the reason living beings have destructive emotions (and who decides that?) is self-preservation. Although cooperative behavior is the most rational (say I, in my infinite wisdom), there have to be mechanisms to punish cheaters and prevent damage to one's self. Regardless of what we may think of the value of self, it can be argued that without individual benefits, the benefit of the whole is a meaningless concept. Plus, these things we are talking about as if they were products of rational decision-making are actually simply the behavior strategies/patterns that, on the whole (not universally), worked better than the available alternatives. Self-preservation is one of them.