RE: [agi] Hard Wired Switch

2003-03-03 Thread Ben Goertzel
I agree with Shane ... this approach suffers from the same sort of problem that AIXI suffers from, Friendliness-wise. When the system is smart enough, it will learn to outsmart the posited Control Code, and the ethics-monitor AGI. You might want to avoid this by making the ethics-monitor AGI

Re: [agi] Hard Wired Switch

2003-03-03 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: However, the society approach does not prevent a whole society of AGI's from drifting into evil. How good is our understanding of AGI sociodynamics??? ;-) This approach just replaces one hard problem with another... which may or may not be even harder... Indeed; if one cannot

RE: [agi] What is meant by hard-wiring?

2003-03-03 Thread Ben Goertzel
To me the distinction is between A) "Explicit programming-in of ethical principles" (EPIP) versus B) "Explicit programming-in of methods specially made for the learning of ethics through experience and teaching" versus C) "Acquisition of ethics through experience and teaching,

[agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Philip Sutton
Ben, would you rather have one person with an IQ of 200, or 4 people with IQs of 50? Ten computers of intelligence N, or one computer with intelligence 10*N? Sure, the ten computers of intelligence N will, taken together, be a little smarter than N, because of

RE: [agi] Hard Wired Switch

2003-03-03 Thread Kevin Copple
Ben said, "When the system is smart enough, it will learn to outsmart the posited Control Code, and the ethics-monitor AGI." This isn't apparent at all, given that the Control Code could be pervasively embedded and keyed to things beyond the AGI's control. The idea is to limit the AGI and

RE: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Ben Goertzel
Hi, I don't see that you've made a convincing argument that a society of AI's is safer than an individual AI. Certainly among human societies, the only analogue we have, society-level violence and madness seems even MORE common than individual-level violence and madness. Often societies

Re: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Kevin
Hello all... I was wondering what people thought the relative risks were between a super-smart AGI that cannot yet self-modify (change its own source code), and an AGI that can self-modify. Do we see inherently less risk in case one? Perhaps some "hard wired" ethics in case 1 would be much

Re: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Kevin
It seems to me that communication and "thought sharing" between various AGI's would be so intertwined that each one would become indistinguishable from the other. So in essence you still have "one" AGI. Kevin

[agi] new drafts

2003-03-03 Thread Pei Wang
I have two new drafts for comments: "Non-Axiomatic Logic", at http://www.cis.temple.edu/~pwang/drafts/NAL.pdf This is a complete description of the logic I've been working on. "A Term Logic for Cognitive Science", at http://www.cis.temple.edu/~pwang/drafts/TermLogic.pdf This is a comparison

Re: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Alan Grimes
I would point out that our legal frameworks are designed under the assumption that there is rough parity in intelligence between all actors in the system. The system breaks badly when you have extreme disparities in the intelligence of the actors because you are breaking one of the

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
(1) Since we cannot accurately predict the future implications of our actions, almost all research can lead to deadly results --- just see what has been used as weapons in the current world. If we ask for a guarantee of safety before research, then we cannot do anything. I don't think

Re: [agi] Hard Wired Switch

2003-03-03 Thread Brian Atkins
Kevin Copple wrote: Ben said, "When the system is smart enough, it will learn to outsmart the posited Control Code, and the ethics-monitor AGI." This isn't apparent at all, given that the Control Code could be pervasively embedded and keyed to things beyond the AGI's control. The idea is to

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
Extra credit: I've just read the Crichton novel PREY. Totally transparent movie-script, but a perfect textbook on how to screw up really badly. Basically the formula is 'let the military finance it'. The general public will see this inevitable movie and we will be drawn towards the moral

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
One thing I should add: It's the same hubris I mentioned in my previous message that prompted us to send out satellites effectively bearing our home address and basic physiology on a plaque in the hope that aliens would find it and come to us. Even NASA scientists seem to have no fear of

Re: [agi] Playing with fire

2003-03-03 Thread Philip Sutton
Hi Pei / Colin, Pei: This is the conclusion that I have been most afraid of from this Friendly AI discussion. Yes, AGI can be very dangerous, and I don't think any of the solutions proposed so far can eliminate the danger completely. However, I don't think this is a valid reason to

RE: [agi] cart before the horse

2003-03-03 Thread Ben Goertzel
Well, that's one hell of a good reason to slow down the whole AGI project. Doesn't it strike you that it's kind of reckless to create something that could change society/the world drastically and bring it on before society has had the time to develop some safeguards or a safety net? This

RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel
Ben, In reply to my para saying: if the one AGI goes feral, the rest of us are going to need to access the power of some pretty powerful AGIs to contain/manage the feral one. Humans have the advantage of numbers, but in the end we may not have the intellectual power or speed to counter

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
Philip: I personally think humans as a society are capable of saving themselves from their own individual and collective stupidity. I've worked explicitly on this issue for 30 years and still retain some optimism on the subject. Colin: I'm with Pei Wang. Let's explore and deal with it. OK, if

RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Ben, Ben: That paragraph gave one possible dynamic in a society of AGI's, but there are many, many other possible social dynamics. Of course. What you say is quite true. But so what? Let's go back to that one possible dynamic. Can't you bring yourself to agree that if a one-and-only

[agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Pei, I also have a very low expectation of what the current Friendly AI discussion can contribute to AGI research. OK - that's a good issue to focus on then. In an earlier post Ben described three ways that ethical systems could be facilitated: A) Explicit programming-in of ethical

FW: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Ben Goertzel
Alan, I've asked you repeatedly not to make insulting or anti-Semitic comments on this list. Yet you feel you have to keep referring to Eliezer as "the rabbi" and making other similar choice comments. This is not good! As list moderator, I am hereby forbidding you to post to the AGI list

RE: [agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Ben, I think Pei's point is related to the following point: We're now working on aspects of A) explicit programming-in of ideas and processes, B) explicit programming-in of methods specially made for the learning of ideas and processes through experience and teaching, and that until

Re: [agi] Why is multiple superintelligent AGI's safer than a singleAGI?

2003-03-03 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: Yes, I see your point now. If an AI has a percentage p chance of going feral, then in the case of a society of AI's, only p percent of them will go feral, and the odds are that other AI's will be able to stop it from doing anything bad. But in the case of only one AI,
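
A minimal sketch of the arithmetic behind this claim (not from the original message; the feral probability p, the population size n, and the function name are illustrative assumptions). Under the stated assumption that each AGI goes feral independently, the expected fraction of feral AGIs is p, and the chance that a feral majority outnumbers the rest is a tiny binomial tail:

from math import comb

def p_majority_feral(p: float, n: int) -> float:
    """Probability that more than half of n AGIs go feral, assuming each
    goes feral independently with probability p (binomial tail sum)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Example: 10 AGIs, each with a 5% independent chance of going feral.
print(p_majority_feral(0.05, 10))   # roughly 3e-6: a feral majority is extremely unlikely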

RE: [agi] What would you 'hard-wire'?

2003-03-03 Thread Philip Sutton
Ben, I can see some possible value in giving a system these goals, and giving it a strong motivation to figure out what the hell humans mean by the words care, living, etc. These rules are then really rule templates with instructions for filling them in... Yes. However, I view

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Hi Eliezer, This does not follow. If an AI has a P chance of going feral, then a society of AIs may have P chance of all simultaneously going feral. I can see your point but I don't agree with it. If General Motors churns out 100,000 identical cars with all the same characteristics

RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel
Ben Goertzel wrote: Yes, I see your point now. If an AI has a percentage p chance of going feral, then in the case of a society of AI's, only p percent of them will go feral, and the odds are that other AI's will be able to stop it from doing anything bad. But in the case of only

[agi] Singletons and multiplicities

2003-03-03 Thread Eliezer S. Yudkowsky
Philip Sutton wrote: Ben, Ben: That paragraph gave one possible dynamic in a society of AGI's, but there are many, many other possible social dynamics. Of course. What you say is quite true. But so what? Let's go back to that one possible dynamic. Can't you bring yourself to agree that if a

Re: [agi] Why is multiple superintelligent AGI's safer than a singleAGI?

2003-03-03 Thread Eliezer S. Yudkowsky
Philip Sutton wrote: Hi Eliezer, This does not follow. If an AI has a P chance of going feral, then a society of AIs may have P chance of all simultaneously going feral. I can see your point but I don't agree with it. If General Motors churns out 100,000 identical cars with all the same

RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel
Eliezer is certainly correct here -- your analogy ignores probabilistic dependency, which is crucial. Ben
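
A minimal sketch of why probabilistic dependency matters (not from the original message; the probabilities p_common and p_extra, the copy count n, and the trial count are illustrative assumptions). If the copies share a common failure mode -- the same design flaw, the same training regime -- the chance that all of them go feral at once stays close to the common-mode probability, no matter how many copies exist, instead of shrinking to p**n as the independence argument assumes:

import random

def p_all_feral_independent(p: float, n: int) -> float:
    """Closed form: probability that every one of n independent AGIs goes feral."""
    return p ** n

def p_all_feral_common_cause(p_common: float, p_extra: float, n: int,
                             trials: int = 100_000) -> float:
    """Monte Carlo estimate with a shared failure mode: a common-cause event
    (probability p_common) makes every copy go feral at once; otherwise each
    copy goes feral independently with the small residual probability p_extra."""
    hits = 0
    for _ in range(trials):
        if random.random() < p_common:
            hits += 1                          # shared flaw triggers: all fail together
        elif all(random.random() < p_extra for _ in range(n)):
            hits += 1                          # independent simultaneous failure (vanishingly rare)
    return hits / trials

# Example: 10 copies. Independence makes simultaneous failure astronomically rare;
# a 4% common-mode flaw keeps it near 4% regardless of the number of copies.
print(p_all_feral_independent(0.05, 10))          # 0.05**10, about 1e-13
print(p_all_feral_common_cause(0.04, 0.01, 10))   # about 0.04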

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Eliezer, That's because your view of this problem has automatically factored out all the common variables. All GM cars fail when dropped off a cliff. All GM cars fail when crashed at 120 mph. All GM cars fail on the moon, in space, underwater, in a five-dimensional universe. All GM cars