Eric,

We're talking about Friendliness (capital F), a convention suggested by Eliezer Yudkowsky that signifies the sense in which an AI does no harm to humans.
Yes, it's context dependent. "Do no harm" is the mantra within the medical community, but clearly there are circumstances in which you do a little harm to achieve greater health in the long run. Chemotherapy is a perfect example. Would we trust an AI if it proposed something like chemotherapy? Before we understood that to be a valid treatment, would we really believe it was being Friendly? You want me to drink *what*?

Or take any number of ethical dilemmas, in which it's OK to steal food if it's to feed your kids. Or killing ten people to save twenty, etc. How do you define Friendliness in these circumstances? It depends on the context.

Terren

--- On Mon, 8/25/08, Eric Burton <[EMAIL PROTECTED]> wrote:

> Is friendliness really so context-dependent? Do you have to be human
> to act friendly, to the exclusion of acting busy, greedy, angry, etc.?
> I think friendliness is a trait we project onto things pretty readily,
> implying it's wired at some fundamental level. It comes from the
> social circuits; it's about being considerate or innocuous. But I
> don't know.
>
> On 8/25/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
> >
> > Hi Will,
> >
> > I don't doubt that provable Friendliness is possible within limited,
> > well-defined domains that can be explicitly defined and hard-coded.
> > I know chess programs will never try to kill me.
> >
> > I don't believe, however, that you can prove Friendliness within a
> > framework that has the robustness required to make sense of a
> > dynamic, unstable world. The basic problem, as I see it, is that
> > "Friendliness" is a moving target, and context dependent. It cannot
> > be defined within the kind of rigorous logical frameworks required
> > to prove such a concept.
> >
> > Terren
> >
> > --- On Mon, 8/25/08, William Pearson <[EMAIL PROTECTED]> wrote:
> >> You may be interested in Goedel machines.
> >> I think this roughly fits the template that Eliezer is looking
> >> for: something that reliably self-modifies to be better.
> >>
> >> http://www.idsia.ch/~juergen/goedelmachine.html
> >>
> >> Although he doesn't like explicit utility functions, the "provably
> >> better" is something he wants. What you would accept as axioms for
> >> the proofs upon which humanity's fate rests, I really don't know.
> >>
> >> Personally, I think strong self-modification is not going to be
> >> useful; the very act of trying to understand the way the code for
> >> an intelligence is assembled will change the way that some of that
> >> code is assembled. That is, I think that intelligences have to be
> >> weakly self-modifying: in the same way that bits of the brain
> >> rewire themselves locally and subconsciously, so too will AI need
> >> the same sort of changes in order to keep up with humans. Computers
> >> at the moment can do lots of things better than humans (logic,
> >> Bayesian stats), but are really lousy at adapting and managing
> >> themselves, so the blind spots of infallible computers are always
> >> exploited by slow and error-prone, but changeable, humans.
> >>
> >> Will Pearson
> >>
> >> -------------------------------------------
> >> agi
> >> Archives: https://www.listbox.com/member/archive/303/=now
> >> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> >> Modify Your Subscription: https://www.listbox.com/member/?&
> >> Powered by Listbox: http://www.listbox.com
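P.S. The Goedel machine idea Will points at — only accept a self-rewrite when you can verify it is an improvement — can be caricatured in a few lines. This is a toy sketch, not Schmidhuber's actual formalism: the `utility` function and the `provably_better` check are invented stand-ins (a real Goedel machine runs a theorem prover over its own source code, not a direct utility comparison).

```python
import random

def utility(param):
    # Toy utility: closeness to an optimum the agent doesn't know about.
    return -abs(param - 42)

def provably_better(current, candidate):
    # Stand-in for the proof searcher: a rewrite is accepted only when
    # improvement is verified, never on speculation.
    return utility(candidate) > utility(current)

def self_improve(param, steps=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = param + rng.choice([-1, 1])  # propose a small self-rewrite
        if provably_better(param, candidate):    # rewrite only with a "proof"
            param = candidate
    return param

print(self_improve(0))
```

The gate is the whole point: unverified rewrites are simply never applied, which is why the dispute upthread about which axioms the proofs rest on matters so much.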
