And apart from the global differences between the two types of AGI, it would be no good to try to guarantee friendliness using the kind of conventional AI system that Novamente is, because inasmuch as general goals would be encoded in such a system, they are explicitly coded as "statements" which are then interpreted by something else. To put it crudely (and to oversimplify slightly): if the goal "Be empathic to the needs of human beings" were represented just like that, as some kind of proposition, and stored at a particular location, it wouldn't take much for a hacker to get inside and change the statement to "Make [hacker's name] rich and sacrifice as much of humanity as necessary". If that were to become the AGI's top-level goal, we would then be in deep doodoo. In the system I propose, such events could not happen.

I think that this focuses on the wrong aspect. It is not the fact that the goal is explicitly encoded as a statement that is the problem -- it is the fact that it is in only one place that is dangerous. My assumption is that your system basically builds its base constraints from a huge number of examples, and that it is distributed enough that it would be difficult, if not impossible, to maliciously change enough of it to cause a problem. The fact that you're envisioning your system as not having easy-to-read statements is really orthogonal to your argument. A system that explicitly codes all of its constraints as readable statements, but still builds its base constraints from a huge number of examples, should be virtually as incorruptible as your system (with the difference being security by obscurity -- which is not a good thing to rely upon, and which also means that your system is less comprehensible).
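To make the contrast concrete, here is a toy sketch in Python. It is purely hypothetical -- the example store, the weights, and the tally rule are all invented for illustration, not taken from Novamente or from Richard's design -- but it shows why a goal held at one writable location is fragile, while a constraint aggregated from many stored examples barely moves when a handful of them are corrupted:

    from collections import Counter

    # Single-point encoding: one writable location holds the top-level goal,
    # so one malicious write replaces it entirely.
    goal = "Be empathic to the needs of human beings"
    goal = "Make [hacker's name] rich"          # the hack: done in one write

    # Distributed encoding: the operative constraint is an aggregate over
    # many independently stored examples (crude labelled judgements here).
    examples = ([("protect humans", +1)] * 10000 +
                [("harm humans", -1)] * 10000)

    def effective_constraint(examples):
        """Derive the operative rule by tallying all stored examples."""
        tally = Counter()
        for judgement, weight in examples:
            tally[judgement] += weight
        return tally

    # An attacker who flips 300 of the 20000 examples shifts the tallies
    # by a few percent; the derived constraint is unchanged.
    corrupted = list(examples)
    for i in range(300):
        corrupted[i] = ("harm humans", +1)

    print(effective_constraint(examples))    # strongly pro-"protect humans"
    print(effective_constraint(corrupted))   # nearly identical

Note that the aggregated version stays robust whether the individual examples are stored as opaque weights or as perfectly readable statements -- which is the point: readability and robustness are independent.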

----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Monday, October 01, 2007 4:53 PM
Subject: Re: [agi] Religion-free technical content



Replies to several posts, omnibus edition:

************************************************************************


Edward W. Porter wrote:
Richard and Matt,

The below is an interesting exchange.

For Richard I have the question: how is what you are proposing different from what could be done with Novamente, where, if one had hardcoded a set of top-level goals, all of the perceptual, cognitive, behavioral, and goal patterns -- and the activation of such patterns -- developed by the system would be molded not only by the probabilities of the "world" in which the system dealt, but also by how important each of those patterns had proven relative to the system's high-level goals. So in a Novamente system you would appear to have the types of biases you suggest, and those biases would greatly influence each of the millions to trillions (depending on system size) of patterns in the "cloud of concepts" that would be formed, their links, and their activation patterns.
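To make that weighting concrete, here is a toy sketch -- purely illustrative, with invented numbers and a made-up combination rule, not anything from the actual Novamente code:

    from dataclasses import dataclass

    @dataclass
    class Pattern:
        name: str
        world_probability: float  # learned frequency in the system's world
        goal_importance: float    # credit earned relative to top-level goals

    def activation(p):
        # One crude monotone blend; any such rule makes the same point:
        # both world statistics and goal relevance shape what gets used.
        return p.world_probability * p.goal_importance

    patterns = [
        Pattern("comfort-a-person", world_probability=0.6, goal_importance=0.9),
        Pattern("recite-trivia",    world_probability=0.6, goal_importance=0.1),
    ]
    for p in sorted(patterns, key=activation, reverse=True):
        print(p.name, round(activation(p), 2))

Equally common patterns end up with very different activations once the system's track record against its top-level goals is factored in.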

So, how is your system different?  What am I missing?

The system is radically different, for reasons that are not related to the goal of producing a guaranteed-friendly AGI (this is because of something that I have called the "complex systems problem", or CSP, about which I wrote a paper that is included in the proceedings of the 2006 AGI workshop).

And apart from the global differences between the two types of AGI, it would be no good to try to guarantee friendliness using the kind of conventional AI system that Novamente is, because inasmuch as general goals would be encoded in such a system, they are explicitly coded as "statements" which are then interpreted by something else. To put it crudely (and to oversimplify slightly): if the goal "Be empathic to the needs of human beings" were represented just like that, as some kind of proposition, and stored at a particular location, it wouldn't take much for a hacker to get inside and change the statement to "Make [hacker's name] rich and sacrifice as much of humanity as necessary". If that were to become the AGI's top-level goal, we would then be in deep doodoo. In the system I propose, such events could not happen.


************************************************************************

Mark Waser wrote:
>> 3) The system would actually be driven by a very smart, flexible,
>> subtle sense of 'empathy' and would not force us to do painful things
>> that were "good" for us, for the simple reason that this kind of
>> nannying would be the antithesis of really intelligent empathy.
>
> Hmmm.  My daughter hates getting vaccinations.  She's always hated them.
> Would the system let a five-year-old dictate that it not receive
> vaccinations?  How about ten-, fifteen-, twenty-, or fifty-year-olds?
> Would the answer change if vaccinations were legally required?  Assume
> that the system is the legal guardian of the five-, ten- and
> fifteen-year-olds (i.e. don't cop out and let the choice fall back on
> the parents).
>
> What if the system had to pull you out of the way of an oncoming car in
> the next 0.7 seconds with a 95% chance of breaking your arm to prevent a
> 30% chance of death?
>
> Nannying of adults is something that our society does too much of -- but
> there are places where it is appropriate.

Mark,

Now we are getting down to cases, which is good.

Answer in this case: (1) such elemental things as protection from diseases could always be engineered so as not to involve painful injections (we are assuming a superintelligent AGI, after all), and (2) even supposing that the AGI really was the guardian of a child (extremely unusual: there would be people available) AND that something analogous to a painful injection were necessary (again, I cannot think of a case where this would happen, given the existence of nanotech), THEN under those hypothetical circumstances we would be dealing with a situation where the amount of hurt was negligible and is currently agreed by the vast majority of human beings to be appropriate for a person/child who is in chancery.

Really, we are only talking about cases where the AGI feels tempted to treat adult humans as if they were children (not about actual children). I suggest we confine ourselves to those cases, simply because otherwise we are trying to solve the regular dilemmas that parents face, as if those dilemmas were somehow a special problem of the AGI. Does that make sense?


************************************************************************


BillK wrote:
> On 10/1/07, Richard Loosemore wrote:
>> 3) The system would actually be driven by a very smart, flexible,
>> subtle sense of 'empathy' and would not force us to do painful things
>> that were "good" for us, for the simple reason that this kind of
>> nannying would be the antithesis of really intelligent empathy.
>>
>> If you want, give specific cases and we will try to see how it would
>> behave.
>>
>
> This is heading straight for the eternal problem of good and evil.
>
> How does the AGI deal with bullying?
> Easy answer - It stops it.

No: not necessarily. Everyone gets *asked* how they want that handled. If people want to go and live in a clave where everyone has checked their memories at the door and is deliberately living an unusual lifestyle (say, a recreation of Ancient Rome), then they sign away their rights to AGI intervention when they go in. After they come out, they can have unpleasant memories wiped if they choose, but bullying might specifically be allowed if a person says that it is okay.

> Then you have to get into the labyrinth of hard cases. Bullying covers
> everything from gang bosses committing murder and torture, to husband
> / wife psychological abuse, to office threats and allocation of
> unpleasant jobs, to kids verbal and physical abuse, to giving orders
> which you know will lead to much unpleasantness that the victim is
> unaware of. (Plausible denial comes in here, 'who me?').  In practice
> it rapidly becomes impossible to deal with these ornery humans.

In all of these cases, the person is asked ahead of time how much defence they want. They get what they ask for. They can change their mind at any time (unless they go into a closed clave, as above). In the case of children we apply the same rules that we do now: we ask the parents, but we also apply the societal norms that *today* would mean that if a child is suffering abuse (from parents or from others), we step in (that is, _we_ the human beings) and rescue them.

Nothing is different here, except that people have more options than they had before, and no-one is subjected to unwanted nannying.

That is important: if I ask to be looked after by the AGI, that is not nannying and is beyond the scope of the question.


************************************************************************


That's all I've got time for right now.



Richard Loosemore



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


