On 5/28/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:


On 28/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:

> > Before you consider whether killing the machine would be bad, you have to
> > consider whether the machine minds being killed, and how much it minds
> > being killed. You can't actually prove that death is bad as a mathematical
> > theorem; it is something that has to be specifically programmed, in the
> > case of living things by evolution.
>
> You're perpetuating a popular and pervasive moral fallacy here.
>
> The assumption that the moral rightness of a decision is tied to
> another's "personhood" and/or preferences is only an evolved heuristic
> based on its effectiveness in terms of promoting positive-sum
> interactions between similar agents.
>
> Any decision is always only a function of the decider in terms of
> promoting its own values.
>
> The morality of terminating a machine intelligence (or a person)
> depends not on the preference or intensity of preference of the object
> entity, but is a function of the decision-making context and expected
> scope of consequences of the principle(s) behind such a choice.
>
> To the extent terminating the object entity would be expected to
> promote the decider's values then the decision will be considered
> "good."
>
> To the extent such a "good" decision has agreement over a larger
> context of social decision-making, and to the extent the desired
> values are expected to be promoted over large scope, the decision will
> be considered "moral."


Could you give an example of how this reasoning would apply, say in the case
of humans eating meat?

Yes, but as you and I know from repeated experience in email
discussion, more coherent does not necessarily imply more compelling.

One of the rationalizations offered for not eating meat is that it's
immoral on the basis that the animal would not want to be eaten, just
as I would not want to be eaten. Applying the heuristic of the Golden
Rule, or Kant's Categorical Imperative, or simply the innate repulsion
felt in response to the suggestion of being eaten oneself, the moral
case seems obvious.

My point is not that the above reasoning is wrong, but that it's only
a heuristic, so its applicability is not general, but limited and
optimized for a particular class of interactions -- namely
interactions between agents having highly similar characteristics and
capabilities.

An example less laden with obscuring emotion has to do with the
morality of lying.  We teach our young children the simple heuristic
that "lying is wrong."  If they ask "why?", we might respond that we
know it's wrong because we know we wouldn't want others to lie to us.
The fallacy is in believing this is a general truth rather than a
simplified heuristic.

"Daddy, what's 'heuristic'?"

"Why Suzy, it's like a 'rule of thumb', an approach tending to be
effective within the bounds of limited time and computational
resources, but lacking effective extensibility outside that range.
Most of our cognitive capabilities are heuristic and thus best suited
to circumstances similar to our environment of evolutionary
adaptation."

"Thank you Daddy.  What's 'approach tending to be effective...'?"

"Well, Suzy, it basically means just a simple rule that works most of the time."

"So you mean lying is bad?"

"Yes, Suzy."

The case for "lying is bad" breaks down when the relationship becomes
more asymmetrical.  For example, if an armed intruder were to enter
your house and demand you tell where the kids are hiding, virtually
all of us would agree that you should lie.

Now, some people, steeped in their childhood training, may explain
this as "Lying is indeed bad, but protecting my children is more
important, for a greater good."  Such explanations are common, but
incoherent.  There is nothing wrong with lying in such a case; on the
contrary, it is the *right* thing to do.

So what is the "bigger picture", more coherent explanation?  Below is
an attempt to convey the idea as simply as possible within this email
forum, at the cost of some rigor and completeness.

I. Any instance of rational choice is about an agent acting so as to
promote its own present values into the future.  The agent has a model
of its reality, and this model will contain representations of the
perceived values of other agents, but it is always only the agent's
own values that are subjectively promoted.  A choice is considered
"good" to the extent that it is expected (by the agent) to promote its
present values into the future.

II. A choice is considered increasingly "moral" (or "right") to the
extent that it is assessed as promoting an increasingly shared context
of decision-making (e.g. involving more values, the values of more
agents) over increasing scope of consequences (e.g. over more time,
more agents, more types of interactions).  In other words, a choice is
considered "moral" to the extent that it is seen as "good" over
increasing context of decision-making and increasing scope of
consequences.

III.  Due to our inherent subjectivity with regard to anticipating the
extended consequences of our actions, "increasing scope of
consequences" refers to the power and general applicability of the
*principles* we apply to promoting our values, rather than any
anticipated *ends.*

IV. Due to our inherent subjectivity with regard to our role in the
larger system, our values lead to choices that lead to actions that
affect our environment feeding back to us, thus modifying our values.
This feedback process thrives on increasingly divergent expressions of
increasingly convergent subjective values.  This implies a higher
level dynamic similar to our ideas of cooperation, synergy, or
positive-sumness.
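
At the risk of over-formalizing a philosophical point, here is a minimal
toy sketch in Python of how the "good"/"moral" distinction in I and II
might be modeled.  Everything in it -- the Agent class, the goodness and
morality functions, the value names and numeric weights -- is hypothetical
and invented purely to illustrate the structure of the claim, not its
substance.

    # Toy illustration of points I and II: "good" is relative to the
    # decider's own values; "moral" is "good" assessed over a wider,
    # shared context and a larger scope of consequences.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Agent:
        # For this sketch, an agent is just a set of weighted values.
        values: Dict[str, float]

    def goodness(agent: Agent, expected_effects: Dict[str, float]) -> float:
        # I. A choice is "good" to the extent the agent expects it to
        # promote the agent's own present values into the future.
        return sum(weight * expected_effects.get(value, 0.0)
                   for value, weight in agent.values.items())

    def morality(agents: List[Agent], expected_effects: Dict[str, float],
                 scope: float) -> float:
        # II. A choice is judged increasingly "moral" to the extent it is
        # assessed as good over an increasingly shared context (more
        # agents, more values) and an increasing scope of consequences.
        shared = sum(goodness(a, expected_effects) for a in agents) / len(agents)
        return shared * scope

    # Example: lying to an armed intruder to protect one's children.
    parent = Agent({"child_safety": 1.0, "honesty": 0.3})
    community = [parent,
                 Agent({"child_safety": 0.9, "honesty": 0.5}),
                 Agent({"child_safety": 1.0, "honesty": 0.4})]
    lie_effects = {"child_safety": 1.0, "honesty": -1.0}

    print(goodness(parent, lie_effects))          # good for the decider
    print(morality(community, lie_effects, 2.0))  # still assessed as moral
                                                  # over the wider context

The point of the sketch is only that the two assessments are computed
differently: "good" consults one agent's values, while "moral" consults an
increasingly shared context and scope, which is why the lie in the
intruder case comes out both good and moral without any appeal to "a
greater good" overriding a prohibition.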

[I apologize in advance for the level of abstraction necessary to
express this concept here.]

It's not that lying to others is "bad" because one doesn't like being
lied to; rather, lying is bad in principle because it's
anti-cooperative over many scales of interaction, and therefore in a
very powerful but indirect way leads to diminishment, rather than
promotion, of one's values (those that work) into the future.  Put
conversely: one acts to promote one's values, and in the bigger picture
this is best achieved via principles of cooperation (entailing not
lying) with others who hold similar models of the world.

It's not that eating meat is "bad" because one certainly wouldn't want
to be eaten oneself, but rather, that eating others is
anti-cooperative to the extent that others are similar to oneself,
leading in principle to diminishment, rather than promotion, of the
values that one would like to see in the future created by one's
choices.

I hope this is useful despite its abstraction.  I look forward to your
comments or questions.

- Jef
