On 29/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:

I. Any instance of rational choice is about an agent acting so as to
promote its own present values into the future.  The agent has a model
of its reality, and this model will contain representations of the
perceived values of other agents, but it is always only the agent's
own values that are subjectively promoted.  A choice is considered
"good" to the extent that it is expected (by the agent) to promote its
present values into the future.


So whether I'm being selfish or altruistic from an external perspective, from
my own perspective I can really only be selfish, since I am promoting my own
values wherever that leads me. Does this mean that my own values are by
definition "good", but not necessarily "moral", going by what you say below?

II. A choice is considered increasingly "moral" (or "right") to the
extent that it is assessed as promoting an increasingly shared context
of decision-making (e.g. involving more values, the values of more
agents) over increasing scope of consequences (e.g. over more time,
more agents, more types of interactions).  In other words, a choice is
considered "moral" to the extent that it is seen as "good" over
increasing context of decision-making and increasing scope of
consequences.


That has a utilitarian ring to it.

III.  Due to our inherent subjectivity with regard to anticipating the
extended consequences of our actions, "increasing scope of
consequences" refers to the power and general applicability of the
*principles* we apply to promoting our values, rather than any
anticipated *ends.*


But the principle, however broad it is, assumes some end, doesn't it?

IV. Due to our inherent subjectivity with regard to our role in the
larger system, our values lead to choices that lead to actions that
affect our environment feeding back to us, thus modifying our values.
This feedback process thrives on increasingly divergent expressions of
increasingly convergent subjective values.  This implies a higher
level dynamic similar to our ideas of cooperation, synergy, or
positive-sumness.


I think I see. Is this a description of how ethics actually functions, or a
prescription for how it ought to function? It would seem that this feedback
mechanism will in the long run find the "optimal" ethics, although this
process could be sped up by starting from a better base.
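
To make that feedback idea concrete for myself, here is a toy sketch in
Python of the familiar iterated prisoner's dilemma, with "defect" standing
in for lying and "cooperate" for honest dealing. This is my own
illustration, not anything from your post: the payoff numbers, strategy
names and round count are arbitrary assumptions, but the pattern is the
point.

    # Toy illustration only: payoffs and strategies are assumptions.
    # 'C' = cooperate (deal honestly), 'D' = defect (lie).
    PAYOFF = {
        ('C', 'C'): (3, 3),
        ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),
    }

    def tit_for_tat(history):
        # Cooperate first, then mirror the other agent's previous move.
        return 'C' if not history else history[-1][1]

    def always_defect(history):
        # Defect regardless of history.
        return 'D'

    def play(strategy_a, strategy_b, rounds=100):
        # Total payoff for each strategy over repeated interaction.
        history_a, history_b = [], []   # entries are (my_move, their_move)
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strategy_a(history_a), strategy_b(history_b)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            history_a.append((a, b))
            history_b.append((b, a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300)
    print(play(tit_for_tat, always_defect))    # (99, 104)
    print(play(always_defect, always_defect))  # (100, 100)

Two cooperators end up far better off over many rounds than a defector
exploiting a cooperator, or two defectors, which seems to be the
positive-sum dynamic your point IV is gesturing at.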

It's not that lying to others is "bad" because one doesn't like being
lied to, but rather, lying is bad in principle because it's
anti-cooperative over many scales of interaction, and therefore in a
very powerful but indirect way leads to diminishment, rather than
promotion, of one's values (those that work) into the future.  Or,
conversely: one acts to promote one's values, and in the bigger picture
this is best achieved via principles of cooperation (entailing not
lying) with others who hold similar models of the world.

It's not that eating meat is "bad" because one certainly wouldn't want
to be eaten oneself, but rather, that eating others is
anti-cooperative to the extent that others are similar to oneself,
leading in principle to diminishment, rather than promotion, of the
values that one would like to see in the future created by one's
choices.


Sure, but an essential part of the badness of lying to and eating people is
that they are in fact people. It wouldn't be the same if we were talking
about lying to and eating vegetables, for example. If AIs were more like
vegetables than people in their reaction to being lied to or eaten, then, all
else being equal, it wouldn't be so bad to lie to them or eat them.


--
Stathis Papaioannou
