> A more accurate understanding of "morality" or decision-making seen as
> "right", and extensible beyond the EEA to our increasingly complex
> world might be something like the following:
>
> Decisions are seen as increasingly moral to the extent that they enact
> principles assessed as promoting an increasing context of increasingly
> coherent values over increasing scope of consequences.
OK. I would contend that a machine can be programmed to make decisions to
"enact principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences" and that it can be
programmed in this fashion without it attaining consciousness.
You did say "a machine that has been programmed to carry out acts which others
have decided are moral . . . is not displaying moral agency", but I interpreted
this as the machine merely following rules encoding what the human has already
decided counts as "enacting principles assessed . . ." (i.e., the machine is
not doing the actual morality checking itself).
So . . . my next two questions are:
a. Do you believe that a machine programmed to make decisions to "enact
principles assessed as promoting an increasing context of increasingly coherent
values over increasing scope of consequences" (I assume that it has/needs an
awesome knowledge base and very sophisticated rules and evaluation criteria) is
still not acting morally? (And, if so, why?)
b. Or do you believe that it is not possible to program a machine in this
fashion without giving it consciousness?
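To make question (a) concrete, here is a minimal, purely hypothetical sketch of the kind of machine I mean -- the names, the scoring scheme, and the example values are all my own invention, not any existing system. It mechanically scores candidate decisions by how coherently they promote values and over how wide a scope of consequences, with no claim of consciousness anywhere:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A candidate act, annotated by a (human-built) knowledge base."""
    name: str
    values_promoted: set = field(default_factory=set)  # values the act is assessed to promote
    values_violated: set = field(default_factory=set)  # values the act is assessed to undermine
    scope: int = 1  # how many agents/timescales the assessment covers

def moral_score(d: Decision) -> float:
    """Toy criterion: more coherent values over wider scope scores higher.

    Coherence is crudely approximated as promoted-minus-violated values,
    multiplied by scope. This illustrates rule-following evaluation,
    not a serious theory of morality.
    """
    coherence = len(d.values_promoted) - len(d.values_violated)
    return coherence * d.scope

def choose(decisions: list) -> Decision:
    """Pick the highest-scoring decision -- pure mechanism, no awareness."""
    return max(decisions, key=moral_score)

options = [
    Decision("lie to spare feelings", {"kindness"}, {"honesty"}, scope=2),
    Decision("tell the truth tactfully", {"honesty", "kindness"}, set(), scope=5),
]
print(choose(options).name)  # -> tell the truth tactfully
```

The point of the sketch is that every evaluative judgment (which values an act promotes, how wide its scope is) was made in advance by the humans who built the annotations -- which is exactly the setup my questions above are probing.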
Also, BTW, with this definition of morality, I would argue that it is a very
rare human who makes moral decisions any appreciable percentage of the time
(and those who do have ingrained it as reflex -- so do those reflexes count as
moral decisions? Or are they not moral, since they're not conscious decisions
at the time of choice? :-)
Mark
----- Original Message -----
From: "Jef Allbright" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, June 05, 2007 5:45 PM
Subject: Re: [agi] Pure reason is a disease.
> On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> > I would not claim that agency requires consciousness; it is necessary
>> > only that an agent acts on its environment so as to minimize the
>> > difference between the external environment and its internal model of
>> > the preferred environment
>>
>> OK.
>>
>> > Moral agency, however, requires both agency and self-awareness. Moral
>> > agency is not about the acting but the deciding
>>
>> So you're saying that deciding requires self-awareness?
>
> No, I'm saying that **moral** decision-making requires self-awareness.
>
>
>> > This requirement of expanded decision-making context is what makes the
>> > difference between what is seen as merely "good" (to an individual)
>> > and what is seen as "right" or "moral" (to a group). Morality is a
>> > function of a group, not of an individual. The difference entails
>> > **agreement**, thus decision-making context greater than a single
>> > agent, thus recognition of self in order to recognize the existence of
>> > the greater context including both self and other agency.
>>
>> So you're saying that if you act morally without recognizing the greater
>> context then you are not acting morally (i.e. you are acting amorally --
>> without morals -- as opposed to immorally -- against morals).
>
> Yes, a machine that has been programmed to carry out acts which others
> have decided are moral, or a human who follows religious (or military)
> imperatives, is not displaying moral agency.
>
>
>> I would then argue that we humans *rarely* recognize this greater context --
>> and then most frequently act upon this realization for the wrong reasons
>> (i.e. fear of ostracism, punishment, etc.) instead of "moral" reasons
>> because realistically most of us are hard-wired by evolution to feel in
>> accordance with most of what is regarded as moral (with the exceptions often
>> being psychopaths).
>
> Yes! Our present-day moral agency is limited due to what we might
> lump under the term "lack of awareness." Most of what is presently
> considered "morality" is actually only distilled patterns of
> cooperative behavior that worked in the environment of evolutionary
> adaptation, now encoded into our innate biological preferences as well
> as cultural artifacts such as the Ten Commandments.
>
> A more accurate understanding of "morality" or decision-making seen as
> "right", and extensible beyond the EEA to our increasingly complex
> world might be something like the following:
>
> Decisions are seen as increasingly moral to the extent that they enact
> principles assessed as promoting an increasing context of increasingly
> coherent values over increasing scope of consequences.
>
> For the sake of brevity here I'll resist the temptation to forestall
> some anticipated objections.
>
> - Jef
>
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e