----- Original Message -----
From: "Jef Allbright" <[EMAIL PROTECTED]>
Sent: Thursday, June 08, 2006 10:04 PM
Subject: Re: [agi] Four axioms

> It seems to me it would be better to say that there is no absolute or
> objective good-bad because  evaluation of goodness is necessarily
> relative to the subjective values of some agency.

OK.  That formulation works for me.

> At this point I would like to propose a somewhat different metaethical
> foundation that I think may be useful, providing at least some
> contrast, if not clarity, in the ensuing discussion.
> 1. Goodness is always assessed relative to the values of a subjective
> agency.
> 2. Any agency will assess as increasingly good those actions which
> increasingly promote its values into the future.
> 3. "Good" actions will be assessed as increasingly moral as they are
> seen to work over increasing scope of agency, types of interaction and
> duration.
>
> The practical implications of this reasoning are that increasing
> awareness of (1) what works (increasingly objective
> scientific/instrumental knowledge), applied to increasing awareness of
> (2) our increasingly shared subjective values, leads to increasingly
> effective social decision-making that is seen as increasingly good
> (moral.)
>
> Therefore, we can and should rationally agree to promote the common
> good via a framework supporting  increasing awareness of (1) and (2)
> above.

OK.  My initial impression was that this is exactly isomorphic with my axioms, so I had to agree with it.  Then I realized that I did have one question for clarity --
How do you perceive "Goodness is always assessed relative to the values of a subjective agency" as being different from "Goodness is always assessed relative to the volition of a subjective agency" (which I would regard simply as a rewording of my axiom 1)?  If your argument is that our values are often better than our wishes, then I see your point, but it feels like a minor wording difference, since I regard volition as a reasoned desire rather than an idle wish (with the baggage that a value might be something imposed by G*d rather than an individual's own volition).  If you mean something else, then I don't yet understand the distinction.
 
I would have absolutely no problem with an axiom like "2. Any agency will assess as increasingly good those actions which increasingly promote its volition in the future."
 
Item 3 initially seemed a bit unnecessary, since I translated it as "3. 'Good' actions will be assessed as increasingly good as they are seen to work over increasing scope of agency, types of interaction and duration."  It seemed a bit obvious to me until I realized that it was the meat of my axiom 4 and that you really could beat someone over the head with it.  Still, I think that it's a bit too subtle for my tastes.
What's really nice about your approach is that you didn't have to make an initial declaration of what is good.  A possible downside is that I didn't find it as easy to comprehend, and I suspect this may be true of many other people.
 
Actually, now that I think more about it, the fact that you didn't have to make an initial declaration of what is good is also a downside, since it doesn't require equality and doesn't clear out the clutter of current beliefs.  As an alternative formulation, I really like it; as a primary formulation, I think it does nothing to prevent the "G*d says" arguments from the average person, since they won't be willing (rather than able) to follow the increasing-scope argument (Why DO I have to include the infidel again?  :-).
 
- - - - - - - - - - -
 
Thanks.
 
        Mark
