Isn't it indisputable that agency is necessarily exercised on behalf of some perceived entity (a self), and that any assessment of the "morality" of a decision is always only relative to a subjective model of "rightness"?
I'm not sure that I should dive into this, but I'm not the brightest sometimes . . . :-)
If someone else were to program a decision-making (but not conscious or self-conscious) machine to always recommend acts that you personally (Jef) would find moral, and always recommend against acts that you personally would find immoral, would that machine be acting morally?
<hopefully, we're not just debating the term agency>

This list is sponsored by AGIRI: http://www.agiri.org/email