> Objective values are NOT specifications of what agents SHOULD do.
> They are simply explanatory principles.  The analogy here is with the
> laws of physics.  The laws of physics *per se* are NOT descriptions of
> future states of matter.  The descriptions of the future states of
> matter are *implied by* the laws of physics, but the laws of physics
> themselves are not the descriptions.  You don't need to specify future
> states of matter to understand the laws of physics.  By analogy, the
> objective laws of morality are NOT specifications of optimization
> targets.  These specifications are *implied by the laws* of morality,
> but you can understand the laws of morality well without any knowledge
> of optimization targets.
> Thus it simply isn't true that you need to precisely specify an
> optimization target (a 'goal') for an effective agent (for instance
> an AI).  Again, consider the analogy with the laws of physics.
> Imperfect knowledge of the laws of physics doesn't prevent scientists
> from building scientific tools to better understand those laws.  This
> is because the laws of physics are explanatory principles, NOT direct
> specifications of future states of matter.  Similarly, an agent (for
> instance an AI) does not require a precisely specified goal, since
> imperfect knowledge of the objective laws of morality is sufficient
> to produce behaviour which leads to more accurate knowledge.  Again,
> the objective laws of morality are NOT optimization targets, but
> explanatory principles.
> The other claim of the objective value sceptics was that proposed
> objective values can't be empirically tested.  Wrong.  Again, the
> misunderstanding stems from the mistaken idea that objective values
> would be optimization targets.  They are not.  They are, as explained,
> explanatory principles.  And these principles CAN be tested.  The test
> is the extent to which these principles can be used to understand
> agent motivations - in the sense of emotional reactions to social
> events.  If an agent experiences a negative emotional reaction, mark
> the event as 'agent sees it as bad'.  If an agent experiences a
> positive emotional reaction, mark the event as 'agent sees it as
> good'.  Different agents have different emotional reactions to the
> same event, but that doesn't mean there isn't a commonality averaged
> across many events and agents.  A successful 'theory of objective
> values' would abstract out this commonality to explain why agents
> experienced generic negative or positive emotions to generic events.
> And this would be *indirectly* testable by empirical means.
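
As a concrete illustration of the averaging step described above, here
is a minimal Python sketch.  The data, the event labels and the +1/-1
valence encoding are hypothetical assumptions introduced purely for
illustration, not anything given in the argument:

    # Hypothetical observations: (agent_id, event_label, valence),
    # where valence is +1 for a positive emotional reaction and -1
    # for a negative one.
    from collections import defaultdict
    from statistics import mean

    observations = [
        ("agent_1", "betrayal", -1),
        ("agent_2", "betrayal", -1),
        ("agent_3", "betrayal", +1),   # agents can disagree on an event
        ("agent_1", "act_of_generosity", +1),
        ("agent_2", "act_of_generosity", +1),
        ("agent_3", "act_of_generosity", -1),
    ]

    # Group valences by event and average across agents; this average
    # is the 'commonality' a theory of objective values would have to
    # explain.
    by_event = defaultdict(list)
    for agent, event, valence in observations:
        by_event[event].append(valence)

    commonality = {event: mean(vals) for event, vals in by_event.items()}
    for event, score in commonality.items():
        print(f"{event}: mean valence {score:+.2f}")

A candidate theory would then be judged, indirectly, by how well its
predictions track these averaged reactions.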

This all makes sense if you are referring to the values of a
particular entity. Objectively, the entity has certain values and we
can use empirical means to determine what these values are. However,
if I like red and you like blue, how do we decide which colour is
objectively better?

> Finally, the proof that objective values exist is quite simple.
> Without them, there simply could be no explanation of agent
> motivations.  A complete physical description of an agent is NOT an
> explanation of the agent's teleological properties (i.e. the agent's
> motivations).  The teleological properties of agents (their goals and
> motivations) simply are not physical.  For sure, they are dependent on
> and reside in physical processes, but they are not identical to these
> physical processes.  This is because physical causal processes are
> concrete, whereas teleological properties cannot be measured
> *directly* with physical devices (they are abstract).
> The whole basis of the scientific world view is that things have
> objective explanations.  Physical properties have objective
> explanations (the laws of physics).  Teleological properties (such as
> agent motivations) are not identical to physical properties.
> Something needs to explain these teleological properties.  QED:
> objective 'laws of teleology' (objective values) have to exist.

You could make a similar claim for the abstract quality "redness",
which is associated with light of a certain wavelength but is not the
same thing as it. But it doesn't seem right to me to consider
"redness" as having a separate objective existence of its own; it's
just a name we apply to a physical phenomenon.

> What forms would objective values take?  As explained, these would NOT
> be 'optimization targets' (goals or rules of the form 'you should do
> X').  They couldn't be, because ethical rules differ according to
> culture and are made by humans.
> What they have to be are inert EXPLANATORY PRINCIPLES, taking the
> form: 'Beauty has abstract properties A B C D E F G'.  'Liberty has
> abstract properties A B C D E F G', etc.  Nonetheless, as
> explained, these abstract specifications would still be amenable to
> indirect empirical testing to the extent that they could be used to
> predict agent emotional reactions to social events.
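
For what it's worth, here is a toy Python sketch of how such an
explanatory principle might be represented and used to predict an
emotional reaction.  The property names, weights and the example event
are invented for illustration only:

    # A principle represented as a bundle of abstract properties with
    # illustrative weights.
    liberty = {"autonomy": 1.0, "absence_of_coercion": 1.0,
               "self_direction": 0.5}

    def predicted_valence(event_features, principle):
        """Toy prediction: +1 if the event expresses the principle's
        properties on balance, -1 if it suppresses them."""
        score = sum(event_features.get(p, 0.0) * w
                    for p, w in principle.items())
        return +1 if score > 0 else -1

    # Hypothetical event: an arbitrary arrest suppresses autonomy and
    # freedom from coercion, so the predicted reaction is negative.
    arbitrary_arrest = {"autonomy": -1.0, "absence_of_coercion": -1.0}
    print(predicted_valence(arbitrary_arrest, liberty))   # prints -1

Comparing such predictions with the averaged reactions of many agents
would be the kind of indirect empirical test described above.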

Stathis Papaioannou
