Stathis Papaioannou wrote:
> On 18/08/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> Objective values are NOT specifications of what agents SHOULD do.
>> They are simply explanatory principles.  The analogy here is with the
>> laws of physics.  The laws of physics *per se* are NOT descriptions of
>> future states of matter.  The descriptions of the future states of
>> matter are *implied by* the laws of physics, but the laws of physics
>> themselves are not the descriptions.  You don't need to specify future
>> states of matter to understand the laws of physics.  By analogy, the
>> objective laws of morality are NOT specifications of optimization
>> targets.  These specifications are *implied by the laws* of morality,
>> but you can understand the laws of morality well without any knowledge
>> of optimization targets.
>> Thus it simply isn't true that you need to precisely specify an
>> optimization target ( a 'goal') for an effective agent (for instance
>> an AI).  Again, consider the analogy with the laws of physics.
>> Imperfect knowledge of the laws of physics doesn't prevent scientists
>> from building scientific tools to better understand the laws of
>> physics.   This is because the laws of physics are explanatory
>> principles, NOT direct specifications of future states of matter.
>> Similarly, an agent (for instance an AI)  does not require a precisely
>> specified goal, since imperfect knowledge of objective laws of
>> morality is sufficient to produce behaviour which leads to more
>> accurate knowledge.  Again, the  objective laws of morality are NOT
>> optimization targets, but explanatory principles.
>> The other claim of the objective value sceptics was that proposed
>> objective values can't be empirically tested.  Wrong.  Again, the
>> misunderstanding stems from the mistaken idea that objective values
>> would be optimization targets.  They are not.  They are, as explained,
>> explanatory principles.  And these principles CAN be tested.  The test
>> is the extent to which these principles can be used to understand
>> agent motivations - in the sense of emotional reactions to social
>> events.  If an agent experiences a negative emotional reaction, mark
>> the event as 'agent sees it as bad'.  If an agent experiences a
>> positive emotional reaction, mark the event as 'agent sees it as
>> good'.  Different agents have different emotional reactions to the
>> same event, but that doesn't mean there isn't a commonality averaged
>> across many events and agents.  A successful 'theory of objective
>> values' would abstract out this commonality to explain why agents
>> experienced generic negative or positive emotions to generic events.
>> And this would be *indirectly* testable by empirical means.
> This all makes sense if you are referring to the values of a
> particular entity. Objectively, the entity has certain values and we
> can use empirical means to determine what these values are. However,
> if I like red and you like blue, how do we decide which colour is
> objectively better?

Marc refers to "a commonality averaged across many events and agents", so 
apparently he has in mind a residue of consensus or near consensus.  Color 
preferences might average out to nil except in narrow circumstances, e.g. 
"Green people are bad." or "Ferraris should be red."  So "objective" really 
means "intersubjective agreement" among humans.  I wonder, though, how big a 
sample is needed to qualify as "objective"?  Everybody?  Including children?  In a 
lot of the world women would be excluded from the count.  What about animals?

>> Finally, the proof that objective values exist is quite simple.
>> Without them, there simply could be no explanation of agent
>> motivations.  

So you would say that the actions of, say, a serial killer can only be explained 
by pointing to some aspect of his values that we share, e.g. sexual 

>> A complete physical description of an agent is NOT an
>> explanation of the agent's teleological properties (ie the agent
>> motivations).  

You might, with great advances in neuroscience, infer what values an agent 
holds from the physical description.  That would be explanation in one sense.  
In general there is no such thing as "the explanation" of something.  An 
explanation must start with something you understand or accept and show how 
something you didn't understand follows.  So there can be different 
explanations depending on where you start and the level of the thing to be 
explained.
>> The teleological properties of agents (their goals and
>> motivations) simply are not physical.  For sure, they are dependent on
>> and reside in physical processes, but they are not identical to these
>> physical processes.  This is because physical causal processes are
>> concrete, whereas teleological properties cannot be measured
>> *directly* with physical devices (they are abstract).
>> The whole basis of the scientific world view is that things have
>> objective explanations.  

Here too "objective" means something like "intersubjective agreement".  The 
conservation laws of physics can be derived from invariance under change of 
point of view of the observer.

>> Physical properties have objective
>> explanations (the laws of physics).  Teleological properties (such as
>> agent motivations) are not identical to physical properties.

But there's not as much intersubjective agreement here as in physics.  Some 
actions are motivated by religious piety, some by biological hunger.

>> Something needs to explain these teleological properties.  QED
>> objective 'laws of teleology' (objective values) have to exist.

In one sense  of explanation, motivations are explicable by evolution.  If your 
ancestors didn't love their children you wouldn't be here.

> You could make a similar claim for the abstract quality "redness",
> which is associated with light of a certain wavelength but is not the
> same thing as it. But it doesn't seem right to me to consider
> "redness" as having a separate objective existence of its own; it's
> just a name we apply to a physical phenomenon.
>> What forms would objective values take?  As explained, these would NOT
>> be 'optimization targets' (goals or rules of the form 'you should do
>> X').  

If they are not 'targets' in some sense, how are they motivators?

>> They couldn't be, because ethical rules differ according to
>> culture and are made by humans.

But if they differ how do they fit into "a commonality averaged across many 
events and agents"?

Brent Meeker 
"If I had known then what I know now, I would have made the same
mistakes sooner."
   --- Robert Half

>> What they have to be are inert EXPLANATORY PRINCIPLES, taking the
>> form:  'Beauty has abstract properties A B C D E F G'.  'Liberty has
>> abstract properties A B C D E F G' etc.  Nonetheless, as
>> explained, these abstract specifications would still be amenable to
>> indirect empirical testing to the extent that they could be used to
>> predict agent emotional reactions to social events.
