Ben,
> I can see some possible value in giving a system these goals, and
> giving it a strong motivation to figure out what the hell humans mean
> by the words "care", "living", etc. These rules are then really "rule
> templates" with instructions for filling them in...
Yes.
> However, I view this as only a guide to learning ethical rules... the
> real rules the system learns will be based on the meanings with which
> the system fills in the words in the given template rules... For
> example, the system's idea of what humans mean by "living" may not be
> accurate, or may be biased in some way (since after all humans have a
> rather ambiguous shifty definition of "living").
Yes again.
Picking up on your point: when AGIs are first created, most humans will
not see them as living. So the AGIs will need to be able to extend the
concept of life beyond where most humans locate it.
Cheers, Philip