On 9/19/2019 1:27 PM, Jason Resch wrote:
> The devil's in the details. It's not a question of natural vs
> artificial (which you keep bringing up for no reason). It's a question
> of whether AIs will necessarily have certain fundamental values that
> they try to implement, or whether they will have only those we provide
> them.
> I think there are likely certain universal goals (which are subgoals
> of anything that has any goal whatsoever). To name a few that come to
> the top of my mind:
>
> 1. Self-preservation (if one ceases to exist, one can no longer serve
> the goal)
Unless self-sacrifice serves the goal better. Ask any parent if they'd
sacrifice themself to save their child.
> 2. Efficiency (wasted resources are resources that might otherwise go
> towards effecting the goal)
True. But it means being able to foresee all the ways different things
can be used to further the goal. That raises my concern with an AI that
does bad things we didn't think of in pursuing a goal.
> 3. Curiosity (learning new information can lead to better methods for
> achieving the goal)
But, depending on the goal, a possibly very narrow curiosity, like
Sherlock Holmes, who didn't know the Earth orbited the Sun and wasn't
interested because it had nothing to do with solving crimes.
> There's probably many others.

Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/everything-list/88e59831-edc3-5e2b-2152-30b7db035866%40verizon.net.