C. David Noziglia wrote:
The problem with the issue we are discussing here is that the worst-case
scenario for handing power to unrestricted, super-capable AI entities is
very bad, indeed. So what we are looking for is not really building an
ethical structure or moral sense at all. Failure is [...]

Hi David,
The problem here, I guess, is the conflict between Platonic expectations of
perfection and the messiness of the real world.

I never said perfection, and in my book I make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very [...]

Brad Wyble wrote:
3) A society of selfish AIs may develop certain (not really
primatelike) rules for enforcing cooperative interactions among
themselves; but you cannot prove for any entropic specification, and
I will undertake to *disprove* for any clear specification, that this
creates [...]

There are simple external conditions that provoke protective tendencies in
humans, following chains of logic that seem entirely natural to us. Our
intuition that reproducing these simple external conditions will serve to
provoke protective tendencies in AIs is knowably wrong, failing an [...]