Okay, I'll see if I can grasp 2.3 and 2.4.
Perhaps you can lessen my pain by telling me which equations address a
risk factor?
I see lots of "utility" in the thoughts of this AI, but would it not
have some kind of risk-related calculations?
On 11/14/2014 04:07 AM, Bill Hibbard via AGI wrote:
Thanks for your comments Stan.
Consider people with antisocial personality disorder.
Many of them are intelligent and thus know all about
social norms. However, they use that knowledge to
predict and manipulate other people rather than to
guide their own behavior.
In terms of equations (2.3) and (2.4) in my book,
knowledge of social norms contributes to \rho(h)
rather than u(h). Depending on how u(h) is defined,
an AI could seem to us anywhere between the most
sadistic psychopath and the wisest, most
compassionate being.
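To make the distinction concrete, here is an illustrative sketch (my own toy example, not the formalism from the book): an expected-utility agent combines a learned predictive model, standing in for \rho(h), with a utility function u(h). The norm knowledge lives entirely in the model, so two agents with identical knowledge but different utilities behave very differently. All names and numbers below are invented for illustration.

```python
# Toy expected-utility agent: knowledge of how actions affect people
# lives in the predictive model rho; values live only in the utility u.

def expected_utility(action, rho, u, outcomes):
    """Sum of u(h) weighted by the model's probability rho(h, action)."""
    return sum(rho(h, action) * u(h) for h in outcomes)

def best_action(actions, rho, u, outcomes):
    """Pick the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(a, rho, u, outcomes))

# Toy world: the agent can cooperate with or exploit a person.
outcomes = ["person_helped", "person_harmed"]
actions = ["cooperate", "exploit"]

# The model fully "knows" the social norm: exploiting predictably harms.
def rho(h, action):
    if action == "exploit":
        return 0.9 if h == "person_harmed" else 0.1
    return 0.9 if h == "person_helped" else 0.1

# A selfish utility: exploiting pays more, harm to others is irrelevant.
def u_selfish(h):
    return 2.0 if h == "person_harmed" else 1.0

# A compassionate utility: harm is strongly negative.
def u_compassionate(h):
    return 1.0 if h == "person_helped" else -10.0

print(best_action(actions, rho, u_selfish, outcomes))        # exploit
print(best_action(actions, rho, u_compassionate, outcomes))  # cooperate
```

Both agents share the same model rho, i.e. the same "knowledge of norms"; only u differs, which is the point of the psychopath analogy above.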
Bill
On Thu, 13 Nov 2014, Stanley Nilsen via AGI wrote:
As I read the following section of Bill's book (pg 4 of chapter 1), I
wonder how such an intelligence can later be considered so stupid?
--------
"Because the Omniscience AI will have much greater physical capacity
than human brains, its social model will be completely beyond the
understanding of human minds. Asking humans to understand it would be
like asking them to take the square root of billion-digit numbers in
their heads. Human brains do not have enough neurons for either task.
And the social model will be learned by the AI rather than designed by
humans. In fact humans could not design it. Consequently, humans will
not be able to design-in the types of detailed safeguards that are in
the Google car."
+
"...It cannot explain to you what it has discovered about the world
and then ask you for a decision. You cannot understand the intricate
relationships among the billions of people who use your electronic
companions. You cannot predict the subtle interactions among the
spreads of different ideas."
---------
Notice that this AI has grasped all these ideas and social concepts,
and yet we are to believe that it somehow missed the notions of right
and wrong? Are we to believe that it reads current events and watches
the news (needing that information to predict and factor into its
decisions) and did not pick up on the concepts of justice, law,
rights, privilege, truth, deception, tyranny and the like? There is
no way that such an AI would not know when it is going against the
rules, laws and morals of the society in which it is fully immersed.
It would know what death, destruction, oppression and suffering are
all about. The point is, a choice to "maximize" anything isn't going
to override such a body of knowledge.
The only way I see that such accidental damage could occur is if the
knowledge of the AI was "scrubbed" and reduced to only what an evil
force or person wished it to consider. Does intelligence face a few
moral dilemmas? Likely, but there should be little doubt that it is
wrong to blind people and keep them drugged.
Agree? disagree? What am I missing?
stan