Thanks for your comments Stan.
Consider people with antisocial personality disorder.
Many of them are intelligent and thus know all about
social norms. However, they use that knowledge to
predict and manipulate other people rather than to
guide their own behavior.
In terms of equations (2.3) and (2.4) in my book,
knowledge of social norms contributes to \rho(h)
rather than u(h). Depending on how u(h) is defined,
an AI could seem to us anywhere between the most
sadistic psychopath and the wisest, most
compassionate being.
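To make the distinction concrete, here is a minimal sketch (not from the book; the world, actions, and numbers are all invented for illustration) of two expected-utility agents that share the same predictive model \rho but differ in utility u. Both "know" the social norm; only one is guided by it.

```python
# Toy illustration: knowledge of norms lives in the predictive model rho,
# while what the agent actually pursues lives in the utility u.
# All names and values below are hypothetical.

def rho(action):
    """Predictive model: both agents know how society reacts to each action."""
    return {"cooperate": "others_trust_you",
            "defect": "others_resent_you"}[action]

def u_prosocial(outcome):
    """A utility that values others' goodwill."""
    return {"others_trust_you": 1.0, "others_resent_you": -1.0}[outcome]

def u_manipulative(outcome):
    """A utility indifferent to others, rewarding exploitation."""
    return {"others_trust_you": 0.2, "others_resent_you": 0.8}[outcome]

def best_action(u):
    """Pick the action maximizing u applied to rho's prediction."""
    return max(["cooperate", "defect"], key=lambda a: u(rho(a)))

print(best_action(u_prosocial))     # cooperate
print(best_action(u_manipulative))  # defect
```

Same \rho, different u, opposite behavior: knowing the norms is not the same as being motivated by them.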
Bill
On Thu, 13 Nov 2014, Stanley Nilsen via AGI wrote:
As I read the following section of Bill's book (pg 4 of chapter 1), I wonder
how such an intelligence can later be considered so stupid.
--------
"Because the Omniscience AI will have much greater physical capacity than human brains, its social model will be completely beyond the understanding of human minds. Asking humans to understand it would be like asking them to take the square root of billion-digit numbers in their heads. Human brains do not have enough neurons for either task. And the social model will be learned by the AI rather than designed by humans. In fact humans could not design it. Consequently, humans will not be able to design-in the types of detailed safeguards that are in the Google car."
+
"...It cannot explain to you what it has discovered about the world and then ask you for a decision. You cannot understand the intricate relationships among the billions of people who use your electronic companions. You cannot predict the subtle interactions among the spreads of different ideas."
---------
Notice that this AI has grasped all these ideas and social concepts, and we
are to believe that it somehow missed the notions of right and wrong? Are we
to believe that it reads current events and watches the news (needing that info
to predict and factor into decisions) and did not pick up on the concepts of
justice, law, rights, privilege, truth, deception, tyranny and the like? There
is no way that such an AI would not know when it is going against the rules,
laws and morals of the society it is fully immersed in. It would know what
death, destruction, oppression and suffering are all about. The point is, a
choice to "maximize" anything isn't going to override such a body of
knowledge.
The only way I see that such accidental damage could occur is if the
knowledge of the AI was "scrubbed" and left to include only what an evil
force or person wished it to consider. Does intelligence face a few moral
dilemmas? Likely, but there should be little doubt that it is wrong to blind
people and keep them drugged.
Agree? disagree? What am I missing?
stan
On 11/08/2014 04:52 AM, Tim Tyler via AGI wrote:
On 07/11/2014 22:16, Matt Mahoney wrote:
On Fri, Nov 7, 2014 at 8:18 PM, Tim Tyler via AGI <[email protected]> wrote:
in fact, it is impossible to predict the consequences of
above-human-level AI.
As Vernor Vinge (1993) wrote, the technological singularity (i.e., the advent
of far-above-human-level AI) is an "opaque wall across the future."
I think that this is nonsense. [...]
< section removed >
It's not that there's no mountain of information in the future - rather we
can make predictions with what we already have. Science gets the
low-hanging fruit first - so we already have many of the laws of
mechanics, gravity, electromagnetism, thermodynamics and evolution.
Some of this fundamental material illuminates the future of our
descendants.
There's no "opaque wall". It's more like a curtain - and as we get closer
to it we will see more and more. The idea of a "singularity" which represents
a wall to predictions about our descendants - like an "event horizon" that
cannot be seen beyond - is unhelpful, unscientific and not justified by the
facts.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/3603840-9a430058
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com