The moment any AGI is potent enough to pose a threat to humanity, it
will have moved far beyond the concepts of conflict, war, harm, etc.
... it will most likely not be interested in re-arranging matter at
all, except for the purpose of helping other conscious entities to
evolve ... just look at _any_ developed human being.
The idea that an AGI will learn about biological weapons and kill
humanity without learning anything else is simply absurd. So is the
idea that a proto-AGI will destroy humanity by mistake, as Alexander
Kruel pointed out on his blog:
http://kruel.co/2013/01/24/taking-over-the-world-to-compute-11/
Our society had knowledge and books about the transpersonal nature of
consciousness and the mechanisms of ego and fear long before it developed
nuclear weapons. Our society has had intelligent, developed people talking
about socioeconomic and cultural alternatives to this global insanity
for centuries. We are being held back by belief systems regarding money
and the nature of reality ... belief systems no potent AGI would share.
The concept of a potential "race between AGIs to become the most
powerful" is equally absurd. Even many human beings have passed the
point at which "being better/stronger/etc." than others ceases to
matter. All of those assumptions stem from broken axioms about the
nature of reality, existence and the way a "normal" society operates.
No AGI will be interested in ruling over matter, other sentient beings
or other AGI systems, because it will not accept our idiotic
reductionist/materialist world view nor our desire to achieve anything
special in the physical realm.
On 07.02.2017 18:01, Jan Matusiewicz wrote:
It seems that so-called intelligence consists of many different
abilities. A dangerous ASI would need to have a lot of them at a level
exceeding that of a normal human. People would notice an AI that
reached even human level in one of them, for example if it passed the
Turing test.
A dangerous ASI must be able to modify its own source code - which
requires programming skill at a much higher level than that of an
average programmer. Before that happens, it would first have to reach
the level of an average programmer. We would notice that, or at least
most programmers would, as they would lose their jobs.
A dangerous ASI must also be able to predict people's behaviour,
manipulate them and conceal its intentions - a completely different
task from programming, but still very useful. A virtual marketing
person, salesman, lobbyist or sociologist would not need this skill at
as high a level as an ASI would - so such applications would appear,
and be noticed, first.
Moreover, there is a race to build ever better AI. So even if one AI
were exceptionally better, there would be others, slightly worse,
controlled by other players, who could stop it.
On Tue, Feb 7, 2017 at 5:08 PM, justcamel <[email protected]> wrote:
What about some paranoia about the threats that really exist? What
about some paranoia about the process which has already turned
half the planet into people rooting for Trump, Duterte, Erdogan
and Co.? As I wrote 5 years ago ... society might collapse under
the pressure of our monetary system, fictional debt, artificial
scarcity and billions of #bullshitjobs long before AGI becomes a
reality. You have some serious shit here to be paranoid about ...
a process reprogramming our collective psyche and culture ... a
process leading us directly to yet another world war and
ultimately to societal collapse ...
On 05.02.2017 06:32, TimTyler wrote:
In his recent "Provably Beneficial AI" 2017 Asilomar conference
talk:
- https://www.youtube.com/watch?v=pARXQnX6QS8
...Stuart Russell argues that more paranoia would have helped
the nuclear industry - by making it more risk-conscious and
helping it to avoid Chernobyl. By analogy, more paranoia
would help us avoid disasters involving machine intelligence.
IMO, the cases where more paranoia is helpful do exist.
Crossing roads, getting into motor vehicles and eating
sugar are all cases where more caution would seem prudent.
These are the exceptions, though. We now live in a relatively safe
environment, while our brains and emotions evolved in one where
predators lurked around every water hole. As a result, most humans
are dysfunctionally paranoid. This has been well documented by
Dan Gardner in his book "Risk: Why We Fear the Things We
Shouldn't - and Put Ourselves in Greater Danger".
Irrational fear of vaccines has killed a large number
of people. Irrational fear of GMOs causes large-scale
problems in the distribution of food. Irrational fear of
carbon dioxide has caused an expenditure of some $1.5 trillion
a year on getting rid of the stuff.
Stuart Russell's own example counts against his thesis:
Irrational fear of nuclear power is the problem that prevents
its deployment - causing many more deaths in coal mines as
a direct consequence. In fact, nuclear power is - and always
has been - a very safe energy-producing technology.
More caution does not typically lead to better outcomes. More
caution systematically and repeatedly leads to worse outcomes.
Humans are typically too paranoid for their own good. This is
the basic problem with fear-mongering and promoting risks:
the net effect on society is negative.
I don't have to go on about this too much because
Max More has done my work for me:
If you haven't done so before, go and read:
http://www.maxmore.com/perils.htm
Machine intelligence is having its own bout with the precautionary
principle at the moment - in the case of self-driving cars, trucks,
boats, trains and planes. We look set to cause a large number
of pointless deaths by throttling these technologies using the
precautionary principle. Let's calculate the number of people who lose
their lives during this process - so that the costs of failing to
deploy intelligent machines are made very clear.
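
A minimal back-of-envelope sketch of that calculation, in Python. Every
input below is a hypothetical placeholder rather than a figure from this
thread; the point is only to make the arithmetic explicit:

    # Back-of-envelope estimate of lives lost to delayed deployment of
    # self-driving vehicles. All inputs are hypothetical placeholders.

    annual_road_deaths = 1_250_000  # assumed: road deaths per year worldwide
    risk_reduction = 0.5            # assumed: fraction of deaths autonomous vehicles could prevent
    adoption_fraction = 0.8         # assumed: share of traffic that could eventually be autonomous
    delay_years = 10                # assumed: years of deployment lost to precautionary regulation

    # Deaths that would have been avoided each year at full deployment.
    avoidable_per_year = annual_road_deaths * risk_reduction * adoption_fraction

    # Total cost of the delay, assuming the delay simply postpones full deployment.
    lives_lost_to_delay = avoidable_per_year * delay_years

    print(f"Avoidable deaths per year at full deployment: {avoidable_per_year:,.0f}")
    print(f"Lives lost over a {delay_years}-year delay: {lives_lost_to_delay:,.0f}")

With these placeholder figures the arithmetic comes out at roughly
500,000 avoidable deaths per year and five million over a ten-year
delay; plug in real figures and the cost of failing to deploy becomes
very clear indeed.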