Steve,

Some comments on your screed,


> There is NO WAY to make the inner workings of a human-level mind
> understandable to anything but a WAY bigger mind than human. Transparency
> is impossible.
>

There is no way to make the inner workings of any truly complex system
understandable to a human mind, whether it be the human body, an individual
cell, the weather, or the internet -- that is, if by "understandable" you
mean anything approaching full understanding.  But that does not mean
humans are incapable of understanding important aspects of such complex
systems.  Nor does it mean humans are incapable of learning how to predict
or affect aspects of such systems.  In fact, science has produced many
important and useful insights into such systems.  The same goes for
humanity's ability to understand certain important aspects of
human-brain-level-or-greater AGIs.


> Further, our society is screwed up as it now is **BECAUSE** of our
> dysfunctional values. Putting these into a powerful AI will only bring the
> whole thing crashing down that much sooner.
>

Humanity currently IS quite stupid and self-destructive.   (Witness Trump's
current popularity as a candidate for the most important job in the world,
based on little more than a barrage of empty, impossible-to-fulfill
promises.)  And it is not clear humanity is capable of dealing with the
powers that superintelligence can create.  But the superintelligence genie
is not going to be put back in the bottle.  There is too much money and
power to be had by developing and using it.   So let us at least try to
increase the number of intelligent humans and institutions that understand
the danger better, so humanity can at least have a chance of surviving it
for several more generations to come.


> *After years of discussions, anyone stupid enough not to see that the
> dangers in succeeding are both extreme and unavoidable is probably also too
> stupid to ever make such things work.*
>

If the people you disagree with are that stupid, you have nothing to worry
about.  Unfortunately, humanity is not too stupid to create
superintelligence and, thus, there IS something to worry about.  Humanity
has a better chance of dealing well with human+ level AGI if more minds are
aware of important aspects of how it works, what its capabilities are, what
its dangers are, and how humanity can best protect its interests given the
inevitability of its advent.







-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
