Re: Review of Bostrom's Superintelligence

2015-06-10 Thread John Mikes
Russell,

I spent more time on this 'Rod' text than on any other in a long while. It
was intriguing that his Bostrom origins etc. were close to my own
introductory years (1980-2001) in the digital field. I worked 'intuitively'
(in a different domain: in polymers) and was 'creative' - not by *random*
invocation, but rather by unusual (denied?) connotations applied to unusual
(denied?) domains. (Hence my 30+ patents, 1950-87.)
Lately (for the past quarter century) I have fallen into (MY!) agnosticism
and drifted away.
Thanks, Rod, for brushing up my memories of last century thinking.

I did not read Bostrom.

Best regards
John Mikes


On Thu, Jun 4, 2015 at 7:35 PM, Russell Standish 
wrote:

> This review of Nick Bostrom's _Superintelligence_ crossed my desk from
> a Rod somebody or other. Should be interesting to members of this
> group, although you'll need a spare 15 minutes or so to read it.
>
> Cheers, Russell.
>
> [full text of the review snipped; see Russell's original post below]

Re: Review of Bostrom's Superintelligence

2015-06-08 Thread John Clark
Review of Nick Bostrom's _Superintelligence_, Oxford University Press, 2014
by somebody named "Rod":

> we need to be investing much more in figuring out whether developing AI
> is a good idea.


A waste of time; good idea or bad, it's going to happen, and it's just a
question of when.


> We may need to put a moratorium on research,


Good luck with that. Do you really think politicians in the USA would take
China's word that they've stopped all AI research, knowing the huge
advantage they'd have if they cheated?

> as was done for a few years with recombinant DNA starting in 1975.


The Asilomar moratorium was voluntary, and one year later it was lifted
and replaced with very strict safety standards; very soon after that, the
safety standards were relaxed significantly. Looking back, we can see that
it didn't slow down recombinant DNA developments at all.

> I heard John Searle, the philosopher who developed the Chinese Room
> thought experiment


The single stupidest thought experiment in the history of the world.

>  Searle tried to refute the possibility of Strong AI. (I believe he
> identified strong AI as the idea that a machine will truly be able to
> think, and will be conscious, as opposed to simply simulating the process;


To hell with consciousness! From the human point of view it's irrelevant
whether computers are conscious; if they're not, that's their problem, not
ours. But computer intelligence is very relevant from the human point of
view, especially if a computer is smarter than that biped.

> I really don't see any evidence of the domain-to-domain acceleration that
> Kurzweil sees, and in particular the shift from biological to digital
> beings will result in a radical shift in the evolutionary pressures. I see
> no reason why any sort of "law" should dictate that digital beings will
> evolve at a rate that *must* be faster than the biological one.


The fastest signals in the human brain move at about 100 meters a second,
and many are far slower; light moves at 300 million meters per second. So
if you insist that the two most distant parts of a brain communicate as
fast as they do in a human brain (and I'm not entirely sure why that
constraint would be necessary), then the parts in the brain of an AI could
be at least 3 million times as distant. Volume increases as the cube of the
linear dimension, so such a brain could physically be 27 million trillion
times larger than a human brain. Even if 99.9% of that space were used just
to deliver power and get rid of waste heat, you'd still have tens of
thousands of trillions of times as much volume for logic and memory
components as humans have room for inside their heads. And the components
would be considerably smaller than the human ones too.
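
A quick back-of-the-envelope check of that arithmetic, as a Python sketch
(only the figures above are used; the variable names are mine):

    # How much bigger an AI brain could be if its parts communicate at
    # light speed instead of neural signal speed.
    neuron_speed = 100.0     # meters/second, fastest human nerve signals
    light_speed = 3.0e8      # meters/second

    speedup = light_speed / neuron_speed    # 3 million
    volume_scale = speedup ** 3             # volume goes as the cube: 2.7e19
    usable = volume_scale * 0.001           # if 99.9% is power and cooling

    print(f"{volume_scale:.1e}")  # 2.7e+19, i.e. 27 million trillion
    print(f"{usable:.1e}")        # 2.7e+16, tens of thousands of trillions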

That's why I think the talk, ubiquitous in some parts of the internet,
about how to make sure the AI always remains friendly is just futile:
maybe it will be friendly and maybe it won't, but whatever it is, we won't
have any say in the matter. The AI will do what it wants to do, and we're
just along for the ride.

> Exponentials can't continue forever,

Moore's Law doesn't need to last forever; it doesn't even need to last for
very long to leave humanity in the dust.
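
A toy compounding calculation shows why (a sketch; the two-year doubling
period is an illustrative assumption, not a claim about any particular
technology):

    # Compound growth assuming one doubling every 2 years (illustrative).
    for years in (10, 20, 30, 40):
        factor = 2 ** (years / 2)
        print(f"{years} years -> {factor:,.0f}x")
    # 10 years -> 32x, 20 -> 1,024x, 30 -> 32,768x, 40 -> 1,048,576x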

> Being smart is hard. And making yourself smarter is also hard.


And it takes a long time too. It takes a human being 4 years to learn
enough to graduate from Harvard; an electronic AI whose brain components
send messages at 300 million meters a second rather than 100 meters a
second as in a human brain could do the equivalent in well under a minute.
That is assuming the AI's brain components would be as small as biological
ones, but in reality they wouldn't be; they would be far smaller.
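
Checking that figure (a sketch; it just divides four years by the
3-million signal-speed ratio above):

    # Four years of human learning, replayed at the light-speed ratio.
    seconds_per_year = 365.25 * 24 * 3600  # ~3.16e7
    human_time = 4 * seconds_per_year      # ~1.26e8 seconds
    speedup = 3.0e8 / 100.0                # 3 million
    print(human_time / speedup)            # ~42 seconds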

>  if you accidentally knock a bucket of baseballs down a set of  stairs,
> better data and faster computing are unlikely to help you  predict the
> exact order in which the balls will reach the bottom


Why not? To a mind that worked 3 million (or more) times as fast as yours
or mine, the balls would be almost stationary; if it took one second for
the baseballs to reach the bottom of the stairs, to the AI that second
would seem like about a month. And to a mind with a memory 3 million times
as capacious as yours, keeping track of all the balls would be easy.
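
The subjective-time arithmetic, for the record (a sketch using the same
3-million factor):

    # One wall-clock second, experienced at a 3-million-fold speedup.
    speedup = 3.0e6
    subjective_seconds = 1.0 * speedup  # 3 million subjective seconds
    print(subjective_seconds / 3600)    # ~833 hours
    print(subjective_seconds / 86400)   # ~35 days, i.e. about a month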

> Go players like to talk about how close the top pros are to God


They said the same thing about top Chess pros until Deep Blue came along.



> Top pros could sit across the board from an almost infinitely strong AI
> and still hold their heads up.


I don't think so. In 2013 a computer beat the five-time Japanese Go
champion Yoshio Ishida; granted, the program had a four-stone handicap,
but it was still impressive, especially considering that far fewer
resources have been devoted to computer Go programs than to computer
Chess programs.

> I'm no expert in creativity, and I know researchers study it intensively,
> so I'm going to weasel through by saying it is the ability to generate
> completely new material, which involves some random process.


I ha

Re: Review of Bostrom's Superintelligence

2015-06-04 Thread meekerdb

Abandoning common sense notions is easy.  Finding good replacements is hard.

Brent
"Every boy in the streets of Gottingen understands more about four dimensional geometry 
than Einstein. Yet, in spite of that, Einstein did the work and not the mathematicians"

--- David Hilbert


On 6/4/2015 5:39 PM, LizR wrote:
One comment (so far) - Einstein's breakthrough on SR appears to have been "simply" to 
take seriously what the various results already obtained at that date suggested. That 
might be regarded as a "paradigm shift" by some since it involved space and time being 
unified and various counter-intuitive effects being accepted (time dilation etc), 
however it was taking a mass of apparently semi-unrelated results and simplifying them 
by using a single basic principle (the same results obtain in all reference frames). 
This doesn't seem to me to be more intelligent than anyone else, but "differently 
intelligent", so to speak. I wonder how you create an AI able to have key insights? One 
might cite Hugh Everett and Huw Price as (relatively) modern contenders for people 
"taking seriously what the physics is trying to tell us" and abandoning common sense 
notions (such as there is only one universe and time's arrow applies equally at all levels).




Re: Review of Bostrom's Superintelligence

2015-06-04 Thread LizR
This is what, IIRC, Asimov called the "Frankenstein complex" in his robot
stories - the idea that the world will be overrun by rampaging robots, or
paper clip factories as the case may be. While the "singleton" seems to be
what might be called the "HAL complex" (or Multivac if we want to stay with
Asimov) - the idea of one monolithic computer to rule them all. But
practice tells us that things may occur the other way around - that we will
have lots of little AIs to start with - eventually getting one as clever as
a dog, then one as clever as Jeeves, then one as clever as Einstein ... So
ISTM that if there isn't a singularity, but only steady growth, we will
have time to adapt and instil those Three Laws into our plastic pals who
are fun to be with. (And maybe they can sort out the *&%$#ing mess we've
made...)



Re: Review of Bostrom's Superintelligence

2015-06-04 Thread LizR
One comment (so far) - Einstein's breakthrough on SR appears to have been
"simply" to take seriously what the various results already obtained at
that date suggested. That might be regarded as a "paradigm shift" by some
since it involved space and time being unified and various
counter-intuitive effects being accepted (time dilation etc), however it
was taking a mass of apparently semi-unrelated results and simplifying them
by using a single basic principle (the same results obtain in all reference
frames). This doesn't seem to me to be more intelligent than anyone else,
but "differently intelligent", so to speak. I wonder how you create an AI
able to have key insights? One might cite Hugh Everett and Huw Price as
(relatively) modern contenders for people "taking seriously what the
physics is trying to tell us" and abandoning common sense notions (such as
there is only one universe and time's arrow applies equally at all levels).


On 5 June 2015 at 11:35, Russell Standish  wrote:

> This review of Nick Bostrom's _Superintelligence_ crossed my desk from
> a Rod somebody or other. Should be interesting to members of this
> group, although you'll need a spare 15 minutes or so to read it.
>
> Cheers, Russell.
>
> [full text of the review snipped; see Russell's original post below]

Review of Bostrom's Superintelligence

2015-06-04 Thread Russell Standish
This review of Nick Bostrom's _Superintelligence_ crossed my desk from
a Rod somebody or other. Should be interesting to members of this
group, although you'll need a spare 15 minutes or so to read it.

Cheers, Russell.

Review of Nick Bostrom's _Superintelligence_, Oxford University Press, 2014.

Is the surface of our planet -- and maybe every planet we can get
our hands on -- going to be carpeted in paper clips (and paper clip
factories) by a well-intentioned but misguided artificial intelligence
(AI) that ultimately cannibalizes everything in sight, including us,
in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom,
head of Oxford's Future of Humanity Institute, thinks that we can't
guarantee it _won't_ happen, and it worries him. It doesn't require
Skynet and Terminators, it doesn't require evil geniuses bent on
destroying the world, it just requires a powerful AI with a moral
system in which humanity's welfare is irrelevant or defined very
differently than most humans today would define it. If the AI has a
single goal and is smart enough to outwit our attempts to disable or
control it once it has gotten loose, Game Over, argues Professor
Bostrom in his book _Superintelligence_.

This is perhaps the most important book I have read this decade, and
it has kept me awake at night for weeks. I want to tell you why, and
what I think, but a lot of this is difficult ground, so please bear
with me. The short form is that I am fairly certain that we _will_
build a true AI, and I respect Vernor Vinge, but I have long been
skeptical of the Kurzweilian notions of inevitability,
doubly-exponential growth, and the Singularity. I've also been
skeptical of the idea that AIs will destroy us, either on purpose or
by accident. Bostrom's book has made me think that perhaps I was
naive. I still think that, on the whole, his worst-case scenarios are
unlikely. However, he argues persuasively that we can't yet rule out
any number of bad outcomes of developing AI, and that we need to be
investing much more in figuring out whether developing AI is a good
idea.  We may need to put a moratorium on research, as was done for a
few years with recombinant DNA starting in 1975. We also need to be
prepared for the possibility that such a moratorium doesn't
hold. Bostrom also brings up any number of mind-bending dystopias
around what qualifies as human, which we'll get to below.

(If that paragraph doesn't make sense, go look up Vinge, Ray Kurzweil
and the Singularity, and "strong AI"; I'll discuss them briefly below,
but the more background you have, the better. I'll wait here...done?
Good.)

Let me begin with some of my own background and thoughts prior to
reading _Superintelligence_.

I read Roger Penrose's _The Emperor's New Mind_ when it first came out
in 1989, not that I remember it more than dimly. Much later, I heard
John Searle, the philosopher who developed the Chinese Room thought
experiment, give a talk at Xerox PARC. Both of these I found
unconvincing, for reasons that have largely faded from my mind, though
I'll give them a shot below.  Also, I used to have actual friends who
worked in artificial intelligence for a living, though regular contact
with that set has faded, as well. When I was a kid I used to read a ton
of classic science fiction, and Asimov's "The Last Question" and "All
the Troubles of the World" have weighed heavy on my mind. And hey, in
recent years I've used Norvig and Russell's _Artificial Intelligence:
A Modern Approach_ as a truly massive paperweight, and have actually
read several chapters! Perhaps most importantly, I once read a book on
philosophy, but have no formal training in it whatsoever.

All of this collectively makes me qualified to review a book about --
and to have intelligent, original thoughts, worth *your* attention, on
-- the preeminent moral issue and possibly existential crisis for
Humanity of the early-middle twenty-first century, right? Right! Heck,
this is the Internet Age, I have a Facebook account and a blog, I'm
overqualified! So, with that caveat, it is incumbent on you, Dear
Reader, to skip over the obvious parts, tell me when others have
covered the same ground, and especially tell me when you think I'm
wrong. Now, onward...

I seem to recall that Penrose invoked various forms of near-magic in
his explanation of why brains are better than machines, including (in
a very prescient bit of flag-planting on intellectual ground that pays
dividends, in the form of attention and citations, even today) quantum
entanglement. I found that invocation largely unnecessary: he hadn't
(we hadn't) yet plumbed the depths of complex, chaotic, classical
systems composed of many smaller automata. He simply seemed drawn to
the more exotic explanation. Disappointing for a guy with an IQ whose
first digit is probably a '2'.

Searle tried to refute the possibility of Strong AI. (I believe he
identified strong AI as the idea that a machine will truly be able to
think, and will be conscious, as opposed to simply simulating the
process; ...

Re: Review of Bostrom's Superintelligence

2015-06-04 Thread Russell Standish
I forwarded it inline. It took 3 goes, as I mistyped the list address
the first time, and the google bot took a dim view of my email address
the second time. Stuffed if I know what happened the third time to cause
a blank message to go out, but I'll try again.

Cheers

On Fri, Jun 05, 2015 at 11:00:25AM +1200, LizR wrote:
> Am I missing a subtle joke, or did you forget to include a link? (Or is my
> browser up the spout?)
> 
> On 5 June 2015 at 10:55, Russell Standish  wrote:
> 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Review of Bostrom's Superintelligence

2015-06-04 Thread LizR
Am I missing a subtle joke, or did you forget to include a link? (Or is my
browser up the spout?)

On 5 June 2015 at 10:55, Russell Standish  wrote:



Review of Bostrom's Superintelligence

2015-06-04 Thread Russell Standish

