Bill,

A long time ago in places far away I delivered speeches at colleges
regarding the practical impossibility of building a good missile defense
system (the Strategic Defense Initiative, SDI, then popularly called "Star
Wars"). The arguments centered on the ease with which the various proposed
methods could be defeated using home-garage-level countermeasures, e.g.
putting flat conductive sides on warheads to evade radar. Still, the U.S.
government pumped millions into this.

SDI threw the Soviets into a tizzy and they spent billions on Star Wars
technology - until it bankrupted their economy. The country where bread
would ALWAYS be available suddenly had no bread to hand out, and within
weeks the USSR was GONE.

Of course the REAL goal of SDI had always been to bankrupt the Soviets. My
lectures only worked to impair that effort. I spoke the truth - but in the
process I undermined larger, better things.

Here and now we are discussing AI/AGI and in some ways things seem much the
same as with Star Wars. People are arguing the various sides of things with
MUCH greater implications. What those implications are I can hardly guess.
I certainly never saw the fall of the Soviet Union coming.

Suppose for a moment we were to announce small AI-based desert spiders that
hunt people down and kill them, and consume their bodies as fuel, and that
we are about to start dropping these on ISIS or whatever follows them in
the future. It is hard to rise from the dead when you have been turned into
vapor. Note here that victories could be achieved without a single spider
being dropped - except that some people (like me) pointing out the
impossibility of this would muck up the works.

So, I have tempered my zeal: I point out impossibilities but do NOT
campaign for or against particular paths, at least not until I am pretty
sure that not only am I right, but also that I am not screwing up something
MUCH bigger.

AGI is NOT going to become a reality anytime soon - probably not within
my lifetime. I can see that, but I am NOT about to campaign on that point.

Similarly, I see GOOD political reasons to keep research secret, e.g. so
that the above-mentioned spiders could credibly be announced.

Often very simplistic methods can create seemingly intelligent military
hardware, e.g. bounding mines that jump out of the ground and destroy
tanks. A few such devices, plus a few rumors about spiders, might thin the
ranks of ISIS in short order.

Opening up AI research would work to defeat such bluffs.

Steve
===============



On Wed, Oct 21, 2015 at 7:32 AM, Bill Hibbard <[email protected]> wrote:

> The New York Times, Washington Post and Huffington
> Post didn't want this, minus its last paragraph, as
> an op-ed:
>
> https://sites.google.com/site/whibbard/g/transparency
>
> Perhaps people on this mailing list will find it
> interesting.
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.


