HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS


Wow, there it is. That just about says it all. Take the content of that
concise evaluation and go on the road. That is what AGI needs. For general
PR purposes it doesn’t have to be much more detailed than that. Talk shows
and news articles are unlikely to cover even that much. I have done
national media and this is the kind of story they would love. Some
additional hooks, events and visuals would also be effective.



From this and prior emails, do I detect a pattern?

-When I am an AGI booster, pat me on the head and throw me a bone. (I love
it. If I had a tail it would wag.)

-When I am an AGI detractor, politely and intelligently challenge me.  (I
don’t love it as much, but it is interesting and thought-provoking.)



So given your apparent bias, I would like to agree in part and disagree in
part.



I would love to be a big-time booster for AGI.  I want to be on one of the
teams that makes it happen, in some capacity.  I want to be one of the first
people to ride an AGI dream machine: something that can talk with you like
the wisest, most intelligent, and funniest of people; that can be like the
most brilliant of teachers, with all of the world’s knowledge likely to be
of any interest already encoded in deep structure; and that can not only
talk brilliantly in real time but also simultaneously show real-time images,
graphs, and photorealistic animations as it talks.



I am 59, so I want this to start happening soon.  I am convinced it can
happen.  I am convinced I basically know how to do it (at a high level,
with a lot of things far from totally filled in).  But I think others, like
Ben Goertzel, are probably significantly ahead of me.  And I have no
experience leading a software team, which I would need, because I have
never written a program more than 100 pages long, and that was twenty years
ago, when I programmed Dragon Systems’ first general-purpose dictating
machine.



So I am an AGI booster, but there needs to be serious discussion of AGI’s
threats and how we can deal with them, at least within the AGI community,
to which the readership of this list is probably pretty much limited.



Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture.



True, the threat is pure conjecture, if by that you mean reasoning without
proof.  But that is not the proper standard for judging threats.  If you
had lived your life disregarding all threats except those that came with
proof, you almost certainly would have died in early childhood.



 All new technologies have dangers, just like life in general. We can’t
know the kinds of personal problems and danger we will face in our future.




True, many other new technologies involve threats, and certainly among
them are nanotechnology and biotechnology, which have the potential for
severe threats.  But there is something particularly threatening about a
technology that can purposely try to outwit us; that, particularly if
networked, could easily be millions of times more intelligent than we are;
and that would be able to understand and hack the lesser computer
intelligences our lives depend on millions of times faster than any current
team of humans could.  Just as it is hard to imagine a world in which
humans long stayed enslaved to cows, it is hard to imagine one in which
machines much brighter than we are stayed enslaved to us.



...in the end you have to follow the road ahead.



Totally agree.



There is no turning back at this point.  The wisp of smoke that will
become the genie is already out of the bottle.  Assuming Moore’s law
keeps on keeping on for another couple of generations, within five to seven
years starting to build a powerful AGI will probably be within the capacity
of half the world’s governments and all of the world’s thousand largest
companies.   So to keep the world safe we will need safer AIs to protect
us from the type that the Leona Helmsleys and Kim Jong-ils of the world are
likely to make.



We will be better informed and more adept at dealing with the inevitable
problems the future holds as they arise.



This is particularly true if there is a special emphasis on the problem.
That is why it should be discussed.



I have said for years that learning how to defend humans against machines
should be rewarded as one of mankind’s highest callings.  That is why,
despite the fact that I disagree with Eliezer Yudkowsky on certain points,
I have tremendous respect for the fact that he is probably the first human
to dedicate himself to this highest calling.



Of course the proper use of intelligence augmentation and collective human
intelligence greatly increases our chances, particularly if, through them,
we can both better learn and understand how to control superhuman
intelligences and develop a fairly enforceable system for substantially
limiting superintelligences to what are considered their safer forms and
uses.



We must have a certain amount of faith in our ability to meet the
challenges of the future



There is good reason to doubt that, unless we significantly improve our
collective intelligence, we humans will be able to resist the urge to
unleash the dangerous power of more free-minded machines for selfish
short-term gain.  It is not even clear mankind can get control of the
global warming issue.  We have never been able to get control over
mankind’s tendency toward war.  We have held off nuclear war for sixty
years, an impressive accomplishment, but it is not clear, in the age of
fundamentalism, nuclear proliferation, and stateless terrorism, how much
longer that will be true.



or we will stagnate in the past cowering in fear. AGI has the potential of
being hugely beneficial to humanity. Should we outlaw its development just
because it could possibly be used inappropriately?



As I said above, I don’t think there is any turning back at this point.
We should move forward, seeking to maximize the tremendous potential
benefits and minimize the threats.



If we have a relatively just human society, the benefits will be both
important and many.  That is, as long as we can keep the threats at bay.
I am certain the benefits will come and that there will be many of them,
assuming Moore’s law keeps on keeping on.



I don’t spend that much time talking about the benefits because, for me,
they are not in dispute.  But when it comes to HOW TO CREATE THE BUZZ THAT
BRINGS THE BUCKS, I should have emphasized them more.



Fear has never stopped technological development before and it is unlikely
to stop it this time.



That means the more dangerous, mind-of-their-own AIs, provided they promise
a significant useful advantage to someone for a few years, are unlikely to
be stopped.  This emphasizes the threat.



What needs to be emphasized is the many ways AGI can be limited and
controlled. First of all most infrastructure technology can be controlled
by regular AI. In other words, an effective firewall between the more
powerful AGI systems and human infrastructure could possibly be created.
There are many other possible strategies for controlling any dangers
represented by AGI in the future. In fact AGI itself can help us to
develop them.



I agree, as indicated by comments above.



But remember, for AGIs to achieve their greatest promise, they have to be
able to program, and if they can program there is an increased chance they
could hack, and they would be brilliant hackers.  The war between the good
(as defined by the interests of humanity) and bad sides of the AGI force
may well be the final conflict of human existence.  As I said above,
learning how to defend human intelligence from machine intelligence,
particularly through intelligence augmentation (IA) and collective
intelligence, should become one of the most rewarded of human callings.



In any case this is a technology that is too important to outlaw or just
pass up.



Agreed, as stated above.



These are important questions and unless the AGI community gets out in
front of the issue we will be at the mercy of the media and the opposition
to set the framework of the debate and to create the public stereotype of
AGI. The one who gets there first has a tremendous advantage in
establishing the public image.



No lie!  That is why I think it is important that we focus particularly
hard on the threats now.  Achieving many of the benefits will be the easy
part.  Dealing with the threats is the hard part, and we should be
thinking about it now.  This is not only to learn how to best reduce such
threats, but also to help clarify how to best describe them to those
outside the field.  For example, this conversation is helping to clarify
my views on both of these subjects at once.



I think it is likely the mainstream media would make a mockery of the
discussion of these issues.  I think the religious right and most of
mainstream America would be opposed to AGI if they knew roughly what we
were talking about.  A large part of me was opposed to AI until I realized
it was unstoppable.  A large part of me is still deeply disturbed by it.
Thus, you are right to say there are dangers to the field of AGI in having
a discussion like that in our recent posts.  But there is also a danger in
not having such discussions.



THIS IS THE PUBLIC IMAGE THEY ARE TRYING TO PAINT OF YOU NOW

>Some critics have mocked singularists for their obsession with
"techno-salvation" and "techno-holocaust" — or what some wags have called
the coming "nerdocalypse." Their predictions are grounded as much in
science fiction as science, the detractors claim, and may never come to
pass. – Associated Press

IT’S BOUND TO GET WORSE.



For years people made fun of the people who talked about global warming,
calling them names like “tree huggers.”  So should people have stopped
talking about the problem?






I know I keep posting this AP quote, but it proves better than anything the
need for you guys to wake up and take some action.



So what are we to do?



Studiously ignore talking about a very serious threat?  You tell me.  I
want this field to be funded.  I realize from this conversation that I
should put more emphasis on the positives, which I largely take for
granted.  But how are we likely to deal well with very real threats, such
as the fact that these machines will be able to put most people in America
out of anything like their current work, unless we discuss and think about
them?



I would like to hear your answer.



Don Detrich




Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Don Detrich - PoolDraw [mailto:[EMAIL PROTECTED]
Sent: Friday, September 28, 2007 4:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS



 HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS



Wow, there it is. That just about says it all. Take the content of that
concise evaluation and go on the road. That is what AGI needs. For general
PR purposes it doesn’t have to be much more detailed than that. Talk shows
and news articles are unlikely to cover even that much. I have done
national media and this is the kind of story they would love. Some
additional hooks, events and visuals would also be effective.



>> And now for the buzz buster...

AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us <<

Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture. All new technologies have dangers, just like life
in general. We can’t know the kinds of personal problems and danger we
will face in our future. We can be careful not to take unnecessary risks,
but in the end you have to follow the road ahead. We will be better
informed and more adept at dealing with the inevitable problems the
future holds as they arise. We must have a certain amount of faith in our
ability to meet the challenges of the future or we will stagnate in the
past cowering in fear. AGI has the potential of being hugely beneficial to
humanity. Should we outlaw its development just because it could possibly
be used inappropriately? Fear has never stopped technological development
before and it is unlikely to stop it this time.



What needs to be emphasized is the many ways AGI can be limited and
controlled. First of all most infrastructure technology can be controlled
by regular AI. In other words, an effective firewall between the more
powerful AGI systems and human infrastructure could possibly be created.
There are many other possible strategies for controlling any dangers
represented by AGI in the future. In fact AGI itself can help us to
develop them. In any case this is a technology that is too important to
outlaw or just pass up.



These are important questions and unless the AGI community gets out in
front of the issue we will be at the mercy of the media and the opposition
to set the framework of the debate and to create the public stereotype of
AGI. The one who gets there first has a tremendous advantage in
establishing the public image.



THIS IS THE PUBLIC IMAGE THEY ARE TRYING TO PAINT OF YOU NOW

>Some critics have mocked singularists for their obsession with
"techno-salvation" and "techno-holocaust" — or what some wags have called
the coming "nerdocalypse." Their predictions are grounded as much in
science fiction as science, the detractors claim, and may never come to
pass. – Associated Press

IT’S BOUND TO GET WORSE.



I know I keep posting this AP quote, but it proves better than anything the
need for you guys to wake up and take some action.



Don Detrich





-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48002975-85a363
