I fully agree here; AGI can be very dangerous in the wrong hands.
But the same is the case with any powerful tech. Controlling the knowledge is only a temporary measure. In fact, general wisdom says that limiting knowledge to a chosen few can be more dangerous. Power corrupts easily. Its misuse can only be diluted by spreading it so widely that more good people than bad get hold of it. Given that there is not a single way to come up with AI, it's only a matter of time before other groups figure out their own ways.
IMHO, you cannot compare s/w with nukes, because a nuke is a material thing and requires costly resources, so even if everyone knew how to make one, only a handful would be able to actually build one, as the resources required are immense. For AGI you need only a very fast computer... easy.
How do we protect the general public from misuse of AGI? Maybe the answer lies in AGI itself: make an AGI which can detect such attempts, equip the potential victims with it, and let the fight begin on equal ground. Once AGI becomes smarter than humans, only AGI will be able to save humans from AGI.
However, I predict that the immediate problem will be not security but ethics. Along the lines of the opposition faced by cloning etc., AGI will receive a lot of criticism, and some groups will even fight to stop or ban any research being done on it. Even if we only manage to make a machine resembling a two-year-old kid, the ethical problems it's going to create would be great. Can I power it off at any time? Can I make copies of it? Can I reprogram it to do what I wish? And so on... because we are dealing with a thinking and sensing being here.
Well, it's a big debate, really. To be on the safer side, I feel it's good to hold the source until you are very sure that it's safe to release it.
Sanjay
On 12/11/05, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Hi,
> Ben has been thinking about publishing his Novamente books, so he may be
> more receptive to the idea of open-sourcing his AGI.
I have no near-term plans to open-source Novamente. I think that
would be a bad idea for AGI safety reasons. I am worried that if we
truly succeed in making a human-level intelligence, and opened up the
code, some jerks might do really nasty things with it.
Publishing the book is also a safety risk, but less of one, because
replicating the code from a fairly abstract book would be pretty hard.
It's sorta like the difference between publishing a text on nuclear
physics versus publishing the exact specifications for a nuclear
bomb...
-- Ben G
-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
