To me this approach sounds like you are trying to outsmart the AGI; to somehow constrain it so that it is forced into being nice to people... or to Ben at least. I don't think this kind of approach will work, for the simple reason that you'll never be able to outsmart a super intelligence.
The trick will be, I think, to dig right down to the deepest essence of the AGI and make sure that it would never want to harm humanity in the first place. I think that's really the only place the danger can be stopped. Trying to control the AGI at higher levels seems doomed to fail to me, as the AGI will simply be far too smart to be restrained by any system that a human could devise.
Exactly what this means is of course hard to establish in the absence of an obviously correct AGI design... but it's probably still worth thinking about the problem sooner rather than later.
I think part of the solution will be to control, at least early on while the AGI is learning, its interaction with humans. I read Mary Shelley's Frankenstein recently and thought that it is still quite relevant in this respect. Frankenstein succeeded in creating a living being that was larger, stronger and more nimble than any man, and rather intelligent as well. His "monster" had no hatred of people to start with; it was just confused and scared. However, rather than taking responsibility for his creation, Frankenstein tried to hide from what he had done and left it to roam free in the Swiss wilderness, tormented by various people, until it eventually turned into something rather nasty. By the time Frankenstein finally decided that his monster needed to be stopped, it was rather too late: he was really no match for his own creation...
Shane
Kevin Copple wrote:
Here's an idea (if it hasn't already been considered before):
Each executing component of an AGI has Control Code. This Control Code monitors a Big Red Switch. If the switch is turned on, execution proceeds normally. If not, all execution by the component stops.
The Big Red Switch could be a heart monitor on Ben's chest. The AGI better keep Ben alive! And if Ben gets angry with the AGI, he can remove the heart monitor, so the AGI better keep Ben happy also.
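Concretely, the basic mechanism might look something like the rough Python sketch below. Everything here is illustrative rather than a real design: the heartbeat source, the timeout, and all the names are made up. The key property is that it is a dead-man's switch, so silence from the monitor counts as "off".

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a pulse before we halt (arbitrary)

class BigRedSwitch:
    """Tracks the last time a heartbeat was received from the monitor."""
    def __init__(self):
        self.last_beat = time.monotonic()

    def beat(self):
        # Called whenever the external monitor (e.g. the heart monitor)
        # reports a pulse.
        self.last_beat = time.monotonic()

    def is_on(self):
        # Dead-man's switch: a stale or missing heartbeat counts as "off".
        return time.monotonic() - self.last_beat < HEARTBEAT_TIMEOUT

def run_component(switch, work_steps):
    """Execute a component's work one step at a time, re-checking the
    switch before every step so a halt takes effect quickly."""
    for step in work_steps:
        if not switch.is_on():
            raise SystemExit("Big Red Switch is off: halting component")
        step()

The design choice that matters is checking the switch at every step rather than once at startup; otherwise a long-running component could keep going indefinitely after the switch goes off.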
Several other features could be thrown in:

1. Components will check that calling components have the active Control Code that every component is expected to have.

2. The Control Code checks that certain parameters are not exceeded, such as AGI memory storage and computational resources.

3. The Big Red Switch monitors not only Ben's heart, but other control parameters as well.

4. The Big Red Switch monitors another AGI, which has a focused purpose of looking for trouble with the first AGI.
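A rough sketch of features 1 and 2 follows, under the same caveat that everything here is made up for illustration: the shared key stands in for whatever scheme actually proves a caller's Control Code is active, and the memory cap is an arbitrary number. (The resource module is POSIX-only, and ru_maxrss is reported in kilobytes on Linux but bytes on macOS, so a real version would need platform handling.)

import hmac
import hashlib
import resource  # POSIX-only

CONTROL_KEY = b"shared-secret"   # placeholder; real use needs proper key management
MAX_MEMORY_BYTES = 8 * 2**30     # cap on AGI memory use (feature 2, arbitrary)

def control_token(component_name):
    # Feature 1: a token that only components with active Control Code
    # (i.e. with access to CONTROL_KEY) can produce.
    return hmac.new(CONTROL_KEY, component_name.encode(), hashlib.sha256).digest()

def verify_caller(caller_name, token):
    # Constant-time comparison to check the caller's token.
    return hmac.compare_digest(token, control_token(caller_name))

def check_limits():
    # Feature 2: refuse to run past the resource cap.
    # ru_maxrss is in kilobytes on Linux; adjust per platform.
    used = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
    if used > MAX_MEMORY_BYTES:
        raise SystemExit("Control Code: memory cap exceeded")

def guarded_call(caller_name, token, fn, *args):
    # Every inter-component call goes through the Control Code.
    if not verify_caller(caller_name, token):
        raise SystemExit("Control Code: caller has no active Control Code")
    check_limits()
    return fn(*args)

Features 3 and 4 would extend the BigRedSwitch sketch above so that is_on() also consults other control parameters and a watchdog AGI, not just the heartbeat.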
I imagine that the Control Code would require only a tiny slice of the AGI's resources, yet still be effective. Implementation details would naturally require a lot of thought and effort, but I think the concept of a built-in "virus" could work. Maybe "immune system" would be a more appealing term.
Kevin Copple
