Hi David
What of the possibility, Ben, of an Asimov-like reaction to thinking
machines that compete with humans? It's the kind of dumb
Man-Was-Not-Meant-to-Go-There scenario we see all the time in Sci-Fi
Channel productions, but it is plausible, especially in a world...
Is anybody working on building ethical capacity into AGI from the
ground up?
As I mentioned to Ben yesterday, AGIs without ethics could end up
being the next decade's e-viruses (on steroids).
Cheers, Philip
My thoughts on this are at
www.goertzel.org/dynapsyc/2002/AIMorality.htm
Ben Goertzel wrote:
What if iterative self-revision causes the system's goal G to drift
over time...
I think this is inevitable - it's just evolution continuing as it always
will. The key issue, then, is what processes can be set in train to
operate over time to keep evolution...
One trouble with this endeavor is that AGI is a fuzzy set...
However, I'd be quite interested to see this list, even so.
In fact, I think it'd be more valuable simply to see a generic list of all
AGI projects, commercial or not.
If anyone wants to create such a list, I'll be happy to...