Hi Ben,

As far as I can work out, there are four things that could conceivably contribute to a Novamente reaching human intelligence parity:

1   the cleverness/power of the original architecture

2   the intensity, duration and effectiveness of the Novamente's learning
    after it is booted up

3   the upgrading of the architecture/code base by humans as a result of
    learning by anyone (including Novamentes).

4   the self-improvement of the architecture/code base by the Novamente
    as a result of learning by anyone (humans and Novamentes).

To what extent is the learning system of the Novamente (the current system, or the one planned for the first switched-on version) dependent on or intertwined with the capacity for a Novamente to alter its own fundamental architecture?

It seems to me that the risk of getting to the singularity (or even a dangerous earlier stage) without the human-plus-AGI community being adequately prepared and sufficiently ethically mature lies in the possibility of AGIs self-improving on an unhalted exponential trajectory.

If you could get Novamentes to human parity using strategies 1-3 only then you might be able to control the process of moving beyond human parity sufficiently to make it safe.

If getting to human parity relies on strategy 4, then the safety strategy could well be very problematic - Eliezer's Friendly AI program might need to be applied in full (i.e. developing the theory of Friendliness first and then applying "Supersaturated Friendliness", as Eliezer calls it).

What do you reckon?

Cheers, Philip
