Tim Freeman writes:
> >Let's take Novamente as an example. ... It cannot improve itself
> >until the following things happen:
> >
> >1) It acquires the knowledge and skills to become a competent
> >   programmer, a task that takes a human many years of directed
> >   training and practical experience.
> >
> >2) It is given access to its own implementation and permission to alter it.
> >
> >3) It understands its own implementation well enough to make a helpful change.
> >...
>
> I agree that resource #1, competent programming, is essential for any
> interesting takeoff scenario. I don't think the other two matter,
> though.
Ok, this alternative scenario -- in which Novamente secretly reinvents the
theoretical foundations needed for AGI development, designs its successor from
those first principles, and somehow hijacks an equivalent or superior
supercomputer on which to instantiate the de novo design and surreptitiously
train it to superhuman capacity -- should also be protected against. It's a
fairly ridiculous scenario, but it should be mentioned for completeness.
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email