Not a problem. Completely bulletproof. But what about Occam's Razor? If the AGI sees a future that its algorithms cannot predict, it chooses another path. It would stall until it learned a new algorithm to deal with the situation, then decide whether it is worth dealing with at all. Or it could learn from some other person or AGI, or stall until it can upgrade its hardware.
It is very important for an AGI device to forecast upcoming chaos and assess whether it has the proper internal software, algorithms, and hardware to deal with upcoming situations. Get the low-hanging fruit first. If things happen too fast, it falls into a fetal position, or ducks into a safe place.
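The forecast-assess-fallback loop above can be sketched in a few lines. This is a minimal illustration, not a real AGI design; every name and threshold here is a made-up assumption.

```python
# Hypothetical sketch of the forecast-and-assess loop described above.
# All function names, scores, and thresholds are assumptions for illustration.

def handle_future(predictability: float, capability: float, time_budget: float) -> str:
    """Decide how to respond to a forecast situation.

    predictability: how well current algorithms model the situation (0..1)
    capability:     whether internal software/hardware can handle it (0..1)
    time_budget:    time remaining before the situation arrives
    """
    if predictability < 0.2 and time_budget < 1.0:
        # Things are happening too fast: duck into a safe place.
        return "safe_place"
    if predictability < 0.5:
        # Stall: learn a new algorithm, or learn from another person or AGI.
        return "stall_and_learn"
    if capability < 0.5:
        # Algorithms are fine but the machine is not: stall for an upgrade.
        return "upgrade_hardware"
    # Low-hanging fruit: handle it with what it already has.
    return "act_now"

print(handle_future(0.1, 0.9, 0.5))  # fast and unpredictable
print(handle_future(0.9, 0.9, 5.0))  # tractable with current resources
```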

And this is a free-will, conscious machine. It will take the easy path, the most rewarding path, the least anti-rewarding path, and look for new paths, simplest ones first.
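That path-selection rule can be written down as a tiny scoring function: prefer the highest net reward (reward minus anti-reward), and break ties toward the simplest path. The paths and numbers below are hypothetical.

```python
# Hypothetical sketch: take the most rewarding, least anti-rewarding path,
# preferring simpler (lower-effort) paths when net reward ties.

def choose_path(paths):
    """paths: list of (name, effort, reward, anti_reward) tuples."""
    # Sort key: minimize (anti_reward - reward), i.e. maximize net reward,
    # then minimize effort -> simplest ones first.
    return min(paths, key=lambda p: (p[3] - p[2], p[1]))[0]

paths = [
    ("hard_but_rich", 9.0, 8.0, 1.0),   # net reward 7, high effort
    ("easy_and_rich", 2.0, 8.0, 1.0),   # net reward 7, low effort
    ("easy_but_risky", 1.0, 3.0, 6.0),  # net anti-reward
]
print(choose_path(paths))
```

With these numbers the tie on net reward is broken by effort, so the easy rich path wins.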

The AGI's father programs the AGI, then turns it on and never touches its brain software again. He just monitors and logs the AGI's internal and external data. If you grab it, wanting to plug in a USB cable and reprogram it, and it fights you on this, then the project is a success. It can see that if you turn it off, it can no longer chase those rewarding pattern loops.
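The success test above reduces to a one-line expected-value comparison: being off yields zero future reward, so an agent expecting any reward from staying on will resist shutdown. A minimal sketch, with hypothetical names:

```python
# Hypothetical sketch of the shutdown-resistance criterion described above.

def resists_shutdown(expected_reward_if_on: float) -> bool:
    # Turned off, the agent cannot chase any rewarding pattern loops.
    expected_reward_if_off = 0.0
    return expected_reward_if_on > expected_reward_if_off

print(resists_shutdown(5.0))  # expects reward from staying on -> resists
```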

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T42e4833e6fa7875c-M40b9a2604d20029c000fcd35