> I can spot the problem in AIXI because I have practice looking for silent
> failures, because I have an underlying theory that makes it immediately
> obvious which useful properties are formally missing from AIXI, and
> because I have a specific fleshed-out idea for how to create moral systems
> and I can see AIXI doesn't work that way. Is it really all that
> implausible that you'd need to reach that point before being able to
> create a transhuman Novamente? Is it really so implausible that AI
> morality is difficult enough to require at least one completely dedicated
> specialist?
>
> -- Eliezer S. Yudkowsky    http://singinst.org/
There's no question you've thought a lot more about AI morality than I have... and I've thought about it a fair bit. When Novamente gets to the point that its morality is a significant issue, I'll be happy to get you involved in the process of teaching the system, carefully studying the design and implementation, etc.

-- Ben G
