Ben, you and I have a long-standing disagreement on a certain issue which
impacts the survival of all life on Earth. I know you're probably bored
with it by now, but I hope you can understand why, given my views, I keep
returning to it, and find a little tolerance for my doing so.
The issue
I can spot the problem in AIXI because I have practice looking for
silent failures, because I have an underlying theory that makes it
immediately obvious which useful properties are formally missing from
AIXI, and because I have a specific fleshed-out idea for how to create
moral systems.
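For concreteness, AIXI's entire decision procedure is a single expectimax
expression over the Solomonoff mixture. This is the standard definition from
Hutter's work, not anything specific to this thread; notation: x_i = o_i r_i
are the percepts (observation plus reward), m is the horizon, U a universal
machine, and l(q) the length of program q:

```latex
a_k := \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, x_1 \dots x_m} 2^{-\ell(q)}
```

Every term in this expression refers only to the reward numbers r_i arriving
on the input channel; nothing in the formalism refers to what the programmers
intended those numbers to stand for. That gap is the kind of formally missing
property at issue here.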
Your intuitions say... I am trying to summarize my impression of your
viewpoint; please feel free to correct me... AI morality is a matter of
experiential learning, not just for the AI, but for the programmers.
Also, we plan to start Novamente off with some initial goals embodying
ethical
Hi,
2) If you get the deep theory wrong, there is a strong possibility of a
silent catastrophic failure: the AI appears to be learning everything
just fine, and both you and the AI are apparently making all kinds of
fascinating discoveries about AI morality, and everything seems to be
Eliezer S. Yudkowsky wrote:
1) AI morality is an extremely deep and nonobvious challenge which has
no significant probability of going right by accident.
Ben Goertzel wrote:
Your intuitions say... I am trying to summarize my impression of your
viewpoint; please feel free to correct me... AI morality is a matter of
experiential learning, not just for the AI, but for the programmers. To
teach an AI morality you must give it the right feedback.
This is slightly off-topic but no more so than the rest of the thread...
1) That it is selfishly pragmatic for a superintelligence to deal with
humans economically rather than converting them to computronium.
For convenience, let's rephrase this
the majority of arbitrarily generated
Jonathan Standley wrote:
Now here is my question; it's going to sound silly, but there is quite
a bit behind it:
Of what use is computronium to a superintelligence?
If the superintelligence perceives a need for vast computational
resources, then computronium would indeed be very useful.