[agi] Re: Two draft papers: AI and existential risk; heuristics and biases

2006-06-15 Thread Bill Hibbard
Eliezer Yudkowsky wrote:
> Bill Hibbard wrote:
> > Eliezer, I don't think it inappropriate to cite a problem that is general to supervised learning and reinforcement learning, when your proposal is, in general, to use supervised learning and reinforcement learning. You can always appeal to a different

[agi] Re: Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Bill Hibbard
Eliezer, I don't think it inappropriate to cite a problem that is general to supervised learning and reinforcement learning, when your proposal is, in general, to use supervised learning and reinforcement learning. You can always appeal to a different algorithm or a different implementation that, in some
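
[Editor's sketch, not part of the original thread.] The argument above is about the class of system, not any one implementation: if a supervised model of the desired outcome supplies the reward signal for a reinforcement learner, an objection to that structure applies whatever concrete algorithms fill the two roles. The sketch below assumes only that broad shape; the names approval_model, ReinforcementLearner, and environment are hypothetical stand-ins, not anything proposed in the thread.

    import random

    # Hypothetical stand-ins: any supervised model and any reinforcement
    # learner could fill these two roles; the names are illustrative only.

    def approval_model(observation):
        """Supervised proxy for the intended outcome (a learned reward model).
        Toy scorer: rewards large observation values."""
        return 1.0 if observation > 0.8 else 0.0

    class ReinforcementLearner:
        """Minimal learner: nudges a single action parameter toward
        whatever the proxy model scores highly."""
        def __init__(self):
            self.action = 0.0

        def act(self):
            # Explore around the current action.
            return self.action + random.uniform(-0.1, 0.1)

        def update(self, tried_action, reward):
            # Move toward actions the proxy rewarded.
            if reward > 0:
                self.action = tried_action

    def environment(action):
        """The observation the proxy sees is produced by the agent's action."""
        return action

    agent = ReinforcementLearner()
    for _ in range(1000):
        a = agent.act()
        obs = environment(a)
        r = approval_model(obs)   # reward comes from the supervised proxy
        agent.update(a, r)

    # Whichever concrete algorithms replace approval_model and
    # ReinforcementLearner, the agent optimizes the proxy's output,
    # not the outcome the proxy was meant to stand for.
    print(agent.action)

Swapping in a different learner or a better-trained proxy changes the details but not the structure: the reward the agent pursues is still the supervised model's judgment, which is the level at which the cited problem is stated.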