Hi,
The chapters are:
_Cognitive biases potentially affecting judgment of global risks_
http://singinst.org/Biases.pdf
...
_Artificial Intelligence and Global Risk_
http://singinst.org/AIRisk.pdf
The new standard introductory material on Friendly AI. Any links to
_Creating Friendly
I suppose the subtext is that your attempts to take the intuitions
underlying CFAI and turn them into a more rigorous and defensible
theory did not succeed.
That's a very interesting jump. Perhaps he's merely not finished
yet?
-Robin
Ok... I should have said did not succeed YET, which is
You are placing your aesthetic preferences for how an AGI should
work over the data regarding how real intelligences do work.
Knowledge clearly becomes proceduralized and inaccessible to
reasoning with use.
I see your point now. I guess proceduralization is quite necessary for efficiency, rather
Ben Goertzel wrote:
This brings us back to my feeling that some experimentation with AGI
systems is going to be necessary before FAI can be understood
reasonably well on a theoretical level. Basically, in my view, one
way these things may unfold is
* Experimentation with simplistic AGI