On Thu, Aug 28, 2008 at 12:34 PM, Valentina Poletti <[EMAIL PROTECTED]> wrote:
> All these points you made are good points, and I agree with you. However,
> what I was trying to say - and I realize I did not express myself well -
> is that, from what I understand, I see a paradox in what Eliezer is trying
> to do. Assume we agree on the definition of AGI - a being far more
> intelligent than human beings - and on the definition of intelligence -
> the ability to achieve goals. He would like to build an AGI, but he would
> also like to ensure human safety. Although I don't think this will be a
> problem for limited forms of AI, it does imply that some control must be
> exerted over the AGI's parameters, specifically over its goal system. We
> *are* controlled in that sense, contrary to what you say, by our genetic
> code.
Please don't take the discussion out of context (it's a reply to a point I extracted in another thread), and clarify what you mean by "control". We have a road ahead of us, a potential for growth, that includes overcoming our current limitations. But these limitations only limit us with respect to what we know - with respect to a control algorithm that shows the advantage of actions that are currently inaccessible to us. And this knowledge, the preference for actions we can't yet access, is determined by our cognitive structure, which is given by our genetic makeup. We are not limited; we are given more potential than fits in our legacy apish bodies, more than fits on our home planet. You can't ask the mindless regularity of evolution for more.

> That is why you will never voluntarily place your hand in the fire, as
> long as your pain scheme is genetically embedded correctly. As I mentioned,
> exceptions to this control scheme are often imprisoned, sometimes killed -
> in order not to endanger the human species. However, just because genetic
> limitations are not enforced visibly, that does not exclude them from being
> a kind of control on our behavior and actions. Genetic limitations, in
> turn, are 'controlled' by the scope of our species - namely, to evolve and
> to preserve itself. And that in turn is controlled by the laws of
> thermodynamics. Now the problem is, we often overestimate the amount of
> control we have over our environment, and *that* is a human bias, embedded
> in us and necessary for our success.
>
> If you can break the laws of thermodynamics and information theory (which I
> assume is what Eliezer is trying to do), then yes, perhaps you can create a
> real AGI that will not try to preserve itself or to ameliorate itself, and
> therefore its only goals will be those of preserving and ameliorating the
> human species. But until we can do that, this is, to me, an illusion.

This is crazy. What do you mean by breaking the laws of information theory?
Superintelligence is a completely lawful phenomenon, one that can exist entirely within the laws of physics as we know them and be bootstrapped by technology as we know it. It might discover some surprising things, such as a way to compute more efficiently than we currently think physics allows, but that would be a bonus - and it certainly won't do impossible-by-definition things like breaking mathematics.

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/
