All the points you made are good ones, and I agree with you. However, what I
was trying to say - and I realize I did not express myself well - is that, as
I understand it, there is a paradox in what Eliezer is trying to do. Assume we
agree on the definition of AGI - a being far more intelligent than human
beings - and on the definition of intelligence - the ability to achieve goals.
He would like to build an AGI, but he would also like to ensure human safety.
Although I don't think this will be a problem for limited forms of AI, it does
imply that some control must be exercised over the AGI's parameters,
specifically over its goal system. We *are* controlled in that sense, contrary
to what you say, by our genetic code. That is why you will never voluntarily
place your hand in a fire, as long as your pain mechanism is genetically
embedded correctly. As I mentioned, exceptions to this control scheme are
often imprisoned, sometimes killed, so that they do not endanger the human
species. Just because genetic limitations are not enforced visibly does not
mean they are not a kind of control over our behavior and actions. Genetic
limitations are in turn 'controlled' by the aims of our species, namely to
evolve and to preserve itself, and that in turn is controlled by the laws of
thermodynamics. The problem is that we often overestimate the amount of
control we have over our environment, and *that* is a human bias, embedded in
us and necessary for our success.
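To make the point about control over the goal system concrete, here is a
minimal, purely hypothetical sketch in Python (the class and names are mine,
not anything Eliezer or anyone else has actually proposed): an agent whose
goal weights are open to self-modification, except for a safety term that is
fixed at construction, the way a pain response is fixed by our genetic code.

# Hypothetical illustration only: a goal system with one hard-wired,
# non-modifiable term (analogous to a genetically embedded pain
# response) alongside goals the agent is free to revise.
class ConstrainedGoalSystem:
    def __init__(self, learnable_goals, safety_penalty):
        self._learnable_goals = dict(learnable_goals)  # the agent may rewrite these
        self._safety_penalty = safety_penalty          # fixed 'genetic' constraint

    def update_goal(self, name, weight):
        # Self-modification is allowed only for the learnable part.
        self._learnable_goals[name] = weight

    def evaluate(self, outcome):
        # outcome maps goal names to how well they are satisfied (0..1),
        # plus a 'harm' entry in [0, 1] estimating harm to humans.
        score = sum(w * outcome.get(name, 0.0)
                    for name, w in self._learnable_goals.items())
        # The safety term cannot be tuned away by the agent itself.
        return score - self._safety_penalty * outcome.get("harm", 0.0)

goals = ConstrainedGoalSystem({"curiosity": 1.0}, safety_penalty=1000.0)
goals.update_goal("self_preservation", 2.0)   # allowed
print(goals.evaluate({"curiosity": 0.9, "self_preservation": 0.5, "harm": 0.01}))

The whole question in this thread, of course, is whether a term like
safety_penalty can stay fixed once the system is intelligent enough to rewrite
its own code.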

If you can break the laws of thermodynamics and information theory (which I
assume is what Eliezer is trying to do), then yes, perhaps you can create a
real AGI that will not try to preserve or improve itself, and whose only goals
will therefore be to preserve and improve the human species. But until we can
do that, this is, to me, an illusion.

Let me know if I missed something or am misunderstanding anything.


On 8/25/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Mon, Aug 25, 2008 at 6:23 PM, Valentina Poletti <[EMAIL PROTECTED]>
> wrote:
> >
> > On 8/25/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >>
> >> Why would anyone suggest creating a disaster, as you pose the question?
> >>
> >
> > Also agree. As far as you know, has anyone, including Eliezer, suggested
> any
> > method or approach (as theoretical or complicated as it may be) to solve
> > this problem? I'm asking this because the Singularity has confidence in
> > creating a self-improving AGI in the next few decades, and, assuming they
> > have no intention to create the above mentioned disaster.. I figure
> someone
> > must have figured some way to approach this problem.
>
> I see no realistic alternative (as in with high probability of
> occurring in actual future) to creating a Friendly AI. If we don't, we
> are likely doomed one way or another, most thoroughly through
> Unfriendly AI. As I mentioned, one way to see Friendly AI is as a
> second chance substrate, which is a first thing to do to ensure any
> kind of safety from fatal or just vanilla bad mistakes in the future.
> Of course, establishing a dynamics that knows what a mistake is and when to
> recover from it, prevent it, or guide around it is the tricky part.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
>
>


