Thanks Vlad, I read all that material plus other Eliezer papers. They don't
answer my question. I am asking what the use of a non-embodied AGI is, given
that it would necessarily have a goal system different from that of humans;
I'm not asking how to make any AGI friendly - that is extremely difficult.


On 8/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Thu, Aug 21, 2008 at 5:33 PM, Valentina Poletti <[EMAIL PROTECTED]>
> wrote:
> > Sorry if I'm commenting a little late on this; I just read the thread.
> > Here is a question. I assume we all agree that intelligence can be
> > defined as the ability to achieve goals. My question concerns the
> > establishment of those goals. As human beings we move in a world of
> > limitations (life span, ethical laws, etc.) and have inherent goals
> > (pleasure vs. pain) given by evolution. An AGI in a different embodiment
> > might not have any of that, just a pure meta-system for obtaining goals,
> > which I assume we partly give the AGI and it partly establishes itself.
> > Now, as I understand it, the point of the Singularity is to build an AGI
> > more intelligent than humans, so it could solve problems for us that we
> > cannot solve ourselves. That entails that the AGI's goal system and ours
> > must be interconnected somehow. I find it difficult to understand how
> > that can be achieved with an AGI with a different type of embodiment.
> > E.g., planes are great at achieving flight, but they are quite useless
> > to birds, whose goal system is very different. Can anyone clarify?
> >
> >
>
> This is the question of Friendly AI: how to construct an AGI that is
> good to have around, that is the right thing to launch a Singularity
> with; what we mean by goals; what we want the AGI to do; and how to
> communicate this in the implementation of the AGI. Read CFAI (
> http://www.singinst.org/upload/CFAI/index.html ) and the last arc of
> Eliezer's posts on Overcoming Bias to understand what the problem is
> about. This is a tricky question, not least because everyone seems to
> have a deep-down intuitive confidence that they understand what the
> problem is and how to solve it, out of hand, without seriously
> thinking about it. It takes much reading even to get what the question
> is, and why it won't be answered "along the way" as AGI itself gets
> understood better, or by piling up lots of shallow rules in the hope
> that the AGI will construct what we want from those rules by the
> magical power of its superior intelligence.
>
> For example, "inherent goals (pleasure vs. pain) given by evolution"
> doesn't even begin to cut it, and leads the investigation in the wrong
> direction. A hedonistic goal is satisfied by a universe filled with
> doped humans, and that is certainly not what is right, any more than a
> universe filled with paperclips is.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken


