Hi Jiri,
Comments below...
--- On Thu, 8/28/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > That's difficult to reconcile if you don't believe embodiment is
> > all that important.
>
> Not really. We might be qualia-driven, but for our AGIs it's
> perfectly ok (and only "natural") to be driven by given goals.
I've argued elsewhere that goals not grounded in an AGI's experience impart no
meaning. Either the agent has some kind of embodied experience, in which case
an externally specified goal is not grounded in anything the agent can relate
to, or it is not embodied at all, in which case it is a mindless automaton.
> > The question I would pose to you non-embodied advocates is: how in
> > the world will you motivate your creation? I suppose that you
> > won't. You'll just tell it what to do (specify its goals) and it
> > will do it...
>
> Correct. AGIs driven by human-like qualia would be less safe & harder
> to control. Human-like qualia are too high-level to be safe. When
> implementing qualia (not that we know how to do it ;-)) & increasing
> granularity for safety, you would IMO end up with basically "giving
> the goals" - which is of course easier without messing with qualia
> implementation. Forget qualia as a motivation for our AGIs. Our AGIs
> are supposed to work for us, not for themselves.
So much talk about Friendliness implies that the AGI will have no ability to
choose its own goals. It seems that AGI researchers are usually looking to
create clever slaves. That may fit your notion of general intelligence, but not
mine.
Terren