On Sun, Aug 24, 2008 at 7:28 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> --- On Sat, 8/23/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
>> But you can have an AI that has a bootstrapping mechanism that tells it
>> where to look for goal content, tells it to absorb it and embrace it.
>
> Yes, but in this scenario, the AI does not structure the goals itself. It is
> not fully embodied. Of course, we will probably argue about how important 
> that is.
>

What do you mean by "does not structure"? What do you mean by fully or
not fully embodied?


>> Evolution has nothing to do with it, except in the sense that it was
>> one process that implemented the bedrock of the goal system, making a
>> first step that initiated any kind of moral progress. But evolution
>> certainly isn't an adequate way to proceed from now on.
>
> I assume you make this assertion based on how much time/computation
> would be required, and the lack of control we have over the process. In
> other words, at the end of this process we can never have a provably
> friendly AI. We cannot dictate its morals, any more than we can dictate
> morals to our fellow humans.
>
> However, going down the path of "provably friendly AI" is fraught with its
> own concerns. Going into what those concerns are is a whole different
> topic, but for me that road is a dead end.
>

Did you read CFAI? At the least, it dispels the mystique and ridicule
around "provable" Friendliness and shows what kinds of things are
relevant to its implementation. You don't really want to fill the
universe with paperclips, do you? The problem is that you can't take a
wrong route just because it's easier; the notion that it might turn out
OK anyway is an illusion born of insufficient understanding of the issue.


>> Basically, non-embodied interaction as you described it is
>> extracognitive interaction, a workaround that doesn't comply with the
>> protocol established by the cognitive algorithm. If you can do that,
>> fine, but the cognitive algorithm is there precisely because we can't
>> build a mature AI by hand, by directly reaching into the AGI's mind;
>> it needs a subcognitive process that will assemble its cognition for
>> us. It is basically the same problem with general intelligence as with
>> Friendliness: you can neither assemble an AGI that already knows all
>> the stuff and possesses human-level skills, nor an AGI that already has
>> proper humane goals. You can only create a metacognitive, metamoral
>> process that will collect both from the environment.
>
> I'm not trying to wave a magic wand and pretend we can just create
> something out of thin air that will be intelligent. Of course there needs
> to be some underlying cognitive process... did something I say lead
> you to believe I thought otherwise?

I was exploring the notion of non-embodied interaction that you talked about.


> I'm saying that we don't specify that process. We let it emerge through
> large numbers of generations of simulated evolution. Now that's going
> to be a very unpopular idea in this forum, but it comes out of what I think
> are valid philosophical criticisms of designed (or metacognitive/metamoral
> if you wish) intelligence.

Name them.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now