--- On Sun, 8/24/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> What do you mean by "does not structure"? What do you mean by fully or
> not fully embodied?

I've already discussed what I mean by embodiment in a previous post, the one 
that immediately preceded the post you initially responded to. When I say the 
agent does not structure the goals given to it by a boot-strapping mechanism, I 
mean that the content of those goals - the way they are structured - has 
already been created by something outside of the agent.
 
> Did you read CFAI? At least it dispels the mystique and ridicule of
> "provable" Friendliness and shows what kind of things are relevant for
> its implementation. You don't really want to fill the universe with
> paperclips, do you? The problem is that you can't take a wrong route
> just because it's easier, it is an illusion born of insufficient
> understanding of the issue that it might be OK anyway.

I'm not taking the easy way out here; I'm describing what I see as the only 
possible path to general intelligence. I could be wrong, of course, but that's 
why we're here: to talk through our differences.

I've read parts of CFAI, but as with most of Eliezer's writings, if I had time 
to read every word he writes I'd have no life at all. The crux of his argument 
seems to come down to what he calls renormalization, in which the AI corrects 
its goals as it goes. But that raises the question of what the AI is comparing 
its behavior against - some supergoal or meta-ethics or whatever you want to 
call it - and the answer must ultimately come from us, pre-structured. 
Non-embodied.
 

> I was exploring the notion of nonembodied interaction that you talked about.

Right, but in a way that suggests you didn't grasp what I was saying - and that 
may be a failure on my part.
 
> > I'm saying that we don't specify that process. We let it emerge through
> > large numbers of generations of simulated evolution. Now that's going
> > to be a very unpopular idea in this forum, but it comes out of what I
> > think are valid philosophical criticisms of designed (or
> > metacognitive/metamoral if you wish) intelligence.
> 
> Name them.

I refer you to my article "Design is bad -- or why artificial intelligence 
needs artificial life":

http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life

Terren

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/