On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote:
> On Thu, Jul 3, 2008 at 9:36 PM, William Pearson <[EMAIL PROTECTED]> wrote: ...
> > I know this doesn't have the properties you would look for in a
> > friendly AI set to dominate the world. But I think it is similar to
> > the way humans work, and will be as chaotic and hard to grok as our
> > neural structure. So as likely as humans are to explode intelligently.
>
> Yes, one can argue that AGI of minimal reliability is sufficient to
> jump-start singularity (it's my current position anyway, Oracle AI),
> but the problem with faulty design is not only that it's not going to
> be Friendly, but that it isn't going to work at all.

The problem here is that proving a theory is often considerably harder than 
testing it.  Moreover, there are many conditions under which "almost optimal" 
techniques can be found relatively easily, but deriving a truly optimal 
technique would require an infinite number of steps.  Under those conditions 
"generate and test" is the better approach, but since you are searching a very 
large state space you can't expect to get very close to optimal, unless there 
is a very large region where the surface is smooth enough for hill-climbing to 
work.
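The contrast above can be sketched in a few lines of Python. This is only an 
illustration on a toy one-dimensional score (the function names and the example 
objective are mine, not anything from the thread): generate-and-test just keeps 
the best candidate it has sampled, with no optimality guarantee, while 
hill-climbing converges quickly but only because this particular surface is 
smooth.

```python
import random

def generate_and_test(generate, score, tries=5000, seed=0):
    """Generate-and-test: sample candidates blindly, keep the best seen.

    In a very large state space this only gets us 'close enough',
    never provably optimal.
    """
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(tries):
        candidate = generate(rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

def hill_climb(start, neighbors, score, max_steps=10000):
    """Hill-climbing: move to any improving neighbor until stuck.

    Works well only where the score surface is smooth; on a rugged
    surface it stalls at the nearest local optimum.
    """
    current, current_score = start, score(start)
    for _ in range(max_steps):
        improved = False
        for n in neighbors(current):
            s = score(n)
            if s > current_score:
                current, current_score = n, s
                improved = True
                break
        if not improved:
            break  # local optimum reached
    return current, current_score

# Toy smooth objective: maximize -(x - 3)^2, whose optimum is x = 3.
toy_score = lambda x: -(x - 3) ** 2

best_hc, score_hc = hill_climb(0, lambda x: [x - 1, x + 1], toy_score)
best_gt, score_gt = generate_and_test(lambda rng: rng.uniform(-10, 10), toy_score)
```

On this smooth toy surface hill-climbing lands exactly on the optimum, while 
generate-and-test only gets near it; the point of the paragraph above is that 
for most interesting spaces only the second option is available.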

So what's needed are criteria for "sufficiently friendly" that are testable.  
Of course, we haven't yet generated the first candidate for "generate and 
test", but "friendly", like "optimal", may be too high a bar.  "Sufficiently 
friendly" might be a much easier goal... but to know that you've achieved it, 
you need to be able to test for it.



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com
