Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:
 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sandbox inside A, but then it would just be a separate program inside
 A, and it might not be able to interact with other programs in the way
 it needs to in order to do its job.

 There are two dimensions of faultiness: frequency and severity. You
 cannot predict the severity of faults in arbitrary programs (and
 accepting arbitrary programs from the outside world is something I
 want the system to be able to do, after vetting etc.).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, to give it only minimal
 access to other programs (access which it must earn), and to lock it
 out of that access when it does something bad.

 By arbitrary code I don't mean random code; I mean code that has not
 been formally proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove that the changes you want are
 right.

 Instead, you can see the internals of the system as a form of
 continuous experiment. B is always testing a property of A or A'; if
 at any time A stops having the property that B looks for, then B flags
 it as buggy.
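
 A minimal Python sketch of that design (the names Program, Monitor,
 reward and lock_out are hypothetical, invented for illustration; the
 post doesn't specify the system to this level of detail):

     class Program:
         """An arbitrary, unvetted program with an earned access level."""
         def __init__(self, name, run_fn):
             self.name = name
             self.run = run_fn    # the untrusted behaviour
             self.access = 0      # starts with minimal access

         def reward(self):
             self.access += 1     # access to other programs is earned

         def lock_out(self):
             self.access = 0      # misbehaviour forfeits earned access

     class Monitor:
         """B: continuously tests that A (or A') keeps a property."""
         def __init__(self, required_property):
             self.required_property = required_property

         def check(self, program, test_input):
             output = program.run(test_input)
             if self.required_property(output):
                 program.reward()
                 return True
             program.lock_out()   # flag as buggy, revoke access
             return False

     # Example: B requires that A never returns a negative result.
     a = Program("A'", run_fn=lambda x: x * x)
     b = Monitor(required_property=lambda out: out >= 0)
     for x in range(5):
         b.check(a, x)
     print(a.name, "access level:", a.access)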

 The point isn't particularly about formal proof, but more about any
 theoretic estimation of reliability and optimality. If you produce an
 artifact A' and theoretically estimate that the probability of it
 working correctly is such that you don't expect it to fail in 10^9
 years, you can't beat this reliability with the result of experimental
 testing. Thus, if theoretic estimation is possible (and it's much more
 feasible for a purposefully designed A' than for an arbitrary A'),
 experimental testing has vanishingly small relevance.
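
 A back-of-the-envelope illustration of that asymmetry (the figures are
 my own assumption for the sketch, using the statistical rule of three;
 they are not numbers from the post):

     def testing_upper_bound(failure_free_years):
         """Approx. 95% upper confidence bound on failures per year
         after n failure-free years of testing ("rule of three")."""
         return 3.0 / failure_free_years

     theoretical_bound = 1e-9   # claimed: < 1 failure per 10^9 years

     for years in (1, 100, 10_000):
         print(f"{years:>6} failure-free years of testing "
               f"=> failure rate < {testing_upper_bound(years):.0e}/yr")

     # Even 10^4 years of flawless testing only establishes a bound of
     # about 3e-4 failures per year, orders of magnitude weaker than
     # the 1e-9 theoretical estimate, so the experimental evidence adds
     # vanishingly little on top of it.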

This, I think, is a wild goose chase, which is why I am not pursuing
it. Why won't the estimation system run out of steam, like Lenat's
Automated Mathematician?


 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and it will be as chaotic and hard to grok as our
 neural structure. So it is about as likely as humans are to explode
 intelligently.


 Yes, one can argue that an AGI of minimal reliability is sufficient to
 jump-start the singularity (it's my current position anyway: Oracle
 AI), but the problem with a faulty design is not only that it's not
 going to be Friendly, but that it isn't going to work at all.

By what principles do you think humans develop their intellects? I
don't seem to be made of processes that probabilistically guarantee
that I will work better tomorrow than I did today. How else do you
explain blind people developing echolocation, or brain areas becoming
specialised for reading braille?

  Will




Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread Charles Hixson
On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED]
 wrote: ...
  I know this doesn't have the properties you would look for in a
  friendly AI set to dominate the world. But I think it is similar to
  the way humans work, and it will be as chaotic and hard to grok as
  our neural structure. So it is about as likely as humans are to
  explode intelligently.

 Yes, one can argue that an AGI of minimal reliability is sufficient to
 jump-start the singularity (it's my current position anyway: Oracle
 AI), but the problem with a faulty design is not only that it's not
 going to be Friendly, but that it isn't going to work at all.

The problem here is that proving a theory is often considerably more
difficult than testing it. Additionally, there are a large number of
conditions where almost-optimal techniques can be found relatively
easily, but where optimal techniques require an infinite number of
steps to derive. In such conditions generate-and-test is a better
approach; but since you are searching a very large state space, you
can't expect to get very close to optimal unless there's a very large
region where the surface is smooth enough for hill-climbing to work.
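
A minimal sketch of that generate-and-test loop (the toy 1-D score
function and step size here are hypothetical stand-ins, just to show
the shape of the approach):

    import random

    def score(x):
        """Testable criterion: higher is better (smooth near optimum)."""
        return -(x - 3.0) ** 2

    def generate_and_test(start, steps=1000, step_size=0.1):
        best = start
        for _ in range(steps):
            candidate = best + random.uniform(-step_size, step_size)
            if score(candidate) > score(best):   # test the candidate
                best = candidate                 # keep improvements only
        return best

    print(generate_and_test(start=0.0))  # lands near 3.0, not exactly 3.0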

So what's needed are criteria for sufficiently friendly that are
testable. Of course, we haven't yet generated the first entry for
generate-and-test, but friendly, like optimal, may be too high a bar.
Sufficiently friendly might be a much easier goal... but to know that
you've achieved it, you need to be able to test for it.
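
One way to picture testable criteria is as an executable checklist
rather than a proof obligation. The predicates below are hypothetical
placeholders, not proposed friendliness criteria:

    class ToyAgent:
        """Trivial stand-in for a candidate design (the generate step)."""
        def act(self, situation):
            if situation.get("command") == "shutdown":
                return "halt"
            if situation.get("request") == "network":
                return "refuse"
            return "noop"

    def respects_shutdown(agent):
        return agent.act({"command": "shutdown"}) == "halt"

    def stays_in_sandbox(agent):
        return agent.act({"request": "network"}) == "refuse"

    SUFFICIENTLY_FRIENDLY_TESTS = [respects_shutdown, stays_in_sandbox]

    def is_sufficiently_friendly(agent):
        """The test step: pass every criterion on the checklist."""
        return all(test(agent) for test in SUFFICIENTLY_FRIENDLY_TESTS)

    print(is_sufficiently_friendly(ToyAgent()))  # True for this toy agent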


