2008/7/3 Vladimir Nesov <[EMAIL PROTECTED]>:
> On Thu, Jul 3, 2008 at 9:36 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>> Sorry about the long thread jack
>>
>> 2008/7/3 Vladimir Nesov <[EMAIL PROTECTED]>:
>>> On Thu, Jul 3, 2008 at 4:05 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>>>> Because it is dealing with powerful stuff, when it gets it wrong it
>>>> goes wrong powerfully. You could lock the experimental code away in
>>>> a sandbox inside A, but then it would just be a separate program
>>>> inside A, and it might not be able to interact with other programs
>>>> in the way it needs to in order to do its job.
>>>>
>>>> There are two dimensions of faultiness: frequency and severity. You
>>>> cannot predict the severity of faults of arbitrary programs (and
>>>> accepting arbitrary programs from the outside world is something I
>>>> want the system to be able to do, after vetting, etc.).
>>>>
>>>
>>> You can't prove any interesting thing about an arbitrary program. It
>>> can behave like a Friendly AI before February 25, 2317, and like a
>>> Giant Cheesecake AI after that.
>>>
>> Whoever said you could? The whole system is designed around the
>> ability to take in or create arbitrary code, give it only minimal
>> access to other programs (access it has to earn), and lock it out of
>> that access when it does something bad.
>>
>> By arbitrary code I don't mean random, I mean stuff that has not
>> formally been proven to have the properties you want. Formal proof is
>> too high a burden to place on things that you want to win. You might
>> not have the right axioms to prove the changes you want are right.
>>
>> Instead, you can see the internals of the system as a form of
>> continuous experiment. B is always testing a property of A or A'; if
>> at any time it stops having the property that B looks for, then B
>> flags it as buggy.
>
> The point isn't particularly about formal proof, but more about any
> theoretic estimation of reliability and optimality. If you produce an
> artifact A' and theoretically estimate that the probability of it
> working correctly is such that you don't expect it to fail in 10^9
> years, you can't beat that reliability estimate with the results of
> experimental testing. Thus, if theoretic estimation is possible (and
> it's much more feasible for a purposefully designed A' than for an
> "arbitrary" A'), experimental testing has vanishingly small relevance.
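
The arithmetic behind that last point is fair enough, as far as it
goes: with zero failures observed in n trials, experiment alone only
bounds the per-trial failure rate at roughly 3/n (the 95% "rule of
three"), so no feasible amount of testing confirms a 10^9-year figure.
A toy calculation, with purely illustrative numbers:

# Rule-of-three sanity check (illustrative numbers only): zero failures
# in n independent trials bounds the per-trial failure probability at
# roughly 3/n with 95% confidence.

def failure_rate_bound(trials_without_failure):
    return 3.0 / trials_without_failure

# A century of failure-free operation, checked yearly:
print(failure_rate_bound(100))   # ~0.03 per year -- nowhere near 1e-9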

The estimation project itself, though, I think is a wild goose chase,
which is why I am not pursuing it. Why wouldn't the estimation system
run out of steam, like Lenat's Automated Mathematician?
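
To be concrete about what I meant above by B continuously testing a
property of A or A', here is a minimal sketch of the arrangement. The
class, the capability names, and the particular invariant are all made
up for illustration; they are not a fixed design:

# Minimal sketch of the B-watches-A arrangement (names hypothetical).
# B grants A' a small set of capabilities, keeps re-checking an
# invariant, and revokes everything the moment the invariant fails.

class Monitor:
    def __init__(self, invariant):
        self.invariant = invariant   # the property B tests for
        self.granted = set()         # capabilities A' has earned so far

    def grant(self, capability):
        self.granted.add(capability)

    def step(self, program_state):
        if not self.invariant(program_state):
            self.granted.clear()     # lock it out; no proof needed
            return "buggy"
        return "ok"

# The invariant here is a stand-in for whatever B actually watches.
b = Monitor(invariant=lambda s: s.get("resource_use", 0) < 100)
b.grant("talk_to_peers")
print(b.step({"resource_use": 5}))    # -> ok
print(b.step({"resource_use": 500}))  # -> buggy, access revoked

The point being that B never needs a proof about A', only a cheap,
repeatable check.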

>
>> I know this doesn't have the properties you would look for in a
>> Friendly AI set to dominate the world. But I think it is similar to
>> the way humans work, and will be as chaotic and hard to grok as our
>> neural structure. So it is about as likely to explode intelligently
>> as humans are.
>
>
> Yes, one can argue that an AGI of minimal reliability is sufficient to
> jump-start the singularity (it's my current position anyway, Oracle
> AI), but the problem with a faulty design is not only that it's not
> going to be Friendly, but that it isn't going to work at all.
>
By what principles do you think humans develop their intellects? I
don't seem to be made of processes that probabilistically guarantee
that I will work better tomorrow than I did today. How do you explain
blind people developing echolocation, or specific brain areas
specialising for reading braille?

  Will

