Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-09 Thread Steve Richfield
William,

On 7/7/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/7/3 Steve Richfield [EMAIL PROTECTED]:
  William and Vladimir,
 
  IMHO this discussion is based entirely on the absence of any sort of
  interface spec. Such a spec is absolutely necessary for a large AGI
 project
  to ever succeed, and such a spec could (hopefully) be wrung out to at
 least
  avoid the worst of the potential traps.

 And if you want the interface to be upgradeable or alterable, what
 then? This conversation was based on the ability to change as much of
 the functional and learning parts of the systems as possible.


You should read the X12 (the original US version) or EDIFACT (the newer/better
European version) EDI (Electronic Data Interchange) specs. Several EDIFACT
descriptions can be downloaded free on-line, but the X12 people want to
charge for EVERYTHING. EDI is the basis for most of the world's financial
systems. It is designed for smooth upgrading, even when some users on a
network do NOT have the latest spec or software. The specifics of the
presently defined message types aren't interesting in this context; however,
the way they make highly complex networks gradually upgradable IS
interesting, and I believe it provides a usable roadmap for AGI development.
When looking at this, think of it as a prospective standard for RPC
(Remote Procedure Calls).
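
To make that concrete, here is a rough Python sketch of the idea. The segment
tags and the JSON wire format are invented stand-ins, NOT real X12/EDIFACT
syntax; the only point is that a receiver acts on the segments it understands
and passes the rest through untouched, which is what lets a large network
upgrade gradually:

import json

# Sketch only: invented segment tags and an invented JSON wire format.
# Real X12/EDIFACT defines its own segment syntax; the point here is just
# the rule "act on what you understand, pass the rest through untouched".
KNOWN_SEGMENTS = {"HDR", "QTY"}              # what THIS node understands

def encode(segments):
    # segments: list of (tag, payload) pairs -> wire string
    return json.dumps([{"tag": t, "data": d} for t, d in segments])

def decode(wire):
    # Split into segments we can act on and segments we merely forward,
    # unmodified, to newer nodes downstream.
    known, unknown = [], []
    for seg in json.loads(wire):
        (known if seg["tag"] in KNOWN_SEGMENTS else unknown).append(seg)
    return known, unknown

msg = encode([("HDR", {"version": 2}), ("QTY", 5), ("NEWSEG", "future field")])
known, unknown = decode(msg)
print(known)      # processed locally
print(unknown)    # passed along, so old and new software can coexist

An AGI's modules could talk to each other the same way: a module that doesn't
yet understand a new message type ignores or forwards it instead of failing.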

Steve Richfield





Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-07 Thread William Pearson
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
 William and Vladimir,

 IMHO this discussion is based entirely on the absence of any sort of
 interface spec. Such a spec is absolutely necessary for a large AGI project
 to ever succeed, and such a spec could (hopefully) be wrung out to at least
 avoid the worst of the potential traps.

And if you want the interface to be upgradeable or alterable, what
then? This conversation was based on the ability to change as much of
the functional and learning parts of the systems as possible.

 Will Pearson




Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
Sorry about the long thread jack

2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sandbox inside A, but then it would be a separate program, just one
 inside A, and it might not be able to interact with other programs in
 the way it needs to in order to do its job.

 There are two grades of faultiness: frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc.).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

Whoever said you could? The whole system is designed around the
ability to take in or create arbitrary code, give it only minimal
access to other programs, let it earn more access, and lock it out of
that access when it does something bad.

By arbitrary code I don't mean random code; I mean code that has not
been formally proven to have the properties you want. Formal proof is
too high a burden to place on something you want to win. You might
not have the right axioms to prove that the changes you want are right.

Instead you can see the internals of the system as a form of
continuous experiment: B is always testing a property of A or A', and
if at any time it stops having the property that B looks for, then B
flags it as buggy.
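
As a very rough Python sketch of what I mean (the names and the toy property
are all invented), here is a supervisor B that runs a program step by step,
lets it earn wider access while a watched property holds, and locks it out as
soon as the property fails:

# Sketch only: toy supervisor (B), toy program (A), toy property.
class Supervisor:
    def __init__(self, check_property):
        self.check_property = check_property   # the property B tests for
        self.access_level = 1                  # start with minimal access

    def run_step(self, program, state):
        state = program(state, self.access_level)
        if self.check_property(state):
            self.access_level += 1             # good behaviour earns wider access
        else:
            self.access_level = 0              # flagged as buggy: locked out
        return state

def toy_program(state, access):
    # Stands in for arbitrary, unproven code.
    return state + (1 if access > 0 else -10)

b = Supervisor(check_property=lambda s: s >= 0)  # property: state stays non-negative
s = 0
for _ in range(5):
    s = b.run_step(toy_program, s)
print(s, b.access_level)

Nothing here is proven about the program; B just keeps experimenting on it.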

I know this doesn't have the properties you would look for in a
friendly AI set to dominate the world. But I think it is similar to
the way humans work, and it will be as chaotic and hard to grok as our
neural structure. So it is about as likely as a human is to undergo an
intelligence explosion.

  Will Pearson




Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Steve Richfield
William and Vladimir,

IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid the worst of the potential traps.

For example: Suppose that new tasks stated the maximum CPU resources they
need to complete. Exceeding that budget would then be cause for abnormal
termination. Of course, this doesn't cover logical failure.
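
A rough, Linux-only Python sketch of that first example (the function names
are mine; a real system would use whatever resource limits the host OS
provides):

import multiprocessing, resource

def _bounded(task, cpu_seconds):
    # Child process: declare the CPU budget before doing any work.
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    task()

def run_with_cpu_budget(task, cpu_seconds):
    p = multiprocessing.Process(target=_bounded, args=(task, cpu_seconds))
    p.start()
    p.join()
    # A non-zero exit status means the kernel killed the task for exceeding
    # its budget (or it crashed), i.e. abnormal termination.
    return p.exitcode == 0

def runaway_task():
    while True:        # never finishes on its own
        pass

if __name__ == "__main__":
    print(run_with_cpu_budget(runaway_task, 1))   # -> False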

More advanced example: Suppose that tasks provided a chain of
consciousness log as they executed, and a monitor watched that chain of
consciousness to see that new entries are repeatedly made and are
grammatically (machine grammar) correct, and verified anything in them
that is easily verifiable.
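
Again a rough Python sketch, with an invented log-entry format and an
invented easily-verifiable claim, of what such a monitor might look like:

import queue, re, threading, time

# Invented entry format: "step=<n> claim=<a>+<b>=<c>"; the claim is the
# "easily verifiable" part the monitor can check outright.
LOG_GRAMMAR = re.compile(r"^step=(\d+) claim=(\d+)\+(\d+)=(\d+)$")

def monitor(log, stall_timeout=2.0):
    while True:
        try:
            entry = log.get(timeout=stall_timeout)
        except queue.Empty:
            return "FAULT: task stopped logging"     # no new entries: assume hung
        if entry is None:                            # task finished cleanly
            return "OK"
        m = LOG_GRAMMAR.match(entry)
        if not m:
            return "FAULT: ungrammatical entry " + repr(entry)
        _, a, b, total = map(int, m.groups())
        if a + b != total:                           # verify what is cheap to verify
            return "FAULT: false claim in " + repr(entry)

def task(log):
    for step in range(3):
        log.put("step=%d claim=%d+%d=%d" % (step, step, step, 2 * step))
        time.sleep(0.1)
    log.put(None)

log = queue.Queue()
threading.Thread(target=task, args=(log,), daemon=True).start()
print(monitor(log))    # -> OK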

Even more advanced example: Suppose that a new pseudo-machine were proposed,
whose fundamental code consisted of reasonable operations in the
logic-domain being exploited by the AGI. The interpreter for this
pseudo-machine could then employ countless internal checks as it operated,
and quickly determine when things went wrong.
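
And a toy Python version of that last idea, with invented opcodes and checks
standing in for operations of the AGI's actual logic-domain:

# Sketch only: invented opcodes and checks. A real pseudo-machine would
# check invariants of whatever logic-domain the AGI actually works in.
class CheckedMachine:
    def __init__(self):
        self.stack = []

    def run(self, program):
        for op, arg in program:
            if op == "PUSH":
                if not isinstance(arg, (int, float)):
                    raise RuntimeError("PUSH of a non-number: %r" % (arg,))
                self.stack.append(arg)
            elif op == "DIV":
                if len(self.stack) < 2:
                    raise RuntimeError("DIV on an underflowed stack")
                b, a = self.stack.pop(), self.stack.pop()
                if b == 0:
                    raise RuntimeError("DIV by zero")
                self.stack.append(a / b)
            else:
                raise RuntimeError("unknown opcode %r" % (op,))
        return self.stack

print(CheckedMachine().run([("PUSH", 6), ("PUSH", 2), ("DIV", None)]))   # [3.0]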

Does anyone out there have something, anything in the way of an interface
spec to really start this discussion?

Steve Richfield
