2008/5/10 Russell Wallace <[EMAIL PROTECTED]>:
> On Sat, May 10, 2008 at 8:38 AM, William Pearson <[EMAIL PROTECTED]> wrote:
>> 2) A system similar to automatic programming that takes descriptions
>> in a formal language given from the outside and potentially malicious
>> sources and generates a program from them. The language would be
>> sufficient to specify new generative elements in and so extensible in
>> that fashion. A system that cannot maintain itself trying to do this
>> would quickly get swamped by viruses and the like.
>
> If I'm understanding you correctly, what you need - or at least one
> thing you need - is sandboxing, the ability to execute an arbitrary
> program with the assurance that it can't compromise the environment.
> This is trickier than it seems; I've been thinking about it on and off
> for awhile. Most systems don't even attempt to provide full
> sandboxing, the most they try to do is restrict vulnerability to
> denial of service - good enough to get by on for a web browser,
> probably not good enough for an AI system that might be running
> millions of candidate programs over a long, unattended run. And unless
> I'm missing something, you can't retrofit it after the fact, e.g. you
> can't sandbox Java, you have to go back to C and write your own VM,
> garbage collector etc.

It depends on the system you are designing on top of. In the
programming language E (1), for example, I think you can easily create
as many kinds of sandbox as you want. If the principle of least
authority (2) is embedded in the system, then you shouldn't have any
problems.
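
To make the least-authority idea concrete, here is a minimal sketch in
Python rather than E itself (all names are hypothetical, chosen for
illustration): a component holds no ambient authority and can only act
through the capability objects it is explicitly handed.

```python
# Object-capability style, sketched in Python: an untrusted program is
# given a capability object granting read access to one file and nothing
# else. It cannot reach other files or the wider system through it.

class ReadOnlyFile:
    """A capability granting read access to one specific file, nothing more."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_program(file_cap):
    # The program only ever sees file_cap; its authority is exactly
    # what that object permits.
    return len(file_cap.read())
```

Real E goes further (capabilities are unforgeable references enforced by
the language), but the pattern of passing authority explicitly is the same.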

> I'm told MIT Scheme provides sandboxing, though I haven't had a chance
> to try it out. I don't know of any other nontrivial environment that
> does so. (Lots of trivial ones do, of course: Corewars, Tierra, an
> ICFP contest a couple of years ago etc.)
>

I'm not strictly going for sandboxing: the programs need to be able to
break out of their boxes piecemeal if they prove themselves worthwhile.

I am adopting the capability model (3), with system resources being
capabilities that are bid upon and the auctions periodically resolved.
Resolving the auctions is one of the few active things the
architecture will do. This is similar to Mark Miller and Drexler's
idea of agoric computing (4), although I also want the scheduling
itself to be maintainable, which is a challenge.
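
The auction step described above could be sketched like this (a
first-price sealed-bid rule is my assumption here; the post doesn't
specify which auction mechanism is used, and the names are hypothetical):

```python
# Periodic auction resolution: programs submit sealed bids for one
# resource-capability slot; the arbiter picks the highest bidder.

def resolve_auction(bids):
    """bids: dict mapping program id -> amount bid for one resource slot.
    Returns (winner, price) under a simple first-price rule."""
    if not bids:
        return None, 0
    winner = max(bids, key=bids.get)
    return winner, bids[winner]
```

For example, resolve_auction({"prog_a": 3, "prog_b": 7}) would grant the
slot to "prog_b" at a price of 7.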

So most likely a new program would be sandboxed initially, being
granted capabilities that give heavily monitored access to resources
and limited run time. If it proves itself worthwhile, it can bid for
less monitored resources. It would have to be amazing to gain a large
amount of control over the system, and it would have to continue to be
so (or have wiped out all its competition) to remain in control.
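
One way to picture a "heavily monitored" capability is as a wrapper
that logs every use and enforces a usage budget; a program that proves
itself could later bid for an unwrapped capability. This is a
hypothetical sketch of that idea, not a description of any existing system:

```python
# A monitoring proxy around a capability: every call is logged and
# counted against a budget, so a new program's resource use is both
# limited and fully auditable.

class MonitoredCap:
    def __init__(self, target, budget):
        self._target = target   # the real capability being wrapped
        self._budget = budget   # number of calls allowed
        self.log = []           # audit trail of method names invoked

    def call(self, method, *args):
        if self._budget <= 0:
            raise PermissionError("usage budget exhausted")
        self._budget -= 1
        self.log.append(method)
        return getattr(self._target, method)(*args)
```

Granting a less monitored resource would then just mean handing the
program the target directly instead of the proxy.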


  Will Pearson

1 http://wiki.erights.org/wiki/Walnut/intro
2 http://en.wikipedia.org/wiki/Principle_of_least_authority
3 http://en.wikipedia.org/wiki/Object-capability_model
4 http://www.agorics.com/Library/agoricpapers.html

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/