On Sat, May 10, 2008 at 8:38 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> 2) A system similar to automatic programming that takes descriptions
> in a formal language given from the outside and potentially malicious
> sources and generates a program from them. The language would be
> sufficient to specify new generative elements in and so extensible in
> that fashion. A system that cannot maintain itself trying to do this
> would quickly get swamped by viruses and the like.

If I'm understanding you correctly, what you need - or at least one
thing you need - is sandboxing, the ability to execute an arbitrary
program with the assurance that it can't compromise the environment.
This is trickier than it seems; I've been thinking about it on and off
for a while. Most systems don't even attempt to provide full
sandboxing; the most they try to do is restrict vulnerability to
denial of service - good enough to get by on for a web browser,
probably not good enough for an AI system that might be running
millions of candidate programs over a long, unattended run. And unless
I'm missing something, you can't retrofit it after the fact: e.g. you
can't sandbox Java, you have to go back to C and write your own VM,
garbage collector etc.
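To make the "write your own VM" point concrete, here's a toy sketch (in Python, names and structure entirely my own invention, not any real system's API) of the core idea: the interpreter loop itself meters every step, so a hostile candidate program can at worst exhaust its own budget, never hang or starve the host.

```python
# Hypothetical mini-interpreter for nested tuples like
# ("add", 1, ("mul", 2, 3)), evaluated under a hard instruction budget.
# Illustrative only - a real sandbox would also meter memory,
# forbid I/O, and isolate the heap.

class FuelExhausted(Exception):
    """Raised when a candidate program exceeds its instruction budget."""

def evaluate(expr, fuel):
    """Evaluate `expr`, spending one unit of fuel per step.

    Returns (value, fuel_remaining); raises FuelExhausted if the
    budget runs out, instead of letting the program run forever.
    """
    if fuel <= 0:
        raise FuelExhausted()
    fuel -= 1
    if isinstance(expr, (int, float)):
        return expr, fuel
    op, *args = expr
    vals = []
    for a in args:
        v, fuel = evaluate(a, fuel)   # budget threads through sub-evaluations
        vals.append(v)
    if op == "add":
        return sum(vals), fuel
    if op == "mul":
        out = 1
        for v in vals:
            out *= v
        return out, fuel
    raise ValueError("unknown op: %r" % (op,))

# A well-behaved program runs to completion...
print(evaluate(("add", 1, ("mul", 2, 3)), fuel=100)[0])  # 7
# ...while a runaway one is cut off by FuelExhausted rather than
# taking the environment down with it.
```

The point is that the fuel check lives inside the evaluation loop, which is exactly why it can't be bolted onto a runtime (like the JVM) that doesn't already expose such a hook.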

I'm told MIT Scheme provides sandboxing, though I haven't had a chance
to try it out. I don't know of any other nontrivial environment that
does so. (Lots of trivial ones do, of course: Corewars, Tierra, an
ICFP contest a couple of years ago etc.)
