At 02:46 PM 5/4/2001 +0100, Michael G Schwern wrote:
>On Fri, May 04, 2001 at 09:20:13AM -0400, Dan Sugalski wrote:
> > Building a good sandbox with resource limits on a VMS system is trivial. I
> > expect it may even be easier with IBM's big iron OSes.
>
>I'm sure it is.  I'm just worried about having lots of:
>
>         if( $^O =~ /VMS/ ) {
>             # do some really scary (to a Unix user) VMS hacks
>         }

Gack, no. No way. The functions we need will have platform-specific code 
living in the right place--vms.c, solaris.c, hp.c, or whatever. The 
function call signature stays the same and the right platform-specific code 
is brought in at perl build time.
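
Something like this, as a rough illustration (all the names here are made up, not actual source): one shared header declares the function once, each platform file defines it, and the build links in whichever one matches.

    /* resource.h -- shared declaration, same signature on every platform */
    typedef struct {
        long cpu_ms;       /* CPU time used so far, in milliseconds */
        long mem_bytes;    /* memory currently allocated */
    } res_usage;

    int res_get_usage(res_usage *out);

    /* vms.c -- VMS-specific definition, chosen at build time */
    int res_get_usage(res_usage *out) {
        /* ask the VMS job/process services for the numbers here */
        out->cpu_ms = 0;
        out->mem_bytes = 0;
        return 0;
    }

    /* solaris.c -- Solaris definition with the identical signature */
    int res_get_usage(res_usage *out) {
        /* getrusage() or /proc would be the obvious route here */
        out->cpu_ms = 0;
        out->mem_bytes = 0;
        return 0;
    }

The interpreter only ever calls res_get_usage(); which definition it gets is decided when perl is configured and built.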

> > It's less trivial with Unix, but not bad. Beats me on WindowsNT,
> > though I'd bet it's up to the task.
> >
> > The single-user OSes are more problematic. I don't know whether MacOS (before
> > OS X) provides the info we need, but as of System 7.x it didn't. Nor do Win9x
> > or AmigaOS. (Though for those we can still track memory usage)
>
>I'd prefer that when we think about the cross-platformness of Perl 6
>we keep these troublesome OSes in mind, considering what Perl should
>handle itself and what it should leave for the OS to decide.

We really can't leave much at all to the OS to decide. Most of the OS-level 
safety measures are process-wide, and many can't be undone once applied. 
(Which is good; otherwise they'd be reasonably pointless as ways to restrict 
access.) chroot(), for example, won't help us much when executing code in a 
safe partition, since we really want to be able to undo it when we're not in 
the partition. That means we need to handle the restrictions from within the 
interpreter. (And yes, I realize that C extensions can bypass it if they want.)
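
As a sketch of what "inside the interpreter" means here (again, hypothetical names): each partition carries a set of restriction flags that the restricted opcodes consult before touching the OS, and unlike chroot() the flags can simply be cleared when we leave the partition.

    /* Hypothetical restriction flags checked by opcodes, not by the OS */
    typedef unsigned long restrict_flags;
    #define RESTRICT_FILE_OPEN  0x01
    #define RESTRICT_NETWORK    0x02

    struct interp {
        restrict_flags restrictions;  /* limits active in the current partition */
    };

    /* called by, say, the file-open opcode before it does any real work */
    int op_allowed(struct interp *i, restrict_flags op) {
        return (i->restrictions & op) == 0;   /* nonzero means permitted */
    }

Entering a partition sets the bits, leaving it clears them, and nothing process-wide ever changes.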

> > Luckily the security sandbox features are all implementable from
> > within perl. It's the resource limitation ones that are trickier,
> > especially CPU time.
>
>Memory limits we should be able to do, assuming Perl 6 continues to
>have its own malloc.

Yep. Even if it doesn't, we can jacket the system calls if we need to for 
tracking purposes.
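
A minimal sketch of what jacketing the allocator might look like (illustrative only; the real tracking would presumably be per-partition rather than a pair of globals):

    #include <stdlib.h>

    static size_t bytes_used  = 0;
    static size_t bytes_limit = 1024 * 1024;   /* example: a 1MB cap */

    /* every allocation is charged against the limit before it happens */
    void *tracked_malloc(size_t n) {
        char *p;
        if (bytes_used + n > bytes_limit)
            return NULL;                 /* over quota: caller fails cleanly */
        p = malloc(n + sizeof(size_t));
        if (!p) return NULL;
        *(size_t *)p = n;                /* stash the size for tracked_free() */
        bytes_used += n;
        return p + sizeof(size_t);
    }

    void tracked_free(void *p) {
        size_t *real;
        if (!p) return;
        real = (size_t *)((char *)p - sizeof(size_t));
        bytes_used -= *real;
        free(real);
    }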

>CPU usage is a problem... we could provide two similar, but easier to
>implement, features.  Throttling (by sticking a tiny little sleep
>between each opcode) and limited running (i.e. kill yourself if you run
>longer than X seconds).  The latter we might be able to pull off
>externally using SIGALRM, but not all systems have that.

We'd want an alternative opcode running loop for all this, and it could 
easily enough check times, as could special opcodes. Long-running opcodes 
could also check at reasonable breakpoints. (We're still in trouble with C 
extensions, but that's pretty much a given.)
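
Roughly what that alternative loop might look like (hypothetical; the real loop obviously dispatches real opcodes, not function pointers out of a NULL-terminated array):

    #include <time.h>

    #define CHECK_EVERY 256    /* opcodes between clock checks, to keep it cheap */

    typedef void (*opfunc)(void);   /* stand-in for the real opcode type */

    /* run until the opcode stream ends or the CPU budget is exhausted */
    int run_limited(opfunc *pc, double cpu_limit_secs) {
        clock_t start = clock();
        unsigned long count = 0;

        while (*pc) {
            (*pc++)();
            if (++count % CHECK_EVERY == 0) {
                double used = (double)(clock() - start) / CLOCKS_PER_SEC;
                if (used > cpu_limit_secs)
                    return -1;      /* partition blew its CPU budget */
            }
        }
        return 0;
    }

Throttling falls out of the same loop: swap the bail-out for a short sleep and you get Schwern's tiny-sleep-per-opcode behaviour without touching the normal runloop.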

>Also things like limiting the number of open filehandles and sockets and
>limiting network usage could be done inside perl.

Sure. Reasonably easily trackable from within a partition. We might want to 
track this anyway for the debugger or statistical analyzer or something.
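
A tiny sketch of the sort of per-partition bookkeeping involved (names invented): the open and close opcodes bump a counter, and open refuses to go past the ceiling.

    struct partition_stats {
        int handles_open;   /* filehandles + sockets currently open */
        int handles_max;    /* ceiling configured for this partition */
    };

    /* called by the open/socket opcodes; zero means the limit was hit */
    int handle_acquire(struct partition_stats *s) {
        if (s->handles_open >= s->handles_max)
            return 0;
        s->handles_open++;
        return 1;
    }

    void handle_release(struct partition_stats *s) {
        if (s->handles_open > 0)
            s->handles_open--;
    }

The same counters are exactly what a debugger or statistics hook would want to read, so the tracking cost isn't wasted outside a partition either.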

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
