I certainly like the idea of exposing a "low stack" check to the user
so that they can do better recovery. I also like the idea of
`call_with_new_stack`. I am not sure whether this means that the
default recovery should be *abort* rather than *task failure* (which
is already fairly drastic).
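
Concretely, I am imagining something like the following rough sketch
(the names are taken from Igor's message below; this is illustrative
only, and `call_with_new_stack` is just a signature here, since
actually swapping stacks would need runtime support):

    use std::cell::Cell;

    thread_local! {
        // Lowest stack address this task may reach (assuming the stack
        // grows downward); set once when the task is initialized.
        static STACK_LIMIT: Cell<usize> = Cell::new(0);
    }

    // Compare the address of a local variable (a cheap approximation
    // of the current stack pointer) against the recorded limit.
    fn stack_low() -> bool {
        let marker = 0u8;
        let sp = &marker as *const u8 as usize;
        STACK_LIMIT.with(|limit| sp <= limit.get())
    }

    // Run `f` on a freshly allocated stack of `size` bytes, on the
    // same OS thread. Only a signature here.
    fn call_with_new_stack<R>(_size: usize, _f: impl FnOnce() -> R) -> R {
        unimplemented!("needs runtime support")
    }

The nice property is that the check itself is cheap (a compare against
a per-task limit), so it only needs to appear on the recursive paths.
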
But I guess it is a legitimate question: to what extent should we
permit safe Rust code to bring a system to its knees? We can't truly
execute untrusted code, since it could invoke native things or include
unsafe blocks, but it'd be nice if we could give some guarantees as to
the limits of what safe code can do. Put differently, it'd be nice
if tasks could serve as an effective sandbox for *safe code*.
It seems to me that the main ways that safe code can cause problems
for a larger system are (1) allocating too much heap; (2) looping
infinitely; and (3) over-recursing. But no doubt there are more.
Maybe it doesn't make sense to address only one problem and not the
others; on the other hand, we should not let the perfect be the enemy
of the good, and perhaps we can find ways to address the others as
well (e.g., hard limits on total memory a task can ever allocate;
leveraging different O/S threads for pre-emption and killing, etc).
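
For instance, a hard cap on allocation could in principle be enforced
at the allocator boundary. A rough, purely illustrative sketch (this
wrapper is process-wide rather than per-task, so it only gestures at
the idea, and the 64 MiB figure is arbitrary):

    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Counting wrapper around the system allocator that enforces a
    // hard cap on the total bytes ever allocated.
    struct CappedAlloc {
        total: AtomicUsize,
        cap: usize,
    }

    unsafe impl GlobalAlloc for CappedAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            let prev = self.total.fetch_add(layout.size(), Ordering::Relaxed);
            if prev + layout.size() > self.cap {
                // Over the cap: fail the allocation. By default this
                // ends in an abort via handle_alloc_error; a runtime
                // could instead fail just the offending task.
                return std::ptr::null_mut();
            }
            System.alloc(layout)
        }

        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            // The cap is on bytes ever allocated, so nothing is
            // credited back on free.
            System.dealloc(ptr, layout)
        }
    }

    #[global_allocator]
    static ALLOC: CappedAlloc = CappedAlloc {
        total: AtomicUsize::new(0),
        cap: 64 * 1024 * 1024,
    };
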
Niko
On Tue, Oct 29, 2013 at 11:51:10PM +0100, Igor Bukanov wrote:
> SpiderMonkey uses recursive algorithms in quite a few places. As the
> level of recursion is at the mercy of JS code, checking for stack
> exhaustion is a must. For that, the code explicitly compares the
> address of a local variable with a limit set as part of thread
> initialization. If the limit is breached, the code either reports
> failure to the caller (parser, interpreter, JITed code) or tries to
> recover using a different algorithm (the marking phase of GC).
>
> This explicit strategy has allowed us to achieve stack safety with
> relatively infrequent stack checks compared with the total number of
> function calls in the code. Granted, without static analysis this is
> fragile, as a missing stack check on a code path that is under the
> control of JS could potentially be exploitable (this is C++ code,
> after all), but it has been working.
>
> So I think aborting on stack overflow in Rust should be OK, as it
> removes the security implications of stack overflow bugs. However, it
> is then a must to provide facilities to check for a low stack. It
> would also be very useful to have an option to call code with a newly
> allocated stack of a given size, without creating any extra thread,
> etc. This would allow for a pattern like:
>
> fn part_of_recursive_parser ... {
>     if stack_low() {
>         call_with_new_stack(10*1024*1024, part_of_recursive_parser)
>     }
> }
>
> Then a missing stack_low() check becomes just a bug without security
> implications.
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev