On Sat, 18 Jan 2014, Corey Richardson wrote:
> Rust's safety model is not intended to prevent untrusted code from
> doing evil things.
Doesn't it successfully do that, though? Or at least with only a small amount
of extra logic? For example, suppose I accept, compile, and run arbitrary Rust
code, with only the requirement that there be no "unsafe" blocks (ignore for a
moment the fact that libstd uses unsafe). Barring compiler bugs, I think it's
then guaranteed that nothing bad can happen.
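Concretely, something like this as a sketch (this uses the crate-level
unsafe_code lint as it exists in current Rust; the exact spelling in today's
compiler may differ from what I'm imagining):

    // Crate-level lint: any `unsafe` block (or unsafe fn/impl) anywhere in
    // the crate becomes a hard compile error, and inner attributes cannot
    // re-allow it, since `forbid` cannot be overridden.
    #![forbid(unsafe_code)]

    fn main() {
        // Uncommenting this line would fail to compile under the lint above:
        // unsafe { std::ptr::read(std::ptr::null::<u8>()); }
        println!("compiled with no unsafe blocks");
    }

The sandbox host would inject that attribute (or pass the equivalent compiler
flag) before building the submitted code.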
It seems to me that (as usual with languages like Rust) it's then just the
mildly arduous task of maintaining a parallel libstd implementation to be used
for sandboxing, which either omits the dangerous functionality entirely or
replaces it with special versions that perform the correct permission checks.
That, coupled with forbidding unsafe blocks in submitted code, should solve
the problem (sketched below).
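To illustrate what I mean (purely a hypothetical sketch; Sandbox,
readable_roots, and open_file are made-up names, not any real API):

    use std::fs::File;
    use std::io;
    use std::path::{Path, PathBuf};

    // Hypothetical "sandboxed libstd" wrapper. Untrusted code would be
    // compiled against this surface instead of the real std::fs.
    pub struct Sandbox {
        // Directories the untrusted code is allowed to read from.
        readable_roots: Vec<PathBuf>,
    }

    impl Sandbox {
        pub fn new(readable_roots: Vec<PathBuf>) -> Sandbox {
            Sandbox { readable_roots }
        }

        // Replacement for File::open that performs a permission check first.
        // NOTE: a real version would canonicalize `path` before checking, or
        // "../" components could escape the allowed roots.
        pub fn open_file(&self, path: &Path) -> io::Result<File> {
            if self.readable_roots.iter().any(|root| path.starts_with(root)) {
                File::open(path)
            } else {
                Err(io::Error::new(
                    io::ErrorKind::PermissionDenied,
                    "path outside the sandbox",
                ))
            }
        }

        // Dangerous functionality can simply be left out: there is no
        // process-spawning or raw networking function here at all.
    }

Anything not wrapped by the sandbox library simply wouldn't exist for the
submitted code, since the real libstd would not be on its whitelist.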
I could be completely wrong. (Is there some black magic I don't know?)
> On Sat, Jan 18, 2014 at 10:18 PM, Josh Haberman <[email protected]> wrote:
>> Is it a design goal of Rust that you will be able to run untrusted
>> code in-process safely?
>>
>> In other words, by whitelisting the set of available APIs and
>> prohibiting unsafe blocks, would you be able to (eventually, once Rust
>> is stable and hardened) run untrusted code in the same address space
>> without it intentionally or unintentionally escaping its sandbox?
>>
>> (Sorry if this is a FAQ; I couldn't find any info about it.)
>>
>> Thanks,
>> Josh
--
Scott Lawrence