Re: [rust-dev] sandboxing Rust?

2014-01-19 Thread Patrick Walton
I think this is too strongly worded. While I agree that naively running 
untrusted Rust code is not a good idea at all, I don't think language-level 
security is unachievable. Getting the language to the point where it is 
secure is absolutely a top priority, and Rust treats memory safety issues 
with the same severity as security bugs. Even though we presently strongly 
advise against it, we intend to act as though running untrusted code were 
the point of Rust *as far as triaging issues and bugs is concerned*.

Emscripten/OdinMonkey and PNaCl have demonstrated that LLVM can be hardened 
effectively enough to handle untrusted code. (Of course, there is a 
performance penalty for this.)

Finally, I disagree that processes are always the right solution here. If 
processes were as flexible as threads, there would be no need for threads! 
The trouble with process-level isolation is that it makes shared memory much 
more difficult. For isolation with complex use of shared memory (mutexes and 
condition variables), you really want language-level safety.
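
For a concrete picture of the kind of sharing I mean, here is a minimal
sketch, in present-day Rust syntax rather than the 2014 dialect, of two
threads coordinating through a mutex and a condition variable over shared
memory; doing the same across processes means mapping shared memory by hand,
and the compiler can no longer enforce that every access goes through the
lock:

    use std::sync::{Arc, Condvar, Mutex};
    use std::thread;

    fn main() {
        // Shared state: a flag guarded by a mutex, paired with a condvar.
        let shared = Arc::new((Mutex::new(false), Condvar::new()));
        let worker = Arc::clone(&shared);

        let handle = thread::spawn(move || {
            let (lock, cvar) = &*worker;
            *lock.lock().unwrap() = true; // mutate the shared flag in place
            cvar.notify_one();
        });

        // Block until the worker flips the flag.
        let (lock, cvar) = &*shared;
        let mut ready = lock.lock().unwrap();
        while !*ready {
            // wait() atomically releases the lock and reacquires it on wakeup.
            ready = cvar.wait(ready).unwrap();
        }
        drop(ready);
        handle.join().unwrap();
    }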

Patrick

Daniel Micay danielmi...@gmail.com wrote:
On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence byt...@gmail.com
wrote:
 On Sat, 18 Jan 2014, Corey Richardson wrote:

 Rust's safety model is not intended to prevent untrusted code from
 doing evil things.


 Doesn't it successfully do that, though? Or at least with only a small
 amount of extra logic? For example, suppose I accept, compile, and run
 arbitrary Rust code, with only the requirement that there be no unsafe
 blocks (ignore for a moment the fact that libstd uses unsafe). Barring
 compiler bugs, I think it's then guaranteed nothing bad can happen.

Even a small subset of Rust hasn't been proven to be secure. It has
plenty of soundness holes left in the unspoken specification. It will
eventually provide a reasonable level of certainty that you aren't
going to hit one of these issues just writing code, but it's not even
there yet.

 It seems to me that (as usual with languages like Rust) it's simply a
 mildly arduous task of maintaining a parallel libstd implementation to
 be used for sandboxing, which either lacks implementations for
 dangerous functionality, or has them replaced with special versions
 that perform correct permissions checking. That, coupled with
 forbidding unsafe blocks in submitted code, should solve the problem.

You'll need to start with an implementation of `rustc` and `LLVM` free
of known exploitable issues. Once the known issues are all fixed, then
you can start worrying about *really* securing them against an
attacker who only needs to find a bug on one line of code in one
poorly maintained LLVM pass. Even compiling untrusted code with LLVM
without running it is a very scary prospect.

 I could be completely wrong. (Is there some black magic I don't know?)

Yes, you're completely wrong. This kind of thinking is dangerous and is
how we ended up in the mess where everyone is using ridiculously
complex and totally insecure web browsers to run untrusted code
without building a very simple trusted sandbox around it. Many known
exploits are discovered every year, and countless more are kept private
by entities like nation states and organized crime.

The language isn't yet secure and the implementation is unlikely to
ever be very secure. LLVM is certainly full of many known exploitable
bugs and many more unknown ones. There are many known issues in
`rustc` and the language too.

I don't see much of a point in avoiding a process anyway. On Linux, a
process has close to no overhead compared to a thread. Giving up shared
memory is an obvious first step, and the process can be restricted to
making `read`, `write` and `exit` system calls.

The `chromium` sandbox isn't incredibly secure, but at least it's not
insane enough to render in the same process where it compiles
JavaScript. Intel's open-source Linux driver is reaching the point where
an untrusted process can be allowed to use it, but it's not there yet,
and every other video driver on any of the major operating systems is a
joke.

You're not going to get very far if you're not willing to start from
process isolation, and then build real security on top of it. Anyway,
the world doesn't need another Java applet.


Re: [rust-dev] sandboxing Rust?

2014-01-19 Thread Daniel Micay
On Sun, Jan 19, 2014 at 4:17 AM, Daniel Micay danielmi...@gmail.com wrote:

 If there were a tiny subset of Rust that could be compiled down with a
 simpler backend (not LLVM), then I think you could talk seriously
 about the language offering a secure sandbox. I don't think that is
 even attainable with a codebase as large as librustc/LLVM. A fairly
 high number of issues in the Rust and LLVM trackers could be
 considered security issues, and those are just the ones we know about.

Of course, the entire compiler still has to be free of vulnerabilities
itself. Even if it targets a backend assumed to be correct, the
attacker still has the entire surface area of libsyntax/librustc to
play with.


Re: [rust-dev] sandboxing Rust?

2014-01-19 Thread Josh Haberman
On Sun, Jan 19, 2014 at 12:34 AM, Patrick Walton pwal...@mozilla.com wrote:
 I think this is too strongly worded. While I agree that naively running
 untrusted Rust code is not a good idea at all, I don't think language-level
 security is unachievable. Getting the language to the point where it is
 secure is absolutely a top priority, and Rust treats memory safety issues
 with the same severity as security bugs.

Cool, this is really what I was looking to find out. For my own purposes
I'm not thinking so much of running entirely untrusted code, but more of
fairly trusted code: the level of trust you have in a framework/library
that you download and use in your project, where you didn't write the
code yourself but you can read it first if you want (and others probably
have), and where reputation is on the line and it would be tricky to hide
an exploit in plain sight.

For this scenario you would care first and foremost that the code is
highly unlikely to escape inadvertently, and resistance to intentional
attack is just icing on the cake. From the above it sounds like the
goal is to take safety seriously, which would seem to make it entirely
appropriate for this purpose (eventually, once Rust is stable).

Thanks,
Josh


[rust-dev] sandboxing Rust?

2014-01-18 Thread Josh Haberman
Is it a design goal of Rust that you will be able to run untrusted
code in-process safely?

In other words, by whitelisting the set of available APIs and
prohibiting unsafe blocks, would you be able to (eventually, once Rust
is stable and hardened) run untrusted code in the same address space
without it intentionally or unintentionally escaping its sandbox?
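
For concreteness, in present-day Rust the "prohibiting unsafe blocks" half
of that can be enforced mechanically with a crate-level lint; a minimal
sketch (the lint name is the current one, not necessarily what existed in
2014):

    // Crate root of the submitted code: reject any `unsafe` outright.
    #![forbid(unsafe_code)]

    fn main() {
        let xs = vec![1, 2, 3];
        println!("sum = {}", xs.iter().sum::<i32>()); // safe code compiles

        // Uncommenting the line below is a hard compile error; `forbid`
        // cannot be overridden further down the crate:
        // unsafe { std::ptr::null::<i32>().read(); }
    }

Note that this only covers the submitted crate itself; its dependencies,
including libstd, still use unsafe internally.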

(Sorry if this is a FAQ; I couldn't find any info about it.)

Thanks,
Josh


Re: [rust-dev] sandboxing Rust?

2014-01-18 Thread Scott Lawrence

On Sat, 18 Jan 2014, Corey Richardson wrote:


Rust's safety model is not intended to prevent untrusted code from
doing evil things.


Doesn't it successfully do that, though? Or at least with only a small amount 
of extra logic? For example, suppose I accept, compile, and run arbitrary Rust 
code, with only the requirement that there be no unsafe blocks (ignore for a 
moment the fact that libstd uses unsafe). Barring compiler bugs, I think it's 
then guaranteed nothing bad can happen.


It seems to me that (as usual with languages like Rust) it's simply a mildly 
arduous task of maintaining a parallel libstd implementation to be used for 
sandboxing, which either lacks implementations for dangerous functionality, or 
has them replaced with special versions that perform correct permissions 
checking. That, coupled with forbidding unsafe blocks in submitted code, 
should solve the problem.
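
To make that concrete, here is a minimal sketch of what one such special
version might look like; the whitelist root and the `open` wrapper are
hypothetical illustrations, not an existing crate:

    use std::fs::File;
    use std::io;
    use std::path::Path;

    const ALLOWED_ROOT: &str = "/sandbox/data"; // illustrative whitelist

    // Drop-in replacement for a dangerous libstd API that checks
    // permissions before delegating to the real implementation.
    pub fn open(path: &Path) -> io::Result<File> {
        // Canonicalize so `..` tricks can't escape the whitelist root.
        let canonical = path.canonicalize()?;
        if canonical.starts_with(ALLOWED_ROOT) {
            File::open(&canonical)
        } else {
            Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "path outside sandbox whitelist",
            ))
        }
    }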


I could be completely wrong. (Is there some black magic I don't know?)



On Sat, Jan 18, 2014 at 10:18 PM, Josh Haberman jhaber...@gmail.com wrote:

Is it a design goal of Rust that you will be able to run untrusted
code in-process safely?

In other words, by whitelisting the set of available APIs and
prohibiting unsafe blocks, would you be able to (eventually, once Rust
is stable and hardened) run untrusted code in the same address space
without it intentionally or unintentionally escaping its sandbox?

(Sorry if this is a FAQ; I couldn't find any info about it.)

Thanks,
Josh



--
Scott Lawrence


Re: [rust-dev] sandboxing Rust?

2014-01-18 Thread Corey Richardson
On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence byt...@gmail.com wrote:
 On Sat, 18 Jan 2014, Corey Richardson wrote:

 Rust's safety model is not intended to prevent untrusted code from
 doing evil things.


 Doesn't it succesfully do that, though?

It might! But Graydon was very adamant that protection from untrusted
code was/is not one of Rust's goals.

I can't think of anything evil you could do without unsafe code,
assuming a flawless compiler.


Re: [rust-dev] sandboxing Rust?

2014-01-18 Thread Daniel Micay
On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence byt...@gmail.com wrote:
 On Sat, 18 Jan 2014, Corey Richardson wrote:

 Rust's safety model is not intended to prevent untrusted code from
 doing evil things.


 Doesn't it successfully do that, though? Or at least with only a small amount
 of extra logic? For example, suppose I accept, compile, and run arbitrary
 Rust code, with only the requirement that there be no unsafe blocks
 (ignore for a moment the fact that libstd uses unsafe). Barring compiler
 bugs, I think it's then guaranteed nothing bad can happen.

Even a small subset of Rust hasn't been proven to be secure. It has
plenty of soundness holes left in the unspoken specification. It will
eventually provide a reasonable level of certainty that you aren't
going to hit one of these issues just writing code, but it's not even
there yet.

 It seems to me that (as usual with languages like Rust) it's simply a mildly
 arduous task of maintaining a parallel libstd implementation to be used for
 sandboxing, which either lacks implementations for dangerous functionality,
 or has them replaced with special versions that perform correct permissions
 checking. That, coupled with forbidding unsafe blocks in submitted code,
 should solve the problem.

You'll need to start with an implementation of `rustc` and `LLVM` free
of known exploitable issues. Once the known issues are all fixed, then
you can start worrying about *really* securing them against an
attacker who only needs to find a bug on one line of code in one
poorly maintained LLVM pass. Even compiling untrusted code with LLVM
without running it is a very scary prospect.

 I could be completely wrong. (Is there some black magic I don't know?)

Yes, you're completely wrong. This kind of thinking is dangerous and is
how we ended up in the mess where everyone is using ridiculously
complex and totally insecure web browsers to run untrusted code
without building a very simple trusted sandbox around it. Many known
exploits are discovered every year, and countless more are kept private
by entities like nation states and organized crime.

The language isn't yet secure and the implementation is unlikely to
ever be very secure. LLVM is certainly full of many known exploitable
bugs and many more unknown ones. There are many known issues in
`rustc` and the language too.

I don't see much of a point in avoiding a process anyway. On Linux, a
process has close to no overhead compared to a thread. Giving up shared
memory is an obvious first step, and the process can be restricted to
making `read`, `write` and `exit` system calls.
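
On Linux that restriction is essentially seccomp strict mode; a minimal
sketch of the trusted supervisor entering it, assuming a Linux target and
the `libc` crate (the function name is illustrative, the constants come
from the kernel headers):

    // Seccomp "strict mode" limits the calling thread to read(), write(),
    // _exit() and sigreturn(); any other system call is killed with SIGKILL.
    const PR_SET_SECCOMP: libc::c_int = 22; // <linux/prctl.h>
    const SECCOMP_MODE_STRICT: libc::c_ulong = 1; // <linux/seccomp.h>

    fn enter_strict_sandbox() -> std::io::Result<()> {
        // One unsafe call made by trusted code; everything that runs
        // afterwards is confined by the kernel, not by the language.
        let rc = unsafe { libc::prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) };
        if rc == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
    }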

The `chromium` sandbox isn't incredibly secure, but at least it's not
insane enough to render in the same process where it compiles
JavaScript. Intel's open-source Linux driver is reaching the point where
an untrusted process can be allowed to use it, but it's not there yet,
and every other video driver on any of the major operating systems is a
joke.

You're not going to get very far if you're not willing to start from
process isolation, and then build real security on top of it. Anyway,
the world doesn't need another Java applet.