On 03/08/2016 at 18:47, Andrew McCreight wrote:
On Wed, Aug 3, 2016 at 8:35 AM, Jack Moffitt <j...@metajack.im> wrote:

I asked ekr how much this mattered, and he thought it was important. I
don't think anyone has pointed me to a documented attack, but it
definitely seems like the kind of thing that could be done somehow.

It seems like it will be important in the long run, but the current state
of the art is that every browser is getting exploited every year at
Pwn2Own, really badly. This is from an article on Pwn2Own this year: "In
fact, every successful attack at Pwn2Own this year achieved system or root
privileges, which has never happened at the event before." [1]
Trying to summarize what is said in wrap-up videos for successful exploits:

Day 1 recap https://www.youtube.com/watch?v=DOmzWKW-mto
1) combination of 4 vulns, including 1 use-after-free and 1 heap overflow
2) 1 use-after-free in the Windows OS
3) 2 use-after-frees in Flash + 1 use-after-free in the Windows OS + 1 out-of-bounds vuln in Chrome
4) 2 use-after-frees (1 in Safari, 1 in a privileged OS process)
5) 1 out-of-bounds in Flash + 1 use-after-free in the Windows OS

Day 2 recap https://www.youtube.com/watch?v=Sh8pveFv2DI
1) 1 use-after-free in Safari + 1 out-of-bounds in MacOS 10
2) Uninitialized stack variable in Edge + OS vuln
3) out-of-bounds vuln in Edge + buffer overflow in the kernel

Boring business as usual, I guess... To summarize the summary:
1) memory safety browser-level bugs
2) OS/kernel bugs

I guess 2) is not something Servo can really address, or only after each browser vuln is discovered (by preventing the conditions that trigger the OS vuln).
But 1) is exactly what (safe) Rust was meant to address [1][2].

In a world where a sophisticated attacker can get root privileges, I
wouldn't spend too much time worrying about intraprocess attacks.
If (safe) Rust keeps its promise of preventing use-after-free, uninitialized stack variables, out-of-bounds accesses, etc., then the attacks demonstrated at Pwn2Own are... not possible at all (modulo OS vulns). The security discussion then needs to start over from a new baseline: a world where these attacks simply cannot happen. Maybe intra-process attacks are the most worrisome threat in that new world...
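To make that concrete, here is a tiny illustration (toy code, not Servo) of those bug classes and how safe Rust handles them at compile time or at run time:

    // Toy illustration only (not Servo code): the bug classes listed above,
    // and what safe Rust does with them.
    fn main() {
        // 1) Use-after-free: a reference cannot outlive the value it points to.
        //    Uncommenting this block fails to compile with
        //    "`s` does not live long enough".
        // let dangling: &String;
        // {
        //     let s = String::from("freed");
        //     dangling = &s;
        // }
        // println!("{}", dangling);

        // 2) Uninitialized variable: reading `x` before assignment is rejected
        //    at compile time ("used binding `x` isn't initialized").
        // let x: i32;
        // println!("{}", x);

        // 3) Out-of-bounds: indexing is bounds-checked, and `get` turns the
        //    failure into an explicit Option instead of reading arbitrary memory.
        let buf = vec![1u8, 2, 3];
        match buf.get(10) {
            Some(b) => println!("byte: {}", b),
            None => println!("index 10 is out of bounds; nothing leaked"),
        }
    }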

My understanding of pwalton's email is that the parts written in unsafe Rust binding to complex C++ libraries should be bundled together in their own process(es). Were the JS and layout engines written in safe Rust, I don't think process-as-a-sandboxing-boundary would be necessary?

David

[1] http://fr.slideshare.net/BrendanEich/future-tense-7782010#14
[2] https://air.mozilla.org/overview-of-research-team-projects/ (starting at 55'0'')

If you do want to think about it, I'd first read over the work that Chrome
people have already put into thinking about this issue:
http://www.chromium.org/developers/design-documents/site-isolation

I'm not sure how much they have actually shipped from this initiative,
though.

Andrew

[1]
http://venturebeat.com/2016/03/18/pwn2own-2016-chrome-edge-and-safari-hacked-460k-awarded-in-total/


How we allocate domains to content processes is an open question. It's
not clear whether we want to segregate high value targets or low value
targets. But the infrastructure required is the same either way pretty
much. The only strategy we know won't work is round-robin/random,
since the attacker could just keep creating domains until they land in
the right process.
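To make that concrete, a rough sketch (type and field names invented, not Servo's actual code) of a keyed policy, where high-value sites get dedicated processes and everything else maps stably from site to a pooled process, so minting new domains never moves the attacker:

    use std::collections::{HashMap, HashSet};

    type ProcessId = u32;

    // Hypothetical allocator sketch. The key property: where a pipeline lands
    // depends only on its site (and a policy list), never on arrival order.
    struct PipelineAllocator {
        high_value: HashSet<String>,           // e.g. seeded from EV certs or a user list
        dedicated: HashMap<String, ProcessId>, // one process per high-value site
        shared: HashMap<String, ProcessId>,    // stable site -> pooled process
        shared_pool: u32,
        next_id: ProcessId,
    }

    impl PipelineAllocator {
        fn process_for(&mut self, site: &str) -> ProcessId {
            if self.high_value.contains(site) {
                if let Some(&pid) = self.dedicated.get(site) {
                    return pid;
                }
                let pid = self.next_id;
                self.next_id += 1;
                self.dedicated.insert(site.to_string(), pid);
                return pid;
            }
            if let Some(&pid) = self.shared.get(site) {
                return pid;
            }
            // Deterministic per site: an attacker creating more domains only
            // spreads across the shared pool and never reaches a process that
            // hosts a high-value site.
            let pid = 1000 + (self.shared.len() as u32 % self.shared_pool);
            self.shared.insert(site.to_string(), pid);
            pid
        }
    }

Whether it's the high-value or the low-value targets that deserve their own processes is exactly the open question above; either choice fits the same keyed-lookup shape, which is why the infrastructure is the same.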

To be clear, I don't think there is very much code complexity here
over the normal 2 process (chrome + content) solution. We already have
to have process spawning and IPC. The only thing that changes here is
code to decide where to spawn new pipelines.

Implementation wise, we currently spawn a new process per script
thread. I think we should change this to spawn a single, sandboxed
content process that contains all the pipelines. Later we can expand
this once it's more clear how we should allocate pipelines to
processes.
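The stages are really just a policy knob for the part that decides where a new pipeline goes; hypothetically something like:

    // Hypothetical knob, names invented; only the allocation code changes.
    enum ProcessPolicy {
        // Today: one sandboxed process per script/layout thread.
        PerScriptThread,
        // Proposed next step: all pipelines in a single sandboxed content process.
        SingleContentProcess,
        // Later: some grouping of pipelines (e.g. by site) across a small pool.
        Grouped { max_processes: u32 },
    }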

jack.

On Wed, Aug 3, 2016 at 2:53 AM, Till Schneidereit
<t...@tillschneidereit.net> wrote:
I wonder to what extent this matters. I'm not aware of any real-world instances of the mythical cross-tab information harvesting attack. Sure, in theory the malvertising ad from one tab would be able to read information from your online banking session. In practice, it seems like attacks that gain control of the machine are so much more powerful that that's where all the focus is.

Additionally, it seems like two content processes, one for normal sites and one for high-security ones (perhaps based on EV certificates), should give much of the benefit. Or perhaps an additional one for low-security sites such as ads (perhaps based on tracking-blocking lists).

On Wed, Aug 3, 2016 at 5:43 AM, Jack Moffitt <j...@metajack.im> wrote:

Each process is a sandboxing boundary. Without security as a concern
you would just have a single process. A huge next step is to have a
second process that all script/layout threads go into. This however
still leaves a bit of attack surface for one script task to attack
another. How many processes you want is a tradeoff of overhead vs.
security.

So really it should say "more process more security".

jack.

On Tue, Aug 2, 2016 at 9:09 PM, Patrick Walton <pwal...@mozilla.com>
wrote:
It's not a stupid question :) I actually think we should gather all
script
and layout threads together into one process. Maybe two, one for
high-security sites and one for all other sites.

Patrick


On Aug 2, 2016 6:47 PM, "Paul Rouget" <p...@mozilla.com> wrote:
On Tue, Aug 2, 2016 at 6:47 PM, Jack Moffitt <j...@metajack.im>
wrote:
First, is multiprocess and sandboxing actively supported?
I tested this right before the nightly release, and it was working
fine and didn't seem to have bad performance. Note that you can
run -M
or -M and -S, but not -S by itself (which doesn't make sense). Also
note that -M and -S probably don't work on Windows or Android
currently.

Is Servo tested with the "-M -S" options?
We do not have automated testing of these yet.

What's the status of the sandbox?
Should work on Mac and Linux, but hasn't been audited.

Are there any reasons for these options not to be turned on by
default?
They should be, although I think we wanted to fix perf issues
running
the WPT suite and get all the platforms working first. We should
probably test both configurations.

Do we want to enable "-M -S" for browserhtml? Would that help?
I wanted to have this for the nightly, but didn't have time to
test.
If it works and has decent performance we can switch to having
these
be on.

I'd like to understand what is not part of the sandboxed content
process.
I guess compositor code and anything GPU and window related is not
sandboxed so it runs in the main process.
How does a sync call to localStorage work in a sandboxed process?
Where is networking code executed?
The things that live in the extra processes (which are sandboxed) are the script and layout threads. Right now each script/layout thread gets its own process (and I think any pipeline which shares the same script thread).

Eventually we'll want to have each extra process contain some number of pipelines. So that is script+layout, but for arbitrary numbers of domains.
In your slides, you say "more process more better".
That might be a stupid question, but why?
Because of the nature of Servo, can't we just gather all the
script+layout threads into one single sandboxed process?

The constellation, networking, graphics, etc all live in the root
process which has privileges.
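To picture the split (toy code only: Servo's real cross-process plumbing goes through the ipc-channel crate with its own message types; the names below are invented and a std channel stands in for real IPC), a "sync" localStorage read from a sandboxed pipeline is just a request to the privileged process plus a blocking wait for the reply:

    use std::sync::mpsc::{channel, Receiver, Sender};
    use std::thread;

    // Invented message type: a sandboxed script/layout process never opens
    // sockets or touches disk itself; it asks the privileged root process.
    enum ResourceRequest {
        Fetch { url: String, reply: Sender<Vec<u8>> },
        ReadLocalStorage { origin: String, key: String, reply: Sender<Option<String>> },
    }

    fn main() {
        let (to_root, from_content): (Sender<ResourceRequest>, Receiver<ResourceRequest>) = channel();

        // Stand-in for the privileged root process: owns networking and storage.
        let root = thread::spawn(move || {
            for req in from_content {
                match req {
                    ResourceRequest::Fetch { url, reply } => {
                        // Real code would hit the network here.
                        let _ = reply.send(format!("<!-- body of {} -->", url).into_bytes());
                    }
                    ResourceRequest::ReadLocalStorage { origin, key, reply } => {
                        // Real code would consult the storage backend here.
                        let _ = reply.send(Some(format!("{} -> {}", origin, key)));
                    }
                }
            }
        });

        // Stand-in for a sandboxed content process: a sync localStorage read
        // is a request plus a blocking recv() on the reply channel.
        let (reply_tx, reply_rx) = channel();
        to_root
            .send(ResourceRequest::ReadLocalStorage {
                origin: "https://example.com".into(),
                key: "token".into(),
                reply: reply_tx,
            })
            .unwrap();
        println!("localStorage value: {:?}", reply_rx.recv().unwrap());

        drop(to_root); // closing the channel lets the root loop exit
        root.join().unwrap();
    }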


I'm trying to understand the relation between a constellation, iframes and a sandboxed process. I would naively expect to have one process per constellation, but apparently, it's one process per iframe. If I'm not mistaken, today in browserhtml, we have only one constellation. I imagine in the future there would be one sandboxed process per constellation, one constellation per group of tabs of the same domain, and one constellation for browserhtml.
There is only one constellation. A constellation owns a set of
pipelines which then form a tree of pipelines. It is only these
pipelines that live outside the main process.
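If I try to picture that as a data model (made-up types, just my mental model, not Servo's actual code):

    // Made-up illustration, not Servo's actual types.
    struct Constellation {
        // One root pipeline per top-level browsing context (e.g. per tab).
        roots: Vec<Pipeline>,
    }

    struct Pipeline {
        id: u32,
        // Which sandboxed content process hosts this pipeline's script + layout.
        process: u32,
        // One child pipeline per iframe in this document.
        children: Vec<Pipeline>,
    }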
Would there be any advantage of having one constellation per tab?
Can't a constellation fail? Would it be more robust to have multiple
constellations?

I've read somewhere that a constellation should be seen as the set of
pipelines per tab.

But maybe it's a different story with browserhtml, because what would hold the tabs/constellations would be a pipeline, so in the end it just doesn't make sense to have multiple constellations.

Asking because if multiple constellations are better and that's what we eventually want to do, we need to rethink the bhtml architecture.

Eventually we'll probably experiment with where resource caching
threads and such go.

Here's a link to the deck I presented in London which has pretty
pictures of what the design should be:


https://docs.google.com/presentation/d/1ht96DBAynx7dbL2taDAzNHs78QWeKvyzrVV1O-cDQLQ/edit?usp=sharing
jack.


_______________________________________________
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo
