[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Nicolas Sylvain
On Fri, May 1, 2009 at 10:19 AM, Aaron Boodman a...@chromium.org wrote:


 Right now, we are unpacking extensions in the browser process. This
 basically consists of unzipping the package into a directory structure
 and parsing a JSON manifest.

 Both of these things feel like things we should not be doing in the
 browser. Additionally, extensions can contains PNG images that will be
 used in the browser process, for example, for themes. Decoding these
 images also shouldn't be done in the browser process.

 I'm looking for advice on how best to sandbox all of this.


 Here are my current thoughts:

 To me, the conceptually simplest solution would be to do all of the
 unpacking in whichever renderer happened to be the one that the user
 clicked Install in. In the case of autoupdate, we'd use the
 extension's own process, which is also just a renderer.

 The browser would tell the renderer about the zip file that needed to
 be unpacked, and the renderer would unzip it, parse it, and decode
 images into bitmaps, which would all be shipped back to the browser.

 The immediate practical problem with this approach is that the zip
 library we use works in terms of files, not memory. This could be
 changed, but I am not sure how good an idea that is since packages
 could be large. Average Firefox extensions are ~300k, but we are
 planning for a max of 1M.

 Maybe the renderers could be given a temporary directory they are
 allowed to do work in? The browser could put the zip file there, and
 it could be unpacked in place?
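The unpack flow described above — unzip the package into a scratch directory, then parse the JSON manifest — can be sketched in Python for illustration (Chromium's tree is C++, and the helper name here is hypothetical, though `manifest.json` is the real manifest filename):

```python
import json
import os
import tempfile
import zipfile

def unpack_extension(zip_path):
    """Unzip an extension package into a scratch directory and parse its
    JSON manifest; returns (manifest dict, unpack directory)."""
    workdir = tempfile.mkdtemp(prefix="ext_unpack_")
    with zipfile.ZipFile(zip_path) as zf:
        # NOTE: a real unpacker must reject path-traversal entries
        # (names like "../evil") before extracting.
        zf.extractall(workdir)
    with open(os.path.join(workdir, "manifest.json")) as f:
        manifest = json.load(f)
    return manifest, workdir
```

Doing exactly this work in a sandboxed process, rather than the browser, is what the thread is about.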


We've talked about this for a bunch of different reasons, and always
pushed back. But maybe the Gears team was going to do that anyway? I'm
not sure what we decided in the end.

Either way, can you modify your zip library to take a file handle instead of
a filename?

If so, we have everything you need to pass a file handle across processes,
or, even better, a memory-mapped file.
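As a proof of concept that a zip reader needs only a seekable handle, never a filename, here is the idea in Python, whose `zipfile` module already works this way (illustrative sketch; the function name is hypothetical, and the actual change would be to the C++ zip library):

```python
import zipfile

def read_manifest_from_handle(fileobj):
    """Read manifest.json out of an already-open, seekable file object.
    No filename is required, so the handle could have been duplicated
    into a sandboxed process by the browser."""
    with zipfile.ZipFile(fileobj) as zf:
        return zf.read("manifest.json").decode("utf-8")
```

Because only the handle crosses the process boundary, the sandboxed side needs no filesystem access at all to read the package.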

Nicolas



 Another orthogonal idea I have heard kicked around is a separate
 utility process. This seems like it would have the same problems
 with how to get the data in and out, though, and I don't see why
 bother having a new process when we already have a renderer we could
 use.

 Looking forward to your brilliant ideas,

 - a

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Erik Kay
On Fri, May 1, 2009 at 10:19 AM, Aaron Boodman a...@chromium.org wrote:


 Right now, we are unpacking extensions in the browser process. This
 basically consists of unzipping the package into a directory structure
 and parsing a JSON manifest.

 Both of these things feel like things we should not be doing in the
 browser. Additionally, extensions can contains PNG images that will be
 used in the browser process, for example, for themes. Decoding these
 images also shouldn't be done in the browser process.

 I'm looking for advice on how best to sandbox all of this.


 Here are my current thoughts:

 To me, the conceptually simplest solution would be to do all of the
 unpacking in whichever renderer happened to be the one that the user
 clicked Install in. In the case of autoupdate, we'd use the
 extension's own process, which is also just a renderer.

 The browser would tell the renderer about the zip file that needed to
 be unpacked, and the renderer would unzip it, parse it, and decode
 images into bitmaps, which would all be shipped back to the browser.


For normal extensions where images are always just rendered in HTML, we
don't need to do anything special with the images.  They'll always be read
and rendered in the renderer.

The issue with images is with themes, since they're displayed by the browser
process.  I'm not sure I followed your flow with this.  At install time, you
ship decoded images over to the browser process so it can display them.
Does it need to re-encode the images itself for storage to disk?  Or is it
going to need to ask the renderer to decode each time?



 The immediate practical problem with this approach is that the zip
 library we use works in terms of files, not memory. This could be
 changed, but I am not sure how good an idea that is since packages
 could be large. Average Firefox extensions are ~300k, but we are
 planning for a max of 1M.


I think the max was actually 10M.  Perhaps we'd need to implement it as a
streaming API.  Isn't that kind of logic already in place for audio/video?



 Maybe the renderers could be given a temporary directory they are
 allowed to do work in? The browser could put the zip file there, and
 it could be unpacked in place?


Perhaps the renderer could just have read access to the zip file, and then
pass the files it's unpacking one-by-one up to the browser.  If the zip has
any single large files, that gets expensive though.
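Erik's one-by-one flow — renderer reads the zip, then hands members up individually — can be sketched as a generator (Python for illustration; name is hypothetical):

```python
import zipfile

def stream_entries(zip_source):
    """Yield (name, bytes) for each archive member, one at a time, so a
    renderer could hand files up to the browser individually instead of
    shipping the whole unpacked tree in one message."""
    with zipfile.ZipFile(zip_source) as zf:
        for info in zf.infolist():
            if not info.is_dir():
                yield info.filename, zf.read(info.filename)
```

The per-file cost Erik mentions shows up here as the `zf.read()` of each member: a single huge member still has to cross the process boundary in one piece unless the transfer itself is chunked.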


Another orthogonal idea I have heard kicked around is a separate
 utility process. This seems like it would have the same problems
 with how to get the data in and out, though, and I don't see why
 bother having a new process when we already have a renderer we could
 use.


There have been other cases people have brought up where the work being
requested wasn't associated with a renderer (PAC parsing for example).  With
the extension example, I think it could be associated with a renderer, but
in some cases, we'd be opening up a new one (say you double-click on a crx
file).

Erik




[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Finnur Thorarinsson
 The issue with images is with themes, since they're displayed by the
browser process.
The issue with images is also an issue with PageActions, where we want to
display icons (handed to us by an extension) inside the Omnibox.




[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Aaron Boodman

Thanks for the replies!

On Fri, May 1, 2009 at 10:42 AM, Adam Barth aba...@chromium.org wrote:
 I think we should go with the utility process.  We've seen several
 examples where this would be a useful concept to have.

On Fri, May 1, 2009 at 10:48 AM, Erik Kay erik...@chromium.org wrote:
 There have been other cases people have brought up where the work being
 requested wasn't associated with a renderer (PAC parsing for example).  With
 the extension example, I think it could be associated with a renderer, but
 in some cases, we'd be opening up a new one (say you double click on a crx

On Fri, May 1, 2009 at 10:48 AM, Ojan Vafai o...@chromium.org wrote:
 An advantage of the utility process is that it's not tied to the lifetime of
 the tab, so we don't have to deal with edge cases like when the user closes
 a tab that's mid-installing an extension.

I hadn't thought of the double-click the CRX case. If this is the only
example, I'd be willing to lose this feature, to be honest.

I think losing an in-process install when the tab that started it goes
away would be reasonable behavior.

On the other hand, implementing a utility process doesn't seem
like a huge amount of work, and I guess it would be useful for other
Chromium systems. I guess the bigger issue is getting data in and out
of whichever process we do the work in.


On Fri, May 1, 2009 at 10:42 AM, Adam Barth aba...@chromium.org wrote:
 As for the zip libraries, I seem to recall that we can marshal file
 handles into sandboxed processes, but I'm not an expert on this.

On Fri, May 1, 2009 at 10:48 AM, Erik Kay erik...@chromium.org wrote:
 I think the max was actually 10M.  Perhaps we'd need to implement it as a
 streaming API.  Isn't that kind of logic already in place for audio/video?

 Perhaps the renderer could just have read access to the zip file, and then
 pass the files it's unpacking one-by-one up to the browser.  If the zip has
 any single large files, that gets expensive though.

On Fri, May 1, 2009 at 10:45 AM, Nicolas Sylvain nsylv...@chromium.org wrote:
 We've talked about this for a bunch of different reasons, and always
 pushed back. But maybe the gears team was going to do that anyway? I'm
 not sure what we decided at the end.
 Either way, can you modify your zip library to take a file handle instead of
 a filename?
 If so, we have everything you need to pass a file handle across processes,
 or
 even better, a memory map file.

We can use DuplicateHandle() to get the input file handle in, but I am
not sure what to do about getting the directory structure out.

I thought perhaps the case with Gears was slightly different, but I'm
not sure why. Here, all we would need is a temporary directory (any
temporary directory) we could use to do work in. In Gears, we needed
access to a particular path.
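The handle-duplication half of this can be mocked up with the POSIX analogue of DuplicateHandle(), `os.dup()` (sketch only; on Windows the real path would go through the sandbox broker, and the function name here is hypothetical):

```python
import os
import zipfile

def open_zip_from_fd(fd):
    """Open a zip archive from a raw file descriptor -- the POSIX
    analogue of receiving a duplicated handle: the receiver never sees
    a path, only an already-open descriptor."""
    dup_fd = os.dup(fd)                # the receiver's own copy of the handle
    fileobj = os.fdopen(dup_fd, "rb")  # wrap it as a seekable file object
    return zipfile.ZipFile(fileobj)
```

Getting the unpacked directory structure back out is the part this doesn't solve, which is exactly the open question above.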


On Fri, May 1, 2009 at 10:48 AM, Erik Kay erik...@chromium.org wrote:
 For normal extensions where images are always just rendered in HTML, we
 don't need to do anything special with the images.  They'll always be read
 and rendered in the renderer.

The issue with needing to decode images in the package is going to
come up for other things. Finnur brought up one example, PageActions.
But I foresee us needing to display images that came with the extension
in the browser for other reasons.

 The issue with images is with themes, since they're displayed by the browser
 process.  I'm not sure I followed your flow with this.  At install time, you
 ship decoded images over to the browser process so it can display them.
  Does it need to re-encode the images itself for storage to disk?  Or is it
 going to need to ask the renderer to decode each time?

I was thinking that ideally, the renderer would just unpack the
extension into whatever directory structure is useful to us. Part of
this could be to decode any images into bitmaps that we store
alongside the extension. Later on, the browser process refers to
these.

- a




[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Scott Hess

On Fri, May 1, 2009 at 11:17 AM, Aaron Boodman a...@chromium.org wrote:
 We can use DuplicateHandle() to get the input file handle in, but I am
 not sure what to do about getting the directory structure out.

Crazy-talk: Have the renderer unpack the zip into a SQLite database.

Architecture-astronaut-talk: Have a virtual filesystem API which you
could expose either from the browser to a renderer (chroot-like
sandboxing), or from a utility process to a renderer (utility process
handles the unzipping).  It's only crazy if this problem never comes
up again.

-scott
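Scott's SQLite idea, sketched with a hypothetical one-table schema (path, content) per archive member:

```python
import sqlite3
import zipfile

def unpack_zip_to_sqlite(zip_source, db_path):
    """Unpack every archive member into a single SQLite database, one
    (path, content) row per file, instead of scattering files on disk."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, content BLOB)")
    with zipfile.ZipFile(zip_source) as zf:
        for info in zf.infolist():
            if not info.is_dir():
                conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?)",
                             (info.filename, zf.read(info.filename)))
    conn.commit()
    return conn
```

The appeal for the sandbox problem is that the entire output is one file, so only a single handle ever has to cross the process boundary.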




[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread cpu

A utility process is an appealing idea. We do something like that for
first-run import as well.

Key items I can think of:

1- Utility process would not display UI (would it?)
2- We can allow a directory to be available for read/write
3- Use IPC for progress / heartbeat

In other words, pretty much a custom renderer.
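Items 1-3 can be mocked up with a child process that does the unzipping, writes only inside its assigned directory, and heartbeats progress over a pipe (Python sketch with hypothetical names; real Chromium IPC is quite different):

```python
import multiprocessing
import zipfile

def utility_main(zip_path, out_dir, progress):
    """Entry point for the 'utility process': no UI of its own, writes
    only inside out_dir, heartbeats over an IPC pipe."""
    with zipfile.ZipFile(zip_path) as zf:
        names = zf.namelist()
        for i, name in enumerate(names, 1):
            zf.extract(name, out_dir)
            progress.send(("progress", i, len(names)))
    progress.send(("done", len(names), len(names)))
    progress.close()

def run_utility(zip_path, out_dir):
    """Browser side: spawn the helper, then drain its heartbeat messages."""
    recv_end, send_end = multiprocessing.Pipe(duplex=False)
    helper = multiprocessing.Process(
        target=utility_main, args=(zip_path, out_dir, send_end))
    helper.start()
    send_end.close()  # so EOF arrives once the helper exits
    messages = []
    try:
        while True:
            messages.append(recv_end.recv())
    except EOFError:
        pass
    helper.join()
    return messages
```

A missing heartbeat doubles as crash detection: if the pipe hits EOF before a "done" message arrives, the browser knows the helper died mid-unpack.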

For the images I am lost. Unless we transcode, I don't see the point.
Transcoding to a format that we handle well, or that is not crazy,
would mitigate most attacks. Or we could mandate the image format and
do a cursory decode to validate.
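The "cursory decoding to validate" idea, sketched for PNG: check the signature and pull the dimensions out of the IHDR chunk without running a full (attackable) decoder. This is only a sketch; a production check would also verify chunk CRCs, bounds, and more:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def cursory_png_check(data):
    """Cheap validation pass: verify the PNG signature and read the
    IHDR chunk's dimensions. Returns (width, height) or raises
    ValueError."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG")
    # The first chunk must be IHDR: 4-byte length, 4-byte type, 13 data bytes.
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR" or length != 13:
        raise ValueError("malformed IHDR")
    width, height = struct.unpack(">II", data[16:24])
    return width, height
```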






[chromium-dev] Re: Unpacking Extensions and the Sandbox

2009-05-01 Thread Jeremy Orlow
On Fri, May 1, 2009 at 11:36 AM, cpu c...@chromium.org wrote:


 Utility process is an amenable idea. We do something like that for
 first-run import as well.

 Key items, I can think of:

 1- Utility process would not display UI (would it?)
 2- We can allow a directory to be available for read/write
 3- Use IPC for progress / heartbeat

 In other words pretty much a custom renderer.


I think it's also important to add the following:

4 - Very little (if any) state in the utility process so that restarting it
is trivial.
5 - A design so that sync calls from browser to helper are OK.  (If the
utility process dies during a call, we can maybe retry and return an error
if it crashes again.)
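Item 5's restart-once policy might look like the following (hypothetical wrapper; `fn` stands in for the synchronous IPC call, and an `OSError` stands in for the helper crashing):

```python
def call_utility(fn, *args, retries=1):
    """Synchronous browser-to-helper call with a restart-once policy:
    if the (stateless) helper dies mid-call we retry, and only after a
    second failure do we surface an error to the caller."""
    last_error = None
    for _attempt in range(retries + 1):
        try:
            return fn(*args)
        except OSError as e:  # stands in for "the helper process crashed"
            last_error = e
    raise RuntimeError("utility call failed after retry") from last_error
```

Item 4 is what makes this safe: because the helper holds essentially no state, re-issuing the same call against a fresh process is always valid.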

As for #2, are you suggesting that the utility process would do all
operations in its directory and then the browser process would push things
in or pull them out before/after the processing is done?  This might be a
simple and elegant way to avoid having some extensive virtual file system
layer.
