If you have a good reference for the Windows file security model I'd love to read up some more on it, as I haven't found many conclusive references on the risks/implications of passing file handles, especially among processes of dramatically different privilege levels (other than kernel -> userland, which is where we've actually seen bugs with stdin/stdout/stderr).

Regarding the point below, I said it would reduce platform-specific code in the content process, which means we don't end up with multiple copies of that code around. Regarding context switching: we are already serializing so much across IPC (pretty much all security-sensitive operations EXCEPT file access) that serializing file uploads would not have a significant impact, and it would remove a lot of potentially tricky platform-specific code from the content process, which is a good thing.

In some respects I feel like we're trying to address the same problem from different angles. From a defense-in-depth standpoint, we want to prevent content processes from accessing native files, sockets, libraries, and other sensitive resources except where explicitly permitted by the chrome process. Since the proposed model involves passing native file handles/descriptors, it requires the content process to have access to all of the usual file I/O libraries and prevents us from trying to break those wholesale; instead we'd have to rely on ensuring that content processes can't load anything by direct or indirect references, can't import libraries that would allow them to do so, etc. Now, it may not be possible to disable these wholesale because, as Zack mentioned, a bunch of libc assumes access to certain public files. But it's a worthy goal. :)

My take-away from the chrome security model is that identifying and patching the various libraries to prohibit access to bad stuff is a painful game of whack-a-mole.
  Lucas.

On Aug 20, 2009, at 10:46 PM, Rob Arnold wrote:

On Thu, Aug 20, 2009 at 4:40 PM, Lucas Adamski <lu...@mozilla.com> wrote:

I'm not too familiar with file descriptor security issues (as that's generally the domain of local elevation-of-privilege exploits and traditionally more of an OS-security problem), but it seems like we have a few areas of concern here:

a) file descriptors have been at the root (no pun intended) of several local escalation of privilege vulnerabilities, such as:
http://www.net-security.org/advisory.php?id=2593
http://www.securiteam.com/unixfocus/5SP0N157FK.html
http://www.derkeiler.com/Mailing-Lists/Full-Disclosure/2007-01/msg00344.html

These are all Unix file descriptors - NT uses a common object manager for checking permissions and ensuring the validity of handles to operating system objects. I'm not saying there are no security holes, but given that this code path is exercised quite frequently (more so than Unix file descriptors), I would be surprised if there are still OS-level issues that cannot be solved by more careful application-level checks (e.g. sharing access to symlinks).

b) file descriptors can be implemented differently on different platforms; exposing them directly means we give up control and are at the mercy of platform implementations (and I'm not sure how we'd even do that on Windows). There has been enough recent work around properly securing file descriptors to make me pretty nervous: http://udrepper.livejournal.com/20407.html (I don't think his browser-and-plugin example is super relevant, but the point is that file descriptors are tricky from a security standpoint)
http://www.nsa.gov/research/_files/selinux/papers/module/x92.shtml#FDPERMSCHANGES
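
For concreteness, the Unix mechanism under discussion is descriptor passing over AF_UNIX sockets via SCM_RIGHTS ancillary data. A minimal sketch of the sending side - the function name and setup are illustrative, not code we actually have:

    // Chrome-process side: pass an already-open, read-only descriptor to
    // the content process over an AF_UNIX socket. 'sock' is assumed to be
    // a connected socketpair() endpoint shared with the content process.
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int sock, int fd_to_send) {
        char payload = 'F';                // at least one byte must be sent
        struct iovec iov;
        iov.iov_base = &payload;
        iov.iov_len = 1;

        union {                            // ensures correct cmsg alignment
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } ctrl;
        memset(&ctrl, 0, sizeof(ctrl));

        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;      // this message carries descriptors
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        // The kernel installs a copy in the receiver's fd table; the copy
        // shares the open file description, including its O_RDONLY mode.
        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }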

I'm not a fan of reinventing the wheel. Again, for what it's worth, those papers deal with Unix file descriptors, which operate differently than Windows handles. We may need to take separate approaches (what does Chrome do?).

I suggest that you read up on the Windows file APIs/kernel design - it's quite informative. Files are a type of Executive (kernel, if you will) object. Processes access objects through handles - each process has its own handle table. Handles can be duplicated (similar to dup) from one process to another (including itself) with a different set of permissions. A child process can also inherit its parent's handles if both of the following apply: a) the handle is marked as inheritable (most code defaults to false here, including ours); b) the child process is created with a flag that says it gets to inherit handles.

We would most likely be using the first method (duplication): opening the file read-only in the chrome process, then duplicating the handle into the child, stripping all rights except for reading (the child process can close the handle, but the object is refcounted in the kernel).
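
A minimal sketch of that flow - CreateFileW and DuplicateHandle are the real APIs; the function name, the elided error handling, and how hContentProcess gets obtained are illustrative assumptions:

    #include <windows.h>

    // Open 'path' read-only in the chrome process and hand the content
    // process a handle with all rights stripped except reading.
    // hContentProcess must have been opened with PROCESS_DUP_HANDLE.
    HANDLE GiveContentProcessReadOnlyHandle(HANDLE hContentProcess,
                                            const wchar_t* path) {
        // bInheritHandle is FALSE, so this handle can't leak to children
        // via inheritance (the second mechanism described above).
        HANDLE hFile = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING,
                                   FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return NULL;

        // Duplicate into the content process, granting only
        // FILE_GENERIC_READ. Passing an explicit access mask (rather than
        // DUPLICATE_SAME_ACCESS) is what strips the rights.
        HANDLE hDup = NULL;
        if (!DuplicateHandle(GetCurrentProcess(), hFile,
                             hContentProcess, &hDup,
                             FILE_GENERIC_READ,
                             FALSE,   // not inheritable in the target either
                             0)) {
            hDup = NULL;
        }
        // The kernel object is refcounted, so closing our handle does not
        // invalidate the duplicate.
        CloseHandle(hFile);
        return hDup;  // meaningful only in hContentProcess's handle table
    }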

I don't know how you'd perform that on Unix-based systems, but in the event we cannot, we shouldn't penalize performance on the OS with the largest Firefox market share to work around those limitations.

c) defense in depth and minimizing privilege: we should not expose low-level OS primitives to content processes where avoidable, particularly for filesystem access, where paranoia is advisable. I'm not clear on how a process could receive permission to access a specific file without also exposing the rest of the OS to attack, either directly or via low-level disk writes. Given that one of the primary goals of Electrolysis security is to protect the filesystem from remote content, we shouldn't over-optimize here.

It's not clear to me how an operating system can let a process use a file descriptor for one file to scribble over or read from the entire disk and still claim to be stable/secure - we should not support such systems.

On Windows, it is not possible to perform low level disk writes without being an Administrator - a UAC prompt may even be involved.

I'm not sure how we would consider denial of service attacks via disk I/O from the content process - do you know of any papers on this?

Since we are serializing graphics constantly, it seems like a minimal performance hit to serialize all file I/O as well. Doing so would reduce platform-specific code in the content process, provide significant defense in depth, and might even provide performance and stability benefits (since the chrome process will be in a better position to clean up any open files and other resources left behind by a crashed content process).
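
Roughly, instead of ever touching an OS handle, the content process would send something like this over IPC (all names invented for illustration, not actual Mozilla IPC code):

    #include <stdint.h>
    #include <vector>

    struct ReadRequest {            // content -> chrome
        uint64_t fileId;            // opaque token issued by the chrome process
        uint64_t offset;            // where to read
        uint32_t count;             // how many bytes
    };

    struct ReadReply {              // chrome -> content
        int32_t error;              // 0 on success
        std::vector<uint8_t> data;  // at most 'count' bytes
    };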

Giving the child process a native file descriptor lets us avoid excessive context switching, and context switches are not cheap. I don't see how there's a reduction in platform-specific code - it's either in the content process or the master process, and this indirection will actually add more code. Why not let the OS do its job? I'm not convinced that we should try to reinvent the wheel when there are existing solutions to this problem.

At minimum it seems to invite escalation-of-privilege attacks. A better model might be for the broker process to handle the file I/O on behalf of the content process using a randomly generated file reference that is not an underlying file descriptor; something strongly coupled to that particular content process (to prevent rogue processes from trying to brute-force other file references).

This doesn't make a whole lot of sense to me, given that file descriptors are meaningful only in the context of a particular process. You can't brute-force them.


If we serialize all file I/O, we will need to provide some sort of file ID back to the content process; my point is that the ID should be tied to the content process and not brute-forceable.

If we go that route, then yes. Why not mimic what the OS does for file descriptor numbers or process IDs? OpenBSD is particularly paranoid and uses a secure random number generator for its process IDs.
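
Something along these lines, perhaps - a broker-side table keyed by (content process, token), so a token guessed or stolen by another content process is useless. A sketch only: the names are invented, and a real implementation would pull randomness from the OS CSPRNG rather than trusting std::random_device:

    #include <cstdint>
    #include <map>
    #include <random>
    #include <utility>

    struct BrokerFileTable {
        // (content process id, random token) -> native file handle/descriptor
        std::map<std::pair<uint32_t, uint64_t>, intptr_t> open_files;
        std::random_device rng;  // stand-in for a real CSPRNG

        uint64_t Register(uint32_t contentPid, intptr_t nativeHandle) {
            uint64_t token;
            do {
                token = (uint64_t(rng()) << 32) | rng();
            } while (open_files.count(std::make_pair(contentPid, token)) != 0);
            open_files[std::make_pair(contentPid, token)] = nativeHandle;
            return token;  // handed back to the content process over IPC
        }

        // Lookups are keyed by the requesting process too, so brute-forcing
        // or replaying another process's token fails here.
        intptr_t Lookup(uint32_t contentPid, uint64_t token) {
            auto it = open_files.find(std::make_pair(contentPid, token));
            return it == open_files.end() ? intptr_t(-1) : it->second;
        }
    };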

-Rob

