Hi Brendan, List

Thank you so much.  The slides you pointed to are really interesting.  I 
agree that JavaScript is difficult to analyze statically, at least to 
the level of completeness required to provide meaningful assurances 
about security.

The work I've done so far is intended to put the end user in control 
of untrusted (and mobile) code introduced into the browser at the 
chrome level.  We have found some means of helping to achieve this 
goal, but have not yet reached a full solution.

Ideally, we could statically analyze extensions' source code with some 
automated mechanism to reach an understanding of what the extension is 
capable of or intends to do.  This could then be matched against a 
user-defined policy to determine whether it is safe for the browser to 
load the extension.
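
To make the idea concrete, here is a rough sketch of the kind of check 
I have in mind.  The capability names and policy format are invented 
purely for illustration; the hard part, of course, is producing the 
capability list from static analysis in the first place.

  // Illustrative only: "capabilities" would come from a (hypothetical)
  // static analysis step, and the policy format is made up for this example.
  var userPolicy = {
    allow: ["dom-read", "prefs-read"],
    deny:  ["file-write", "raw-network"]
  };

  function isLoadPermitted(capabilities, policy) {
    for (var i = 0; i < capabilities.length; i++) {
      if (policy.deny.indexOf(capabilities[i]) !== -1)
        return false;                  // explicitly denied capability
      if (policy.allow.indexOf(capabilities[i]) === -1)
        return false;                  // not whitelisted: default deny
    }
    return true;
  }

  // An analyzer reporting that an extension reads the DOM and writes files:
  isLoadPermitted(["dom-read", "file-write"], userPolicy);   // => false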

When I say source code here, I mean strictly cross-platform, 
interpreted code (e.g., JavaScript, XML/XBL, and flat text such as 
chrome.manifest), not platform-specific code like natively compiled 
XPCOM components.

So far we have tried to retrofit Firefox with two levels of defense 
against untrusted code: (1) regulating which untrusted code may be 
executed within the browser, and (2) regulating the capabilities 
granted to that code.

To address the first issue, I developed a system of user authorization 
for extension loading.  It has the user sign an extension's code when 
the extension is installed.  Firefox will then refuse to load the 
extension at startup if the code is unsigned or does not match the 
signature.  This differs from Firefox's current implementation of 
signed extensions in that the new solution validates the code each time 
a browser session begins (not just at install time).
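
As a very rough sketch of the startup gate (the extension records and 
the verify() function are placeholders for whatever the real signing 
scheme stores and checks, not the actual implementation):

  // Sketch only: ext.signature / ext.sourceDigest / verify() stand in for
  // the real signed-code bookkeeping.
  function filterAuthorizedExtensions(installedExtensions, verify) {
    var authorized = [];
    for (var i = 0; i < installedExtensions.length; i++) {
      var ext = installedExtensions[i];
      if (!ext.signature)
        continue;                      // unsigned code is never loaded
      if (!verify(ext.sourceDigest, ext.signature))
        continue;                      // code changed since the user signed it
      authorized.push(ext.id);
    }
    return authorized;                 // only these are loaded this session
  }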

This system can prevent the unauthorized introduction of malicious 
code, as happened in the case of the FormSpy extension/malware.  There 
is still some work to be done on this solution, though I do have a 
working proof of concept.

We have attempted to address the second issue through runtime 
monitoring of extensions' actions.  Mozilla's security manager 
interfaces have proven useful in implementing the monitor, though there 
are problems with fault isolation.  One problem is determining, for a 
single monitored action, which extension the action came from. 
SpiderMonkey can help us in most cases by providing a chrome:// URI for 
the currently executing file, which can be mapped to an extension ID. 
The solution is not as simple when actions are intercepted from files 
that have been the target of an overlay.
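
For the common case the mapping itself is simple.  A simplified sketch 
(the extension ID, package name, and manifest line below are made up, 
and real chrome registration has more cases than this handles):

  // Build a map from chrome package name -> extension ID by scanning each
  // extension's chrome.manifest ("content <package> <path>" lines).
  function addPackagesFromManifest(extensionId, manifestText, map) {
    var lines = manifestText.split("\n");
    for (var i = 0; i < lines.length; i++) {
      var fields = lines[i].split(/\s+/);
      if (fields[0] === "content" && fields.length >= 3)
        map[fields[1]] = extensionId;
    }
    return map;
  }

  function extensionForChromeURI(uri, map) {
    var m = /^chrome:\/\/([^\/]+)\//.exec(uri);   // extract the package name
    return m ? (map[m[1]] || null) : null;
  }

  // E.g. a manifest belonging to a made-up extension "[email protected]":
  var map = addPackagesFromManifest("[email protected]",
                                    "content sampleext chrome/content/\n", {});
  extensionForChromeURI("chrome://sampleext/content/main.js", map);
  // => "[email protected]"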

Proper attribution of actions to their source extension is key to 
runtime policy enforcement, so we are currently working toward a 
solution to this problem.  We've been able to add an option to 
SpiderMonkey so that it performs code interposition, and we are looking 
at whether rewriting JavaScript can help with action attribution. 
There is still a fair amount of work to be done on this front, though 
again we have a partial solution in place.
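
At the script level, the rewriting idea amounts to wrapping sensitive 
functions so that each call carries its origin.  A toy illustration 
(the object, method, and extension ID are invented, and the option we 
added actually works inside SpiderMonkey rather than in script like 
this):

  // Wrap a method so every call is reported to a monitor, tagged with an
  // extension ID, before the original method runs.
  function interpose(target, methodName, extensionId, monitor) {
    var original = target[methodName];
    target[methodName] = function () {
      monitor(extensionId, methodName, arguments);   // attribute the action
      return original.apply(target, arguments);      // then do the real work
    };
  }

  // Toy usage with a stand-in "privileged" object:
  var log = [];
  var privileged = { openURL: function (url) { /* ... */ } };
  interpose(privileged, "openURL", "[email protected]",
            function (id, name, args) { log.push(id + " -> " + name); });
  privileged.openURL("http://example.com/");   // log: ["[email protected] -> openURL"]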

That summarizes the core of our work so far.  We've approached this 
from the outset with the intention that it be a workable solution for 
end users, and that the changes made be of real benefit rather than 
just theoretical security research.  With this in mind, I was hoping 
the GSoC program would allow me to further this work so that it is 
suitable for contribution to the Mozilla source tree.

Thanks again for offering to mentor!

Mike


[EMAIL PROTECTED] wrote:
> I would be happy to mentor. Any information you can share would be
> great. Here's a talk I gave recently that points toward future work we
> will be doing in Tamarin:
> 
> http://kathrin.dagstuhl.de/files/Materials/07/07091/07091.EichBrendan.Slides.pdf
> 
> Looking forward to hearing more about what you're doing. Regards,
> 
> /be

