On Tue, Mar 20, 2012 at 2:08 AM, lkcl <[email protected]> wrote:

>   ok. so. a summary of the problems with using SSL - and CSP,
>  and "pinning" - is described here:
>
>     https://wiki.mozilla.org/Apps/Security#The_Problem_With_Using_SSL
>
>  the summary: it's too complex to deploy, and its deployment results in
>  the site becoming a single-point-of-failure [think: 1,000,000 downloads
>  of angri burds a day].


I don't think I entirely understand what that section is referring to; in
the first part it may be referring to client certificates?  I don't think
HTTPS alone is unscalable.  And yes, there are other kinds of attacks on
SSL, some of which we can actually handle (like pinning, though I'm not
sure why pinning would cause problems?)
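
For what it's worth, when I say pinning I mean something like the sketch
below (hypothetical Python, purely to illustrate the idea): the client
remembers a known-good fingerprint for the site's certificate and refuses
to talk to a server that presents anything else.

    # Illustrative certificate pinning (hypothetical, not code we ship):
    # hash the certificate the server presents and compare it with a
    # fingerprint obtained out of band.
    import hashlib
    import ssl

    # Known-good SHA-256 fingerprint of the site's certificate
    # (placeholder value).
    PINNED_FINGERPRINT = "00" * 32

    def connection_is_pinned(host, port=443):
        # Fetch the certificate the server actually presents, convert it
        # from PEM to DER, and hash it.
        pem_cert = ssl.get_server_certificate((host, port))
        der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        return fingerprint == PINNED_FINGERPRINT

A mis-issued but otherwise valid certificate from some other CA still
fails that check, which is the class of attack pinning is meant to catch.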

The reason to use CSP and third-party-reviewed code hosting is to avoid a
different set of problems, IMHO.  One is that a web application hosted on
an arbitrary server does not have code that can be reviewed in any way:
any request may be dynamic, there is no enumeration of all the source
files, and files can be updated at any time.

Now... I'm not sure I believe we can meaningfully review applications...
but imagine a Mozilla-hosted server, with clear security standards, which
might require *developers* to sign their uploads, even if users don't get
signed downloads, and which has no dynamism and so is immune to quite a
few attacks as a result.
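
To illustrate the developer-signing part (purely hypothetical tooling, not
anything that exists today): the hosting service could require a detached
signature with every upload and only accept packages that verify against
the developer's registered key, something like:

    # Hypothetical check on the hosting side: only accept an uploaded
    # package if its detached GPG signature verifies against the
    # developer's registered public key.
    import subprocess

    def upload_is_acceptable(package_path, signature_path, developer_keyring):
        # gpg exits with status 0 only if the signature is good and was
        # made by a key in the given keyring.
        status = subprocess.call([
            "gpg", "--no-default-keyring",
            "--keyring", developer_keyring,
            "--verify", signature_path, package_path,
        ])
        return status == 0
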
Also, if we force applications that want access to high-privilege APIs to
carry appropriate CSP rules, those applications will be immune to some XSS
attacks.  If we require those CSP rules, then a server compromise that
removes them will also disable the privileged APIs.
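
To make that concrete, the sort of policy I have in mind is roughly the
following header (the exact header name and directives are still being
worked out in the spec, so treat this as illustrative only):

    Content-Security-Policy: default-src 'self'; object-src 'none'

With a policy like that, scripts can only be loaded from the app's own
origin and inline script is refused, which is where the XSS immunity comes
from; and if a compromised server strips the header, we refuse the
privileged APIs along with it.
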
And a server compromise wouldn't be enough to upload arbitrary JavaScript
without also being able to access the management API for the Mozilla
code-hosting server (at least if the people who upload code to the
code-hosting server don't do it from the same web servers that serve the
site).
Also, we can do things like send out email notifications when code is
updated, so even if some developer's personal machine is compromised
(assuming the developer has the keys available to upload code), the attack
can't be entirely silent.
So even if we don't really review any code that is uploaded to our hosted
servers, we'll still have added a lot of additional security through the
process.

Note that I think this only applies to highly privileged APIs, the kind we
don't have yet.  I think the permission conversation got confusing here
when we lost sight of the starting point: web applications, deployed the
way they are deployed now, using the security we have now.  Applications
that don't need access to new sensitive APIs (and most probably don't)
shouldn't have requirements beyond what already exists.
_______________________________________________
dev-security mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security
