Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-01 Thread Brad Hill
> As far as I see it, a "mixed content" has the word "content", which is
supposed to designate something that can be included in a web page and
therefore be dangerous.

"Mixed Content" (and "mixed content blocking") is a term of art that has
been in use for many years in the browser community.  As such, you are
correct that it is a bit inadequate in that its coinage predates the
widespread use of AJAX-type patterns, but we felt that it was better to
continue to use and refine a well-known term than to introduce a new term.
Apologies if this has created any confusion.

More than just "content", the question boils down to "what does seeing the
lock promise the user?"  Browsers interpret the lock as a promise to the
user that the web page / application they are currently interacting with is
safe according to the threat model of TLS.  That is to say, it is protected
end-to-end on the network path against attempts to impersonate, eavesdrop
or modify traffic.  In a modern browser, this includes not just fetches of
images and script done declaratively in HTML, but any kind of potentially
insecure network communication.
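In code terms, that promise amounts to a scheme check before any fetch or socket is opened. A minimal sketch of the decision (simplified scheme and loopback lists, hypothetical function name — not the normative Mixed Content algorithm):

```javascript
// Rough sketch of the mixed-content decision a browser makes for a
// request initiated from a secure context. The scheme list here is a
// simplified assumption, not the full Mixed Content specification.
const POTENTIALLY_TRUSTWORTHY = new Set(['https:', 'wss:', 'data:', 'about:', 'blob:']);

function blockedAsMixedContent(pageOrigin, requestUrl) {
  const page = new URL(pageOrigin);
  if (page.protocol !== 'https:') return false; // non-secure context: nothing to protect
  const req = new URL(requestUrl);
  if (POTENTIALLY_TRUSTWORTHY.has(req.protocol)) return false;
  if (req.hostname === 'localhost') return false; // loopback is treated as secure
  return true; // http:, ws:, and friends are refused
}
```

Under a rule like this, `new WebSocket('ws://...')` from an https page is refused before any bytes leave the browser, for the same reason an http image fetch is.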

In general, we've seen this kind of argument before for building secure
protocols at layer 7 on top of insecure transports - notably Netflix's
"Message Security Layer".  Thus far, no browser vendors have been convinced
that these are good ideas, that there is a way to safely allow their use in
TLS-protected contexts, or that building these systems with existing TLS
primitives is not an adequate solution.

In specific, I don't think you'll ever find support for treating Tor
traffic that is subject to interception and modification after it leaves an
exit node as equivalent to HTTPS, especially since we know there are active
attacks being mounted against this traffic on a regular basis.  (This is
why I suggested .onion sites as potentially secure contexts, which do not
suffer from the same exposure outside of the Tor network.)


On Tue, Dec 1, 2015 at 5:42 AM Aymeric Vitte <> wrote:

> Le 01/12/2015 05:31, Brad Hill a écrit :
> > Let's keep this discussion civil, please.
> Maybe some wording was a little tough below, apologies for that. The
> Logjam attack is difficult to swallow: it is hard to understand how
> something that is supposed to protect forward secrecy can quietly do
> the very contrary without the keys even being compromised, or why TLS
> did not implement a mechanism to protect the DH client public key.
> >
> > The reasons behind blocking of non-secure WebSocket connections from
> > secure contexts are laid out in the following document:
> >
> > A plaintext ws:// connection does not meet the requirements of
> > authentication, encryption and integrity, so far as the user agent is
> > able to tell, so it cannot allow it.
> The spec just mentions aligning the behavior of ws with xhr, fetch and
> eventsource without giving more reasons, or I missed them.
> Let's concentrate on ws here.
> As far as I see it, a "mixed content" has the word "content", which is
> supposed to designate something that can be included in a web page and
> therefore be dangerous.
> WS cannot include anything in a page by itself; it is designed to
> communicate with external entities, for purposes other than fetching
> resources (images, js, etc) from web servers that are logically tied to
> a domain, in that case you can use xhr or other fetching means instead.
> Therefore it is logical to envision that those external entities used
> with WS are not necessarily web servers and might not have valid
> certificates.
> WS cannot hurt anything unless the application decides to insert the
> results in the page, which is not a problem specific to WS: the
> application is loaded via https, so it is supposed to be secure, but if
> it is doing wrong things nothing can save you, not even wss.
> Unlike the usual fetching means, WS cannot really do harm in case of
> malicious use, such as scanning URLs: it is a specific protocol that
> nothing else speaks.
> As a result of the current policy, if we want to establish WS with
> entities that can't have a valid certificate, we must load the code via
> http which is obviously completely insecure.
> So forbidding WS within https just puts users at risk and prevents any
> current or future use of ws with entities that can't have a valid
> certificate, reducing the interest and potential of ws to something
> very small.
> >
> > If there is a plausible mechanism by which browsers could distinguish
> > external communications which meet the necessary security criteria

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Brad Hill
I don't think there is universal agreement among browser engineers (if
anyone agrees at all) with your assertion that the Tor protocol or even Tor
hidden services are "more secure than TLS".  TLS in modern browsers
requires RSA 2048-bit or equivalent authentication, 128-bit symmetric key
confidentiality, and SHA-256 or better integrity. If .onion identifiers
and the Tor protocol crypto were at this level of strength, it would be
reasonable to argue that a .onion connection represented a "secure
context", and proceed from there.  In the meantime, with .onion site
security (without TLS) resting on an 80-bit truncation of a SHA-1 hash of
a 1024-bit key, I don't think you'll get much traction in insisting it is
equivalent to or better than TLS.

On Mon, Nov 30, 2015 at 7:52 AM Aymeric Vitte wrote:

> Redirecting this to WebApps since it's probable that we are facing a
> design mistake that might be amplified by deprecating non-TLS connections. I
> have submitted the case to all possible lists in the past, never got a
> clear answer and was each time redirected to another list (ccing
> webappsec but as a whole I think that's a webapp matter, so please don't
> state only that "downgrading a secure connection to an insecure one is
> insecure").
> The case described below is simple:
> 1- https page loading the code, the code establishes ws + the Tor
> protocol to "someone" (who can be a MITM or whatever, we don't care as
> explained below)
> 2- http page loading the code, the code establishes ws + the Tor protocol
> 3- https page loading the code, the code establishes wss + the Tor protocol
> 4- https page loading the code, the code establishes normal wss connections
> 3 fails because the WS servers have self-signed certificates.
> What is insecure between 1 and 2? Obviously it is 2, because it loads
> the code via http.
> Even more, 1 is more secure than 4, because the Tor protocol is more
> secure than TLS.
> It's already a reality that projects are using something like 1 and will
> continue to build systems on the same principles (one can't argue that
> such systems are insecure or unlikely to happen, that's not true, see
> the Flashproxy project too).
> But 1 fails too, because ws is not allowed inside a https page, so we
> must use 2, which is insecure and 2 might not work any longer later.
> Service Workers do about the same: https must be used. As far as I
> understand, Service Workers can run a browser instance in the background
> even if the spec seems to focus more on the offline aspects, so I
> suppose that having 1 inside a (background) Service Worker will fail too.
> Now we have the "new" "progressive Web Apps", which surprisingly present
> as a revolution the ability to make a web app look like a native app,
> although this has been possible on iOS since the beginning; the same
> goes for some offline caching features that were possible before. But
> this does bring new things; hopefully we can one day have something like
> all the Cordova features inside browsers, plus background/headless
> browser instances.
> So we are talking about web apps here, not about a web page loading
> plenty of http/https stuff, web apps that can be used as
> independent/native apps or as nodes to relay traffic, and that therefore
> communicate with entities that can't be tied to a domain and can only use
> self-signed certificates (like WebRTC peers; why do we have a security
> exception here allowing something for WebRTC and not for this case?).
> Then 1 must be possible with WS and Service Workers, because there is
> no reason why it should not be allowed, and this will happen in the
> future under different forms (see the link below). That's not illogical:
> if you use wss then you expect it to work as such (ie fail with
> self-signed certificates, for example), and if you use ws (what terrible
> things can happen with ws exactly? ws can't access the DOM or anything)
> then you are on your own and had better know what you are doing;
> that's not a reason to force you to use the much more insecure 2.
> Such apps can be loaded while navigating on a web site, entirely (ie
> the web site is the app), or, for wider distribution from sites other
> than the original app site, via an iframe (a very ugly way) or extracted
> as a component (the cool way, which nobody seems to foresee) with
> user prompt/validation ("do you want to install application X?"),
> possibly running in the background when needed in a sandboxed context
> with service workers.
> Le 25/11/2015 17:43, Aymeric Vitte a écrit :
> >
> >
> > Le 20/11/2015 12:35, Richard Barnes a écrit :
> >> On Thu, Nov 19, 2015 at 8:40 AM, Hanno Böck  wrote:
> >>
>  It's amazing how the same wrong arguments get repeated again and
>  again...
> >> +1000
> >>
> >> All of these points have been raised and rebutted several times.  My
> >> favorite reference is:
> >>
> >>

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Brad Hill
Let's keep this discussion civil, please.

The reasons behind blocking of non-secure WebSocket connections from secure
contexts are laid out in the following document:

A plaintext ws:// connection does not meet the requirements of
authentication, encryption and integrity, so far as the user agent is able
to tell, so it cannot allow it.

If there is a plausible mechanism by which browsers could distinguish
external communications which meet the necessary security criteria using
protocols other than TLS or authentication other than from the Web PKI,
there is a reasonable case to be made that such could be considered as
potentially secure origins and URLs.  (as has been done to some extent for
WebRTC, as you have already noted)

If you want to continue this discussion here, please:

1) State your use cases clearly for those on this list who do not already
know them.  You want to "use the Tor protocol" over websockets?  To connect
to what?  Why?  Why is it important to bootstrap an application like this
over regular http(s) instead of, for example, as an extension or modified
user-agent like TBB?

2) Describe clearly why and how the protocol you propose to use meets the
necessary guarantees a user expects from an https page.

3) Describe clearly how the user agent can determine, before any
degradation in the security state of the context is possible, that only a
protocol meeting these requirements will be used.

Ad-hominem and security nihilism of the forms "TLS / PKI is worthless so
why bother trying to enforce any security guarantees" or "other insecure
configurations like starting with http are allowed, so why not allow this
insecure configuration, too" are not appropriate or a good use of anyone's
time on this list.  Please refrain from continuing down these paths.

thank you,

Brad Hill, as co-chair

On Mon, Nov 30, 2015 at 6:25 PM Florian Bösch <> wrote:

> On Mon, Nov 30, 2015 at 10:45 PM, Richard Barnes <>
> wrote:
>> 1. Authentication: You know that you're talking to who you think you're
>> talking to.
> And then Dell installs their own root authority on machines they ship,
> or your CA of choice gets pwn'ed, or the NSA uses some undisclosed backdoor
> in the EC constants they managed to smuggle in, or somebody combines
> a DNS poison/grab with a non-verified (because of a piss-poor CA) double
> certificate, or you hit one of the myriad of bugs that've plagued TLS
> implementations (particularly certain large and complex ones that're
> basically one big ball of gnud which shall remain unnamed).

Fwd: [webappsec] CfC: Proposed non-normative updates to CORS

2015-08-03 Thread Brad Hill
(Dang, just realized I forgot to include WebApps on this joint deliverable.)

Members of WebApps, please note the Call for Consensus below on proposed
non-normative updates to the CORS Recommendation, and comment by Monday,
August 10, 2015.

Thank you,

Brad Hill
co-chair, WebAppSec WG

-- Forwarded message -
From: Brad Hill
Date: Tue, Jun 30, 2015 at 2:05 PM
Subject: [webappsec] CfC: Proposed non-normative updates to CORS

In response to and
other requests, I would like to propose the following non-normative edits
to the CORS Recommendation.

See attached file for the proposed publication-ready document including
these edits.

A detailed description of the proposed edits follows:

1) Remove text referring to expected changes in HTML5 and the HTTP Status
Code 308, as both have advanced to REC and RFC status, respectively.

2) Update the HTTP Status Code 308 reference to point to RFC7538

3) Remove text and links for implementation reports that are 404.

4) Add the following to the end of SOTD:

<p> Development of the CORS algorithm after 2013 has continued in the <a
href="">Fetch Living Standard</a>. </p>

5) Correct Section 6.2 Preflight Request, step 10, second Note, to
correctly refer to Access-Control-Request-Headers.

These changes do not impact the conformance characteristics of any user
agent implementation.  This is a call for consensus to publish these
changes, which will end in 10 days, on July 10th.


Brad Hill
WebAppSec co-chair
Title: Cross-Origin Resource Sharing


   Cross-Origin Resource Sharing

   W3C Recommendation 16 January 2014

This Version:

 Latest Version:

Previous Versions:

Anne van Kesteren
(formerly of Opera Software ASA)
Please note there may be errata for this document.
  The English version of this specification is the only normative version. Non-normative
  translations may also be available.

Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.




  This document defines a mechanism to enable client-side cross-origin
  requests. Specifications that enable an API to make cross-origin requests
  to resources can use the algorithms defined by this specification. If
  such an API is used on resources, a
  resource on http://hello-world.example can opt in using the
  mechanism described by this specification (e.g., specifying
  Access-Control-Allow-Origin: as response
  header), which would allow that resource to be fetched cross-origin from

  Status of this Document

This section describes the status of this document at the time of
  its publication. Other documents may supersede this document. A list of
  current W3C publications and the latest revision of this technical report
  can be found in the
  W3C technical reports index at

  This document has been reviewed by W3C Members, by software developers,
and by other W3C groups and interested parties, and is endorsed by the
Director as a W3C Recommendation. It is a stable document and may be
used as reference material or cited from another document. W3C's role in
making the Recommendation is to draw attention to the specification and
to promote its widespread deployment. This enhances the functionality
and interoperability of the Web.

  This W3C Recommendation of CORS was produced jointly by the
  Web Applications (WebApps) and Web Application
  Security (WebAppSec) Working Groups, and published by the
  WebAppSec Working Group. No changes were made since the previous
  publication as Proposed Recommendation.
  If you wish to make comments regarding this document, please send them to 

 This document was produced by groups operating under the 5 February 2004 W3C Patent Policy.

Re: CORS performance

2015-02-19 Thread Brad Hill
I think that POSTing JSON would probably expose a lot of things to CSRF
that work over HTTP but don't expect to be interacted with by web browsers
in that manner.  That's why the recent JSON encoding for forms mandates
that it be same-origin only.

On Thu Feb 19 2015 at 12:23:48 PM Jonas Sicking wrote:

 On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey wrote:
  so presumably it is OK to set the Content-Type to text/plain
  That's not ok, but may explain my confusion: is Content-Type considered a
  Custom Header that will always trigger a preflight? If so then none of the
  caching will apply; CouchDB requires sending the appropriate content-type

 We most likely can consider the content-type header as *not* custom.
 I was one of the people way back when that pointed out that there's a
 theoretical chance that allowing arbitrary content-type headers could
 cause security issues. But it seems highly theoretical.

 I suspect that the mozilla security team would be fine with allowing
 arbitrary content-types to be POSTed though. Worth asking. I can't
 speak for other browser vendors of course.

 / Jonas
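The rule Dale ran into can be written as a predicate: a request stays "simple" (no preflight) only if its method and author-set headers are on the safelist, and Content-Type is only safelisted for three values. A simplified sketch — real browsers also check header values and a few other safelisted headers:

```javascript
// Simplified "does this CORS request need a preflight?" check.
// Not the full spec algorithm: header-value restrictions and some
// newer safelisted headers are omitted.
const SAFE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SAFE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
]);
const OTHER_SAFE_HEADERS = ['accept', 'accept-language', 'content-language'];

function needsPreflight(method, headers = {}) {
  if (!SAFE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const n = name.toLowerCase();
    if (n === 'content-type') {
      // Parameters such as "; charset=utf-8" don't disqualify the value.
      const essence = value.split(';')[0].trim().toLowerCase();
      if (!SAFE_CONTENT_TYPES.has(essence)) return true;
    } else if (!OTHER_SAFE_HEADERS.includes(n)) {
      return true; // any other author-set header forces a preflight
    }
  }
  return false;
}
```

So `text/plain` slips through without a preflight — exactly the form-CSRF-shaped property Brad's reply worries about — while `application/json` triggers one.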

Re: CORS performance

2015-02-17 Thread Brad Hill
On both this, and CSP pinning, I find myself getting nervous about adding
an increasing number of headers which, when sent by any resource, impact
the security posture and functioning of an entire origin.   HSTS and HPKP
are somewhat special in that they convey only a few bits of information,
are somewhat self-verifying (e.g. an https pin cannot be applied over http,
and a public key pin cannot be applied from a certificate without a
matching key), and are unlikely to change.

For more complex, application-layer issues, are pinning headers really the
best and most manageable approach?

I think it is at least worth discussing the relative merits of using a
resource published under /.well-known for such use cases, vs. sending
pinned headers with every single resource.

Some of the things that argue against /.well-known are:

1) Added latency of fetching the resource.

For CORS pinning, this probably isn't an issue.  A client that has never
seen a pinned policy (by whatever mechanism it is delivered) will always
have to make a preflight, and servers will always have to respond
appropriately (rather than relying on the pinned policy) for legacy
clients.  So a client that knows about pinning can make a request to
/.well-known in parallel the first time it makes a CORS request to a given
host, and speed up all subsequent requests without any additional latency
on the first.

2) Clients hammering servers for non-existent /.well-known resources (the
favicon issue)

Again, probably not an issue for CORS, or not nearly as large an issue.
Clients can be configured to never ask for the CORS policy at an origin
unless/until they make a CORS request to that origin.  There is a question
of how long to treat a 404 as authoritative before re-polling, but it can
be fairly long without breaking anything.  We might also devise a hint
header that suggests a pinned policy is available, which would only be sent
along with preflight responses.
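Concretely, the /.well-known alternative might be a single JSON policy document for the whole origin. The resource name and fields below are invented for illustration, mirroring the header proposal later in this thread:

```json
{
  "comment": "hypothetical /.well-known/cors-policy — not a real standard",
  "max-age": 31415926,
  "allow-origin": "*",
  "allow-credentials": true,
  "allow-methods": "*",
  "allow-headers": "*",
  "expose-headers": "*"
}
```

A pinning-aware client would fetch this once, in parallel with its first preflight, and skip preflights for the origin while the policy is fresh.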

On disadvantages of headers:

1) Individual resources declaring policy for an entire origin can be
problematic and difficult to manage.

Doing this right, both to set a correct policy and to keep lower-privileged
resources from setting incorrect ones, in practice would require an
administrator to set up a filtering proxy in front of the entire origin,
which is more difficult and costly than simply locking down access to a
specific resource path to only administrators.

2) Lots of redundant chit-chat.

HTTP/2 header compression reduces the impact of this somewhat, and CORS
specifically could avoid sending pinning headers except on preflight
responses, but it does seem like a real concern.  How much long-term bloat
are we willing to accept on the network in order to save a few ms of
latency the first time you connect to a site?


On Tue Feb 17 2015 at 10:42:20 AM Anne van Kesteren wrote:

 Concerns raised by Monsur
 and others before him are still valid.

 When you have an HTTP API on another origin you effectively get a huge
 performance penalty. Even with caching of preflights, as each fetch is
 likely to go to a distinct URL.

 With the recent introduction of CSP pinning, I was wondering whether
 something like CORS pinning would be feasible. A way for a server to
 declare that it speaks CORS across an entire origin.

 The CORS preflight in effect is a rather complicated way for the
 server to announce that it can handle CORS. We made it rather tricky
 to avoid footgun scenarios, but I'm wondering whether that is still
 the right tradeoff.

 Something like:

   CORS: max-age=31415926; allow-origin=*; allow-credentials=true;
 allow-headers=*; allow-methods=*; expose-headers=*
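For concreteness, such a header could be consumed with a few lines of parsing. A toy sketch — both the header and the "key=value; key=value" grammar assumed here are hypothetical at this point:

```javascript
// Toy parser for the proposed origin-wide "CORS:" response header.
// Assumes the simple semicolon-separated key=value form shown above;
// values containing '=' or ';' are not handled.
function parseCorsPinHeader(value) {
  const policy = {};
  for (const part of value.split(';')) {
    const [key, val] = part.split('=').map((s) => s.trim());
    if (!key || val === undefined) continue;
    policy[key] = /^\d+$/.test(val) ? Number(val)
      : val === 'true' ? true
      : val === 'false' ? false
      : val;
  }
  return policy;
}
```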


Re: Security use cases for packaging

2015-01-29 Thread Brad Hill
Paging (future Dr.) Deian Stefan to the ER...

Any thoughts on using COWL for this kind of thing, with a pinned crypto key
as a confinement label to be combined with the regular Origin label?


On Thu Jan 29 2015 at 1:43:05 PM Yan Zhu wrote:

 chris palmer wrote:

  But other code from the same origin might not be signed, which could
  break the security assertion of code signing.

 Maybe the code from the downloaded package has to be run from a local
 origin like chrome://*.

Re: No-context ACTION emails are confusing

2014-10-28 Thread Brad Hill
These are created automatically by the tracker, and the "create a new
action" web form doesn't let you insert context until after the action is
created.
On 10/28/14, 2:47 AM, Anne van Kesteren wrote:

Can we perhaps not post ACTION-creation emails to the list?


Re: [webappsec + webapps] CORS to PR plans

2013-08-16 Thread Brad Hill
Based on the feedback received during the Call for Consensus, I've updated
the draft CORS PR at:

Language about an implementation report has been removed, and the only
substantive modification from the previous proposed draft is that
references to status codes 200 and 204 have been replaced with the 2xx
range based on conformance testing that showed interoperability across the
broad set of codes.

A supplemental test report on the 2xx status codes as well as 308 redirect
handling is available at:

2xx handling is implemented consistently across Opera (Presto), IE,
Firefox, Chrome and Mobile Safari.

308 handling is implemented consistently with 307 handling in Opera
(Presto) and IE, and partially implemented in Firefox, satisfying the WG's
criteria of two independent interoperating implementations.
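The tested behavior reduces to a small classification. A sketch of how a client could treat response statuses under the WG's 2xx conclusion, with 308 handled the same way as 307 (function and outcome names are illustrative, not spec terms):

```javascript
// Toy classification of response statuses per the conformance testing
// described above: any 2xx is a success for the CORS algorithm, and 308
// is followed like 307 (method and body preserved across the redirect).
function corsStatusOutcome(status) {
  if (status >= 200 && status <= 299) return 'success';
  if (status === 307 || status === 308) return 'follow-redirect-preserving-method';
  return 'network-error';
}
```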

Additional test result submissions are welcome and encouraged.  (I don't
have a desktop Mac environment easily at hand.)

This message resets the call for consensus by an additional seven days, to
23-Aug-2013.  Please send feedback to  Positive
feedback is encouraged and silence will be considered assent.  I have
updated the target date for PR to 26-Sep-2013.

Thank you,

Brad Hill

On Mon, Aug 5, 2013 at 4:48 PM, Brad Hill wrote:

 I'd like to issue this as a formal Call for Consensus at this point.  If
 you have any objections to CORS advancing to Proposed Recommendation,
 please reply to  Affirmative responses are also
 encouraged, and silence will be taken as assent.

 The proposed draft is available at:

 This CfC will end and be ratified by the WebAppSec WG on Tuesday, August
 13, 2013.

 Thank you,

 Brad Hill

 On Tue, Jul 16, 2013 at 12:47 PM, Brad Hill wrote:

 WebAppSec and WebApps WGs,

  CORS advanced to Candidate Recommendation this January, and I believe it
 is time we consider advancing it to Proposed Recommendation.  In the
 absence of an editor, I have been collecting bug reports sent to the
 public-webappsec list, and now have a proposed draft incorporating these
 fixes I would like to run by both WGs.

 The proposed draft can be found at:

 A diff-marked version is available at:

 (pardon some spurious diffs indicated in pre-formatted text that has not
 actually changed)

 A list of changes is as follows:

 1. Changed Fetch references.  The CR document referenced the WHATWG
 Fetch spec in a number of places.  This was problematic due to the maturity
 / stability requirements of the W3C for document advancement, and I feel
 also inappropriate, as the current Fetch spec positions itself as a
 successor to CORS, not a reference in terms of which CORS is defined.  The
 proposal is to replace these references with the Fetching Resources
 section of the HTML5 spec at:

 I do not believe this produces substantive changes in the reading of CORS.

 2. In the Terminology section, added a comma after Concept in
 response to:

 3. Per discussion to clarify the interaction of HTTP Authorization
 headers with the user credentials flag, and,
 I have inserted the following clarification:

 user credentials for the purposes of this specification means cookies,
 HTTP authentication, and client-side SSL certificates
   <!-- begin change --> that would be sent based on the user agent's
 previous interactions with the origin. <!-- end change -->

 4. In the definition of the Access-Control-Allow-Methods header, in
 response to ,
 clarified that The Allow header is not relevant for the purposes of the
 CORS protocol.

  that header and method are not defined correctly in the response
 headers for preflight requests.
  It appears that the intent was to respond with the list provided as
 part of the preflight request,
  rather than the potentially unbounded list the resource may actually

The following clarifications were made:

   (for methods) Since the list of methods can be unbounded, simply
 returning the method indicated by
 Access-Control-Request-Method (if supported) can be enough
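The clarified guidance can be sketched as a server-side preflight handler that echoes the single requested method back, rather than enumerating everything the resource supports. Names and the wildcard origin below are illustrative:

```javascript
// Sketch of a preflight responder following the clarified note: if the
// requested method is allowed, echo just that method instead of an
// unbounded Access-Control-Allow-Methods list.
function preflightResponseHeaders(requestedMethod, allowedMethods) {
  if (!allowedMethods.has(requestedMethod)) return null; // preflight fails
  return {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': requestedMethod, // echo, not the full list
  };
}
```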

Re: [webappsec + webapps] CORS to PR plans

2013-08-12 Thread Brad Hill

 Sorry for the delayed reply.

 If you look at the diff-marked version, you'll see the links to WHATWG
Fetch that I updated.

  As for the full range of success status codes, if you look at Boris
Zbarsky's comments in the thread linked, it seems that Firefox was only
planning to implement 204.

  I will consider the CfC suspended until I get some tests running to
determine the actual implementation status in various browsers, and we can
expand the list to whatever will interop.


On Tue, Jul 16, 2013 at 5:18 PM, Anne van Kesteren wrote:

 On Tue, Jul 16, 2013 at 3:47 PM, Brad Hill wrote:
  1. Changed Fetch references.  The CR document referenced the WHATWG Fetch
  spec in a number of places.  This was problematic due to the maturity /
  stability requirements of the W3C for document advancement, and I feel
  also inappropriate, as the current Fetch spec positions itself as a
  successor to CORS, not a reference in terms of which CORS is defined.

 Pretty sure CORS didn't reference the Fetch Standard. Given that the
 Fetch Standard is written after I wrote CORS, that would be somewhat

  8. In response to thread beginning at:,
  added 204 as a valid code equivalent to 200 for the CORS algorithm.

 I think implementations are moving towards allowing the whole 200-299
 range. (Fetch Standard codifies that, at least.)