Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-17 Thread Henri Sivonen
On Fri, Nov 14, 2014 at 8:00 PM, Patrick McManus mcma...@ducksong.com wrote:

 On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi
 wrote:

 The part that's hard to accept is: Why is the countermeasure
 considered effective for "attacks like these", when the level of
 activity the MITM needs in order to foil the countermeasure (by
 inhibiting the upgrade by messing with the initial HTTP/1.1 headers)
 is lower than the level of activity these MITMs already show when they
 inject new HTTP/1.1 headers or inject JS into HTML?

 There are a few pieces here -
 1] I totally expect the signal stripping you describe to happen to some
 subset of the traffic, but an active, cleartext, carrier-based MITM is
 not the only opponent. Many of these systems are tee'd, read-only
 dragnets, especially in the less sophisticated scenarios.

I agree that http+OE is effective in the case of mere read-only fiber
splitters when no hop on the way inhibits the upgrade. (The flipside
is, of course, that if you have an ISP inhibiting the upgrade as a
small-time attack to inject ads, a fiber split at another hop gets
to see the un-upgraded traffic.)

 1a] not all of the signalling happens in band especially wrt mobility.

The notion makes sense, yes: devices move between networks that change
the contents of IP packets and networks that deliver IP packets
unchanged, and upgrade signals seen in the latter kind of network get
remembered for the former kind. But this isn't really about whether
there exist some cases where OE works; it's about whether OE distracts
from https.

 2] When the basic ciphertext technology is proven, I expect to see other
 ways to signal its use.

 I casually mentioned a tofu pin yesterday and you were rightly concerned
 about pin fragility - but in this case the pin needn't be hard-fail (and pin
 was a poor word choice) - it's an indicator to try OE. That can be downgraded
 if you start actively resetting 443, sure - but that's a much bigger step to
 take that may result in generally giving users of your network a bad
 experience.

 And if you go down this road you find all manner of other interesting ways
 to bootstrap OE - especially if what you are bootstrapping is an
 opportunistic effort that looks a lot like https on the wire: gossip
 distribution of known origins, optimistic attempts on your top-N frecency
 sites, DNS (sec?).. even h2 https sessions can be used to carry http schemed
 traffic (the h2 protocol finally explicitly carries scheme as part of the
 transaction instead of making all transactions on the same connection carry
 the same scheme) which might be a very good thing for folks with mixed
 content problems. Most of this can be explored asynchronously at the cost of
 some plaintext usage in the interim. It's opportunistic, after all.

 There is certainly some cat and mouse here - as Martin says, it's really just
 a small piece. I don't think of it as more than replacing some plaintext
 with some encryption - that's not perfection, but I really do think it's
 significant.

I think the idea that there might be other signals is itself a bad sign:
it suggests that incrementally patching up OE signaling will end up taking
more and more effort while still falling short of https, which is
already available for adoption, even in legacy browsers.

Also, it's a bad sign in the sense that some of the things you mention
as possibilities are problems in themselves: While DNSSEC-based
signaling to use encryption in a legacy protocol whose baseline is
unencrypted makes some sense for protocols where the connection
latency is not an important part of the user experience, such as
server-to-server SMTP, it seems pretty clear that with all the focus
on initial connection latency, browsers won't start making additional
DNS queries--especially ones that might fail thanks to
middleboxes--before connecting. (Though, I suppose when the encryption
is *opportunistic* anyway, you could query DNSSEC lazily and let the
first few HTTP requests go in the clear.) As for prescanning your
top-N frecency list, that's a privacy leak in itself, since an eavesdropper
could tell what the top-N frecency list is by looking at the DNS traffic
the browser generates (DNSSEC infamously not providing
confidentiality...). (Also, at least if you don't have a huge legacy
of third-party includes that would become mixed content, https+HSTS is
way easier to deploy than DNSSEC in terms of setting it up, in terms
of keeping it running without interruption and in terms of not having
middleboxes mess with it.)
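
(For concreteness, a minimal sketch of that lazy-query pattern in Python;
lookup_upgrade_hint() is a hypothetical stand-in for whatever DNSSEC-based
signal a client might check, and the only point is that the lookup runs in
the background so the first requests are never blocked on it.)

    from concurrent.futures import ThreadPoolExecutor

    def lookup_upgrade_hint(host):
        # Hypothetical placeholder for a DNSSEC-validated "this host can do
        # TLS" record; None means no hint was found (or the query failed).
        return None

    class LazyUpgradeClient:
        def __init__(self, host):
            self.host = host
            self.use_tls = False
            self._hint = ThreadPoolExecutor(max_workers=1).submit(
                lookup_upgrade_hint, host)

        def request(self, path):
            # Upgrade only once the background lookup has finished and found
            # a hint; until then, requests go in the clear.
            if not self.use_tls and self._hint.done() and self._hint.result():
                self.use_tls = True
            transport = "TLS" if self.use_tls else "cleartext"
            return "GET %s via %s" % (path, transport)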

But I think the fundamental problem is still opportunity cost and
sapping the current momentum of https. The idea "this is effective
against read-only dragnets some of the time, therefore let's do it to
improve things even some of the time" might make sense if it were an
action to be taken just by the browser, without the participation of
server admins, and had no effect on how server admins perceive https in

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-17 Thread voracity
On Friday, November 14, 2014 6:25:43 PM UTC+11, Henri Sivonen wrote:
 This is obvious to everyone reading this mailing list. My concern is
 that if the distinction between http and https gets fuzzier, people
 who want encryption but who want to avoid ever having to pay a penny
 to a CA will think that http+OE is close enough to https that they
 deploy http+OE when, if http+OE didn't exist, they'd hold their nose,
 pay a few dollars to a CA and deploy https with a publicly trusted
 cert (now that there's more awareness of the need for encryption).

Could I just interject at this point (while apologising for my general rudeness 
and lack of technical security knowledge).

The issue isn't that people are cheapskates and will lose 'a few dollars'. The
issue is that transaction costs (http://en.wikipedia.org/wiki/Transaction_cost)
can be crippling.

Another problem is that the whole CA system is equivalent to a walled garden,
in which a small set of 'trusted' individuals (ultimately) restrict or permit 
what everyone else can see. It hasn't caused problems in the history of the 
internet so far, because a non-centralised alternative exists. (An alternative 
that is substantially more popular *precisely* *because* of transaction costs 
and independence.) This means it's currently a difficult environment for a few 
mega-CAs (and governments) to exercise any power. A CA-only internet changes 
that environment radically.

I'm unsurprised that Google doesn't think this is an issue. If they do 
something that (largely invisibly but substantially) increases the internet's
barriers to entry (http://en.wikipedia.org/wiki/Barriers_to_entry), it reduces diversity on the
internet, but otherwise doesn't affect Google very much. (Actually, it may do, 
since it will make glorified hosting services like Facebook much more popular 
still over independent websites.) However, there is a special onus on Mozilla 
to think through *all* the social implications of what it does. Security is 
*never* pure win; there is *always* a trade-off that society has to make, and I
don't see this being considered properly here.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Martin Thomson

 On 2014-11-13, at 21:25, Henri Sivonen hsivo...@hsivonen.fi wrote:
 Your argument relies on there being no prior session that was not 
 intermediated by the attacker.  I’ll concede that this is a likely situation 
 for a large number of clients, and not all servers will opt for protection 
 against that school of attack.
 
 What protection are you referring to?

HTTP-TLS (which seems to be confused with Alt-Svc in some of the discussion 
I’ve seen).  If you ever get a clean session, you can commit to being 
authenticated and thereby avoid any MitM until that timer lapses.  I appreciate 
that you think that this is worthless, and it may well be of marginal or even 
no use.  That’s why it’s labelled as an experiment.

 
 I haven't been to the relevant IETF sessions myself, but assume that
 https://twitter.com/sleevi_/status/509954820300472320 is true.
 
 That’s pure FUD as far as I can tell.
 
 How so given that
 http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
 exists and explicitly seeks to defeat the defense that TLS traffic
 arising from https and TLS traffic arising from already-upgraded OE
 http look pretty much alike to an operator?

That is a direct attempt to water down the protections of the opportunistic 
security model to make MitM feasible by signaling its use.  That received a 
strongly negative reaction, and E/// (Ericsson) and operators have since distanced
themselves from that line of solution.

 What about
 http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
 ? What about 
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 ?

Opportunistic security is a small part of our response to that.  I don’t 
understand why this is difficult to comprehend.  A simple server upgrade with 
no administrator intervention is very easy, and the protection that affords is, 
for small-time attacks like these, what I consider to be an effective
countermeasure.

 I’ve been talking regularly to operators and they are concerned about 
 opportunistic security.  It’s less urgent for them given that we are the 
 only ones who have announced an intent to deploy it (and its current status).
 
 Concerned in what way? (Having concerns suggests they aren't seeking
 to merely carry IP packets unaltered.)

Concerned in the same way that they are about all forms of increasing use of 
encryption.  They want in.  To enhance content.  To add services.  To collect 
information.  To decorate traffic to include markers for their partners.  To do 
all the things they are used to doing with cleartext traffic.  You suggest that 
they can just strip this stuff off if we add it.  It’s not that easy.




Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Henri Sivonen
On Fri, Nov 14, 2014 at 10:51 AM, Martin Thomson m...@mozilla.com wrote:
 How so given that
 http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
 exists and explicitly seeks to defeat the defense that TLS traffic
 arising from https and TLS traffic arising from already-upgraded OE
 http look pretty much alike to an operator?

 That is a direct attempt to water down the protections of the opportunistic 
 security model to make MitM feasible by signaling its use.  That received a 
 strongly negative reaction, and E/// (Ericsson) and operators have since distanced 
 themselves from that line of solution.

Seems to be an indication of what some operators want nonetheless.

 What about
 http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
 ? What about 
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 ?

 Opportunistic security is a small part of our response to that.  I don’t 
 understand why this is difficult to comprehend.  A simple server upgrade with 
 no administrator intervention is very easy, and the protection that affords 
 is, for small-time attacks like these, what I consider to be an effective 
 countermeasure.

The part that's hard to accept is: Why is the countermeasure
considered effective for "attacks like these", when the level of
activity the MITM needs in order to foil the countermeasure (by
inhibiting the upgrade by messing with the initial HTTP/1.1 headers)
is lower than the level of activity these MITMs already show when they
inject new HTTP/1.1 headers or inject JS into HTML?

 I’ve been talking regularly to operators and they are concerned about 
 opportunistic security.  It’s less urgent for them given that we are the 
 only ones who have announced an intent to deploy it (and its current 
 status).

 Concerned in what way? (Having concerns suggests they aren't seeking
 to merely carry IP packets unaltered.)

 Concerned in the same way that they are about all forms of increasing use of 
 encryption.  They want in.  To enhance content.  To add services.  To collect 
 information.  To decorate traffic to include markers for their partners.  To 
 do all the things they are used to doing with cleartext traffic.  You suggest 
 that they can just strip this stuff off if we add it.  It’s not that easy.

Why isn't stripping the HTTP/1.1 headers that signal the upgrade that
easy? Rendering the upgrade signaling headers unrecognizable without
stretching or contracting the bytes seems easier than adding HTTP
headers or adding JS.
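
(To make that concrete, a minimal sketch assuming an Alt-Svc-style upgrade
signal along the lines of the alt-svc/http2-encryption drafts; the exact
header name and syntax are an assumption here, but the point holds for any
in-band cleartext signal.)

    # Assumed shape of the cleartext upgrade signal; the whole opportunistic
    # upgrade hangs off this one response header.
    original = b'HTTP/1.1 200 OK\r\nAlt-Svc: h2=":443"\r\n\r\n'
    # Corrupting a single byte, without changing the length, leaves the header
    # name unrecognizable and the client never attempts the upgrade.
    mangled = original.replace(b"Alt-Svc", b"Alt-Svx", 1)
    assert len(mangled) == len(original)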

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Patrick McManus
On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi
wrote:

 The part that's hard to accept is: Why is the countermeasure
 considered effective for "attacks like these", when the level of
 activity the MITM needs in order to foil the countermeasure (by
 inhibiting the upgrade by messing with the initial HTTP/1.1 headers)
 is lower than the level of activity these MITMs already show when they
 inject new HTTP/1.1 headers or inject JS into HTML?



There are a few pieces here -
1] I totally expect the signal stripping you describe to happen to some
subset of the traffic, but an active, cleartext, carrier-based MITM is
not the only opponent. Many of these systems are tee'd, read-only
dragnets, especially in the less sophisticated scenarios.
1a] not all of the signalling happens in band especially wrt mobility.
2] When the basic ciphertext technology is proven, I expect to see other
ways to signal its use.

I casually mentioned a tofu pin yesterday and you were rightly concerned
about pin fragility - but in this case the pin needn't be hard-fail (and
pin was a poor word choice) - it's an indicator to try OE. That can be
downgraded if you start actively resetting 443, sure - but that's a much
bigger step to take that may result in generally giving users of your
network a bad experience.

And if you go down this road you find all manner of other interesting ways
to bootstrap OE - especially if what you are bootstrapping is an
opportunistic effort that looks a lot like https on the wire: gossip
distribution of known origins, optimistic attempts on your top-N frecency
sites, DNS (sec?).. even h2 https sessions can be used to carry http
schemed traffic (the h2 protocol finally explicitly carries scheme as part
of the transaction instead of making all transactions on the same
connection carry the same scheme) which might be a very good thing for
folks with mixed content problems. Most of this can be explored
asynchronously at the cost of some plaintext usage in the interim. It's
opportunistic, after all.
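
(A sketch of the per-transaction scheme point: in h2 every request carries
its own :scheme pseudo-header, so an http-schemed request can share a TLS
connection with https requests. The lists below only show the shape of the
HEADERS blocks; example.com is a placeholder.)

    # Two requests that could ride on the same h2-over-TLS connection.
    http_schemed_request = [
        (":method", "GET"),
        (":scheme", "http"),           # the scheme travels with each transaction...
        (":authority", "example.com"),
        (":path", "/news"),
    ]
    https_schemed_request = [
        (":method", "GET"),
        (":scheme", "https"),          # ...so it can differ between streams
        (":authority", "example.com"),
        (":path", "/account"),
    ]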

There is certainly some cat and mouse here - as Martin says, it's really
just a small piece. I don't think of it as more than replacing some
plaintext with some encryption - that's not perfection, but I really do
think it's significant.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-13 Thread Martin Thomson
I’m not all that enthused by the blow-by-blow here.  Nonetheless, there are 
some distortions to correct.

 On 2014-11-12, at 20:23, Henri Sivonen hsivo...@hsivonen.fi wrote:
 
 That's true if the server presents a publicly trusted cert for the
 wrong hostname (as is common if you try to see what happens if you
 change the scheme for a random software download URL to https and get
 a cert for Akamai--I'm mentioning Akamai because of the [unmentioned
 on the draft] affiliation of the other author). However, if the site
 presents a self-signed cert, the MITM could check the chain and treat
 self-signed certs differently from publicly trusted certs. (While
 checking the cert chain takes more compute, it's not outlandish
 considering that an operator bothers to distinguish OpenVPN from
 IMAP-over-TLS on the same port per
 https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile .)

This is true for TLS <= 1.2, but will not be true for TLS 1.3.  Certificates 
are available to a MitM currently, but in future versions, that sort of attack 
will be detectable.

 But even so, focusing on what the upgraded sessions look like is
 rather beside the point when it's trivial for the MITM to inhibit the
 upgrade in the first place. In an earlier message to this thread, I
 talked about overwriting the relevant header in the initial HTTP/1.1
 traffic with spaces. I was thinking too complexly. All it takes is
 changing one letter in the header name to make it unrecognized. In
 that case, the MITM doesn't even need to maintain the context of two
 adjacent TCP packets but can, with little risk of false positives,
 look for the full header string in the middle of the packet or a tail
 of at least half the string at the start of a packet or at least half
 the string at the end of a packet and change one byte to make the
 upgrade never happen--all on the level of looking at individual IP
 packets without bothering to have any cross-packet state.

Your argument relies on there being no prior session that was not intermediated 
by the attacker.  I’ll concede that this is a likely situation for a large 
number of clients, and not all servers will opt for protection against that 
school of attack.

 I haven't been to the relevant IETF sessions myself, but assume that
 https://twitter.com/sleevi_/status/509954820300472320 is true. 

That’s pure FUD as far as I can tell.  I’ve been talking regularly to operators 
and they are concerned about opportunistic security.  It’s less urgent for them 
given that we are the only ones who have announced an intent to deploy it (and 
its current status). 


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-13 Thread Patrick McManus
I haven't really waded into this iteration of the discussion because there
isn't really new information to talk about. But I know everyone is acting
in good faith so I'll offer my pov again. We're all trying to serve our
users and the Internet - same team :)

OE means ciphertext is the new plaintext. This is a transport detail.

Of course https:// is more secure than http:// of any form. This isn't
controversial - OE proponents believe this too :) It's a matter of opinion
exactly how common, comprehensive, and easy downgrade to cleartext will be
in practice - but it's trivially easy to show an existence proof. Therefore,
given the choice, you should be running https://. Full stop.

However, in my opinion https deployment is not trivially easy to do all the
time and in all environments, and as a result TLS-based ciphertext is an
improvement on the de facto cleartext alternative.

Particularly at scale, using forward-secret suites mixed in with https://
traffic, it creates an obstacle to dragnet interception. TOFU pinning is
another possibility that helps, especially wrt mobility. It's a matter of
opinion how big an obstacle that is. I get feedback from people that I
know are collecting cleartext right now who don't want us to do it. That's
encouraging.

https:// has seen very welcome growth - but Ilya's post is a bit generous
in its implications on that front, and even the most optimistic reading
leaves tons of plaintext http://. If you measure by HTTP transaction you
get an https share in the mid-50% range (this is closer to Ilya's approach)
and our metrics match the post about Chrome. However, if you measure by
page load or by origin you get numbers much, much lower with slower growth.
(We have metrics on the former - origin numbers are based on web
crawlers.) If you measure by byte count you start getting ridiculously low
amounts of https. I want to see those numbers higher, we all do, but I also
think that bringing some transport confidentiality to the fraction you
can't bring over to the https:// camp is a useful thing for the
confidentiality of our users and it doesn't ignore the reality of the
situation.
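
(A toy illustration of why the denominator matters so much; the numbers are
invented, not telemetry, but they show how one heavily used https origin
that issues lots of background requests can push the per-transaction share
into the mid-50% range while the page-load share stays near 30%.)

    # (origin, scheme, page loads, requests per load) -- invented numbers
    sites = [
        ("big-https-search", "https", 25, 40),
        ("http-news",        "http",  45, 10),
        ("http-blog",        "http",  25, 12),
        ("small-https-bank", "https",  5,  6),
    ]
    total_req = sum(loads * reqs for _, _, loads, reqs in sites)
    https_req = sum(loads * reqs for _, s, loads, reqs in sites if s == "https")
    total_loads = sum(loads for _, _, loads, _ in sites)
    https_loads = sum(loads for _, s, loads, _ in sites if s == "https")
    print("by transaction: %d%%" % round(100.0 * https_req / total_req))      # 58%
    print("by page load:   %d%%" % round(100.0 * https_loads / total_loads))  # 30%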

There are lots of reasons people don't run https://. The most unfortunate
one, which OE doesn't help with in any sense, is that this choice is wholly
in the hands of the content operator while the cost of confidentiality loss
is borne at least partially (and perhaps completely) by the user. But
that's not the only reason - mixed content, cert management, application
integration, SNI problems, PKI distrust, OCSP risk, and legacy markup are
just various parts of the story of why some content owners don't deploy
https://. OE can help with those - those sites aren't run by folks with
google.com-like resources to overcome them all. There are other barriers OE
can't help with, such as hosting premium charges.

It's a false dichotomy to suggest we can't work on mitigations to those
problems to encourage https and also provide OE for scenarios that can't be
satisfied that way. This isn't hypothetical - we absolutely are both
walking and chewing gum at the same time already on this front.

I don't really believe many in the position to choose between OE and https
would choose OE - I expect it to be used by the folks that can't quite get
there. OE doesn't change the semantics of web security, so if I'm wrong
about OE's relationship to https transition rates we can disable it - it
has no semantic meaning to worry about compatibility with: ciphertext is
the new plaintext but the web (security and other) model is completely
unchanged as this is a transport detail. Reversion is effectively a safety
valve that I would have no problem using if it were necessary.

Thanks.

-Patrick

On Wed, Nov 12, 2014 at 8:23 PM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 On Wed, Nov 12, 2014 at 11:12 PM, Richard Barnes rbar...@mozilla.com
 wrote:
 
  On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
 
  On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
  The whole line of argumentation that web browsers and servers should be
  taking advantage of opportunistic encryption is explicitly informed by
  what's actually happening elsewhere. Because what's *actually* happening
  is an overly-broad dragnet of personal information by a wide variety of both
  private and governmental agencies -- activities that would be prohibitively
  expensive in the face of opportunistic encryption.
 
  ISPs are doing it already it turns out. Governments getting to ISPs
  has already happened. I think continuing to support opportunistic
  encryption in Firefox and the IETF is harmful to our mission.
 
  You're missing Adam's point.  From the attacker's perspective,
 opportunistic sessions are indistinguishable from

 I assume you meant to say "indistinguishable from https sessions", so
 the MITM risks breaking some https sessions in a noticeable way if the
 MITM tries to inject itself into an opportunistic session.

 

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-13 Thread Henri Sivonen
On Thu, Nov 13, 2014 at 8:29 PM, Martin Thomson m...@mozilla.com wrote:
 This is true for TLS <= 1.2, but will not be true for TLS 1.3.  Certificates 
 are available to a MitM currently, but in future versions, that sort of 
 attack will be detectable.

Great. I was unaware of this. (This is particularly nice to hear after
the move from NPN to ALPN going the other way.)

 Your argument relies on there being no prior session that was not 
 intermediated by the attacker.  I’ll concede that this is a likely situation 
 for a large number of clients, and not all servers will opt for protection 
 against that school of attack.

What protection are you referring to?

The draft has only this:
  Once a server has indicated that it will support authenticated TLS, a
   client MAY use key pinning [I-D.ietf-websec-key-pinning] or any other
   mechanism that would otherwise be restricted to use with HTTPS URIs,
   provided that the mechanism can be restricted to a single HTTP
   origin.
...which seems too vague to lead to interoperable implementations.
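
(For what it's worth, a minimal sketch of one possible reading of
"restricted to a single HTTP origin": remembered pins keyed by the full
(scheme, host, port) origin with an expiry, rather than by bare hostname.
This is an illustration of the ambiguity, not what the draft specifies; the
fallback behaviour is an assumption.)

    import time

    pin_store = {}  # (scheme, host, port) -> (set of SPKI hashes, expiry time)

    def remember_pins(scheme, host, port, spki_hashes, max_age):
        pin_store[(scheme, host, port)] = (set(spki_hashes), time.time() + max_age)

    def pins_for(scheme, host, port):
        entry = pin_store.get((scheme, host, port))
        if entry is None or entry[1] < time.time():
            return None  # no pin, or the pin has lapsed: back to opportunistic behaviour
        return entry[0]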

Also, it seems that the set of sites that have the operational
maturity to deploy key pinning but for whom provisioning publicly
trusted certs is too hard/expensive is going to be a very small
set--likely a handful of CDNs who haven't yet responded to the
competitive pressure from Cloudflare to buy publicly trusted certs
wholesale but will eventually have to anyway.

 I haven't been to the relevant IETF sessions myself, but assume that
 https://twitter.com/sleevi_/status/509954820300472320 is true.

 That’s pure FUD as far as I can tell.

How so given that
http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
exists and explicitly seeks to defeat the defense that TLS traffic
arising from https and TLS traffic arising from already-upgraded OE
http look pretty much alike to an operator? What about
http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
? What about 
http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
?

 I’ve been talking regularly to operators and they are concerned about 
 opportunistic security.  It’s less urgent for them given that we are the only 
 ones who have announced an intent to deploy it (and its current status).

Concerned in what way? (Having concerns suggests they aren't seeking
to merely carry IP packets unaltered.)

On Thu, Nov 13, 2014 at 10:08 PM, Patrick McManus mcma...@ducksong.com wrote:
 Of course https:// is more secure than http:// of any form. This isn't
 controversial - OE proponents believe this too :) It's a matter of opinion
 exactly how common, comprehensive, and easy downgrade to cleartext will be
 in practice - but it's trivially easy to show an existence proof. Therefore,
 given the choice, you should be running https://. Full stop.

This is obvious to everyone reading this mailing list. My concern is
that if the distinction between http and https gets fuzzier, people
who want encryption but who want to avoid ever having to pay a penny
to a CA will think that http+OE is close enough to https that they
deploy http+OE when, if http+OE didn't exist, they'd hold their nose,
pay a few dollars to a CA and deploy https with a publicly trusted
cert (now that there's more awareness of the need for encryption).

 However, in my opinion https deployment is not trivially easy to do all the
 time and in all environments, and as a result TLS-based ciphertext is an
 improvement on the de facto cleartext alternative.

OE is a strict improvement over cleartext if the existence of OE
doesn't cause sites that, in the absence of OE, would have migrated to
https in the next couple of years to migrate only to OE.

That is, things that are technically improvements can still be
distractions that harm the deployment of the further improvements that
are really important (in this case, real https). OTOH, point 8 at
http://open.blogs.nytimes.com/2014/11/13/embracing-https/ suggests
that holding better performance hostage works as a way to drive https
adoption.

 Particularly at scale, using forward-secret suites mixed in with https://
 traffic, it creates an obstacle to dragnet interception.

If the upgrade takes place, yes.

 TOFU pinning is
 another possibility that helps, especially wrt mobility.

TOFU pinning seems rather hand-wavy at this point, so I think it's not
well enough defined to base assessments about the merit of http+OE on.
Also, if TOFU pinning for http+OE existed, it would mean that server
admins who deploy http+OE have to care about key management in order
to avoid TOFU failures arising from random rekeying. This would bring
the deployability concerns of http+OE even closer to those of real
https, which would make it even sillier not to just do the real thing.

Specifically, https+HSTS requires you to:
 * Configure the server to do TLS.
 * Bear the performance burden of the server doing TLS.
 * Add a header to 

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-12 Thread Anne van Kesteren
On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
 The whole line of argumentation that web browsers and servers should be
 taking advantage of opportunistic encryption is explicitly informed by
 what's actually happening elsewhere. Because what's *actually* happening
 is an overly-broad dragnet of personal information by a wide variety of both
 private and governmental agencies -- activities that would be prohibitively
 expensive in the face of opportunistic encryption.

ISPs are doing it already it turns out. Governments getting to ISPs
has already happened. I think continuing to support opportunistic
encryption in Firefox and the IETF is harmful to our mission.


 Google's laser focus on preventing active attackers to the exclusion of any
 solution that thwarts passive attacks is a prime example of insisting on a
 perfect solution, resulting instead in substantial deployments of nothing.
 They're naïvely hoping that finding just the right carrot will somehow
 result in mass adoption of an approach  that people have demonstrated, with
 fourteen years of experience, significant reluctance to deploy universally.

Where are you getting your data from?

https://plus.google.com/+IlyaGrigorik/posts/7VSuQ66qA3C shows a very
different view of what's happening.


-- 
https://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-12 Thread Richard Barnes

 On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
 The whole line of argumentation that web browsers and servers should be
 taking advantage of opportunistic encryption is explicitly informed by
 what's actually happening elsewhere. Because what's *actually* happening
 is an overly-broad dragnet of personal information by a wide variety of both
 private and governmental agencies -- activities that would be prohibitively
 expensive in the face of opportunistic encryption.
 
 ISPs are doing it already it turns out. Governments getting to ISPs
 has already happened. I think continuing to support opportunistic
 encryption in Firefox and the IETF is harmful to our mission.

You're missing Adam's point.  From the attacker's perspective, opportunistic 
sessions are indistinguishable from 


 Google's laser focus on preventing active attackers to the exclusion of any
 solution that thwarts passive attacks is a prime example of insisting on a
 perfect solution, resulting instead in substantial deployments of nothing.
 They're naïvely hoping that finding just the right carrot will somehow
 result in mass adoption of an approach  that people have demonstrated, with
 fourteen years of experience, significant reluctance to deploy universally.
 
 Where are you getting your data from?
 
 https://plus.google.com/+IlyaGrigorik/posts/7VSuQ66qA3C shows a very
 different view of what's happening.

Be careful how you count.  Ilya's stats are equivalent to the Firefox 
HTTP_TRANSACTION_IS_SSL metric [1], which counts things like search box 
background queries; in particular, it greatly over-samples Google.

A more realistic number is HTTP_PAGELOAD_IS_SSL, for which HTTPS adoption is 
still around 30%.  That's consistent with other measures of how many sites out 
there support HTTPS.

--Richard

[1] 
http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_TRANSACTION_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph

[2] 
http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_PAGELOAD_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph



 
 
 -- 
 https://annevankesteren.nl/



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-12 Thread Henri Sivonen
On Wed, Nov 12, 2014 at 11:12 PM, Richard Barnes rbar...@mozilla.com wrote:

 On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
 The whole line of argumentation that web browsers and servers should be
 taking advantage of opportunistic encryption is explicitly informed by
 what's actually happening elsewhere. Because what's *actually* happening
 is an overly-broad dragnet of personal information by a wide variety of both
 private and governmental agencies -- activities that would be prohibitively
 expensive in the face of opportunistic encryption.

 ISPs are doing it already it turns out. Governments getting to ISPs
 has already happened. I think continuing to support opportunistic
 encryption in Firefox and the IETF is harmful to our mission.

 You're missing Adam's point.  From the attacker's perspective, opportunistic 
 sessions are indistinguishable from

I assume you meant to say "indistinguishable from https sessions", so
the MITM risks breaking some https sessions in a noticeable way if the
MITM tries to inject itself into an opportunistic session.

That's true if the server presents a publicly trusted cert for the
wrong hostname (as is common if you try to see what happens if you
change the scheme for a random software download URL to https and get
a cert for Akamai--I'm mentioning Akamai because of the [unmentioned
on the draft] affiliation of the other author). However, if the site
presents a self-signed cert, the MITM could check the chain and treat
self-signed certs differently from publicly trusted certs. (While
checking the cert chain takes more compute, it's not outlandish
considering that an operator bothers to distinguish OpenVPN from
IMAP-over-TLS on the same port per
https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile .)

But even so, focusing on what the upgraded sessions look like is
rather beside the point when it's trivial for the MITM to inhibit the
upgrade in the first place. In an earlier message to this thread, I
talked about overwriting the relevant header in the initial HTTP/1.1
traffic with spaces. I was thinking too complexly. All it takes is
changing one letter in the header name to make it unrecognized. In
that case, the MITM doesn't even need to maintain the context of two
adjacent TCP packets but can, with little risk of false positives,
look for the full header string in the middle of the packet or a tail
of at least half the string at the start of a packet or at least half
the string at the end of a packet and change one byte to make the
upgrade never happen--all on the level of looking at individual IP
packets without bothering to have any cross-packet state.
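
(To make the amount of state involved concrete, a toy sketch of that
per-packet scan; "Alt-Svc" stands in for whatever the signaling header is
actually called, and real matching would have to care a bit more about
false positives than this does.)

    HEADER = b"Alt-Svc:"
    HALF = len(HEADER) // 2

    def mangle(payload):
        # Operates on one packet's payload at a time; no cross-packet state.
        buf = bytearray(payload)
        i = buf.find(HEADER)
        if i >= 0:                          # whole header name inside the packet
            buf[i] ^= 0x01
            return bytes(buf)
        for k in range(len(HEADER) - 1, HALF - 1, -1):
            if buf.endswith(HEADER[:k]):    # head of the name at the end of a packet
                buf[-k] ^= 0x01
                return bytes(buf)
            if buf.startswith(HEADER[-k:]): # tail of the name at the start of a packet
                buf[0] ^= 0x01
                return bytes(buf)
        return bytes(buf)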

This is not a theoretical concern. See
https://www.eff.org/deeplinks/2014/11/starttls-downgrade-attacks for
an analogous attack being carried out for email by ISPs.

If we kept http URLs strictly HTTP 1.1, it would be clear that if you
want the fast new stuff, you have to do confidentiality, integrity and
authenticity properly. Sites want the fast new stuff, so this would be
an excellent carrot. By offering an upgrade to unauthenticated TLS,
people both at our end and at the server end expend effort to support
MITMable encryption, which is bad in two ways: 1) that effort would be
better spent on proper https [i.e. provisioning certs properly as far
as the sites are concerned; you already need a TLS setup for the
opportunistic stuff] and 2) it makes the line between the MITMable and
the real thing less clear, so people are likely to mistake the
MITMable for the real thing and feel less urgency to do the real
thing.

I haven't been to the relevant IETF sessions myself, but assume that
https://twitter.com/sleevi_/status/509954820300472320 is true. If even
only some operators show a preference for opportunistic encryption over
real https, that alone should be a huge red flag that they intend to
keep MITMing what's MITMable. Therefore, we should allocate our finite
resources to pushing https to be better instead of diverting effort to
MITMable things.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-21 Thread Aryeh Gregor
On Mon, Sep 15, 2014 at 11:34 AM, Anne van Kesteren ann...@annevk.nl wrote:
 It seems very bad if those kinds of devices won't use authenticated
 connections in the end. Which makes me wonder, is there some activity
 at Mozilla for looking into an alternative to the CA model?

What happened to serving certs over DNSSEC?  If browsers supported
that well, it seems it has enough deployment on TLDs and registrars to
be usable to a large fraction of sites.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-21 Thread Anne van Kesteren
On Sun, Sep 21, 2014 at 1:14 PM, Aryeh Gregor a...@aryeh.name wrote:
 What happened to serving certs over DNSSEC?  If browsers supported
 that well, it seems it has enough deployment on TLDs and registrars to
 be usable to a large fraction of sites.

DNSSEC does not help with authentication of domains and establishing a
secure communication channel as far as I know. Is there a particular
proposal you are referring to?


-- 
https://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-21 Thread Richard Barnes
Pretty sure that what he's referring to is called DANE.  It lets a domain 
holder assert a certificate or key pair, using DNSSEC to bind it to the domain 
instead of PKIX (or in addition to PKIX).

https://tools.ietf.org/html/rfc6698
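
(For readers who haven't seen one: a "3 1 1" TLSA record (DANE-EE, SPKI
selector, SHA-256 matching) published at _443._tcp.www.example.com would
carry the SHA-256 of the server key's SubjectPublicKeyInfo. The sketch below
derives that digest from a PEM certificate with the Python cryptography
package; example.com and cert.pem are placeholders, and the tooling is just
one convenient choice, not anything the RFC mandates.)

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # SHA-256 over the DER-encoded SubjectPublicKeyInfo, i.e. the "1 1" part
    # of a "3 1 1" TLSA record.
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(hashlib.sha256(spki).hexdigest())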



On Sep 21, 2014, at 8:01 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Sun, Sep 21, 2014 at 1:14 PM, Aryeh Gregor a...@aryeh.name wrote:
 What happened to serving certs over DNSSEC?  If browsers supported
 that well, it seems it has enough deployment on TLDs and registrars to
 be usable to a large fraction of sites.
 
 DNSSEC does not help with authentication of domains and establishing a
 secure communication channel as far as I know. Is there a particular
 proposal you are referring to?
 
 
 -- 
 https://annevankesteren.nl/



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Henri Sivonen
On Fri, Sep 12, 2014 at 6:07 PM, Trevor Saunders
trev.saund...@gmail.com wrote:
  Do we really want all servers to have to authenticate themselves?

On the level of DV, yes, I think. (I.e. the user has a good reason to
believe that the [top-level] page actually comes from the host named
in the location bar.)

  In
  most cases they probably should, but I suspect there are cases where
  you want to run a server, but have plausible deniability.  I haven't
  gone looking for legal precedent, but it seems to me cryptographically
  signing material makes it much harder to reasonably believe a denial.

It seems to me this concern would have more weight if you actually had
found precedent of someone successfully repudiating what they've
allegedly served on the grounds of the absence of authenticated https.

(In general, the way things work is that the absence of cryptographic
evidence doesn't create enough doubt. Whenever there is a scandal over
a famous person's SMSs, those SMSs haven't been cryptographically
signed...)

 Is it really the right call for the Web to let people get the
 performance characteristics without making them do the right thing
 with authenticity (and, therefore, integrity and confidentiality)?

 On the face of things, it seems to me we should be supporting HTTP/2
 only with https URLs even if one buys Theodore T'so's reasoning about
 anonymous ephemeral Diffie–Hellman.

 The combination of
 https://twitter.com/sleevi_/status/509954820300472320 and
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 is pretty alarming.

 I agree that's bad, but I tend to believe anonymous ephemeral
 Diffie–Hellman is good enough to deal with the Comcasts of the world,

I agree that anonymous ephemeral Diffie–Hellman as the baseline would
probably reduce ISP MITMing by making it more costly. My point is that
with https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
, the baseline isn't anonymous ephemeral Diffie–Hellman but
unencrypted HTTP 1.1. If a major American ISP has the capacity to
inject some JS into HTTP 1.1 for all users, they definitely have the
capacity to strip a header from HTTP 1.1 (to make the upgrade to
HTTP/2 not take place) *and* inject some JS for all users. It would
have a performance impact on those connections (the delta between HTTP
1.1 and HTTP/2), but it seems that you get to remain a major American
ISP even if you are widely perceived as providing slow connections...

(Note that ad injection can happen on the edge and the logic of having
to perform operations on Internet exchange traffic volumes doesn't
apply. Making a copy of all traffic on the edge is harder, since
there's a need to move the copy somewhere from the edge. However, if
the edge makes sure the connections never upgrade in order to keep
doing HTTP 1.1 ad injection, then the connection is unupgraded at all
hops, including the hops that are suitable for moving a copy
elsewhere.)

On Fri, Sep 12, 2014 at 7:06 PM, Martin Thomson m...@mozilla.com wrote:
 The view that encryption is expensive is a prevailing meme, and it’s 
 certainly true that some sites have reasons not to want the cost of TLS, but 
 the costs are tiny, and getting smaller 
 (https://www.imperialviolet.org/2011/02/06/stillinexpensive.html).  I will 
 concede that certain outliers will exist where this marginal cost remains 
 significant (Netflix, for example), but I don’t think that’s generally 
 applicable.  As the above post shows, it’s not that costly (even less on 
 modern hardware).  And HTTP/2 and TLS 1.3 will remove a lot of the 
 performance concerns.

Yeah, I think the best feature of
https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is
that anyone who deploys it loses the argument that they can't deploy
https due to TLS being too slow (since they already deployed TLS--just
not with publicly trusted certs).

 The current consensus view in the IETF (at least) is that the all-or-nothing
 approach has not done enough to materially improve security.

It's worth noting that the historical data is from a situation where
you have two alternatives: on one hand unencrypted and
unauthenticated and on the other hand encrypted and authenticated and
the latter is always slower (maybe not slower enough to truly
technically matter but truly slower so that anyone who ignores the
magnitude of how much slower can always make a knee-jerk decision not
to use the slower thing).

What the Chrome folks suggest for HTTP/2 would give rise to a
situation where your alternatives are still on one hand unencrypted
and unauthenticated and on the other hand encrypted and authenticated
*but* the latter is *faster*. So the performance argument is reversed
compared to the historical data. What if the IETF consensus is based
on an attribution error and the historical data is actually
attributable to the speed difference (not the magnitude but to the
perception that there's a difference) 

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Daniel Stenberg

On Mon, 15 Sep 2014, Henri Sivonen wrote:

What the Chrome folks suggest for HTTP/2 would give rise to a situation 
where your alternatives are still on one hand unencrypted and 
unauthenticated and on the other hand encrypted and authenticated *but* the 
latter is *faster*.


You mess up that reversal of the speed argument if you let unauthenticated 
be as fast as authenticated.


In my view that is a very depressing argument. That's favouring *not* 
improving something just to make sure the other option runs faster in 
comparison. Shouldn't we strive to make the user experience better for all 
users, even those accessing HTTP sites?


In a world with millions and billions of printers, fridges, TVs, set-top boxes, 
elevators, nannycams or whatever all using embedded web servers - the amount 
of certificate handling for all those devices to run and use fully 
authenticated HTTPS is enough to make a large number of those just not 
go there. With opp-sec we could still up the level and make pervasive 
monitoring of a lot of such network connections much more expensive.


--

 / daniel.haxx.se


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Anne van Kesteren
On Mon, Sep 15, 2014 at 10:24 AM, Daniel Stenberg dan...@haxx.se wrote:
 Shouldn't we strive to make the user experience better for all
 users, even those accessing HTTP sites?

Well, the question is whether we want HTTP in the end. E.g. we are
opting to not enable new powerful features such as service workers on
them, and we also want the whole web to work offline (in theory).


 In a world with millions and billions of printers, fridges, TVs, set-top
 boxes, elevators, nannycams or whatever all using embedded web servers - the
 amount of certificate handling for all those devices to run and use fully
 authenticated HTTPS is enough to make a large number of those just not
 go there. With opp-sec we could still up the level and make pervasive
 monitoring of a lot of such network connections much more expensive.

It seems very bad if those kinds of devices won't use authenticated
connections in the end. Which makes me wonder, is there some activity
at Mozilla for looking into an alternative to the CA model?


-- 
http://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Henri Sivonen
On Mon, Sep 15, 2014 at 11:24 AM, Daniel Stenberg dan...@haxx.se wrote:
 On Mon, 15 Sep 2014, Henri Sivonen wrote:
 What the Chrome folks suggest for HTTP/2 would give rise to a situation
 where your alternatives are still on one hand unencrypted and
 unauthenticated and on the other hand encrypted and authenticated *but* the
 latter is *faster*.

 You mess up that reversal of the speed argument if you let unauthenticated
 be as fast as authenticated.

 In my view that is a very depressing argument. That's favouring *not*
 improving something just to make sure the other option runs faster in
 comparison. Shouldn't we strive to make the user experience better for all
 users, even those accessing HTTP sites?

I think the primary way for making the experience better for users
currently accessing http sites should be getting the sites to switch
to https so that subsequently people accessing those sites would be
accessing https sites. That way, the user experience not only benefits
from HTTP/2 performance but also from the absence of ISP-injected ads
or other MITMing.

 In a world with millions and billions of printers, fridges, TVs, set-top
 boxes, elevators, nannycams or whatever all using embedded web servers - the
 amount of certificate handling for all those devices to run and use fully
 authenticated HTTPS is enough to make a large number of those just not
 go there.

It seems like a very bad idea not to have authenticated security for
devices that provide access to privacy-sensitive data (nannycams,
fridges, DVRs) or that allow intruders to effect unwanted
physical-world behaviors (printers, elevators).

For devices like this that are exposed to the public network, I think
it would be worthwhile to make it feasible for dynamic DNS providers
to run a publicly trusted sub-CA that's constrained to issuing certs
only to hosts under their domain (i.e. not allowed to sign all names on
the net).
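
(A minimal sketch of that constraint, assuming it is expressed as an X.509
name constraints extension on the intermediate; dyn.example.net stands in
for a dynamic DNS provider's zone, and real issuance obviously involves far
more than this one extension.)

    from cryptography import x509

    # Permit issuance only within the provider's DNS subtree; names outside
    # the permitted subtree are rejected by conforming validators.
    name_constraints = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("dyn.example.net")],
        excluded_subtrees=None,
    )
    # The extension would be attached to the intermediate CA certificate as
    # critical, e.g. via x509.CertificateBuilder().add_extension(
    #     name_constraints, critical=True) when the certificate is built.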

For devices that aren't exposed to the public network, maybe we should
make the TOFU interstitial for self-signed certs different for RFC1918
IP addresses or at least 192.168.*.*. (Explain that if you are on your
home network and accessing an appliance for the first time, it's OK
and expected to create an exception to pin that particular public key
for that IP address. However, if you are on a hotel or coffee shop
network, don't.)
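
(A sketch of that distinction, checking specifically for the RFC 1918
ranges mentioned above; the warning strings are made up and just stand for
two different interstitials.)

    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def interstitial_for(ip_string):
        addr = ipaddress.ip_address(ip_string)
        if any(addr in net for net in RFC1918):
            return "softer warning: likely a local appliance; offer to pin this key"
        return "hard warning: self-signed certificate on a non-private address"

    print(interstitial_for("192.168.1.20"))  # softer warning
    print(interstitial_for("8.8.8.8"))       # hard warning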

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Richard Barnes

On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 On Mon, Sep 15, 2014 at 11:24 AM, Daniel Stenberg dan...@haxx.se wrote:
 On Mon, 15 Sep 2014, Henri Sivonen wrote:
 What the Chrome folks suggest for HTTP/2 would give rise to a situation
 where your alternatives are still on one hand unencrypted and
 unauthenticated and on the other hand encrypted and authenticated *but* the
 latter is *faster*.
 
 You mess up that reversal of the speed argument if you let unauthenticated
 be as fast as authenticated.
 
 In my view that is a very depressing argument. That's favouring *not*
 improving something just to make sure the other option runs faster in
 comparison. Shouldn't we strive to make the user experience better for all
 users, even those accessing HTTP sites?
 
 I think the primary way for making the experience better for users
 currently accessing http sites should be getting the sites to switch
 to https so that subsequently people accessing those sites would be
 accessing https sites. That way, the user experience not only benefits
 from HTTP/2 performance but also from the absence of ISP-injected ads
 or other MITMing.

"Just turn on HTTPS" is not as trivial as you seem to think.  For example, 
mixed content blocking means that you can't upgrade until all of your external 
dependencies have too.

--Richard



 In a world with millions and billions of printers, fridges, TVs, set-top
 boxes, elevators, nannycams or whatever all using embedded web servers - the
 amount of certificate handling for all those devices to run and use fully
 authenticated HTTPS is enough to make a large number of those just not
 go there.
 
 It seems like a very bad idea not to have authenticated security for
 devices that provide access to privacy-sensitive data (nannycams,
 fridges, DVRs) or that allow intruders to effect unwanted
 physical-world behaviors (printers, elevators).
 
 For devices like this that are exposed to the public network, I think
 it would be worthwhile to make it feasible for dynamic DNS providers
 to run a publicly trusted sub-CA that's constrained to issuing certs
 only to hosts under their domain (i.e. not allowed to sign all names on
 the net).
 
 For devices that aren't exposed to the public network, maybe we should
 make the TOFU interstitial for self-signed certs different for RFC1918
 IP addresses or at least 192.168.*.*. (Explain that if you are on your
 home network and accessing an appliance for the first time, it's OK
 and expected to create an exception to pin that particular public key
 for that IP address. However, if you are on a hotel or coffee shop
 network, don't.)
 
 -- 
 Henri Sivonen
 hsivo...@hsivonen.fi
 https://hsivonen.fi/



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Anne van Kesteren
On Mon, Sep 15, 2014 at 5:59 PM, Richard Barnes rbar...@mozilla.com wrote:
 On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
 I think the primary way for making the experience better for users
 currently accessing http sites should be getting the sites to switch
 to https so that subsequently people accessing those sites would be
 accessing https sites. That way, the user experience not only benefits
 from HTTP/2 performance but also from the absence of ISP-injected ads
 or other MITMing.

 "Just turn on HTTPS" is not as trivial as you seem to think.  For example, 
 mixed content blocking means that you can't upgrade until all of your 
 external dependencies have too.

I don't think anyone is suggesting it's trivial. We're saying that a)
it's necessary if you want to prevent MITM, ad-injection, etc. and b)
it's required for new features such as service workers (which in turn
are required if you want to make your site work offline).

At the moment setting up TLS is quite a bit of hassle and requires
dealing with CAs to get a certificate. But given that there's no way
around TLS becoming the bottom line for interesting new features in
browsers, we need to start looking into how we can simplify that
process.

Looking into how we can prolong the non-TLS infrastructure should have
much less priority, I think. Google seems to have the right trade-off
and the IETF consensus seems to be unaware of what is happening
elsewhere.


-- 
https://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Eric Rescorla
On Mon, Sep 15, 2014 at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, Sep 15, 2014 at 5:59 PM, Richard Barnes rbar...@mozilla.com
 wrote:
  On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
  I think the primary way for making the experience better for users
  currently accessing http sites should be getting the sites to switch
  to https so that subsequently people accessing those sites would be
  accessing https sites. That way, the user experience not only benefits
  from HTTP/2 performance but also from the absence of ISP-injected ads
  or other MITMing.
 
  "Just turn on HTTPS" is not as trivial as you seem to think.  For
 example, mixed content blocking means that you can't upgrade until all of
 your external dependencies have too.

 I don't think anyone is suggesting it's trivial. We're saying that a)
 it's necessary if you want to prevent MITM, ad-injection, etc. and b)
 it's required for new features such as service workers (which in turn
 are required if you want to make your site work offline).

 At the moment setting up TLS is quite a bit of hassle and requires
 dealing with CAs to get a certificate. But given that there's no way
 around TLS becoming the bottom line for interesting new features in
 browsers, we need to start looking into how we can simplify that
 process.

 Looking into how we can prolong the non-TLS infrastructure should have
 much less priority I think.


I'm not really sure what's being debated here. There seem to be several
questions, each of which has both a standards and implementation
answer.

- Should there be HTTP/2 w/o authenticated TLS (i.e., HTTPS)?
  [Standards answer: yes. Chrome answer: no. Firefox answer: no HTTP/2
  w/o TLS, but support opportunistic unauthenticated TLS.]

- Should there be new Web features on non-HTTPS origins? Specifically:
  * ServiceWorkers [Standards answer: HTTPS only. Chrome/Firefox answer: same.]
  * WebCrypto [Standards answer: yes. Google answer: no. Firefox answer: yes.]
  * gUM [Standards answer: yes. Chrome/Firefox answer: same.]

Generally, I think it's useful to distinguish between settings where
TLS is especially necessary for security reasons (e.g., gUM persistent
permissions) and those where it's merely desirable as part of a general
raising of the security bar (arguably gUM). It seems like much of the
debate about WebCrypto is about where it falls in this taxonomy.



 Google seems to have the right trade off
 and the IETF consensus seems to be unaware of what is happening
 elsewhere.


I don't think it's clear that Google has a general position, seeing as they
are doing gUM without HTTPS.

I'd also be interested in what is happening elsewhere that you think that
the IETF consensus is unaware of. Maybe I too am unaware of it. Perhaps
you could enlighten me?

-Ekr


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Adam Roach

On 9/15/14 11:08, Anne van Kesteren wrote:

Google seems to have the right trade off
and the IETF consensus seems to be unaware of what is happening
elsewhere.


You're confused.

The whole line of argumentation that web browsers and servers should be 
taking advantage of opportunistic encryption is explicitly informed by 
what's actually happening elsewhere. Because what's *actually* 
happening is an overly-broad dragnet of personal information by a wide 
variety of both private and governmental agencies -- activities that 
would be prohibitively expensive in the face of opportunistic encryption.


Google's laser focus on preventing active attackers to the exclusion of 
any solution that thwarts passive attacks is a prime example of 
insisting on a perfect solution, resulting instead in substantial 
deployments of nothing. They're naïvely hoping that finding just the 
right carrot will somehow result in mass adoption of an approach that 
people have demonstrated, with fourteen years of experience, significant 
reluctance to deploy universally.


This is something far worse than being simply unaware of what's 
happening elsewhere: it's an acknowledgement that pervasive passive 
monitoring is taking place, and a conscious decision not to care.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Patrick McManus
On Fri, Sep 12, 2014 at 1:55 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive.


Related concepts are at the core of why I'm going to give Opportunistic
Security a try with http/2. The issues you cite are real issues in
practice, but they become magnified in other environments where the PKI
doesn't apply well (e.g. behind firewalls, in embedded devices, etc.). And
then, perhaps most convincingly for me, there remains a lot of legacy
web content that can't easily migrate to the vanilla https:// scheme we all
want it to run on (e.g. third-party dependencies or SNI dependencies), and
this is a compatibility measure for that content.

Personally, I expect any failure mode here will be that nobody uses it, not
that it drives out https. But connection establishment is entirely transparent
to the web security model and asynchronous, so if that does happen we can easily
remove support. The potential upside is that a lot of http:// traffic will
be encrypted and protected against passive monitoring.
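
As a rough sketch of how transparent and asynchronous that establishment is
from a client's point of view, consider the following Node.js-flavored
TypeScript (the host name and header value are placeholders, and this is not
Firefox's implementation; the real mechanism is the Alt-Svc-style
advertisement in the httpbis drafts):

  // Sketch only: opportunistically upgrading an http:// origin to TLS after
  // a cleartext response advertises an alternative service.
  import * as http from "http";
  import * as https from "https";

  http.get({ host: "example.org", path: "/" }, (res) => {
    const altSvc = String(res.headers["alt-svc"] ?? ""); // e.g. 'h2=":443"'
    res.resume(); // drain the cleartext response
    if (altSvc.includes(":443")) {
      // Retry the same http:// resource over TLS without authenticating the
      // server: unauthenticated, opportunistic encryption. The URL stays
      // http:// and the UI shows no lock.
      https.get(
        { host: "example.org", path: "/", rejectUnauthorized: false },
        (tlsRes) => tlsRes.resume()
      );
    }
    // If the upgrade attempt fails, the client silently stays on cleartext.
  });

Because nothing in the web security model depends on the upgrade succeeding,
support can be dropped later without breaking any content.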


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Trevor Saunders
On Fri, Sep 12, 2014 at 08:55:51AM +0300, Henri Sivonen wrote:
 On Thu, Sep 11, 2014 at 9:00 PM, Richard Barnes rbar...@mozilla.com wrote:
 
  On Sep 11, 2014, at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
  On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes rbar...@mozilla.com 
  wrote:
  Most notably, even over non-secure origins, application-layer encryption 
  can provide resistance to passive adversaries.
 
  See https://twitter.com/sleevi_/status/509723775349182464 for a long
  thread on Google's security people not being particularly convinced by
  that line of reasoning.
 
  Reasonable people often disagree in their cost/benefit evaluations.
 
  As Adam explains much more eloquently, the Google security team has had an 
  all-or-nothing attitude on security in several contexts.  For example, in 
  the context of HTTP/2, Mozilla and others have been working to make it 
  possible to send http-schemed requests over TLS, because we think it will 
  result in more of the web getting some protection.
 
 It's worth noting, though, that anonymous ephemeral Diffie–Hellman* as
 the baseline (as advocated in
 http://www.ietf.org/mail-archive/web/ietf/current/msg82125.html ) and
 unencrypted as the baseline with a trivial indicator to upgrade to
 anonymous ephemeral Diffie–Hellman (as
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 )
 are very different things.
 
 If the baseline was that there's no unencrypted mode and every
 connection starts with anonymous ephemeral Diffie–Hellman, a passive
 eavesdropper would never see content and to pervasively monitor
 content, the eavesdropper would have to not only have the capacity to
 compute Diffie–Hellman for each connection handshake but would also
 have to maintain state about the symmetric keys negotiated for each
 connection and keep decrypting and re-encrypting data for the duration
 of each connection. This might indeed lead to the cost outcomes that
 Theodore Ts'o postulates.
 
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is
 different. A passive eavesdropper indeed doesn't see content after the
 initial request/response pair, but to see all content, the level of
 "active" that the eavesdropper needs to upgrade to is pretty minimal.
 To continue to see content, all the MITM needs to do is to overwrite
 the relevant HTTP headers with space (0x20) bytes. There's no need to
 maintain state beyond dealing with one of those headers crossing a
 packet boundary. There's no need to adjust packet sizes. There's no
 compute or state maintenance requirement for the whole duration of the
 connection.
 
 I have a much easier time believing that anonymous ephemeral
 Diffie–Hellman as the true baseline would make a difference in terms
 of pervasive monitoring, but I have a much more difficult time
 believing that an opportunistic encryption solution that can be
 defeated by overwriting some bytes with 0x20 with minimal maintenance
 of state would make a meaningful difference.
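
To make that asymmetry concrete, the on-path work amounts to something like
the following Node.js-flavored TypeScript sketch (the header name is
illustrative, not taken from the draft), whereas with encryption as the true
baseline the eavesdropper would need a Diffie–Hellman exchange plus
per-connection keys to decrypt and re-encrypt every byte for the connection's
lifetime:

  // Sketch of "overwrite the relevant HTTP headers with space (0x20) bytes".
  // The header name is illustrative; the point is that bytes are replaced in
  // place, so payload length and TCP segmentation never change, and the only
  // state needed covers a header that straddles a packet boundary.
  function blankUpgradeHeader(payload: Buffer): Buffer {
    const needle = Buffer.from("Alt-Svc:"); // illustrative advertisement header
    const at = payload.indexOf(needle);
    if (at !== -1) {
      payload.fill(0x20, at, at + needle.length); // spaces, same length
    }
    return payload;
  }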
 
 Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
 has the performance overhead of TLS, so it doesn't really address the
 "TLS takes too much compute power" objection to https, which is the
 usual objection from big sites that might particularly care about the
 performance carrot of HTTP/2. It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive. (And that's getting less expensive with HTTP/2, since
 HTTP/2 clients support SNI and SNI makes the practice of having to get
 host names from seemingly unrelated domains certified together
 obsolete.)
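
As an aside, the SNI point is easy to see in a few lines of Node.js-flavored
TypeScript (the address and host name below are placeholders): the client
names the site it wants inside the TLS handshake, so a single IP address can
present a different certificate per hostname instead of one certificate that
bundles unrelated domains.

  // Sketch: Server Name Indication from the client side. "servername" is
  // carried in the ClientHello, so one IP:443 can hand back a per-hostname
  // certificate rather than a shared certificate for unrelated domains.
  import * as tls from "tls";

  const socket = tls.connect(
    { host: "192.0.2.1", port: 443, servername: "example.org" },
    () => {
      console.log(socket.getPeerCertificate().subject); // example.org's cert
      socket.end();
    }
  );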
 
 It seems to me that this undermines the performance carrot of HTTP/2
 as a vehicle of moving the Web to https pretty seriously. It allows
 people to get the performance characteristics of HTTP/2 while still
 falling short of the last step of making the TLS connection properly
 authenticated.

 Do we really want all servers to have to authenticate themselves?  In
 most cases they probably should, but I suspect there are cases where
 you want to run a server, but have plausible deniability.  I haven't
 gone looking for legal precedent, but it seems to me cryptographically
 signing material makes it much harder to reasonably believe a denial.

 Is it really the right call for the Web to let people get the
 performance characteristics without making them do the right thing
 with authenticity (and, therefore, integrity and confidentiality)?
 
 On the face of things, it seems to me we should be supporting HTTP/2
 only with https URLs even if one buys Theodore Ts'o's reasoning about
 anonymous ephemeral Diffie–Hellman.
 
 The combination of
 https://twitter.com/sleevi_/status/509954820300472320 and
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 is pretty alarming.

I agree that's bad, but I tend to believe anonymous ephemeral
Diffie–Hellman is good enough to 

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Martin Thomson
On 2014-09-11, at 22:55, Henri Sivonen hsivo...@hsivonen.fi wrote:

 Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
 has the performance overhead of TLS, so it doesn't really address the
 "TLS takes too much compute power" objection to https, which is the
 usual objection from big sites that might particularly care about the
 performance carrot of HTTP/2. It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive. (And that's getting less expensive with HTTP/2, since
 HTTP/2 clients support SNI and SNI makes the practice of having to get
 host names from seemingly unrelated domains certified together
 obsolete.)
 
 It seems to me that this undermines the performance carrot of HTTP/2
 as a vehicle of moving the Web to https pretty seriously. It allows
 people to get the performance characteristics of HTTP/2 while still
 falling short of the last step of making the TLS connection properly
 authenticated.

The view that encryption is expensive is a prevailing meme, and it’s certainly 
true that some sites have reasons not to want the cost of TLS, but the costs 
are tiny, and getting smaller 
(https://www.imperialviolet.org/2011/02/06/stillinexpensive.html).  I will 
concede that certain outliers will exist where this marginal cost remains 
significant (Netflix, for example), but I don’t think that’s generally 
applicable.  As the above post shows, it’s not that costly (even less on modern 
hardware).  And HTTP/2 and TLS 1.3 will remove a lot of the performance 
concerns.

I’ve seen it suggested a couple of times (largely by Google employees) that an 
opportunistic security option undermines HTTPS adoption.  That’s hardly a 
testable assertion, and I think that Adam (Roach) explained the current 
preponderance of opinion there.  The current consensus view in the IETF (at 
least) is that the all-or-nothing approach has not done enough to materially 
improve security.

One reason for the -encryption draft that you missed is the problem of 
content migration.  A great many sites have a lot of content with http:// 
origins that can’t easily be rewritten.  And the restrictions on the Referer 
header field also mean that some resources can’t be served over HTTPS (their 
URL shortener is apparently the last hold-out for http:// at Twitter).  There 
are options in -encryption for authentication that can be resistant to some 
active attacks.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Anne van Kesteren
On Fri, Sep 12, 2014 at 6:06 PM, Martin Thomson m...@mozilla.com wrote:
 And the restrictions on the Referer header field also mean that some 
 resources can’t be served over HTTPS (their URL shortener is apparently the 
 last hold-out for http:// at Twitter).

That is something that we should have fixed a long time ago. It's
called <meta name=referrer> and is these days also part of CSP.
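
For concreteness, that mechanism can be set from script as in the sketch below
(it is normally written directly into the page's markup, and the policy token
is illustrative of the values in the referrer drafts of the time):

  // Sketch: a page-level referrer policy, so an https:// page (say, a URL
  // shortener) can still reveal at least its origin when linking to http://
  // destinations instead of having the Referer suppressed entirely.
  const meta = document.createElement("meta");
  meta.name = "referrer";
  meta.content = "origin"; // send only scheme://host, even on https -> http
  document.head.appendChild(meta);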


-- 
http://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Adam Roach

On 9/12/14 10:07, Trevor Saunders wrote:

[W]hen it comes to the NSA we're pretty much just not going to be able
to force everyone to use something strong enough they can't beat it.


Not to get too far off onto this sidebar, but you may find the following 
illuminating; not just for potentially adjusting your perception of what 
the NSA can and cannot do (especially in the coming years), but as a 
cogent analysis of how even the thinnest veneer of security can temper 
intelligence agencies' overreach into collecting information about 
non-targets:


http://justsecurity.org/7837/myth-nsa-omnipotence/

While not the thesis of the piece, a highly relevant conclusion the 
author draws is: "[T]hose engineers prepared to build defenses against 
bulk collection should not be deterred by the myth of NSA omnipotence.  
That myth is an artifact of the post-9/11 era that may now be outdated 
in the age of austerity, when NSA will struggle to find the resources to 
meet technological challenges."


(I'm hesitant to appeal to authority here, but I do want to point out 
the "About the Author" section as being important for understanding 
Marshall's qualifications to hold forth on these matters.)


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863