> On 06 May 2016, at 14:00, Viktor Dukhovni <[email protected]> wrote:
> 
> On Fri, May 06, 2016 at 12:57:15PM +0700, Aaron Zauner wrote:
> 
>>> You need to keep in mind that in the vast majority of cases
>>> authentication errors will be operational errors (self-DoS) on
>>> receiving systems, and the task at hand is to minimize the frequency
>>> and duration of outages, so that security is used and not just
>>> disabled as unusable.  Maximalist approaches are highly counter-productive
>>> here.
>> 
>> This is not what I understood from the draft in its current form:
> 
> The current form is not the final form.  To the extent that the
> draft emphasizes statistical reporting over problem alerting, it
> needs to be corrected.

I'm aware of that, but if the draft turns out to be a delivery reporting 
system, I'm not going to support it.

> Yes!  The vast majority of the failures will not be MiTM-related,
> and if there is an MiTM, well then you can't send the report to
> the right party anyway.

There are currently multiple feedback channels proposed; if one fails, you
might fall back to another. Though I doubt that MITM on the HTTPS channel
will be very frequent given CT and scanning efforts.
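
Roughly what I have in mind, as a sketch only (the endpoint names and report
format below are hypothetical, not taken from the draft): try the HTTPS
channel first and fall back to the mail-based one if it is unreachable.

    # Sketch only: hypothetical endpoints, not the draft's report format.
    import smtplib
    import urllib.request
    from email.message import EmailMessage

    HTTPS_ENDPOINT = "https://reports.example.net/tlsrpt"  # hypothetical
    MAILTO_ADDRESS = "tls-reports@example.net"             # hypothetical

    def submit_report(report_json: bytes) -> None:
        """Try the HTTPS channel first; fall back to mail if it fails."""
        try:
            req = urllib.request.Request(
                HTTPS_ENDPOINT, data=report_json,
                headers={"Content-Type": "application/json"}, method="POST")
            with urllib.request.urlopen(req, timeout=30):
                return
        except Exception:
            pass  # HTTPS channel unreachable (or interfered with): fall back

        msg = EmailMessage()
        msg["From"] = "postmaster@example.org"
        msg["To"] = MAILTO_ADDRESS
        msg["Subject"] = "TLS report"
        msg.set_content(report_json.decode("utf-8"))
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)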

> 
>> Confidentiality may be needed as detailed
>> information on MITM attacks can give indication as to which paths and
>> systems are affected (e.g. to 3rd party attackers, not the original MITM).
> 
> There's little need for very detailed information.  We just want
> to distinguish between a few classes of failures.
> 
>    * Lack of STARTTLS (this too is often self-inflicted!)
>    * Lack of shared protocol parameters (protocol version/ciphers)
>    * Lack of shared trust anchors
>    * Reference identifier mismatch
>    * (Perhaps one or two others I'm forgetting right now)

I understand this is what you consider relevant. I'd like to know why and how 
attacks happen as well, if possible.
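
Purely as a sketch (the class names and function below are mine, hypothetical,
not from the draft), a classification as coarse as the one you list could be
as simple as:

    # Sketch only: coarse failure classes mirroring the list above.
    from enum import Enum, auto

    class TlsFailureClass(Enum):
        NO_STARTTLS = auto()       # peer did not offer STARTTLS
        HANDSHAKE_PARAMS = auto()  # no shared protocol version / ciphers
        UNTRUSTED_CHAIN = auto()   # no shared trust anchor
        NAME_MISMATCH = auto()     # reference identifier mismatch
        OTHER = auto()             # anything else worth flagging

    def classify(offered_starttls: bool, handshake_ok: bool,
                 chain_trusted: bool, name_matches: bool) -> TlsFailureClass:
        """Map a failed or downgraded delivery attempt onto one class."""
        if not offered_starttls:
            return TlsFailureClass.NO_STARTTLS
        if not handshake_ok:
            return TlsFailureClass.HANDSHAKE_PARAMS
        if not chain_trusted:
            return TlsFailureClass.UNTRUSTED_CHAIN
        if not name_matches:
            return TlsFailureClass.NAME_MISMATCH
        return TlsFailureClass.OTHER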

> 
>>> The key requirement as I see is *timely* delivery of focused *alerts*
>>> to operators who can fix the problem, ideally before it affects
>>> the delivery of email, while the issue is still localized to just
>>> a proper subset of the MX hosts.
>> 
>> I think that may be a reason for another draft in another working group.
>> This one is called "Utilizing TLS in Applications",..
> 
> There will first be non-IETF documents in this space, and perhaps
> ultimately some IETF BCPs once we have even more operational
> experience.

Which non-IETF documents? Where?

> 
>>> Sites that don't have the operational discipline to do this right
>>> are best off staying with opportunistic TLS and publishing neither
>>> DANE nor STS policies.  DoSing themselves periodically, and causing
>>> pain for senders trying to do the right thing helps no one.
>> 
>> Yea, we'll try to help them out separately with Let's Encrypt automation 
>> anyway.
> 
> That's still a bit shaky: the early adopters of LE for SMTP are
> where some of the most frequent DANE problems show up.  This is
> because automated LE cert rotation ignores the necessity to update
> TLSA records from time to time.  Survey says 424 MX hosts with
> stable "IN TLSA [23] 1 ?" records and 109 MX hosts with fragile,
> soon-to-break "IN TLSA [23] 0 ?" TLSA records.  The ratio is
> improving over time, but needs to be much better.

No, it isn't shaky, it's non-existent. People who currently deploy LE certs on
their mail servers do so on their own, without guidance and without any support
or automation. And I'm sorry to say: I don't care about DNSSEC (and thus,
unfortunately, DANE, which is a nice standard by itself), so I will probably
implement support at the last possible moment.

> There are things LE can do to make this better:
> 
>    * More prominent documentation of best-practice for combining DANE
>      and LE certs, and monitoring their correctness:
> 
>       ; Keep server key fixed when doing automated rotation via "--csr"
>       ; option.  Rotate server key manually, from time to time,
>       ; and update "3 1 1" record accordingly.  Do so only while the
>       ; "2 1 1" LE key is not also changing.
>       ;
>       ; Ensure MX hostname matches the certificate SAN
>       ;
>       _25._tcp.smtp.example.org. IN TLSA 3 1 1 <sha256(server public key)>
> 
>       ; Track LE issuer public key, update TLSA RR promptly when it changes.
>       ; Stable server key above makes it possible to tolerate brief mismatches
>       ; if the LE issuer key changes unexpectedly.
>       ;
>       ; Ensure the issuer CA cert is part of server's chain file
>       ; (i.e. is sent to peer in server TLS Certificate message).
>       ; [ This is of course also necessary for STS, as the LE issuer
>       ;   CA is an intermediate CA, not a trust anchor. ]
>       ;
>       _25._tcp.smtp.example.org. IN TLSA 2 1 1 <sha256(LE issuer public key)>

Again: there's currently neither incentive nor effort within LE to look at
DANE. Nor enough manpower available. Nor is it really relevant, in my opinion.

>    * Better advance notification of planned changes in the LE
>      issuer public key, not just via the blogs, but also via email sent
>      locally to the administrator by the key rotation scripts.

Agreed. Please talk to the LE-Ops team about that; I'm working on an entirely
different part and have zero influence there.

>    * Provide tools to compute the new "3 1 1" and "2 1 1" records
>      and compare them to what's in DNS.  Avoid deploying new cert
>      chain if neither match.  If just one changes, deploy, but
>      send alert to admin to update the TLSA RRs.

See above.
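
For reference, a rough sketch of the kind of record check described above
(assuming the third-party "cryptography" and "dnspython" packages; this is not
an existing Let's Encrypt tool):

    # Sketch only: compute "X 1 1" digests (SHA-256 over the DER-encoded
    # SubjectPublicKeyInfo) and compare them to the TLSA RRset in DNS.
    import hashlib
    import dns.resolver
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def spki_sha256(cert_pem: bytes) -> str:
        """Selector 1 (SPKI), matching type 1 (SHA-256) digest of a cert."""
        cert = x509.load_pem_x509_certificate(cert_pem)
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest()

    def published_digests(name: str) -> set:
        """Digest fields of the TLSA RRs currently published for `name`."""
        return {rr.cert.hex() for rr in dns.resolver.resolve(name, "TLSA")}

    # Before deploying a new chain: if neither the server ("3 1 1") nor the
    # issuer ("2 1 1") digest is already in published_digests(...), hold off
    # and alert the admin instead of rotating automatically.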

>    * As explained in a recent post to this list (NEWSFLASH thread),
>      the "3 1 1" + "2 1 1" combination is quite resilient if
>      sensibly monitored.

See above.

>    * The "mail in a box" project has done a good job of integrating
>      LE and DANE and DNSSEC for a complete turn-key system, and
>      I see very few errors from those machines.  They just routinely
>      have correct working TLSA RRs across cert rollovers.  Kudos
>      to that project. <https://mailinabox.email/>

I like it as well.

> I've been deeply involved in the day-to-day practice of MTA SMTP
> transport security for a long time: designing and implementing the
> Postfix TLS interface, supporting Postfix TLS users on the mailing
> list, authoring DANE support in OpenSSL, monitoring DANE adoption,
> and monitoring and resolving DNSSEC interop issues at hosting providers,

I'm aware of that :)

> ...
> 
> So yes, I don't just view this as a pure security protocol issue.
> All the pieces have to fit together to actually create something
> that gets used.  Think systemically: optimizing the security of a
> small component of a large system can easily turn out to be at the
> expense of the security of the system as a whole.

We're currently trying to fix ancient protocols (as in early 1980s) that will
be around for another 20 years. I'd rather consider every detail than rush a
standard through the IETF process.

Aaron
