I caught wind of this in my DNSOP folder...(cutting down the reply a *little* 
bit)

On 7/11/18, 04:23, "DNSOP on behalf of Petr Špaček" <dnsop-boun...@ietf.org on 
behalf of petr.spa...@nic.cz> wrote:

>   On 10.7.2018 20:57, Ryan Sleevi wrote:
>   > 
>   > 
>   > On Tue, Jul 10, 2018 at 2:09 PM, Mike Bishop <mbis...@evequefou.be
>   > <mailto:mbis...@evequefou.be>> wrote:
>   > 
>   >     sufficient, but how fresh is sufficiently fresh?  And does DNSSEC
>   >     provide that freshness guarantee?
>   > 
>   > 
>    > Right, these are questions that need to be answered, and not just with
>    > abstract theoretical mitigations that don't or can't be deployed.
>    
>    Signatures in DNSSEC (i.e. RRSIG records) have validity period with
>    1-second granularity so in theory it allows fine-tuning. Of course
>    shorter validity means more cryptographic operations to update
>    signatures etc.
>    
>    Taking into account that Cloudflare signs all records on the fly, it is
>    clearly feasible for CDN to generate fresh signatures on every DNS request.
    
This reminds me of one of the dreams chased while designing DNSSEC in the 
1990's (the in-the-lab era).  The idea that the signature validity span could 
convey freshness of data, or some other timeliness, came up and was kicked around.

The purpose of the signature validity data in the RRSIG records is to defend 
against replay of the signatures.  For that reason, for the first time in the 
evolution of the DNS protocol, we let wall-clock time (absolute time) into the 
protocol.  In the early history of protocol development, whether there was a 
universal clock at play was a big design choice, so the concept of clocks and 
time occupied our minds.

DNS has always had relative time: the SOA record timers, for instance, and the 
TTL.  These mark the passage of time (locally) and do not depend on a 
coordinated clock.  (FWIW, the Network Time Protocol wasn't a common utility 
yet; NTP today makes this all seem like a tempest in a teapot.)  Until the 
RRSIG, and then TSIG, wall-clock time wasn't part of the protocol.

So absolute time in DNS started with defending against potential replay 
attacks.  Because of that, we needed wall-clock time; once it was in the door, 
there was no going back.
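
To make the relative/absolute split concrete, here is a minimal sketch 
(Python; the helper names and simplified checks are mine for illustration, 
not any resolver's actual code) of a TTL counted down against a local clock 
versus an RRSIG validity window checked against wall-clock time:

    import time

    # Illustrative sketch only: the helper names and simplified checks are
    # assumptions for explanation, not any particular resolver's code.

    def ttl_still_valid(cached_at_monotonic: float, ttl_seconds: int) -> bool:
        """Relative time: a TTL counts down locally from the moment of
        caching; no coordinated wall clock is needed."""
        return (time.monotonic() - cached_at_monotonic) < ttl_seconds

    def rrsig_temporally_valid(inception: int, expiration: int) -> bool:
        """Absolute time: an RRSIG carries inception and expiration as
        wall-clock timestamps.  A validator rejects signatures outside that
        window, which is what defends against replay of old signatures."""
        now = int(time.time())  # requires a reasonably accurate local clock
        return inception <= now <= expiration

    now = int(time.time())
    # Cached 10 seconds ago with a 3600-second TTL: still usable (relative).
    print(ttl_still_valid(time.monotonic() - 10, 3600))
    # Signature window that ended a day ago: rejected, even though a TTL
    # alone would not have caught the replay (absolute).
    print(rrsig_temporally_valid(now - 8 * 86400, now - 86400))
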

First we tried things like "what if we want to sign one set of data for Monday 
and different data for Tuesday" - a sort of versioning.  This was an artifact 
of DNSSEC assuming air-gap signing, as host software security was really weak 
back then (you couldn't trust servers with private keys).  This didn't go far; 
no one has ever asked the DNS to have time-defined "versions", and we just 
update the servers on demand now.

We then thought about freshness of data.  Should a set whose RRSIG has a 
"newer" validity span knock an already-validated-and-cached set out of the 
cache?  What I recall is that we weren't going to "go there" - that is, the 
validity period would remain solely for the purpose of preventing replays and 
would not be used to compare the "value" of one set of validated data with 
another.

Why?  I will admit that I am foggy on that; I wish I could be more definitive 
or provide a useful reference.  Nevertheless, here is what I recall (mindful 
that this may be personal opinion, not the consensus of any group):

1. Two validity periods may overlap, complicating what it means to be 
"fresher".  E.g., one might be from Jan 1 to Dec 31, the other from Feb 1 to 
Feb 10.  (Assume the same year.)  Which is fresher?  Is more specificity 
important?  This is a practical question for the code developers; see the 
small sketch at the end of this message.

2. The philosophy against scope creep: why expand the semantics of the fields 
based on a loosely, and possibly incompletely, defined concept ("freshness of 
DNS data")?  (A hand-wavy push back.)

3. The philosophy that once a data set is validated, it stays validated in the 
cache.  There are use cases where a "hijack" might get unauthorized (yet 
validated) data into caches, and the resulting calls to flush that data call 
this philosophy into question.  The reason for the philosophy is that it eases 
implementations (and with flushing available as a manual operator option, that 
seemed acceptable).  As it stands now, caches don't have to re-test what's 
already in them.

4. Under what use case would a "fresher" set of data come to the cache?  If the 
data was already in the cache, the resolver wouldn't fetch a new copy, so the 
need to compare wasn't apparent.  This was in the era when the additional data 
section was seen as the carrier of disease (see "Clarifications to the DNS 
Specification"), so that wasn't an active avenue.

Reasons 1 and 4 I think were the most compelling back then, although I can see 
reason 4 falling apart if data in the additional section is properly validated.
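
To make reason 1 concrete, here is a small sketch (Python; the dates are the 
hypothetical ones from reason 1, and the three orderings are mine for 
illustration) showing that equally plausible definitions of "fresher" disagree 
once validity periods overlap:

    from datetime import datetime

    # Hypothetical illustration of reason 1: two overlapping validity periods
    # and three plausible "freshness" orderings that give different answers.
    a = (datetime(2018, 1, 1), datetime(2018, 12, 31))   # Jan 1 - Dec 31
    b = (datetime(2018, 2, 1), datetime(2018, 2, 10))    # Feb 1 - Feb 10

    newest_inception  = max(a, b, key=lambda p: p[0])          # b: signed later
    latest_expiration = max(a, b, key=lambda p: p[1])          # a: valid longer
    most_specific     = min(a, b, key=lambda p: p[1] - p[0])   # b: tighter window

    # All three print True: each rule picks a defensible "fresher" winner,
    # and they don't agree, so there is no obvious single answer for code to
    # implement.
    print(newest_inception is b, latest_expiration is a, most_specific is b)
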

