Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-22 Thread Lanlan Pan
Hi Warren,

Thanks for your work, :-)

I fully agree: not all disposable domains are wildcards like
xxx.github.io / xxx.blogspot.com / xxx.qzone.qq.com. So, as your
analysis shows, a wildcard solution cannot cover all of the example
disposable/temporary domains in the paper.

The question is how to avoid unnecessary cache entries and queries, and give
the recursive resolver more ability to defend against the wildcard issue.

Following Ted's advice, for a better justification, I will do more analysis
on the recursive cache to see which kinds of temporary domains are
"particular": wildcard domains, or "domains that have no incentive to
publish a wildcard marker".


Warren Kumari wrote on Tue, Aug 22, 2017 at 2:45 AM:

> I was really trying to stay out of this thread...
>
>
> On Fri, Aug 18, 2017 at 1:05 PM, Ted Lemon  wrote:
> > On 18 Aug 2017, at 11:33, Lanlan Pan wrote:
> >
> > So, can you talk about how your proposal saves cost over using a
> heuristic?
> > It can be used with a cache-aging heuristic.
> > The heuristic reads in aaa/bbb/ccc.foo.com, which expire and are removed;
> > then reads in xxx/yyy/zzz.foo.com, which expire and are removed; loop...
> > => Mapping aaa/bbb/ccc/xxx/yyy/zzz.foo.com to *.foo.com when the heuristic
> > reads them in will reduce the load of this churn.
> >
> >
> > By "move out" you mean "remove," right?   Move out implies that you are
> > moving it somewhere.   You haven't actually answered my question.  You say
> > that SWILD will remove the load, but you don't give any evidence of this.
>
> Yup.
>
> Earlier in the thread, one of the justifications for this work was "DNS
> Noise: Measuring the Pervasiveness of Disposable Domains in Modern DNS
> Traffic" (posted by Lanlan Pan).
> The abstract of that paper says: "Disposable domains are likely
> generated automatically, characterized by a “one-time use” pattern,
> and appear to be used as a way of “signaling” via DNS queries."
>
> The document contains some examples of these "disposable domains",
> including:
> 0.0.0.0.1.0.0.4e.135jg5e1pd7s4735ftrqweufm5.avqs.mcafee.com
>
> McAfee's site says: "GTI File Reputation looks for suspicious
> programs, Portable Document Format (PDF) files, and Android
> Application Package (.APK) files that are active on endpoints running
> McAfee products, including Endpoint Security (ENS), VirusScan
> Enterprise (VSE), and SaaS Endpoint Protection (formerly known as
> Total Protection Service). If any suspicious files are found that do
> not trigger existing signature DAT files, GTI sends a DNS request to a
> central database server hosted by McAfee Labs."
>
> Basically, the way that this works is to generate a hash (or similar)
> of an object and query that hash in the DNS -- information is returned
> encoded in the address.
> It is quite clear that McAfee could not (and would not) want to
> publish a record saying that this is a wildcard -- if they did, they
> would either mark all objects as safe, or as malicious.
>
>
> Another example is:
>
> load-0-p-01.up-1852280.mem-251379712-24440832-0-p-50.swap-236691456-297943040-0-p-44.3302068.1222092134.device.trans.manage.esoft.com
> Fascinating, but device.trans.manage.esoft.com has no incentive to
> publish a wildcard marker -- the whole purpose of this query appears
> to be to get data back to manage.esoft.com -- having it answered from
> an intermediate cache would defeat this.
>
> The third example is:
> p2.a22a43lt5rwfg.ihg5ki5i6q3cfn3n.191742.s1.v4.ipv6-exp.l.google.com
> Lorenzo Colitti, Steinar, Erik Kline, Tiziana Refice have a good
> writeup of the purpose of these here: "Evaluating IPv6 Adoption in the
> Internet" -
> https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36240.pdf
> Again, these queries are being made for a reason -- it isn't that the
> senders figured "I know, let's make a DNS query for fun!!!" - they
> wanted to query for some data, or send some data -- like the abstract
> says: "“signaling” via DNS queries."
>
> In all of the above, publishing a "this is a wildcard" record completely
> breaks the purpose of the queries, and so there is no incentive for
> these entities to publish such a record...
>
>
>
> >>
> >> 2) cache miss
> >> All of the temporary subdomain wildcards will encounter cache misses.
> >> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
> >> We can use SWILD to optimize this: only query xxx.foo.com the first
> >> time and get the SWILD, avoiding sending yyy/zzz.foo.com queries to
> >> the authoritative server.
> >>
> >>
> >> Can you characterize why sending these queries to the authoritative
> >> server is a problem?
> >
>
> For the examples used in the paper justifying this, it's not a
> problem, it's the purpose :-)
>
> >
> > OK, similar to RFC 8198 section 6.
> > A benefit rather than a problem: answer directly from the cache, avoid
> > sending queries to the authoritative and waiting for the response, and
> > reduce latency.
> >
> >
> > Okay, but this isn't a reason to prefer this to existing, standardized
> > technology.
> >
> >> 3) DDoS risk
> >> The botnet 

Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-18 Thread Ted Lemon
On 18 Aug 2017, at 11:33, Lanlan Pan wrote:
> So, can you talk about how your proposal saves cost over using a heuristic?
> It can be used with a cache-aging heuristic.
> The heuristic reads in aaa/bbb/ccc.foo.com, which expire and are removed;
> then reads in xxx/yyy/zzz.foo.com, which expire and are removed; loop...
> => Mapping aaa/bbb/ccc/xxx/yyy/zzz.foo.com to *.foo.com when the heuristic
> reads them in will reduce the load of this churn.

By "move out" you mean "remove," right?   Move out implies that you are moving 
it somewhere.   You haven't actually answered my question.  You say that SWILD 
will remove the load, but you don't give any evidence of this.

> 
>> 2) cache miss
>> All of the temporary subdomain wildcards will encounter cache misses.
>> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
>> We can use SWILD to optimize this: only query xxx.foo.com the first time
>> and get the SWILD, avoiding sending yyy/zzz.foo.com queries to the
>> authoritative server.
> 
> Can you characterize why sending these queries to the authoritative server
> is a problem?
> 
> OK, similar to RFC 8198 section 6.
> 
> A benefit rather than a problem: answer directly from the cache, avoid
> sending queries to the authoritative and waiting for the response, and
> reduce latency.

Okay, but this isn't a reason to prefer this to existing, standardized 
technology.

>> 3) DDoS risk
>> The botnet DDoS risk and defense is like NSEC aggressive wildcard, or
>> NSEC unsigned.
>> For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like
>> Facebook. If botnets send "popular website wildcards" to the recursive,
>> the cache size of the recursive will rise, and the recursive cannot simply
>> remove them like some other random-label attack.
>> We prefer that the recursive directly return the IP of the subdomain
>> wildcard, not grow the recursive cache, and not send repeated queries to
>> the authoritative.
> 
> Why do you prefer this?   Just saying "we prefer ..." is not a reason for the 
> IETF to standardize something.
> 
> Sorry, my wording was at fault.
> 
> More details:
> 1) All of the attacking botnet clients were customers of the ISP and sent
> queries to the ISP recursive at a low rate, so all of the client IP
> addresses were "legitimate"; we could not simply use an ACL.
> 2) Normal users also visit [0-9]+.qzone.qq.com, so all the random query
> domains seem "legitimate".
> => The client IP addresses and the random subdomains are all in the
> whitelist, not in the blacklist.
> 3) The ISP didn't have any DNS firewall equipment (a very sad situation,
> but true) to take over responses for "*.qzone.qq.com".
> 
> In this weaker scenario, it would be better to give the recursive more
> information so it can answer queries directly from the cache, and so it
> does not send/cache many subdomain queries/responses.
> Of course, we can defend against the attack with professional operations
> and solve the problem very well. But there are also many weaker recursives
> that only run BIND software, without any protection...

Maybe they should upgrade.

> I will reconsider these problems with the proposal and do the improvement
> analysis on real-world caches before the next step.

Thanks!   However, I would really encourage you to step back from your proposal 
and see if there's a way to accomplish what you want without adding this 
resource record.   I think you can get the same results you want without SWILD, 
and the result will be a lot better for the DNS as a whole.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-18 Thread Lanlan Pan
Thanks a lot for your detailed analysis, :-)

Ralf Weber wrote on Thu, Aug 17, 2017 at 11:16 PM:

> Moin!
>
> On 17 Aug 2017, at 0:09, Lanlan Pan wrote:
> > Yes, I agree, in fact the *online cache rate* is small (0.12% of
> > queries); LRU & TTL work fine.
> > SWILD does not save much online cache size, because of the query rate.
> > And Temporary Domain Names / All Names is 41.7% for 7 days of statistics;
> > the rate can be about 10% for 1 day of statistics, because temporary
> > domain names expire after the TTL. Ralf has a similar question.
> Since you mention me: with the data you supplied, it is highly unlikely
> that many records will still be active in the cache, because of the TTL
> and how least-recently-used (LRU) algorithms work.
>

> > The problem is:
> >
> > 1)  cache size
> > Recursives commonly cache "all queried domains in n days" for
> > SERVFAIL/TIMEOUT conditions, which has been documented in
> > https://tools.ietf.org/html/draft-tale-dnsop-serve-stale-01
> That is not what the draft suggests, and it is not what current
> implementations of this or similar features do. They all rely on a cache
> with a fixed size, and if a record that would normally have expired is
> still in the cache, they extend its lifetime when it is queried. The
> records you mention are not queried again and thus would expire, because
> more frequently queried records would have overwritten them anyway.
>
> There is also nearly no harm if these queries fail when the
> authoritative is not responding, as most of the queries you describe
> are computer generated, not human generated. The draft above and similar
> techniques were created because of the twitter.com problem. Now I can
> assure you that twitter.com will always be hot (asked at least every
> couple of seconds) in a regular resolver at your ISP or at a provider
> of DNS services, and thus the expired record will probably still be
> in the cache.
>
You are totally right.
I admit the online cache size is not a problem with a fixed-size LRU cache
and TTL refresh.

>
> > Caching the subdomain wildcards is needless; we can use a heuristic
> > algorithm to decide what to cache, or just use a simple rule like
> > "select domains queried more than 5 times in the last n days".
> > We can use SWILD to optimize this: no detection needed, just remove the
> > items that SWILD marks, to save cost.
> The cost of sending a query now and then is very low; resolvers do that
> all the time, and the rate at which they have to do it is very low.
> However, to actually save costs you would have to deploy your proposal on
> both the authoritative servers that have that behaviour and the resolvers.
> Good luck with that. I also assume some of the authorities are actually
> interested in the queries, so they would not implement your proposal even
> if they could, making the theoretical improvement of 0.12% even smaller.
>
0.12% is the rate measured in recursive query volume.
The authoritatives' improvement rate is not 0.12%; it depends on the query
distribution of the subdomain wildcards across the whole zone, and on the
number of recursives that enable SWILD.

>
> > 2) cache miss
> > All of the temporary subdomain wildcards will encounter cache misses.
> > Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
> > We can use SWILD to optimize this: only query xxx.foo.com the first time
> > and get the SWILD, avoiding sending yyy/zzz.foo.com queries to the
> > authoritative server.
> See above.
>
Same as above, and see my reply to Ted above, :-)

>
> > 3) DDoS risk
> > The botnet DDoS risk and defense is like NSEC aggressive wildcard, or
> > NSEC unsigned.
> > For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like
> > Facebook. If botnets send "popular website wildcards" to the recursive,
> > the cache size of the recursive will rise, and the recursive cannot
> > simply remove them like some other random-label attack.
> Since PRSD (Pseudo-Random Subdomain) attacks, as I call them, or
> waterfall attacks, as others call them, usually ask for every subdomain
> only once (and these botnets take great care to do this), the record
> would be removed by the least-recently-used (LRU) algorithm when other,
> more frequently used records are queried.
>
> While these attacks on recursive resolvers can stress the
> recursive-to-authoritative part of the resolver, there are techniques to
> limit the exposure to clients. I gave a talk on that topic at the
> DNS-OARC 2015 Spring Workshop in Amsterdam:
> https://indico.dns-oarc.net/event/21/contribution/29
> The summary is that all major vendors of recursive resolvers handle
> this. So again, while your solution would be one, once universally
> deployed, there are already solutions to the problem out there, so why
> do another one?
>
See also my reply to Ted above.
All major vendors of recursive resolvers can deal with the "legitimate
subdomain wildcards" problem with features like Response Rate Limiting.
The difference in my solution is that it directly replies with the IP to
all "legitimate" clients' low-rate 

Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-18 Thread Lanlan Pan
Thanks a lot for your pertinent comments, :-)

Ted Lemon wrote on Thu, Aug 17, 2017 at 9:56 PM:

> On 17 Aug 2017, at 0:09, Lanlan Pan wrote:
>
> We can use SWILD to optimize it: no detection needed, just remove the
> items that SWILD marks, to save cost.
>
>
> So, can you talk about how your proposal saves cost over using a heuristic?
>
It can be used with a cache-aging heuristic.
The heuristic reads in aaa/bbb/ccc.foo.com, which expire and are removed;
then reads in xxx/yyy/zzz.foo.com, which expire and are removed; loop...
=> Mapping aaa/bbb/ccc/xxx/yyy/zzz.foo.com to *.foo.com when the heuristic
reads them in will reduce the load of this churn.
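To make the idea concrete, here is a rough sketch (my own illustration, not the draft's normative algorithm; the class and field names are invented) of how a resolver cache could collapse names covered by a SWILD-style marker into a single wildcard entry, avoiding the per-label churn described above:

```python
from collections import OrderedDict

class SwildAwareCache:
    """Toy LRU cache that folds SWILD-covered names into one *.zone entry."""

    def __init__(self, max_size=4):
        self.max_size = max_size
        self.entries = OrderedDict()   # cache key -> answer
        self.swild_zones = set()       # zones that advertised a SWILD marker

    def _key(self, qname):
        # aaa.foo.com maps to *.foo.com once foo.com advertised SWILD
        parent = qname.split(".", 1)[1]
        return "*." + parent if parent in self.swild_zones else qname

    def store(self, qname, answer, swild=False):
        if swild:
            self.swild_zones.add(qname.split(".", 1)[1])
        key = self._key(qname)
        self.entries[key] = answer
        self.entries.move_to_end(key)
        while len(self.entries) > self.max_size:
            self.entries.popitem(last=False)   # evict least recently used

    def lookup(self, qname):
        return self.entries.get(self._key(qname))

cache = SwildAwareCache()
# First answer arrives carrying a (hypothetical) SWILD marker for foo.com:
cache.store("aaa.foo.com", "192.0.2.1", swild=True)
# Every later one-time label under foo.com hits the single *.foo.com entry:
for name in ("bbb.foo.com", "ccc.foo.com", "zzz.foo.com"):
    assert cache.lookup(name) == "192.0.2.1"
assert len(cache.entries) == 1   # one entry instead of per-label churn
```

The point of the sketch is only that the cache stores one synthesized entry instead of repeatedly reading in and evicting each one-time label.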

>
> 2) cache miss
> All of the temporary subdomain wildcards will encounter cache misses.
> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
> We can use SWILD to optimize this: only query xxx.foo.com the first
> time and get the SWILD, avoiding sending yyy/zzz.foo.com queries to the
> authoritative server.
>
>
> Can you characterize why sending these queries to the authoritative server
> is a problem?
>

OK, similar to RFC 8198 section 6.

A benefit rather than a problem: answer directly from the cache, avoid
sending queries to the authoritative and waiting for the response, and
reduce latency.

> 3) DDoS risk
> The botnet DDoS risk and defense is like NSEC aggressive wildcard, or
> NSEC unsigned.
> For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like
> Facebook. If botnets send "popular website wildcards" to the recursive,
> the cache size of the recursive will rise, and the recursive cannot simply
> remove them like some other random-label attack.
> We prefer that the recursive directly return the IP of the subdomain
> wildcard, not grow the recursive cache, and not send repeated queries to
> the authoritative.
>
>
> Why do you prefer this?   Just saying "we prefer ..." is not a reason for
> the IETF to standardize something.
>

Sorry, my wording was at fault.

More details:
1) All of the attacking botnet clients were customers of the ISP and sent
queries to the ISP recursive at a low rate, so all of the client IP
addresses were "legitimate"; we could not simply use an ACL.
2) Normal users also visit [0-9]+.qzone.qq.com, so all the random query
domains seem "legitimate".
=> The client IP addresses and the random subdomains are all in the
whitelist, not in the blacklist.
3) The ISP didn't have any DNS firewall equipment (a very sad situation,
but true) to take over responses for "*.qzone.qq.com".

In this weaker scenario, it would be better to give the recursive more
information so it can answer queries directly from the cache, and so it
does not send/cache many subdomain queries/responses.
Of course, we can defend against the attack with professional operations
and solve the problem very well. But there are also many weaker recursives
that only run BIND software, without any protection...


> There are a bunch of problems with your proposal, as I'm sure others have
> remarked before.   It breaks DNSSEC validation for stub resolvers that
> aren't aware of SWILD.  In the absence of DNSSEC validation, it creates a
> new and very effective spoofing attack (poison the cache with bogus SWILD
> records).   Etc.
>


> So you need to clearly explain why it is that you prefer this approach,
> and not just say that it's something you like.   Are you using it in
> production?   Do you have data on what it does?   Do you have data on the
> behavior of real-world caches that you can cite that shows that SWILD would
> produce more of an improvement than just using a better cache aging
> heuristic?
>

I will reconsider these problems with the proposal and do the improvement
analysis on real-world caches before the next step.
-- 
致礼  Best Regards

潘蓝兰  Pan Lanlan


Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-17 Thread Ralf Weber
Moin!

On 17 Aug 2017, at 0:09, Lanlan Pan wrote:
> Yes, I agree, in fact the *online cache rate* is small (0.12% of
> queries); LRU & TTL work fine.
> SWILD does not save much online cache size, because of the query rate.
> And Temporary Domain Names / All Names is 41.7% for 7 days of statistics;
> the rate can be about 10% for 1 day of statistics, because temporary
> domain names expire after the TTL. Ralf has a similar question.
Since you mention me: with the data you supplied, it is highly unlikely
that many records will still be active in the cache, because of the TTL
and how least-recently-used (LRU) algorithms work.

> The problem is:
>
> 1)  cache size
> Recursives commonly cache "all queried domains in n days" for
> SERVFAIL/TIMEOUT conditions, which has been documented in
> https://tools.ietf.org/html/draft-tale-dnsop-serve-stale-01
That is not what the draft suggests, and it is not what current
implementations of this or similar features do. They all rely on a cache
with a fixed size, and if a record that would normally have expired is
still in the cache, they extend its lifetime when it is queried. The
records you mention are not queried again and thus would expire, because
more frequently queried records would have overwritten them anyway.

There is also nearly no harm if these queries fail when the authoritative
is not responding, as most of the queries you describe are computer
generated, not human generated. The draft above and similar techniques
were created because of the twitter.com problem. Now I can assure you
that twitter.com will always be hot (asked at least every couple of
seconds) in a regular resolver at your ISP or at a provider of DNS
services, and thus the expired record will probably still be in the cache.
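Ralf's point can be illustrated with a toy simulation (my own sketch, not any vendor's actual cache code; the trace names are invented): in a fixed-size LRU cache, one-time disposable names are evicted quickly, while a hot name like twitter.com stays resident.

```python
from collections import OrderedDict

def lru_simulate(queries, capacity):
    """Replay a query trace through a fixed-size LRU cache of names."""
    cache = OrderedDict()
    for name in queries:
        if name in cache:
            cache.move_to_end(name)        # refresh on cache hit
        else:
            cache[name] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return list(cache)

# A hot name interleaved with one-time disposable names:
trace = []
for i in range(10):
    trace.append("twitter.com")
    trace.append(f"hash{i}.avqs.example.com")  # each queried exactly once

final = lru_simulate(trace, capacity=3)
assert "twitter.com" in final                  # the hot name stays resident
assert "hash0.avqs.example.com" not in final   # one-time names were evicted
```

The disposable names pass through the cache once and are overwritten, so they never accumulate, which is why a fixed-size LRU already bounds the cache-size cost being debated.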

> Caching the subdomain wildcards is needless; we can use a heuristic
> algorithm to decide what to cache, or just use a simple rule like "select
> domains queried more than 5 times in the last n days".
> We can use SWILD to optimize this: no detection needed, just remove the
> items that SWILD marks, to save cost.
The cost of sending a query now and then is very low; resolvers do that
all the time, and the rate at which they have to do it is very low.
However, to actually save costs you would have to deploy your proposal on
both the authoritative servers that have that behaviour and the resolvers.
Good luck with that. I also assume some of the authorities are actually
interested in the queries, so they would not implement your proposal even
if they could, making the theoretical improvement of 0.12% even smaller.

> 2) cache miss
> All of the temporary subdomain wildcards will encounter cache misses.
> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
> We can use SWILD to optimize this: only query xxx.foo.com the first time
> and get the SWILD, avoiding sending yyy/zzz.foo.com queries to the
> authoritative server.
See above.

> 3) DDoS risk
> The botnet DDoS risk and defense is like NSEC aggressive wildcard, or
> NSEC unsigned.
> For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like
> Facebook. If botnets send "popular website wildcards" to the recursive,
> the cache size of the recursive will rise, and the recursive cannot
> simply remove them like some other random-label attack.
Since PRSD (Pseudo-Random Subdomain) attacks, as I call them, or waterfall
attacks, as others call them, usually ask for every subdomain only once
(and these botnets take great care to do this), the record would be
removed by the least-recently-used (LRU) algorithm when other, more
frequently used records are queried.

While these attacks on recursive resolvers can stress the
recursive-to-authoritative part of the resolver, there are techniques to
limit the exposure to clients. I gave a talk on that topic at the DNS-OARC
2015 Spring Workshop in Amsterdam:
https://indico.dns-oarc.net/event/21/contribution/29
The summary is that all major vendors of recursive resolvers handle this.
So again, while your solution would be one, once universally deployed,
there are already solutions to the problem out there, so why do another
one?

So long
-Ralf




Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-16 Thread Lanlan Pan
Ralf Weber wrote on Wed, Aug 16, 2017 at 4:22 PM:

> Moin!
>
> On 16 Aug 2017, at 6:19, Lanlan Pan wrote:
>
> > We analyzed our recursive query log, about 18.6 billion queries from
> > 12/01/2015 to 12/07/2015.
> >
> > We found about 4.7 million temporary domains occupying the recursive's
> > cache, which are subdomain wildcards from Skype, QQ, McAfee, Microsoft,
> > 360safedns, Cloudfront, Greencompute...
> >
> > Temporary Domain Names / All Names: 41.7%
> > Queries for Temporary Domain Names / All Queries: 0.12%
> So you are designing a protocol change for 0.12% of your queries? IMHO
> not a good use of engineering time.
>

The temporary domain name rate is > 40% of names.

Every xxx/yyy/zzz.foo.com query must be sent to the authoritative
nameserver even though the subdomain wildcard gives the same answer; we
can try to reduce this cost and shorten the response latency.
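The cost being described can be sketched as follows (a toy illustration with made-up names, not real resolver code): without any wildcard knowledge, every distinct one-time label is a cache miss that goes upstream, even though the wildcard answer is identical every time.

```python
def resolve_all(names, cache=None):
    """Count upstream queries for a resolver with no wildcard knowledge."""
    cache = {} if cache is None else cache
    upstream_queries = 0
    for name in names:
        if name not in cache:
            upstream_queries += 1        # cache miss: forward upstream
            cache[name] = "192.0.2.1"    # the same wildcard answer each time
    return upstream_queries

# 1000 one-time labels under the same wildcard: 1000 upstream round trips.
one_time = [f"x{i}.foo.com" for i in range(1000)]
assert resolve_all(one_time) == 1000
```

Each of those upstream round trips also adds its resolution latency to the client's response time, which is the cost the proposal aims to remove.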

>
> > Details in: Dealing with temporary domain name issues in the DNS
> > https://www.computer.org/csdl/proceedings/iscc/2016/0679/00/07543831-abs.html
> >
> > The operational problem is that subdomain wildcards waste recursive
> > cache capacity. The existing solution to the problem is not adequate in
> > the recursive operating environment at present, because of low DNSSEC
> > deployment.
> Sorry, I can't read that, but from the abstract and your emails I think
> the main flaw in your thinking is that you want to cache all the records,
> regardless of how often they are queried. That is not how caching
> resolvers work. Records that are not used frequently -- and most of these
> signalling queries are one-time queries -- just expire from the cache,
> either by the LRU mechanism or by TTL.
>

Yes, LRU and TTL can expire them from the cache; this was also discussed
in the paper.

Recursives commonly cache "all queried domains in n days" for
SERVFAIL/TIMEOUT conditions, which has been documented in
https://tools.ietf.org/html/draft-tale-dnsop-serve-stale-01
Caching the subdomain wildcards is needless, and we can make some
optimizations.


> So long
> -Ralf
>
-- 
致礼  Best Regards

潘蓝兰  Pan Lanlan


Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-16 Thread Ralf Weber

Moin!

On 16 Aug 2017, at 6:19, Lanlan Pan wrote:

> We analyzed our recursive query log, about 18.6 billion queries from
> 12/01/2015 to 12/07/2015.
>
> We found about 4.7 million temporary domains occupying the recursive's
> cache, which are subdomain wildcards from Skype, QQ, McAfee, Microsoft,
> 360safedns, Cloudfront, Greencompute...
>
> Temporary Domain Names / All Names: 41.7%
> Queries for Temporary Domain Names / All Queries: 0.12%

So you are designing a protocol change for 0.12% of your queries? IMHO
not a good use of engineering time.

> Details in: Dealing with temporary domain name issues in the DNS
>
> The operational problem is that subdomain wildcards waste recursive
> cache capacity. The existing solution to the problem is not adequate in
> the recursive operating environment at present, because of low DNSSEC
> deployment.

Sorry, I can't read that, but from the abstract and your emails I think
the main flaw in your thinking is that you want to cache all the records,
regardless of how often they are queried. That is not how caching
resolvers work. Records that are not used frequently -- and most of these
signalling queries are one-time queries -- just expire from the cache,
either by the LRU mechanism or by TTL.

So long
-Ralf



Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-13 Thread Matthew Pounsett
On 13 August 2017 at 18:14, Peter van Dijk wrote:

>
> https://tools.ietf.org/html/draft-ietf-dnsop-nsec-aggressiveuse-10#section-10
> is not in the published RFC 8198 because RFC 7942 (sadly) mandates that
> this section is removed before publication. I suspect this removal is
> specifically hurting OPENPGPKEY deployment today.
>
> So, if you want to know about implementation status, please click through
> from https://tools.ietf.org/html/rfc to the relevant draft.

Ah, I did not realize that was mandated to be removed.  Thanks for filling
me in!


Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-13 Thread Peter van Dijk

On 12 Aug 2017, at 18:31, Matthew Pounsett wrote:


> 8198 doesn't have an implementation status section


https://tools.ietf.org/html/draft-ietf-dnsop-nsec-aggressiveuse-10#section-10
is not in the published RFC 8198 because RFC 7942 (sadly) mandates that
this section is removed before publication. I suspect this removal is
specifically hurting OPENPGPKEY deployment today.


So, if you want to know about implementation status, please click 
through from https://tools.ietf.org/html/rfc to the relevant draft.


Kind regards,
--
Peter van Dijk
PowerDNS.COM BV - https://www.powerdns.com/



Re: [DNSOP] New Version Notification for draft-pan-dnsop-swild-rr-type-00.txt

2017-08-11 Thread Paul Hoffman
On 11 Aug 2017, at 7:39, Matthew Pounsett wrote:

> It sounds like you're assuming that SWILD would be supported by caching
> servers that do not support DNSSEC or NSEC aggressive use.  Why do you
> expect implementers would adopt SWILD before adopting these much older
> features?

This is my top question on the document.

--Paul Hoffman
