[dns-operations] Filtering policy: false positive rate
Hi,

Resolver policies typically describe operational rules, such as which data is collected and for how long it is retained. When a resolver offers filtering (for ads, abuse, ...), its policy ought to say something about this, such as how to unblock a benign domain that was flagged in error.

Now, block-list-based filtering is one thing. For resolvers like DNS4EU which (will) employ heuristic, prediction-based filtering, a new type of error source appears, namely false categorization from prediction. I think that the resolver policy should say what's an acceptable false positive rate for such filtering.

The problem is, how do you measure that? At a given time, one might not know which names the classifier would block (until someone asks). So you can't go and check the list for false positives, because there's no list.

Then, how to define a false positive rate? Look at all blocked queries, and do a post-hoc investigation? How about popularity -- should one factor in that blocking *.ddns.net is more severe than blocking *.blank.page? I.e., is it a ratio of blocked/total queries, or blocked/total names?

Or, wait for complaints, and somehow relate the complaints to the number of queries, i.e. take "complaints per 1M (blocked?) queries" or something? (That would not exactly be a false positive rate, but it *might* somewhat correlate.) One may also not compute a ratio at all, and just count complaints (and define an acceptable threshold per day). Such a count would have to scale with the user base.

Questions over questions. Is there best practice on this? What do other resolver operators do?

In any case, I want to collect input and feed this back to the DNS4EU consortium, to make sure that *some* level of quality is committed to.

Thanks,
Peter

-- 
https://desec.io/
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
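[Editor's note: the candidate metrics discussed in the message above can be sketched as follows. This is an illustrative sketch only; the function names and the choice of inputs are assumptions, not an established measurement standard.]

```python
def blocked_query_ratio(blocked_queries: int, total_queries: int) -> float:
    """Ratio of blocked to total queries. Popularity-weighted: a
    frequently queried name that is blocked counts proportionally more."""
    return blocked_queries / total_queries if total_queries else 0.0

def blocked_name_ratio(blocked_names: set, seen_names: set) -> float:
    """Ratio of distinct blocked names to distinct names seen.
    Each name counts once, regardless of its query popularity."""
    return len(blocked_names) / len(seen_names) if seen_names else 0.0

def complaints_per_million_blocked(complaints: int, blocked_queries: int) -> float:
    """Complaint rate per 1M blocked queries -- not a true false
    positive rate, but a proxy that might somewhat correlate."""
    return 1_000_000 * complaints / blocked_queries if blocked_queries else 0.0
```

Note that the two ratio metrics can diverge sharply: a classifier that wrongly blocks one very popular name has a tiny blocked/total-names ratio but a large blocked/total-queries ratio.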
Re: [dns-operations] Percentage of DoT/DoH requests for public resolvers?
Hi Stephane,

On 6/12/23 08:49, Stephane Bortzmeyer wrote:
> I'm looking for the current percentage of encrypted DNS requests vs.
> in-the-clear ones on public resolvers having DoT/DoH/DoQ. I do not find
> public information about it. Maybe I searched too fast?

Geoff gave an IEPG presentation in November which has some numbers on Cloudflare's 1.1.1.1 Do* breakdown; see slides 7 and 8 here:

https://iepg.org/2022-11-06-ietf115/slides-115-iepg-sessa-doh-vs-dot-geoff-huston-joao-damas-00.pdf

Best,
Peter

-- 
https://desec.io/
Re: [dns-operations] List of registries that support CDS/CDNSKEY ?
Hi,

On 11/20/22 16:50, vom513 wrote:
> My understanding is that unfortunately this list is currently pretty
> small. But is this being tracked anywhere? Would be nice to have a wiki
> or something with a table and status, notes etc.

Some information on the CDS/CDNSKEY prevalence is kept here:

https://github.com/oskar456/cds-updates

Best,
Peter

-- 
Like our community service? Please consider donating at https://desec.io/

deSEC e.V.
Kyffhäuserstr. 5
10781 Berlin
Germany

Vorstandsvorsitz: Nils Wisiol
Registergericht: AG Berlin (Charlottenburg) VR 37525
Re: [dns-operations] Browser Public suffixes list
Hi Meir,

On 8/26/22 06:38, Meir Kraushar via dns-operations wrote:
> We are about to go public with a new IDN ccTLD in Hebrew, being
> xn--4dbrk0ce. We have done the procedure of updating Mozilla PSL, also
> merged the list into chromium. But as far as how Safari browser behaves,
> we are totally in the dark. How to reach out to any Apple staff, or
> create an update request? Any help will be much appreciated.

As per the PSL algorithm [1], there is a rule "*" that matches the top level, so that all TLDs automatically qualify as public suffixes. You can verify this by entering "example.xn--4dbrk0ce" into the form at [2]. Given that, it does not seem necessary to make sure that browsers include new TLDs explicitly. Are you encountering a problem due to the lack of inclusion in the PSL, or are you merely trying to get it included for completeness?

[1]: https://github.com/publicsuffix/list/wiki/Format#algorithm
[2]: https://publicsuffix.zone/

> Also, out of curiosity.. if anyone knows why the mess? Why every
> browser needs attention, rather than relying on the IANA TLD list?

That's because there are public suffixes that are operated by other entities [3]. For example, s3.amazonaws.com is a public suffix. There is a significant number of them (just take a look at the raw PSL file on GitHub).

[3]: https://github.com/publicsuffix/list/wiki/Format#divisions

HTH,
Peter

-- 
https://desec.io/
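[Editor's note: the effect of the implicit "*" rule mentioned above can be illustrated with a minimal sketch of the PSL matching idea. This is a deliberate simplification (no exception rules, longest-match only); the function name and the example rule set are assumptions for illustration, not the real PSL implementation.]

```python
def public_suffix(domain: str, rules: set) -> str:
    """Return the public suffix of `domain` under a simplified PSL match:
    the longest matching rule wins; if no explicit rule matches, the
    implicit "*" rule makes the rightmost label the public suffix."""
    labels = domain.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        wildcard = ".".join(["*"] + labels[i + 1:])
        if candidate in rules or wildcard in rules:
            return candidate  # first hit from the left is the longest match
    # implicit "*" rule: every TLD is automatically a public suffix
    return labels[-1]
```

With a rule set like `{"com", "s3.amazonaws.com"}`, a name under a brand-new TLD such as "example.xn--4dbrk0ce" still yields "xn--4dbrk0ce" as its public suffix, via the implicit rule -- which is why new TLDs work without an explicit PSL entry.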
Re: [dns-operations] BlackHat Presentation on DNSSEC Downgrade attack
Hi,

On 8/11/22 17:56, Phillip Hallam-Baker wrote:
> Looks to me like there is a serious problem here. ... Won't go into
> extreme detail here as researcher's slides will be available tomorrow.

The slides are now available:
http://i.blackhat.com/USA-22/Thursday/US-22-Heftrig-DNSSEC-Downgrade-Attacks.pdf

For the benefit of all, in a nutshell:

a) Slides 1-21 and 32-35: DNS/DNSSEC intro, refresher on IETF-recommended algorithms

b) Slides 22-25: Attacker generates DNSKEY with colliding DS record, then takes over zone
   --> assumes very broken DS digest algorithm
   --> if multiple digest types present, this allows "downgrade" to "weakest" (whatever that means)

c) Slides 26-31: Attacker generates RRSIG without knowing private key, then takes over zone
   --> assumes very broken signing algorithm
   --> if multiple algorithms present, this allows "downgrade" to "weakest" (whatever that means)

d) Slides 36-43: Attacker strips RRSIG or rewrites algorithm, so validator receives only unsupported algorithm
   --> some resolvers pass this as "insecure" instead of "bogus" (even when DS indicates a supported algorithm)
   --> these are implementation bugs at some resolver operators (should be fixed)
   --> Google/Cloudflare bugs originally discovered by Nils Wisiol, sparking further analysis*

e) Slides 44-47: Attacker strips all DNSKEY/RRSIG but one, so validator receives only unsupported DNSKEY/RRSIG
   --> some resolvers pass this as "insecure" instead of "bogus" (even when DS indicates a supported algorithm)
   --> these are implementation bugs at some resolver operators (not sure if fixed)

f) Slides 48-51: Recommendation and wrap-up: RFCs need some clarification for d) and e)

On 8/11/22 17:56, Phillip Hallam-Baker wrote:
> NSEC record specifies what is signed but not the algorithm used to
> sign. DNSSEC allows multiple signature and digest algorithms on the
> same zone. If a zone does this, validators are prohibited from
> rejecting records only signed using one of the algorithms rather than
> both. ...
This definitely needs fixing. I agree that the specs should more clearly say that when a validating resolver sees a (supported?) algorithm in DS without seeing a corresponding RRSIG authenticated via such a DS record, the response MUST be bogus.

Apart from that: What else needs fixing? (You mentioned something with NSEC.)

Best,
Peter

* Further analysis occurred in collaboration with the Black Hat authors, but led to disagreement. The collaboration ended, and a flawed paper [1] was later uploaded to arXiv by the Black Hat authors. As it carried Nils' and my name, but not our consent, we had the paper withdrawn. Besides the numerous technical and editorial errors in the paper, we in particular disagree with the conclusion that DNSSEC algorithm agility causes the problems. It's just bugs.

[1]: https://arxiv.org/abs/2205.10608

Personally, I also don't believe that the claim of Section 5.2.1 has been experimentally demonstrated. It says:

"[...] the adversary manipulates the algorithm number in an RRSIG RRset over DS RRset to some unsupported algorithm. This is required to disable DNSSEC validation of the DS RRset. The adversary manipulates the DS to correspond to its own key-pair. [...] Once this DNSKEY is stored in cache, the adversary can inject any record of its choice. [...] [...] it immediately affects all the subdomains of the poisoned domain. In particular, the adversary can further create secure delegations for the subdomains using its own malicious key. Launching this attack against Google public DNS would have severe consequences for all the domains under com.." [2]

That would require that a resolver regard a DNSKEY as trusted based on a DS record that it has not validated, and use that DNSKEY later to generate responses with the AD bit for delegated names. While that is conceivable, it is conceptually different from finding d). At the time when Nils and I were part of the collaboration, the measurement tooling [3] was not capable of this measurement.
There is no indication that it was later extended. I will therefore consider that finding a fabrication until the data is made available.

[2]: Original revision of the paper (pre-withdrawal): https://arxiv.org/pdf/2205.10608v1.pdf
[3]: https://github.com/nils-wisiol/dns-downgrade-attack

(In the arXiv metadata, it is recorded that the paper was withdrawn upon request of one of the authors (me), and not because it was found to be inaccurate. Co-authors did not retract the claim from Section 5.2.1, and instead opposed withdrawal until a lawyer got involved. It is peculiar that the claim still was not made in the Black Hat talk, and I'm actually curious to see data that support it.)

-- 
https://desec.io/
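[Editor's note: the validation rule argued for in the message above -- DS announces a supported algorithm, so a matching authenticated RRSIG must be present, else the response is bogus rather than insecure -- can be sketched as a toy decision function. The function name, the bare algorithm-number sets, and the string outcomes are illustrative assumptions, not any resolver's actual API; signature verification itself is assumed to have succeeded where applicable.]

```python
def classify(ds_algorithms: set, supported: set, rrsig_algorithms: set) -> str:
    """Toy validator outcome for one response.
    ds_algorithms:    DNSKEY algorithm numbers announced in the DS RRset
    supported:        algorithm numbers this validator supports
    rrsig_algorithms: algorithms of RRSIGs actually received (and verified)
    """
    announced_supported = ds_algorithms & supported
    if not announced_supported:
        # DS announces no algorithm we support: treat zone as insecure
        return "insecure"
    if announced_supported & rrsig_algorithms:
        # a supported, DS-anchored algorithm produced a verified RRSIG
        return "secure"
    # DS says a supported algorithm should be in use, but no matching
    # RRSIG arrived -- an attacker may have stripped it: must be bogus
    return "bogus"
```

Findings d) and e) correspond to the third branch: resolvers that returned "insecure" there were downgradable.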
Re: [dns-operations] How should work name resolution on a modern system?
Hi Dave,

On 6/15/22 19:33, Dave Lawrence wrote:
> Kind of surprising to me the number of TLDs who report their address as
> 127.0.53.53:
> .arab
> .cpa
> .kids
> .music
> .xn--mxtq1m (Chinese: "government")
> .xn--ngbrx (Urdu: "Arab")

This looks reminiscent of the 2014 name collision occurrence management framework for new gTLDs, which introduced 127.0.53.53 as a "controlled interruption" alert address:

https://www.icann.org/en/announcements/details/icann-approves-name-collision-occurrence-management-framework--special-ip-address-12705353-alerts-system-administrators-of-potential-issue-1-8-2014-en

Cheers,
Peter

-- 
https://desec.io/
Re: [dns-operations] Input from dns-operations on NCAP proposal
Hi Thomas,

On 5/23/22 15:48, Thomas, Matthew wrote:
> In the 2012 round of new gTLDs, DNS data collected at the root server
> system via DNS-OARC's DITL collection was used to assess name collision
> visibility. The use of DITL data for name collision assessment purposes
> has growing limitations in terms of accessibility, increasing data
> anonymization constraints, a narrow data collection time window, and
> the limited annual collection frequency.

I think these are valid concerns, but they don't necessarily mean that a new assessment methodology is needed. Instead, we could try to work towards reducing these limitations, i.e. improve accessibility, collection frequency, etc.

> Other changes in the DNS, such as Qname Minimization, Aggressive NSEC
> Caching, etc., also continue to impair name collision measurements at
> the root.

QNAME minimization drops labels from the left. How would it impact root traffic? Aggressive NSEC caching covers non-delegated domains. Assuming such a record is cached, the root would not be asked for a name contained in the range, regardless of which special response strategy would be employed for such queries. In both cases, I think it would be helpful to understand better how these mechanisms impair name collision measurements. Can you elaborate?

> In preparation for the next round of TLDs, the NCAP team is examining
> possible new ways of passively collecting additional DNS data while
> providing a less disruptive NXDOMAIN response to queries.

Before deciding what set-up best suits the data collection, I'd like to understand what data you want to collect, specifically.

> The proposed system below is an attempt to preserve the NXDOMAIN
> response these name collision systems are currently receiving, [...]
> The proposal would involve delegating a candidate TLD. The delegation
> process of inserting a string into the DNS root zone will make the TLD
> active in the domain name system.
> The required delegation information in the referral from the root is a
> complete set of NS records and the minimal set of requisite glue
> records.

Given the DNS protocol, these two goals seem inconsistent. If the servers referred to by the NS RRset do know the zone, they will answer NOERROR for apex queries, and depending on the query type, they will also return records (e.g. NS, or SOA). If they don't know the zone, the best-practice response is REFUSED. There doesn't seem to be a compliant way to delegate a candidate TLD and then have the auth return NXDOMAIN to queries for that domain. (If you do that, there is no guarantee of success. The set-up would be strange, and resolvers may decide to pass SERVFAIL to their clients, for example. Also, cache issues may arise, as pointed out by Vladimír.)

> Configuration 3: Use a properly configured empty zone with correct NS
> and SOA records. Queries for the single label TLD would return a
> NOERROR and NODATA response.

If I understand correctly, that's similar to what was done in 2012. Again, I'm not sure why it would not work now when it did back then. I think this is the question that should be answered first.

> The level of disruption to existing private use of such labels by this
> restricted form of name delegation would be reasonably expected to be
> /minimal/;

I think it would be "hoped", not "expected", to be minimal. :-)

Best,
Peter

-- 
Like our community service? Please consider donating at https://desec.io/
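[Editor's note: the response-code expectations discussed in the message above can be summarized in a small decision helper. This is an illustrative sketch; the function name, boolean inputs, and outcome strings are assumptions for the sketch, not protocol constants.]

```python
def expected_rcode(zone_loaded: bool, name_exists: bool, type_has_data: bool) -> str:
    """Best-practice authoritative behavior as discussed above:
    - server is not authoritative for the zone: REFUSED
    - name does not exist in the zone: NXDOMAIN
    - name exists but has no data of the queried type: NOERROR/NODATA
    - otherwise: NOERROR with an answer
    """
    if not zone_loaded:
        return "REFUSED"
    if not name_exists:
        return "NXDOMAIN"
    if not type_has_data:
        return "NOERROR/NODATA"
    return "NOERROR"

# Configuration 3 (properly configured empty zone): apex queries for
# SOA/NS get NOERROR; other types at the apex get NOERROR/NODATA; and
# queries for any name below the apex get NXDOMAIN.
```

The point of the inconsistency argument is visible here: once the zone is delegated and loaded (`zone_loaded=True`), there is no compliant branch that returns NXDOMAIN for the apex name itself.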
Re: [dns-operations] Survey on DNS resolver operations and DNSSEC
On 3/21/22 13:19, Bill Woodcock wrote:
> The alternative to DNSSEC validation is man-in-the-middle compromises.
> We wouldn't be doing DNSSEC validation if it caused more workload than
> man-in-the-middle compromises. Therefore the increased workload is
> negative, not positive.

Is that (economic) argument all there is to it? If so, wouldn't one expect all resolver operators to do DNSSEC validation? (Validation prevalence is far from 100%.)

Best,
Peter

-- 
Like our community service? Please consider donating at https://desec.io/