Alexander and all,

tl;dr: Let's move forward with our best guesses now. dkg's recommendations seem good, but maybe we can tweak them based on expected maximum packet sizes.
[ apologies for the rambling post ]

At 2017-06-12 11:38:56 +0000, Alexander Mayrhofer <alexander.mayrho...@nic.at> wrote:

> Hi,
>
> > draft-ietf-dprive-padding-policy is now expired. Anyone knows what
> > happens?
>
> What happened is that (obviously) i did not update the draft with
> the latest findings.
>
> > I think that the results presented by Daniel Kahn Gillmor at NDSS
> > were very promising (allowing to base policies on facts).
>
> I do agree, though i think the community should discuss the
> implications before we "set in stone" dkg's findings. Is the
> recommendation that he made also the view of the working group?
> Opinions?

By NDSS I guess we mean this work?

https://gitlab.labs.nic.cz/labs/knot/merge_requests/692

I had not seen the presentation, and it was very interesting. I knew that dkg was working on this, but didn't know he had actually put it together. It's nice to see actual measurements, and modeling based on those measurements!

One implicit assumption here, though, is that bandwidth costs are important to minimize. My understanding is that DNS actually uses very little bandwidth compared to actual traffic, which is reasonable since DNS is basically metadata. While we see a lot of DoS, DDoS, and reflection attacks that consume a lot of bandwidth, these are mostly problems because DNS uses UDP without validating the source address; with DNS over TCP/TLS/DTLS/HTTP we don't have that problem.

I am told that packets-per-second is the much bigger problem for DNS. This makes sense, because DNS is a horrible protocol for modern network and server architectures in terms of performance, so we are almost always CPU-bound.

For me this implies that we need to be very careful to avoid padding in a way that adds extra packets. Note that we cannot ensure this, because there will always be some packets near the magical 1500-byte Ethernet boundary, and we'll want to pad them, but those are rare even with DNSSEC and encryption bloating things.
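A quick back-of-the-envelope in Python may make the packet-boundary arithmetic concrete. The block sizes (468 and 486) and the 1460-byte payload figure are the ones under discussion, and TCP/TLS framing overhead is deliberately ignored for simplicity:

```python
def padded_len(msg_len, block):
    """Round a DNS message length up to the next multiple of `block` bytes."""
    return -(-msg_len // block) * block  # ceiling division

# With 468-byte blocks, 3 blocks = 1404 bytes. A 1405-byte response pads
# to 4 blocks (1872 bytes) and spills into a second packet, while
# 486-byte blocks pad the same response to 1458 bytes (3 * 486), which
# still fits in a single 1460-byte payload.
assert padded_len(1404, 468) == 1404  # exactly 3 blocks, one packet
assert padded_len(1405, 468) == 1872  # two packets
assert padded_len(1405, 486) == 1458  # one packet
```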
It also implies that padding which does NOT add extra packets might not matter so much.

What we can say for sure is that we have no real idea how moving to DNS-over-TLS will impact the performance of DNS going forward. Does the CPU time for encryption matter more than the CPU time for DNS processing? Is it just a rounding error? Will the extra handshakes for TLS be the highest cost? Will maintaining state about clients on the server to minimize handshakes make memory our bottleneck?

Bringing us back to the question at hand... I think we should go ahead and make recommendations based on the best guesses that we have now. Those seem to be dkg's. Sure, it would be nice if we replicated the work based on additional resolver data sets, but I don't see many people volunteering to do that work. And sure, it would be nice if we could have a handful of cryptanalysts able to publish their results think about the problem and make recommendations, but I don't see any of them around either. :)

The only question I have is whether we want to expand the response padding a little. I see a few options:

1. We could pad to 500 bytes, which would mean saving a packet for DNS
   messages between 1404 (468*3) and 1500 bytes long in the usual case.

2. We could pad to 486 bytes, which means that we would fit as many
   padding blocks as possible into a 1460-byte packet (small enough for
   tunneled IPv6 traffic). This would save a packet for DNS messages
   between 1404 and 1460 bytes. I especially like that 486 is easily
   confused with 468, dkg's original recommendation. ;)

A similar logic about packet sizes might be applied to query padding, but in practice queries are never that big, so I am happy with the 128-byte recommendation.

Cheers,

--
Shane
_______________________________________________
dns-privacy mailing list
dns-privacy@ietf.org
https://www.ietf.org/mailman/listinfo/dns-privacy