On Fri, May 16, 2014 at 7:54 AM, Nicholas Weaver
<nwea...@icsi.berkeley.edu> wrote:

> > 16k/second is nothing, and I can generate that from a wristwatch
> computer. Caching doesn't help, as the attackers can (and do) bust caches
> with nonce-names and so on :/  A 16 core machine can do a million QPS
> relatively easily - so it's a big degradation.
>
> You miss my point.  That server is doing a million QPS, but it's only
> providing ~16k/s distinct answers.
>

That's not a typical CDN environment, though. CDNs typically have far more
names than that. But you're right; online signing + caching is probably
workable in some environments.
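For illustration, a minimal sketch of that online-signing-plus-cache shape. All names here are hypothetical, and `_sign_answer` is a stand-in for a real private-key operation (the expensive part being discussed):

```python
# Sketch of online signing behind an answer cache. With a small set of
# hot names, a million QPS collapses to at most one signing op per name
# per TTL window; nonce-name queries defeat this because every query is
# a distinct cache key, which is the attack described in the thread.
import hashlib
import time

class SigningCache:
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.cache = {}      # (qname, qtype) -> (signature, expiry)
        self.sign_count = 0  # number of real signing operations performed

    def _sign_answer(self, qname, qtype):
        # Placeholder for an expensive DNSSEC signing operation.
        self.sign_count += 1
        return hashlib.sha256(f"{qname}/{qtype}".encode()).hexdigest()

    def answer(self, qname, qtype, now=None):
        now = time.monotonic() if now is None else now
        key = (qname, qtype)
        hit = self.cache.get(key)
        if hit and hit[1] > now:
            return hit[0]                      # cache hit: no signing
        sig = self._sign_answer(qname, qtype)  # cache miss: sign online
        self.cache[key] = (sig, now + self.ttl)
        return sig
```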


> Your wristwatch computer can only cause a dynamic server a problem if it's
> competing with the legitimate query stream's priority category.  The
> "priority" category, assuming 10k names and 100 options/name and 1m max TTL
> requires only a single system to support.
>
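The quoted sizing works out to roughly one signature per second at steady state, which is indeed trivial for a single machine:

```python
# Back-of-the-envelope check of the figures quoted above:
# 10k names x 100 options/name, each re-signed once per 1M-second max TTL.
names = 10_000
options_per_name = 100
max_ttl_seconds = 1_000_000

distinct_answers = names * options_per_name       # 1,000,000 signed answers
resign_rate = distinct_answers / max_ttl_seconds  # steady-state signatures/sec
print(distinct_answers, resign_rate)              # 1000000 1.0
```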

I've never been able to make prioritisation really work at microsecond
scale. I can imagine a dedicated process for signing and having prioritized
queues to it, but that would need so much packet copying that it would
likely degrade throughput seriously. Alternatively the DNS handling process
may defer the signing and keep its own queue locally, but that introduces
scheduling overhead. Every time I've tried it, I've found that taking out
prioritisation and smart scheduling made the overall average faster.
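For concreteness, here is the kind of two-priority local queue being described; this is purely illustrative (all names are hypothetical), and the point above stands that in practice the queueing and copying overhead tended to outweigh the benefit:

```python
# Sketch of deferring signing work behind a two-level priority queue: a
# DNS handling process enqueues signing requests and serves every
# high-priority (legitimate, cacheable) request before any low-priority
# (nonce-name / NSEC3) one. FIFO order is preserved within a priority.
import heapq
import itertools

HIGH, LOW = 0, 1  # lower number is served first

class SigningQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker: FIFO within a priority

    def submit(self, priority, qname):
        heapq.heappush(self._heap, (priority, next(self._seq), qname))

    def drain(self):
        while self._heap:
            _, _, qname = heapq.heappop(self._heap)
            yield qname

q = SigningQueue()
q.submit(LOW, "nonce123.example.")   # attacker-style load
q.submit(HIGH, "www.example.")       # legitimate hot name
q.submit(LOW, "nonce456.example.")
print(list(q.drain()))  # ['www.example.', 'nonce123.example.', 'nonce456.example.']
```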


> Thus your wristwatch loaders can only act to load the non-priority
> category, which would be NSEC3.  If you actually care about zone
> enumeration, you MUST generate NSEC3 records on the fly, because let's face
> it, NSEC3 in the static case doesn't stop trivial enumeration of the zone.
>

Another approach to this is to pre-sign a fixed number of NSEC3 records per
zone, regardless of the zone's real size or contents :)
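A sketch of that idea: carve the NSEC3 hash ring into a fixed number of spans, sign each span once offline, and answer any non-existence query from the covering span. The hashing here is deliberately simplified (real NSEC3 hashing uses salt and iterations per RFC 5155), and all names are hypothetical:

```python
# Fixed set of pre-signed denial-of-existence spans, independent of the
# zone's real size or contents: every possible query name hashes into
# one of N_SPANS ranges, each of which would correspond to one NSEC3
# record signed once, offline. Floods of nonce names then cost zero
# online signing work.
import hashlib

N_SPANS = 8  # fixed number of pre-signed NSEC3 records per zone

def _hash(name):
    # Stand-in for the NSEC3 owner-name hash (160-bit ring).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

SPAN = (1 << 160) // N_SPANS  # evenly spaced span width on the hash ring

def covering_span(qname):
    """Index of the pre-signed span whose range covers qname's hash."""
    return min(_hash(qname) // SPAN, N_SPANS - 1)
```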

-- 
Colm
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
