On Mon, 18 Aug 2008, Paul Wouters wrote:
> I wouldn't be using starbucks resolver, since i just installed my
> own DNSSEC-aware resolver?
Ordinarily, when you get a DHCP-supplied nameserver from Starbucks,
your stub resolver directs its requests to that caching server. It is
indeed possible for your stub resolver to make its own queries,
including to the root and TLD servers, but putting this non-cached
end-user load on those servers is yet another unanticipated operational
problem. No one has accounted for such increased load under DNSSEC.
> > When the internal representation is updated, it is often done one RR at
> > a time. So two responses updating the same two records could be
> > interleaved. This is a simple database problem called a race condition.
>
> However, we have TTL's and signature life times, so older entries will only
> get updated with newer entries, so this "race" is not a problem.
I don't think you understand the programming issues. This has nothing
to do with TTLs or signature lifetimes. The point is that DNS servers
don't currently perform transactions internally; normally, records don't
depend tightly on other records (hence the term 'loosely coherent').
BTW, adding transaction capability greatly slows down database servers.
This is probably one of the reasons people like to put stuff in DNS
instead of databases: DNS is fast because of its stateless nature.
DNSSEC changes that, and imposes speed-reducing demands on server
architecture; otherwise, it won't work correctly. This change alters DNS
from loosely coherent to tightly coherent, with the corresponding
performance costs of correct operation.
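The race described above can be sketched in a few lines. This is a hypothetical, simplified cache structure (no real server's code), just to show how record-at-a-time updates from two responses can interleave and leave an A record paired with the wrong RRSIG:

```python
# Hypothetical cache: a plain dict updated one RR at a time, with no
# transaction grouping an RRset together with its RRSIG.
cache = {}

resp_old = {"A": "192.0.2.1", "RRSIG": "sig-over-192.0.2.1"}
resp_new = {"A": "192.0.2.2", "RRSIG": "sig-over-192.0.2.2"}

# Forced interleaving of two concurrent, record-at-a-time updates:
cache["A"] = resp_old["A"]          # writer 1 stores its A record
cache["A"] = resp_new["A"]          # writer 2 overwrites the A record
cache["RRSIG"] = resp_new["RRSIG"]  # writer 2 stores its RRSIG
cache["RRSIG"] = resp_old["RRSIG"]  # writer 1 finishes: stale RRSIG wins

# The cached A record and RRSIG now come from different responses, so a
# validating resolver will reject the RRset until the TTL expires.
print(cache)  # {'A': '192.0.2.2', 'RRSIG': 'sig-over-192.0.2.1'}
```

Without DNSSEC this interleaving is harmless (either address works); with DNSSEC the mismatched pair fails validation, which is exactly the tight-coherence requirement at issue.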
> >> You lost me here.
> >
> > Sorry. When the RR isn't the same as the one for which the RRSIG was
> > computed, the DNSSEC-aware stub resolver will reject the record. If it
> > asks the caching server again, it gets the same records. So the resolver
> > will fail, and will continue to fail until the TTL expires. The bad guy
> > just creates two records with long TTL, and you have the DOS attack.
>
> Mark and Roy already explained why this is not the case.
Who do you mean by 'Mark and Roy'? Mark Andrews and Roy Arends? When?
> >>> If the caching server checks the signature of all records, its
> >>> susceptible to a DOS attack by lots of DNSSEC queries that take a
> >>> lot of computation to check. Seems to be no-win.
> >>
> >> That's not a DOS attack. That's the price of cryptographically signing
> >> the DNS.
> >
> > When your server can't handle the load of all these calculations on
> > millions of queries sent by the attacker, its a DOS attack.
>
> So is not getting any traffic because you lost .com due to cache
> poisoning.
True enough. That same problem remains after all the DNSSEC effort, and
now we have created the additional problem of the other DDoS attacks
using the signed records. Seems to be a giant step backward, with no
step forward.
> In fact, what I understood is that resolvers mostly have problems
> switching to DNSSEC not due to DNSSEC but due to DNSSEC requiring
> EDNS0. And mind you, the EDNS0 was released in what? 1999?
>
> It's not the crypto that's the resolvers' issue. That's easily solved
> by adding a cpu or a box.
The transition issue you speak of is a different problem. The DOS
attack I speak of is when your caching server gets lots of requests for
DNSSEC records, which it must then verify. These requests aren't
'legitimate' in the sense that users made them genuinely trying to get
information. They are bogus, generated only to create load on the
server. Verifying one signature is not cheap. Verifying millions is
impossible.
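A back-of-envelope sketch of that load (the verification rate and attack rate below are illustrative assumptions, not measurements of any particular server):

```python
# Illustrative numbers only: assume one CPU core sustains roughly
# 10,000 RSA signature verifications per second, and an attacker sends
# bogus DNSSEC queries at one million queries per second.
verifies_per_core_per_sec = 10_000   # assumed verification throughput
attack_qps = 1_000_000               # assumed bogus query rate

cores_needed = attack_qps / verifies_per_core_per_sec
print(cores_needed)  # 100.0 -- cores consumed just verifying bogus load
```

The attacker pays almost nothing to generate a query; the server pays a public-key verification for each one. That asymmetry is the DOS.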
> > Now imagine the target spoofed IP's are the nameservers of, say
> > yourdomain.com. When the roots (or TLDs, etc) get millions of
> > forged UDP packets, the roots can't block the incoming
> > packets---that would severely harm the target IP addresses. On the
> > other end, the target IP addresses can't block the root
> > servers---that would also seriously harm the target IP addresses.
> > The target just has to 'take it'.
> >
> > The greater the amplification factor (Response size / request size--
> > perhaps only 64 bytes), the more damaging this attack is. Since
> > there are (or may be--probably will be) a number of additional (and
> > large) DNSSEC records in a response, the response could get quite
> > large, causing significant damage. So yes, that is a reason not to
> > deploy DNSSEC.
>
> This has also been explained earlier. Just get a botnet that's 100x
> the size to accomplish the same. This argument amounts to something
> like "let's not do HDTV because the home user DSL might not be able to
> download it in real time anyway".
No, actually, it isn't like that at all. If there is no amplification,
the botnet will simply use ICMP or something it already has a program
for. When there is amplification (perhaps up to 100x in this case), a
very small botnet can do a very great deal of damage, for a much longer
time. A great deal of effort has gone into eliminating 'smurf amps':
networks that allow broadcast ping responses. DNSSEC is the greatest
smurf amp of all time, and it can't be shut off the way one can block
ICMP to a broadcast address.
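The amplification arithmetic can be sketched with assumed sizes (a 64-byte query and a roughly 4 KB signed response are illustrative figures, not measurements):

```python
request_bytes = 64     # small EDNS0 query with a forged source address
response_bytes = 4000  # response bloated by RRSIG/DNSKEY records (assumed)
amplification = response_bytes / request_bytes

attacker_uplink_mbps = 10  # botnet's total sending capacity (assumed)
victim_mbps = attacker_uplink_mbps * amplification
print(amplification, victim_mbps)  # 62.5 625.0
```

The reflectors (roots, TLD servers) do the multiplying, so the attacker's own bandwidth is no longer the limiting factor.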
To put it another way, the necessary size of the botnet has been reduced
by a factor of 100. That reduction puts 'taking down Microsoft' within
the range of much, much smaller botnets.
--Dean
--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000
_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop