Hi Steve,
There was an interesting paper at Usenix Security on the effects of deploying
DNSSEC; see
https://www.usenix.org/conference/usenixsecurity13/measuring-practical-impact-dnssec-deployment
. The difference in geographical impact was quite striking.
George Michaelson and I have been undertaking similar work on DNSSEC, using an
advertisement to enrol users' browsers to perform a set of URL loads that test
their ability to perform DNSSEC validation. Our methodology differed from that
in the Usenix paper - we worked hard at setting up name structures that
eliminated any benefits from DNS caching as well as web caching. We presented
this work at the IEPG meeting at IETF 87 a couple of weeks ago.
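If it helps to picture the cache-busting side of the setup, here's a minimal sketch of the unique-name idea: every test fetch gets a DNS name that has never been seen before, so neither a resolver cache nor a web cache can short-circuit the measurement. The label format and zone name here are purely illustrative, not our actual scheme:

```python
import secrets
import time

def unique_test_name(result_type: str, zone: str = "example.com") -> str:
    """Build a one-off DNS name for a single test fetch.

    Embedding a random token and a timestamp in the leftmost label means
    no resolver or web cache has seen the name before, so every test
    forces a full resolution and a full object fetch.
    """
    token = secrets.token_hex(8)   # 16 hex chars, unique per test
    stamp = int(time.time())       # coarse timestamp, helps match server logs
    return f"u{token}-t{stamp}-{result_type}.{zone}"

# One unique name per test case: validly signed, badly signed, unsigned.
names = [unique_test_name(kind) for kind in ("good", "bad", "none")]
```

Each browser gets its own trio of names, so the DS/DNSKEY/A queries seen at the authoritative servers can be tied back to an individual test.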
The bottom line: around 8% of clients across the Internet will perform DNSSEC
validation - i.e. they are seen to fetch the DS and DNSKEY RRs for the signed
objects, and will fetch the object that is correctly signed, and will not fetch
the object that is badly signed. A further 4% of clients appear to use a set
of resolvers where there is a mix of validating and non-validating
resolvers. What we see is that the client's resolver performs a set of
fetches of the DS and DNSKEY records for the badly signed object, then asks for
the A record a second time (generally using a different resolver) and then
fetches the object anyway - i.e. the original SERVFAIL response causes the
client to turn to another resolver in its list and use that result. 87% of
clients only ask for A records - no signs of DNSSEC life for them.
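The three-way split falls out of the query logs fairly mechanically. As a rough sketch (the event format here is made up for illustration, not our actual analysis code), each client's observed behaviour can be classified like this:

```python
def classify_client(events):
    """Classify one client from its observed events.

    Each event is a (kind, target) pair: kind is the DNS query type
    ("A", "DS", "DNSKEY") or "GET" for an object fetch, and target is
    "good" (validly signed) or "bad" (badly signed).  Illustrative
    sketch only, not the actual measurement code.
    """
    kinds = {k for k, _ in events}
    fetched_bad_object = ("GET", "bad") in events
    if "DS" in kinds and "DNSKEY" in kinds:
        # DNSSEC RRs were fetched: the client is behind a validator,
        # unless a non-validating resolver in its list let the badly
        # signed object through anyway.
        return "mixed" if fetched_bad_object else "validating"
    return "non-validating"  # A-record queries only, no DNSSEC activity
```

A client that fetches DS and DNSKEY for the bad name, then re-queries and retrieves the object anyway, lands in the "mixed" 4%.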
We did some basic mapping of clients to country (there is a LOT of DNSSEC
validation in Sweden!) and to network service provider by origin AS, and also
looked at the performance implications, both if you serve a zone that's signed
and if you serve a zone that is signed badly.
The presentation is at http://www.iepg.org/2013-07-ietf87/2013-07-28-dnssec.pdf
if you are interested.
Geoff