Mikael,

On Oct 15, 2016, at 11:22 AM, Mikael Abrahamsson <swm...@swm.pp.se> wrote:

These kinds of migration scenarios to newer algorithms MUST be hashed out, 
because otherwise we're never going to be able to deploy new algorithms (and 
per previous experience, it seems we want to change them every 5-10 years).

Agreed! To capture this kind of information, a group of us wrote a draft in 
DNSOP about new crypto algorithms:

https://tools.ietf.org/html/draft-york-dnsop-deploying-dnssec-crypto-algs-01

In Section 2.1.1 we mention the situation with resolvers and unknown 
algorithms. However, we assume compliance with RFC 4035. Your case study here 
shows that we need to add some text about the problem that arises if the 
resolver does the wrong thing and fails validation instead.

I'll add that. Thank you for bringing this case to the list.

It seems to me there is a larger issue of whether a system will "fail insecure" 
(or "fail open") or "fail secure".

RFC 4035 takes the "fail insecure" view: a validator that does not support any 
of a zone's algorithms treats the zone as unsigned and still passes the DNS 
data along. That allows new algorithms to be deployed without breaking 
resolution, although with a lower level of security until the new algorithms 
are supported.

It seems the dnsmasq developers chose to "fail secure", thus potentially 
"protecting" the end user from insecure data, although in this case the data is 
secure, just not recognized as secure by that resolver.
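
To make the contrast concrete, here is a minimal sketch (in Python) of the 
decision point where the two behaviors diverge. This is not dnsmasq's code or 
any real resolver's; classify_zone, SUPPORTED_ALGORITHMS, and the 
rfc4035_compliant flag are hypothetical names used only for illustration.

    # Sketch of how a validator might handle a zone whose DS/DNSKEY algorithms
    # it does not implement. Algorithm numbers are IANA DNSSEC codepoints,
    # e.g. 8 = RSASHA256, 13 = ECDSAP256SHA256, 15 = Ed25519.

    SUPPORTED_ALGORITHMS = {8, 10, 13, 14}   # hypothetical support list

    def classify_zone(ds_algorithms, rfc4035_compliant=True):
        """Decide how to treat a signed zone given the algorithms in its DS set."""
        if any(alg in SUPPORTED_ALGORITHMS for alg in ds_algorithms):
            return "validate"                 # normal DNSSEC validation applies

        # No supported algorithm: this is where the two behaviors diverge.
        if rfc4035_compliant:
            # "Fail insecure": treat the zone as unsigned and hand the answer
            # to the client (without the AD bit).
            return "insecure-but-resolvable"
        else:
            # "Fail secure": reject the answer (e.g. return SERVFAIL), which
            # breaks resolution for zones signed only with newer algorithms.
            return "validation-failure"

    # Example: a zone signed only with Ed25519 (15), seen by an older validator.
    print(classify_zone({15}, rfc4035_compliant=True))    # insecure-but-resolvable
    print(classify_zone({15}, rfc4035_compliant=False))   # validation-failure
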

This is one of the tougher points of algorithm change, particularly when so 
many of the resolvers may be in commodity customer-premises equipment (CPE) 
that may or may not be easily updated or replaced.

Dan


--
Dan York
Senior Content Strategist, Internet Society
y...@isoc.org   +1-802-735-1624
Jabber: y...@jabber.isoc.org
Skype: danyork   http://twitter.com/danyork

http://www.internetsociety.org/



