Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-05-02 Thread Philip Homburg
>On the other hand, if it issued annoying warning messages every time it
>used a SHA1 key, I'd eventually notice and probably rotate the keys.
>
>I'm with Peter, I do not see a MUST NOT as requiring vendors or operators
>to do stupid stuff.

For my understanding, do you mean to say that if we publish that a signer
MUST NOT generate signatures using algorithms 5 and 7, then the signer can
just keep doing so, as long as it generates an annoying warning each time you sign?

To me that sounds more like a SHOULD NOT.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-05-02 Thread Philip Homburg
In your letter dated Thu, 2 May 2024 10:27:17 +0200 you wrote:
>I'm not following what breaks based on the wording I suggested, and I'm not sure why you keep bringing that up. :-)

Let's say I sign my zones using some scripts and ldns-signzone. This
has been working for years so is now on autopilot.

Then an RFC gets published saying that signers MUST NOT support signing using
SHA1, so ldns removes those algorithms. Then a software update brings the new
version of ldns to my system. Now an unsigned zone gets deployed, and the whole
zone is considered bogus by validators that see a valid DS record but no
corresponding signed zone.

My reading is that this is what the draft tries to do.



Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-05-02 Thread Philip Homburg
In your letter dated Thu, 2 May 2024 09:58:43 +0200 you wrote:
>Right. Their policy may be "it's compliant and it works, so why roll?". It'll be easier to push those SHA-1 signers to switch if one can tell them "look, now you're not compliant anymore".

So basically we need a BCP: operators of zones MUST NOT sign their zones
with algorithms 5 and 7. If they currently do, they need to move away
from those algorithms as quickly as possible.

To me, that would sound better than trying to break protocols to get people
to move.



Re: [DNSOP] Questions before adopting must-not-sha1

2024-05-02 Thread Philip Homburg
> e.g. as other OS vendors follow suit and SHA-1 support
> disappears from crypto libraries.

As described by Mark Andrews, one thing that made the Redhat situation more
complex is that they didn't just remove SHA1 signing support; they modified
openssl to return bogus RSA validation results at runtime. That requires
very specific detection techniques on the side of validation software. So
the SHA1 support is there, it is just made unreliable.

Going beyond Redhat: BGP is still using MD5, and that's not going away.
NSEC3 uses SHA1, which is also not going away soon, and Git uses SHA1. So the
risk of SHA1 getting removed from crypto libraries is extremely small.
Even NIST, which recommends against using SHA1 for signing, has carved out
exceptions.

So maybe we can wait until implementors speak up that it is hard to support
those old algorithms?

> There are other reasons to deprecate SHA-1 in DNSSEC than mathematical
> concern about the use of that particular digest algorithm in the
> protocol. Problems with SHA-1 definitively exist in other places,
> in protocols that are in much more widespread use than DNSSEC. For
> example, a message that says "stop using SHA-1" might be more
> effective at fixing TLS implementations than a message that says
> "stop using SHA-1 unless you are using it in one of the following
> ways, in which case it's totally fine". From the perspective of
> DNSSEC, "stop using SHA-1" might be a much more effective message
> to communicate at the same time that everybody else is saying it
> than ten years later.

There have been quite a number of non-technical arguments used in this
discussion. I'll list a few.

1) It is already broken (because Redhat broke it). I don't see how it is
   better to break it some more.

2) We are late; we should have removed SHA1 support years ago.

3) The above quote, we need to lead the pack and remove SHA1 to help others.
   I find the combination of 2) and 3) quite funny.

4) We need to set an end date. We still have the WKS record, which has not
   seen any use for decades and whose presentation format is a pain to
   implement. But that one is still there. Yet we really need to get rid of
   something that is in active use. Maybe we can have a guideline that we
   first deprecate what has been obsolete for decades before we start on
   what is currently in use?

5) People are using old software. We don't even know if people are using old
   software to sign using SHA1. But even then, do we really want to go and
   break protocols just to move people to newer software? Is that productive?

6) There are about 140k zones signed using SHA1. That's a small number, so we
   don't have to care. I find this confusing. The biggest problem we have
   is getting people to sign their zones in the first place (and adding
   transport security). But we have time to just kill 140k signed zones for
   no technical reason?

In the end the current draft has a strong negative effect on the direct
and indirect users of about 140k zones. Indirect use includes, for example,
DANE records in those zones: the use of DANE by the sender of an email will
silently stop once a validator lists the domain as insecure.

From a technical point of view, there is no second preimage attack on SHA1;
it will probably take a quantum computer to perform one. And if that's
the case, then we will have to deprecate RSA as well, rendering the issue moot.

What are the positive points of this draft? Checking a box that there is now
a little bit less SHA1? It doesn't seem to bring any meaningful increase
in security.

The impact on validation software may also be very annoying. To implement
this draft, validation software will have to default to not supporting SHA1
signatures. But no doubt there will be customers who do need SHA1 support.
So there will be a config option. And the config option will be there
until the end of time, effectively leading to more complexity.



Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-05-02 Thread Philip Homburg
In your letter dated Thu, 2 May 2024 09:21:29 +0200 you wrote:
>In my view, it's fine to disallow signing with SHA-1-based algorithms to help 
>push signers towards other algorithms. 

I appreciate the effort, but I'm curious what that means.

As far as I know, just about all zones that start signing today are not using
SHA1 as part of the signature. There is not really an issue with new
installations. The affected algorithms have been marked as not recommended
for many years, so we can assume that in just about any signer they are not
the default. The problem is with existing zones, which probably have an
existing relationship with signer software.

The IETF is not the protocol police so it seems unlikely that signers are
going to suddenly remove all traces of SHA1 signing and leave their users
in the dark.

Worse, if signers did that, then there is a distinct risk that people
would just keep using old software.

This may have the effect that new signers will not implement these
algorithms. However, that will probably only last until the first customer
comes along who requests these algorithms. Adding RSA+SHA1 is trivial if you
already have RSA+SHA2.
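That triviality can be sketched in a few lines: on the signer side, the RSA-based DNSSEC algorithms differ only in which digest function feeds the RSA operation. This is a toy illustration, not real DNSSEC signing code; the helper name is made up.

```python
import hashlib

# Toy sketch: RSASHA1 (alg 5), RSASHA1-NSEC3-SHA1 (alg 7) and RSASHA256
# (alg 8) share the same signing pipeline; only the digest differs.
DIGESTS = {5: hashlib.sha1, 7: hashlib.sha1, 8: hashlib.sha256}

def digest_for_algorithm(alg: int, data: bytes) -> bytes:
    """Hypothetical helper: hash the signing input for algorithm `alg`."""
    return DIGESTS[alg](data).digest()

rrdata = b"example.com. 3600 IN A 192.0.2.1"
assert len(digest_for_algorithm(5, rrdata)) == 20   # SHA-1: 160 bits
assert len(digest_for_algorithm(8, rrdata)) == 32   # SHA-256: 256 bits
```

Everything downstream of the digest (PKCS#1 padding, the RSA operation itself) is shared, which is why a signer with RSA+SHA2 support can add RSA+SHA1 back with minimal effort.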



Re: [DNSOP] Questions before adopting must-not-sha1

2024-04-30 Thread Philip Homburg
>Their zone is already made insecure by a number of OS/DNS implementation
>combos. Perhaps someone with RIPE Atlas credits can run a check like the
>equivalent of "dig dnskey nic.kpn +dnssec" to see how many endusers
>already get insecure answers for this?

This reads as Redhat strong-arming the IETF into adopting a draft that has
no technical merit. The OS/DNS combos that you refer to are all from or
related to Redhat.

Redhat decided to start shipping DNSSEC validators that violate the current
standard. The existence of such software should not override technical
considerations.

This needs to stay what it currently is, a draft, until there are clear
technical reasons why the security of the internet improves by instructing
validators to not support signing algorithms that include SHA1.

The security of the internet does not improve with the current draft.
Operators likely understand that Redhat systems are not a good basis for
DNSSEC and need to be avoided. To the extent that they don't, we can make
tools that show that their current validator does not conform to IETF
standards.



Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-04-30 Thread Philip Homburg
>- FIPS
>- PCI-DSS
>- BSI
>- OWASP
>- SOC2
>- PKI-industry & CAB/Forum
>- TLS, IPsec/IKE, OpenPGP, SMIME, et all at IETF.
>- All the cryptographers including CFRG

The problem is that none of them did an impact analysis for this draft.

Yes of course, in isolation it is good to move away from SHA1. Nobody
says SHA1 is great and that we should promote it. RFC 8624 already says that
algorithms 5 and 7 are not recommended for signing.

However, going ahead and breaking things is something different. And that
is exactly what is proposed here. And it is something that doesn't give any
security benefit, just a reduction of security in the name of crypto purity.




Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-04-30 Thread Philip Homburg
>It will also prevent ServFails when the system crypto SHA1 for
>authentication and signature purposes is blocked, and the DNS software
>sees this as a failure and returns BOGUS. I am not sure how many DNS
>implementations are now probing SHA1 and on failure put it in the
>"unsupported algorithm" class, to serve it as insecure instead of bogus.
>
>This issue did hit RHEL,CentOS, Fedora.

Bogus would be perfectly fine. The problem is that no operator is going to
deploy a system that returns bogus for a commonly used signing algorithm.

So what happens instead is that software is patched to return insecure if
SHA1 is not available for signing, and that is of course very risky.

>The IETF and its cryptographic policies are a careful interworking
>between market forces, reality and desire. Moving to fast leads to RFCs
>being ignored. Moving too slow means RFCs do not encourage
>modernization. Every other protocol has left SHA1 behind. It's time for
>DNS to follow suit. It's had its "exemption" for a few years already.

One thing that keeps showing up in this context is Redhat. RHEL and
CentOS are directly controlled by Redhat, and Fedora is strongly connected
to Redhat.

So it seems that one company is trying to set policy. Not a policy that is
grounded in security analysis, but a policy based on shipping products that
violate current DNSSEC standards.

Given that for a large number of zones, SHA1 does not pose a security risk,
there is no 'too slow'.

There is a general move to EC for signatures and that solves the SHA1
issue as well. For zones that are currently secure, just let them be
secure.

And let Redhat ship broken products if they want.




Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-04-30 Thread Philip Homburg
>The advise is split between producing SHA1 signatures and consuming SHA1
>signatures, and those timings do not have to be identical.
>
>That said, a number of OSes have already forced the issue by failing
>SHA1 as cryptographic operation (RHEL, CentOS, Fedora, maybe more). So
>right now, if you run DNSSEC with SHA1 (which includes NSEC3 using
>SHA1), your validator might already return it as an insecure zone.
>
>I think a MUST NOT for signing with SHA1 is a no-brainer. The timing for
>MAY on validation should be relatively short (eg 0-2 years?)

What worries me about the draft is the security section. I can understand
the desire to get rid of old crypto, but as far as I can tell
this draft will mostly decrease security.

We can accept as given that it is easy to find collisions for SHA1. However,
a second pre-image attack is way off in the future.

From that we can conclude that for any zone that is now signed using SHA1 and
that does not have a risk of collision attacks (because the zone does not
accept data controlled by third parties), this draft is a clear reduction of
security.

For a site that does have a risk of collision attacks the situation is less
clear. Such a site should move away from using SHA1, but the recommendation 
for validators will still cause an immediate reduction of security.

Looking at the signer part, this is not great either. Moving away from SHA1
requires an algorithm rollover. DNSSEC is already quite fragile, and algorithm
rollovers are worse. So there is a failure risk that is too big to ignore.

This draft requires zones that do not have a collision risk to move to a
different algorithm, at significant risk but with no increase in security.
So that part is also a net negative for security.

So it seems that we are asked to adopt a draft that will mostly reduce 
security, not increase it.

There might be other arguments for adopting the draft, such as Redhat not
validating signatures made with SHA1 anymore. But those arguments are not
mentioned in the draft.

And if some companies from one country want to shoot themselves in the foot,
does the rest of the world have to follow?




Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost

2024-04-29 Thread Philip Homburg
>I also don't think that simple, procedural documents that are straightforwardly-written and uncontentious ought to present a big drain on the resources of the working group. I think if we all tried really hard not to nitpick or to play amateur copy-editors we could probably last-call simple documents quite quickly and move on with our lives.

I don't know anything about GOST, but there is one thing I worry about
when it comes to SHA1.

As far as I know there is no second pre-image attack on SHA1, and there
will not be one in the foreseeable future.

So if we deprecate SHA1 for validators, and assuming validators will follow
this advice, and some platforms already stopped validating SHA1, then there
may be zones that are mostly secure today that become insecure or bogus
when we are done with the draft.

That doesn't seem to be a simple procedural discussion.



Re: [DNSOP] I-D Action: draft-ietf-dnsop-ns-revalidation-06.txt

2024-03-18 Thread Philip Homburg
In your letter dated Mon, 18 Mar 2024 08:01:38 +0100 you wrote:
>On 2024-03-17 20:12 -07, internet-dra...@ietf.org wrote:
>> Internet-Draft draft-ietf-dnsop-ns-revalidation-06.txt is now available. It 
>is
>
>| 7.  Security Considerations
>| [...]
>| In case of non DNSSEC validating
>| resolvers, an attacker controlling a rogue name server for the root
>| has potentially complete control over the entire domain name space
>| and can alter all unsigned parts undetected.
>
>can alter *all* parts undetected.
>
>It's a non-DNSSEC validating resolver, it doesn't care about signed or
>unsigned. Maybe just drop that sentence, it doesn't add much.

A non-DNSSEC-validating resolver may have downstream validators that can
detect changes to signed data. So an attacker that wishes to stay undetected
has to be careful not to modify signed data.

I guess the authors should add some clarifying text here to make clear why
in the case of a non validating resolver the attacker can only alter the
unsigned parts.



Re: [DNSOP] [Ext] About key tags

2024-03-05 Thread Philip Homburg
>Note that RFC 8901 is an IETF consensus document that was produced in the
>DNS Operations working group. So, we can't just dismiss it and propose
>protocol changes without considering effects on that (and other documents).
>As I also noted earlier, its status will likely be upgraded when we revise
>it.

Under the assumption that adding a constraint for unique key tags for the
DNSSEC standard protocols is relatively easy (from a protocol point
of view, getting changes deployed is always a lot harder), we can
take a look at RFC 8901. As you say it likely needs a revision anyhow.

So, for example Section 6.1 of RFC 8901.

For a KSK rollover, the zone owner would generate a new KSK. The zone owner
also signs the DNSKEY RRset. So this is essentially the same as what any
other signer would have to do. No real changes are required except making
sure to select a new KSK with a key tag that does not conflict with
keys already in the DNSKEY RRset.

Then for a ZSK rollover. Each provider has independent signing keys, so a
provider would have to select a new signing key that does not conflict
with current keys in the DNSKEY RRset. Again, this is what any signer would
do in the new model.

Then the provider would submit the new key to the zone owner for inclusion
in the DNSKEY RRset. At this point, if two providers would want to update
their keys at the same time and happen to have new keys with conflicting
key tags, we would have a problem.

However, this can only happen if two ZSK rollovers happen at the same time,
which can be considered bad operational practice and is certainly not
suggested in Section 6.1. Furthermore, Section 6.1 is explicit about the
coordination between providers and the zone owner when it comes to a ZSK
rollover. So it is easy for a zone owner to prevent two simultaneous ZSK
rollovers.

I have to admit, model 2 would be a bit more complex. But not fundamentally
impossible.




Re: [DNSOP] [Ext] Nothing more useful to say About key tags

2024-03-04 Thread Philip Homburg
>Not at all. This would be an incompatible change that breaks existing
>working DNS configurations, for at most a trivial simplification in
>load limiting code many years from now, even assuming people were to
>implement it.

Opinions differ on how much this change will help.

The point I wanted to make is that this change does not lead to issues at
the level of the DNSSEC standard protocols.

Yes, there might be some implementations that need to adjust. That's often
the case with protocol changes. If we cannot make protocol changes any more
out of fear that implementations may need to change then we have
reached the top of ossification.




Re: [DNSOP] [Ext] About key tags

2024-03-04 Thread Philip Homburg
In your letter dated Sat, 2 Mar 2024 16:55:59 -0400 you wrote:
>The core DNSSEC protocol includes multi-signer. RFC 8901 just spells out explicitly how it is covered by the protocol; that's why its status is Informational.
>
>> The first step to conclude is that for the core DNSSEC protocol, requiring
>> unique key tags is doable.
>
>No. There is no core and non-core part of the spec. Support for multiple keys,
> including keytag collisions, simply is part of that protocol.

What I mean is that if we take all of the standards track DNSSEC RFCs and we
add a new RFC that says something to the effect of:
1) A signer MUST NOT sign a DS or DNSKEY RRset if the set has duplicate key
   tags.
2) An authoritative DNS server MUST NOT serve a set of RRSIG records that
   corresponds to a single RRset where the collection of RRSIG records has a
   duplicate key tag.

then as far as I can tell, there is no conflict with currently published
standards track DNSSEC RFCs. 
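The check that both rules ask of a signer or authoritative server amounts to a few lines. A hypothetical sketch, not mandated by any current RFC:

```python
from collections import Counter

def has_duplicate_key_tags(key_tags: list[int]) -> bool:
    """Return True if any key tag appears more than once, whether in a
    DS/DNSKEY RRset (rule 1) or in the RRSIGs for one RRset (rule 2)."""
    return any(count > 1 for count in Counter(key_tags).values())

# A signer would refuse to sign, and an authoritative server would
# refuse to serve, when this returns True.
assert has_duplicate_key_tags([20326, 20326, 38696])
assert not has_duplicate_key_tags([20326, 38696])
```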

In addition, for most signers and authoritative servers it will be easy to
meet those requirements, and many signers are already in line with them.

The only thing that prevents us from publishing such an update is an
informational RFC about multi-signers (or other practices that are not
documented or standardized within the IETF).



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
In your letter dated Fri, 1 Mar 2024 15:42:49 -0500 you wrote:
>Offlist because I don't want to feed the flames, but:
>
>> 2) Operators of validators don't want customer facing errors due resource
>>    limit constraits. So they set them generous enough that it works for
>>    real traffic. Nobody knows what happens during a new attack.
>> 3) Some content providers are quite creative with the way they use DNS.
>>    So the limits need to high enough to accomodate them.
>
>Why do you give operators and content providers a freebie but not signers?

That's not my intent. I think it is more that signers are less visible.
A validator does not see how a zone is signed. A validator only sees
the contents of the zone, not where the keys are located.

So any resource constraints probably don't reflect what signers do, other
than to accommodate whatever shows up as the output.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> If a validator chooses to discard all signatures for which there
> are multiple DNSKEY resource records matching the key tab in the
> RRSIG resource record, there'll be SERVFAILs across the population
> that cares about the data involved.  From past observations, when
> there's a widespread "I can't get to that", it bubbles up to the
> service provider and then take steps to fix it.

I don't think that would fly.

If the major vendors of validating software, together with the big public
resolvers, would come together and announce a flag day after which key tags
would have to be unique or SERVFAIL would be the result, then that would put
a sizable group of people in a very bad position.

If it were that easy, then we would not have this discussion; we could just
publish an update to DNSSEC that requires key tags to be unique.



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> For about the hundredth time, the woy you deal with any of this is
> resource limits, not trying to invent new rules about stuff we
> might have forbidden if we'd thought of it 20 years ago.

There are a number of problems with resource limits:
1) We haven't written it down (in an RFC). So we got to the point that many
   validators got it wrong. A bit of hand waving that implementations should
   do resource limiting doesn't magically make it happen.
2) Operators of validators don't want customer-facing errors due to resource
   limit constraints. So they set them generously enough that they work for
   real traffic. Nobody knows what happens during a new attack.
3) Some content providers are quite creative in the way they use DNS.
   So the limits need to be high enough to accommodate them.
4) Because there are no standards for those limits, we cannot really reason
   about them.
5) It is tricky for researchers, because they first have to figure out how
   popular software works in order to exploit it. And exploiting a tunable
   resource limit doesn't result in a lot of credit.

So from a validator point of view, it is better to move some of those
resource limits into the protocol. Even if the DNSSEC spec only said that you
have to validate with at most two public keys and two signatures per RRset,
that would be a massive improvement over the vagueness in the current specs.
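As a sketch of what such a protocol-level limit buys a validator (the cap values and the function are illustrative, not from any spec):

```python
# Hypothetical protocol-level caps, per the "two keys, two signatures" example.
MAX_SIGS_PER_RRSET = 2
MAX_KEYS_PER_TAG = 2

def verification_attempts(keys_by_tag: dict[int, list[bytes]],
                          sig_tags: list[int]):
    """Enumerate at most MAX_SIGS_PER_RRSET x MAX_KEYS_PER_TAG crypto
    operations, instead of the unbounded signatures-times-keys cross
    product that KeyTrap-style zones exploit."""
    for tag in sig_tags[:MAX_SIGS_PER_RRSET]:
        for key in keys_by_tag.get(tag, [])[:MAX_KEYS_PER_TAG]:
            yield tag, key

# A hostile RRset with 300 colliding keys and 300 signatures now costs
# at most 2 x 2 = 4 verification attempts rather than 90000.
hostile = {7777: [bytes([i % 256]) for i in range(300)]}
assert len(list(verification_attempts(hostile, [7777] * 300))) == 4
```

With a cap written into the spec, both sides can reason about it: signers know what will still validate, and validators know the worst case they must afford.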



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>Remember that the keytags are just a hint to limit the number of keys
>you need to check for each signature. If I have a zone with 300
>signatures per key, it's still going to take a while to check them all
>even with no duplicate tags. It won't be as bad as the quadratic
>keytrap but it'll still be annoying.

If key tags are unique, then a validator can just discard anything that
has multiple signatures with the same key tag.

So to reach 300 signatures on a single RRset, you would need to have
300 keys in the DNSKEY RRset. In that case, we can assume that the
validator will just discard the DNSKEYs. So the validation effort would
be zero. Not a very good attack.
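For reference, the hint itself is cheap to compute: RFC 4034 Appendix B defines the key tag as a checksum-style 16-bit sum over the DNSKEY RDATA. A sketch, with the expected value below hand-computed rather than taken from an official test vector:

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B: sum the RDATA as big-endian 16-bit words,
    fold the carry back in, and keep the low 16 bits. It is only a
    hint, which is why tags can collide."""
    acc = 0
    for i, b in enumerate(rdata):
        acc += b << 8 if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# 0x0101 + 0x0305 = 0x0406 = 1030
assert key_tag(bytes([0x01, 0x01, 0x03, 0x05])) == 1030
```

Because the tag is a plain 16-bit sum rather than a cryptographic hash, nothing stops an attacker from deliberately constructing keys with colliding tags.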




Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> I removed a lot of logic, as it seems dead on.  But...
> 
> >This would allow validators to reject any DS or DNSKEY RR set that has a
> >duplicate key tag.
> 
> "This" refers to barring keys from having duplicate key tags.  My
> knee-jerk response is that validators are already permitted to
> reject anything they want to reject.  (We used to talk about the
> catch-all "local policy" statements in the early specs.)  You don't
> have to bar duplicate key tags to allow validators to dump them,
> validators already have that "right."

The basics of protocol design is that parties that want the protocol to
work follow the protocol. Of course there will be random failures, and in
the case of security protocols, also attackers. 

If we have a protocol where validators are allowed to discard RR sets with
duplicate key tags but we place no restriction on signers, then we have a 
protocol with a high chance of failure even if all parties follow the 
protocol.

So we have essentially two options for a successful protocol:
1) the current one, where validators tolerate key tag collisions;
2) a potential one, where signers ensure that key tag collisions do not
   happen.

If validators violate the protocol then all kinds of things can happen. They
just place themselves outside the protocol and cannot rely on the properties
of the protocol.

At the end of the day, following the protocol is voluntary. But if we want
to be able to reason about the protocol, then we have to assume that all
interested parties try to follow the protocol.

> >Duplicate key tags in RRSIGs is a harder problem
> 
> I'm not clear on what you mean.
> 
> I could have RRSIG generated by the same key (binary-ily speaking,
> not key tag-speaking) that have different, overlapping temporal
> validities.  If you want to draw a malicious use case, I could take
> an RRSIG resource record signed in January with an expiration in
> December for an address record that is changed in March, and replay
> that along with a new signature record, signed in April and valid
> in December.  One would validate and the other not.  But this isn't
> a key tag issue, it's a bad signing process issue.

Indeed. But the question is: if a validator finds both RRSIGs associated
with an RRset, and we have guarantees about the uniqueness of key tags per
public key, can the validator then discard those signatures?

> >But for the simple question, would requiring unique key tags in DNSSEC be
> >doable without significant negative effects, then I think the answer is yes.
> 
> Heh, heh, if you make the problem simpler, then solving it is
> possible.
> 
> Seriously, while I do believe in the need for a coherent DNSKEY
> resource record set, there are some multi-signer proposals that do
> not.  If the key set has to be coherent, then someone can guard
> against two keys being published with the same key tag.  The recovery
> may not be easy as you'd have to determine what key needs to be
> kicked and who does it and where (physically in HSMs or process-wise).
> I have some doubt that key tag collisions can be entirely avoided.

So now we have moved the problem away from the core DNSSEC protocols to the
realm of multi-signer protocols.

The first step is to conclude that for the core DNSSEC protocol, requiring
unique key tags is doable. Even without a lot of effort (other than the
usual work of coordinating changes to the protocol).

Then the question becomes: how hard will it be to adapt multi-signer
protocols to ensure that the effective set of DNSKEYs has unique key tags?

> Even if you could - you still have the probablility that someone
> intentionally concocts a key tag collision.  Not everyone plays by
> the rules, especially when they don't want to.

That is not a problem. If we modify the core DNSSEC protocol and 
direct validators to just discard anything that has duplicate key tags,
then the attack would go nowhere.

> So - to me - it keeps coming back to - a validator has to make
> reasonable choices when it comes to using time/space/cpu to evaluate
> an answer.  No matter whether or not the protocol "bars" duplicate
> key tags and whether or not signers are instructed to avoid such
> duplication.

But the protocol also has to take reasonable measures to limit the amount
of time a validator has to spend on normal (including randomly exceptional)
cases.

For example, without key tags, validators would have to try all keys in
a typical DNSKEY RRset or face a high rate of random failures.

Going a step further, we have to decide where to place complexity. Unique
key tags simplify validator code in many ways, but they increase the
complexity of signers, in particular in multi-signer setups.

So the question is: does requiring unique key tags significantly reduce the
attack surface of a validator?

Are there other benefits of unique key tags (for example in diagnostic
tools) that outweigh the downside of making multi-signer protocols more
complex?


Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
> So really what you're suggesting is that we change the keytag
> algorithm to something that has a lower chance of collisions.
> 
> It's a shame that the design of keytags didn't anticipate a need
> for algorithm agility.

Even if key tags had been MD5-based, that would have been enough for
statistical uniqueness.

But that's water under the bridge, unless we have plans to redesign DS and
RRSIG.
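Assuming key tags behave like uniformly random 16-bit values (an approximation), the birthday bound shows why statistical uniqueness holds for typical DNSKEY RRsets but breaks down at scale:

```python
def tag_collision_probability(n_keys: int, space: int = 2 ** 16) -> float:
    """Birthday bound: chance that at least two of n_keys random
    16-bit key tags collide."""
    p_unique = 1.0
    for i in range(n_keys):
        p_unique *= (space - i) / space
    return 1.0 - p_unique

# A typical DNSKEY RRset of a few keys almost never collides by accident...
assert tag_collision_probability(4) < 0.0001
# ...but across roughly 300 keys, odds are about even, and by 350 keys a
# duplicate tag is more likely than not.
assert tag_collision_probability(350) > 0.5
```

This is why accidental collisions are rare in practice while deliberate ones remain easy to construct.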



Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>First, forbidding key tag collisions is not controversial, the
>trouble is that forbidding them is not feasible and, more
>importantly, does not prevent them from happening.  Validators
>still need to guard themselves.  Forbidding is what I'm objecting
>to - discouraging them, limiting them is fine, but forbidding
>is beyond feasibility.
> 
> 
>Second, directing validators to fail at the first sign of failure
>increases the brittleness of the protocol.  

It has been shown many times that, certainly in a security context,
"be liberal in what you accept" leads to all kinds of problems later on.
It leads to fragile systems, because they have to create security out of
chaos.

If there is more than one way to do something, then it becomes harder to
reason about the system. At the same time, cryptography is brittle by nature:
either you get the details right, or you have a failure. There is very little
room for graceful degradation.

As far as I can tell, there are three places where a duplicate key tag
can show up:
1) In the DS RR set
2) In the DNSKEY RR set
3) In a set of RRSIGs associated with an RR set.

The first two cannot happen by accident and cannot be the result of a temporary
inconsistency: both the DS and the DNSKEY RR sets have to be signed. If we
changed the rules on duplicate key tags, then all signers could be fixed to
never sign a DS or DNSKEY RR set that has a duplicate key tag. That would
prevent them from showing up on the wire.

This would allow validators to reject any DS or DNSKEY RR set that has a
duplicate key tag.

Note that we expect both signers and validators to do a lot of complicated
things that have to be exactly right, otherwise DNSSEC validation fails.
Requiring key tags to be unique is not a particularly hard requirement on a
signer.

Duplicate key tags in RRSIGs are a harder problem: RRSIGs do not come in sets.
However, if every DNSKEY in an RR set has a unique key tag, then there is no
reason for an authoritative server to include RRSIGs with duplicate key tags in
an answer, which in turn allows recursors to reject such answers when received,
which in turn allows validators to reject such answers when validating.

Now all of this is at the DNSSEC protocol level. At this level guaranteeing
unique key tags is doable and quite easy.

The hard part starts when (in a multi signer setup) a signer wants to insert
a DNSKEY into an RR set where that would lead to a duplicate key tag. Then
the signer has to back off and come back with a different key.

The trade-off is that key tag collisions are only a small part of all
possible DoS attacks on a validator. If we require all signers to avoid
duplicate key tags only to get rid of one possible DoS, while many other
attacks continue to exist, then it may not be worth the effort.

But for the simple question, would requiring unique key tags in DNSSEC be
doable without significant negative effects, then I think the answer is yes.
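The key tag computation itself is the RFC 4034 Appendix B algorithm. A small sketch of how a strict signer or validator could detect the duplicate tags discussed above (representing the RRset as a list of DNSKEY RDATA byte strings is my own simplification):

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag: a ones-complement-style checksum
    over the DNSKEY RDATA (flags | protocol | algorithm | public key).
    Not valid for the obsolete algorithm 1, which uses a different rule."""
    acc = 0
    for i, byte in enumerate(rdata):
        acc += (byte << 8) if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def has_duplicate_tags(dnskey_rdatas: list[bytes]) -> bool:
    """True if two keys in the RRset share a tag -- what a validator
    could reject, and a signer avoid, under the proposed rule."""
    tags = [key_tag(r) for r in dnskey_rdatas]
    return len(tags) != len(set(tags))
```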




Re: [DNSOP] [Ext] About key tags

2024-03-01 Thread Philip Homburg
>The full key is not there. There is only a key tag. Are you proposing a wire
>format change to DNSSEC that puts the full key there? That would be hard and
>slow to deploy and use up valuable bytes of the limited +/- 1400 bytes.
>
>> Wouldn't that limit the risk of collision?
>
>At a price, yes.

Technically only a SHA-2 hash of the key would need to be there. If somebody
can create a SHA-2 hash collision then the world has bigger problems than
a DoS on DNSSEC validation.

However, changing RRSIG is probably not practical unless there are other
reasons to change it.
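For reference, this is essentially what DS records already do: RFC 4509 defines the SHA-256 DS digest as a hash over the owner name in canonical wire format concatenated with the DNSKEY RDATA. A minimal sketch (the sample wire-format name and RDATA bytes are illustrative only):

```python
import hashlib

def ds_sha256_digest(owner_name_wire: bytes, dnskey_rdata: bytes) -> bytes:
    """RFC 4509: DS digest = SHA-256(canonical owner name | DNSKEY RDATA)."""
    return hashlib.sha256(owner_name_wire + dnskey_rdata).digest()

# "example." in wire format; a real DNSKEY RDATA would follow the same pattern.
digest = ds_sha256_digest(b"\x07example\x00", b"\x01\x01\x03\x08" + b"\x00" * 64)
```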



Re: [DNSOP] [Ext] About key tags

2024-02-15 Thread Philip Homburg
> Hmmm, key tags were intended to simplify computation, somehow it
> seems that they've gone the other way.

It seems that key tags set a trap for signers. 

A signer needs a way to identify keys to do key management. This mechanism
needs to be robust such that the signer cannot get confused about which key
is which.

Where it went wrong is that signers started using the key tag to identify
keys. And somehow this practice continued even though we know that
the chance of collision is high.

The obvious thing to do is to publish a document on how signers should 
identify keys. And then try to fix all signers to not use key tags anymore.

If we look at validators, the design of DNSSEC does not include a systematic
analysis of denial-of-service potential, nor measures to avoid it. That is
mostly absent, often wrong, and basically left to the implementor.

So it should not come as a surprise that key tags (as currently specified)
do not really help to avoid denial-of-service attacks.



Re: [DNSOP] Encourage by the operator ... Re: [Ext] Re: General comment about downgrades vs. setting expectations in protocol definitions

2024-02-09 Thread Philip Homburg
> One of the misconceptions in DNSSEC is that the zone administrator
> is in control of the situation, dictating the state of signing,
> the cryptography in use, and so on.  DNSSEC is for the benefit of
> the querier, not the responder.  A zone administrator can't force
> a querier to validate the results, it can't dictate what cryptographic
> library support the receiver must have.  

I don't see how this statement is relevant.

The discussion was about possible downgrade attacks if the querier would
fallback to NS/DS.

Given that we are talking about downgrade attacks, there is already the
implicit assumption that the querier is interested in the integrity of the
reply. The zone administrator can assist queriers in making an
informed decision, if the protocol allows extra information to be
passed.

> The zone administrator is out there in plain
> sight, anyone can see the data, anyone can see activity.  One can't
> (always) identify the receiver, that's what the privacy-enhancing
> transports support.

With TLS, a lot of classical assumptions can change, with emphasis on *can*.
For example, clients could use TLS to authenticate themselves, or the
transferred data could be kept confidential. People could take DNS in
unexpected directions.



Re: [DNSOP] [Ext] Re: General comment about downgrades vs. setting expectations in protocol definitions

2024-02-08 Thread Philip Homburg
>Agreed, I don't think that the protocol should prescribe what
>to do in case of "operational error". Differentiating an
>"operational error" from an actual malicious interference is
>very likely going to be a slippery slope.  That being said, I
>think it will be useful for adoption that resolvers provide a
>feature to use DELEG and fallback to NS when things are not
>correct. 

One thing that is sub-optimal in the current design is how alias mode works.

When alias-mode is used and the target of the alias cannot be resolved then
the alias-mode DELEG provides zero information.

So one path forward is to have extra information in alias-mode. For example
whether fallback to NS/DS is encouraged by the operator of the zone.

If DELEG is mainly used to signal that a secure transport, such as DoT, DoH, or
DoQ, is available then falling back to NS/DS might be preferred (by the zone
operator) over failure.

On the other hand, if there is no DS record, but the service mode DELEG does
have the equivalent of DS, then fallback to just NS is not desired.

So maybe alias-mode DELEG should be allowed to have its own set of key/value
pairs focussed on fallback issues.



Re: [DNSOP] DELEG and parent only resolution

2024-02-01 Thread Philip Homburg
In your letter dated Thu, 01 Feb 2024 10:17:33 +0100 (CET) you wrote:
>Stupid question time:
>
>> The target of a DELEG alias cannot be stored in the child
>> zone. It would not resolve if you do.
>
>Doesn't this mean that we can never get to an environment where
>there only exists DELEG records and no NS records, and still have
>a working DNS?

DELEG records can contain IP addresses so they can replace NS+glue.



Re: [DNSOP] Documenting DELEG design trade-offs

2024-01-31 Thread Philip Homburg
>When DNSSEC came out, I admit I was kind of surprised to see how long
>it took to be used.  I thought it would be adopted faster.  There was
>insufficient motivation when the system worked well enough and the
>problem being addressed was, to many people, largely theoretical.
>
>When DoH was proposed I admit I was kind of surprised to see how many
>implementations rapidly came out. I thought it would take
>longer. Developers sure were motivated though.  It was addressing
>something they really wanted.

DNSSEC has a lot of moving parts that needed to be in place compared to
DoH.

DoH has the benefit that HTTPS is a very popular protocol on the internet.
Almost every HTTPS client can connect to every HTTPS server in the world;
there are exceptions, but they are rare. In addition, deploying protocols on
top of HTTPS is very common, and there is lots of operational experience with that.

In contrast, with DNSSEC:
1) You can't really sign your zones unless all of your parent zones are
   signed. So at least the root and TLDs need to be signed. 
2) If you use multiple TLDs then you want all or at least most of them
   signed.
3) The registry of a TLD has to accept DS records. That's separate from
   signing.
4) Your registrar needs to accept DS records and be able to send them to
   the registries for your TLDs.
5) There need to be enough validating resolvers, otherwise signing is rather
   pointless.
6) For validating resolvers, small mistakes in DNSSEC signing have significant
   consequences. There is no fallback. Which also makes validating less
   popular.
7) DNSSEC requires significantly larger packet sizes, which tends to cause
   operational issues if that leads to fragmentation.
8) Still mostly unsolved is automatically updating DS records during a key
   rollover. Very few registries support CDS/CDNSKEY.

There are probably some lessons there for DELEG:
1) what needs to be in place before we can use DELEG
2) what is the effect of failure
   



[DNSOP] Documenting DELEG design trade-offs

2024-01-31 Thread Philip Homburg
Something I wonder about, certainly after the interim, is how we discuss
with the wider DNS community the trade-offs that are available in the design
of DELEG, such that we get good feedback about priorities.

For example, the current design uses two constraints:
1) no creative (ab)use of DS records
2) no extra queries.

The net effect in the current design is that current auth. servers cannot
serve DELEG because they don't know they have to include DELEG records in
referrals. So a zone can only start delegating using DELEG if all its auth.
servers (and any other party that might answer queries) have been upgraded.

In a similar way, a validating resolver that lives behind other resolvers
or forwarders cannot support DELEG until all upstream resolvers support DELEG
(the same issue: DELEG records have to be treated specially because they don't
live in the child zone).

For validators on roaming devices such as laptops it can take essentially
forever until all upstream resolvers support DELEG.

Alternatives are:
1) just put it in DS. Certainly if we expect that alias mode will be most
common.
2) Store DELEG somewhere else. For example, for child.parent store the
DELEG record in child._child.parent. This may require an extra query, which
may or may not be optimized in some way.

Note I'm not saying that the current design is wrong. It is certainly the most
elegant way of doing things from a protocol perspective.

But one question is how do we deploy DELEG. And then some of the alternatives
might become more attractive.

So at this stage, when we talk about DELEG, we should talk about alternatives
as well and collect feedback on what operators consider more important.



Re: [DNSOP] DELEG and parent only resolution

2024-01-31 Thread Philip Homburg
>Let me just point out a key distinction: the typical use case
>of DELEG should be kind-of child centric.  Most people will only
>use a simple alias-mode DELEG at the parent, pointing somewhere
>into their DNS hoster's namespace.  That's practically important,
>because all the information can then be managed by that entity
>without touching the parent (e.g. on KSK rollovers).

To avoid confusion, we should avoid calling DELEG in alias mode
'child centric'.

The target of a DELEG alias cannot be stored in the child zone. It would not
resolve if you do. Resolvers cannot judge whether the alias at the parent
seems sensible or not. So if the parent makes a mistake and points the
alias to a random other DNS provider then resolvers will just blindly
follow that link even if they have the child zone cached already.

Personally, I think that is fine. A parent delegates name space to
a child; the parent can also take it back and point it somewhere else.

However, for people who feel strongly about child-centric operation, something
else might be needed.



Re: [DNSOP] New Version Notification for draft-homburg-dnsop-igadp-00.txt

2023-11-14 Thread Philip Homburg
>An important thing we really should define is safeguards for
>loop prevention (eg, an EDNS0 hop-count limit or something like
>rfc8586 which defines CDN-Loop). Doing this without Loop Prevention
>is dangerous, at least based on experience with similar patterns
>in the CDN world.  Even if we don't define the broader specification,
>I'd be very interested in seeing standardization of loop prevention
>in both recursive and authoritative forwarding setups.

Yes, it is a good idea to try to do something about loop prevention. It is not
clear to me how to do that in a way that fits DNS. Just putting in a list
of hostnames feels wrong, but maybe it is a good starting point.

>There's lots of work that would be needed on this draft (I'm
>not sure that the way TTLs are handled is the only way we might
>want to define, as there may be other approaches).  Similarly,
>it may make sense to allow ECS under certain circumstances (for
>example, if DoT or DoQ is used from the forwarding proxy to the
>origin authoritative).

Returning anything other than the original TTL may cause a lot of confusion.
But there may be many ways that a cache can be kept up-to-date. We have to
see which ones are expected to be common enough to be worth documenting.

I don't really know what ECS looks like from an authoritative point of view.
How is that kind of data distributed from a primary to secondaries?



[DNSOP] New Version Notification for draft-homburg-dnsop-igadp-00.txt

2023-10-18 Thread Philip Homburg
Based on some feedback we received, I created a draft that describes what to
do if you want to build a proxy that acts as an authoritative server in an
anycast setup. The draft just describes the basics, if there is interest we
can add the details.

Name: draft-homburg-dnsop-igadp
Revision: 00
Title:Implementation Guidelines for Authoritative DNS Proxies
Date: 2023-10-17
Group:Individual Submission
Pages:5
URL:  https://www.ietf.org/archive/id/draft-homburg-dnsop-igadp-00.txt
Status:   https://datatracker.ietf.org/doc/draft-homburg-dnsop-igadp/
HTML: https://www.ietf.org/archive/id/draft-homburg-dnsop-igadp-00.html
HTMLized: https://datatracker.ietf.org/doc/html/draft-homburg-dnsop-igadp


Abstract:

   In some situations it can be attractive to have an authoritative DNS
   server that does not have a local copy of the zone or zones that it
   serves.  In particular in anycast operations, it is sensible to have
   a great geographical and topological diversity.  However, sometimes
   the expected use of a particular site does not warrant the cost of
   keeping local copies of the zones.  This can be the case if a zone is
   very large or if the anycast cluster serves many zones from which
   only a few are expected to receive significant traffic.  In these
   cases it can be useful to have a proxy serve some or all of the
   zones.  The proxy would not have a local copy of the zones it serves,
   instead it forwards requests to another server that is authoritative
   for the zone.  The proxy may have a cache.  This document describes
   the details of such proxies.



Re: [DNSOP] [v6ops] WG call for adoption: draft-momoka-v6ops-ipv6-only-resolver-01

2023-07-07 Thread Philip Homburg
> I agree with you that 464XLAT is a better solution and the world
> should use it as much as possible.
> 
> But for those already deployed DNS64 and can't move to 464XLAT soon
> (possibly due to lack of CLAT support, e.g. in some residential
> gateways), wouldn't Momoka's draft help?  If Momoka adds statements
> in a new version telling people to consider 464XLAT first, will it
> be acceptable to you?  Thanks.

NAT64 without 464xlat is a rather broken way of providing access to the IPv4
internet because it cannot deal with IPv4 literals. 

So instead of creating documents for every possible protocol that
uses IPv4 literals, why not create one document that describes how to 
deal with IPv4 literals in existing protocols in the context of NAT64?




[DNSOP] DNS54 Was: Re: [v6ops] WG call for adoption: draft-momoka-v6ops-ipv6-only-resolver-01

2023-07-06 Thread Philip Homburg
>I believe Mark is referring to a validating stub (not a full
>service resolver) behind a NAT64/DNS64. If such a stub uses the
>DNS64 as its upstream resolver, it will encounter a variety of
>potential failures with responses that can't be validated because
>the DNS64 passed them on without checking (CD=1), and without
>retrying other available authoritative servers for the zone (in
>case the response was spoofed, or in case some of the servers
>gave broken responses while others were working).  (Presumably
>the validating stub is aware of DNS64 translated responses and
>the NAT64 prefix via RFC7050 support, and can thus authenticate
>the original response).  Shumon.

Just a random thought regarding DNS64. A recursive resolver that does DNS64
synthesizes AAAA records based on A records if no AAAA records are available.

A validating recursive resolver processing data in a secure zone knows that
AAAA records are not available based on the type bitmap in NSEC or NSEC3
records.

So my thought is: what if the DNS64 part returns that NSEC or NSEC3 record
along with the synthesized AAAA record?

In that case, a validating stub could notice that it cannot validate the
AAAA RRset and that there is a valid NSEC/NSEC3 RRset that proves no AAAA
records exist. The stub resolver can then conclude a NODATA response for an
AAAA query.

Obviously, a validating stub resolver may find other ways to trigger an
NSEC/NSEC3 response if the DNS64 part doesn't include those records.
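As a concrete sketch of the synthesis step being discussed, using the RFC 6052 well-known prefix 64:ff9b::/96 and the common /96 embedding (real DNS64 deployments may use a network-specific prefix, and RFC 6052 also defines other prefix lengths with different embeddings):

```python
import ipaddress

def synthesize_aaaa(a_record: str, nat64_prefix: str = "64:ff9b::/96") -> str:
    """DNS64-style AAAA synthesis for the /96 case: embed the IPv4
    address in the low 32 bits of the NAT64 prefix (RFC 6052)."""
    net = ipaddress.IPv6Network(nat64_prefix)
    v4 = int(ipaddress.IPv4Address(a_record))
    return str(ipaddress.IPv6Address(int(net.network_address) | v4))
```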



Re: [DNSOP] [v6ops] WG call for adoption: draft-momoka-v6ops-ipv6-only-resolver-01

2023-07-06 Thread Philip Homburg
> Hi all, The goal to
> improve DNSSEC adoption is good.  The goal to improve IPv6 adoptions
> is good too.  It looks like here goals contradict (for technical
> reasons).  But if you would pay attention that DNS64 is already
> massively adopted by *ALL* carriers, Then the harm for DNSSEC is
> already done and non-reversible (this battle was lost many years
> ago).  Hence, please do not harm additionally for IPv6 adoption.
> Please, adopt Momoka's draft at least somewhere (I am not sure
> v6ops or dnsop).  

The draft is a bit confusing to me because it discusses DNS64, though it
seems that DNS64 is mostly irrelevant to this draft.

The draft discusses how a recursive resolver could operate on a link that
has IPv6 as native transport and where IPv4 access is provided using NAT64.

DNS64 can be provided downstream to users of the recursive resolver, but
that seems not relevant. DNS64 can also be left out, with no change to the
upstream requirements.

I think the draft would improve if references to DNS64 are removed as much
as possible.

The thing I find missing in this draft is that the desired functionality
comes for free if the host implements 464xlat. Using 464xlat everywhere
has the advantage that it is compatible with DNSSEC and that DNS64 is not
needed.

In the context of this draft another advantage of using 464xlat is that the
recursive resolver can remain completely unaware of the NAT64 translation
prefix. The draft describes that the translation prefix is typically
configured statically because of DNS64. However, in new installations it
would be best to avoid DNS64, so this would require recursive resolvers
to dynamically find the current translation prefix.

In short, in my opinion a recursive resolver behind NAT64 should use 464xlat
and should not try to implement address translation directly.
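A sketch of how a resolver could learn the translation prefix dynamically, per the RFC 7050 heuristic: resolve ipv4only.arpa for AAAA and, in the common /96 case, check whether the low 32 bits are one of the well-known IPv4 addresses (192.0.0.170/171). The actual RFC also covers other prefix lengths, which this sketch ignores:

```python
import ipaddress

# RFC 7050 well-known addresses behind ipv4only.arpa
WKA = {int(ipaddress.IPv4Address("192.0.0.170")),
       int(ipaddress.IPv4Address("192.0.0.171"))}

def nat64_prefix_from_aaaa(aaaa: str):
    """If the low 32 bits of a synthesized ipv4only.arpa answer are a
    well-known IPv4 address, the high 96 bits are the NAT64 prefix
    (the /96 case only). Returns None when no prefix can be derived."""
    v6 = int(ipaddress.IPv6Address(aaaa))
    if v6 & 0xFFFFFFFF in WKA:
        return ipaddress.IPv6Network((v6 >> 32 << 32, 96))
    return None
```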



Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-16 Thread Philip Homburg
In your letter dated 13 Jan 2023 12:02:18 -0500 you wrote:
>This isn't exactly the same thing, but over in e-mail land we see lots of 
>small systems with silly configurations that are so locked down that they 
>don't work.  Someone said they are "more secure."  Same idea.  Well, it 
>works for me, everyone else will just have to change all their software to 
>match my super-secure requirements.  In practice we ignore what they say 
>and do something reasonable instead.

I think the difference is that for the local DNS proxy this would happen on
a single system. I'm not aware of this being an issue in this context.

For example, a TLS client could require the TLS library to only make TLS
connections using TLS 1.3 and the library could silently allow 1.2 as well.
I'm not aware of any application being that silly.

Obviously, the world is not perfect. But this issue does not seem widespread
among components of a single host system.

>> Though there is also the desire to be feature complete. Today, Firefox
>> allows the user to select DoH to a specific upstream. So the draft would
>> not be feature complete if that behavior cannot be specified.
>
>I can see the diagnostic angle, but if that's the main benefit I'd think 
>there'd be easier ways to do it.  Feature completeness for the sake of 
>feature completeness is unpersuasive.  I mean, my DNS server doesn't do 
>the additional processing for L32 records and I don't think anyone cares.

There is a difference between an experimental feature that didn't get
much traction and a feature implemented by major web browsers.




Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-13 Thread Philip Homburg
In your letter dated 12 Jan 2023 17:25:25 -0500 you wrote:
>> Who benefits from a fake implementation?
>
>Some application demands, I dunno, DoQUIC or something, and refuses to run 
>otherwise.  So I install a resolver library that makes the complaints go 
>away.

I don't understand. An application requires DoQ even though the local
DNS proxy on a system doesn't support it? How is that application supposed
to work then? Why would an application ship with a configuration that
doesn't work? Who benefits?

>I'm having trouble coming up with plausible scenarios for this thing 
>beyond a handful of us uber-nerds.

For one, if no application is going to use it, why are we discussing all the
ways applications would break?

In my expectation more detailed features, such as detailed listing of
protocols, explicit selection of upstreams, etc. are very useful for
diagnostics.

Though there is also the desire to be feature complete. Today, Firefox
allows the user to select DoH to a specific upstream. So the draft would
not be feature complete if that behavior cannot be specified.




Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-12 Thread Philip Homburg
>It occurs to me that it introduces knobs that might not work, since the 
>easiest way to implement it is to accept the EDNS0 options, ignore them, 
>and do whatever you were doing anyway.  (This isn't a new issue; see RFC 
>8689, the SMTP require TLS option, which I've implemented the same way.)

I don't understand the threat model here. The local DNS proxy that implements
this draft is on the same system as the application. The proxy either
comes with the operating system or is explicitly installed by the
user of the host.

Who benefits from a fake implementation? 

It seems weird to argue against a draft with the argument that it may 
intentionally be implemented wrong. 

>Since we all seem to agree that there are already plenty of ways to tell 
>devices what kind of upstream cache to use, what's the benefit of adding 
>another one that as likely as not wouldn't work?

Yes, there are ways for the network to inform a device. I don't see any options
for a local proxy to inform a stub resolver, or for the local proxy to
know what the application really wants or needs.



Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-12 Thread Philip Homburg
> Or DNSSEC is is use.

Fortunately, DNSSEC has the CD flag. I believe that at least one NTP client
retries with the CD flag if DNS resolution fails.



Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-11 Thread Philip Homburg
In your letter dated Tue, 10 Jan 2023 11:33:57 -0500 (EST) you wrote:
>>However, such a setup leaves the application with no control over
>>which transport the proxy uses.
>
>Why should the application have control over this? 

The following is just a thought, I didn't implement it.

With local DNS proxies that use encrypted transports, there can be a bit of
a bootstrap problem if a system boots without any sense of the current time.

What might happen is that an NTP client tries to look up pool.ntp.org. If
DNS resolution goes through a proxy that tries to use an encrypted transport,
then the proxy may fail because the time is wrong. The NTP client doesn't
get any answers so it can't set the clock and the system doesn't boot.

In that case, if the NTP client requests DNS resolution over Do53 for
its initial lookup of pool.ntp.org, then the proxy can return a DNS reply
and the system can boot normally.




Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-11 Thread Philip Homburg
In your letter dated Tue, 10 Jan 2023 17:27:12 -0500 (EST) you wrote:
>if applications think it is THAT important, they shouldn't be trusting
>the EDNS options of a stub proxy, which also might go through an OS
>proxy on top. It also cannot trust or know whether the proxy's upstream
>forwardering is using encryption either. So it still has to do it itself
>if it wants to be sure.

That is an interesting point.

Obviously, this is not an issue if the application specifies an encrypted
transport to a public DNS resolver.

If the application just specifies the need for an encrypted transport but
leaves it to the proxy to pick upstreams, then this may become an issue.

If this is an issue that needs fixing, then obviously we can extend the
description of the options to also apply between a proxy and the
proxy's upstream resolvers.

>And a draft that specifies a proxy won't change browsers to not do DoH
>themselves.

Indeed. However, at the moment there is no sensible alternative. This
draft provides a way forward.

>> The draft does not require a cache, but obviously adding a cache is
>> encouraged.
>
>Now you are not talking about stubs anymore.

I'm talking about a cache in the proxy.

>If applications are willing to trust the local system/proxy, then they
>can't get a guarantee about encryption. What if the proxy is stuck on
>a network that blocks all DNS except the DHCP obtained ones, and those
>you can talk encrypted to, but what does it mean to talk encrypted DNS
>to StarBucks ?

Talking encrypted DNS to StarBucks means that other customers on the same
wifi cannot find out what your DNS requests are.

If the application specifies a public DNS resolver and access is blocked,
then the user will get some kind of error.

In theory a local network can also block port 443. But usually we don't
worry about that.

>Yes, I agree. But it seems like a sailed ship to me. Similar to how I
>don't see why applications/users should follow the ADD proposals that
>try to keep you using the local ISP nameserver instead of your own
>trusted DoH server.

I think this draft is compatible with the ADD proposals. It is just that
the proxy would do ADD, and applications specify whether they really
need encrypted transports or can live with best effort.

>The big problem I see is that the application wants end to end privacy
>on the DNS queries (or at least end to a large pool to hide in) where
>as EDNS is a hop by hop signaling mechanism. You cannot know what
>happens when the local proxy sends the query forward. Whether that step
>is using encryption is only one step of the chain to keep the DNS query
>private.

We can send this option on more hops if we think it solves an issue.



Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-11 Thread Philip Homburg
In your letter dated 10 Jan 2023 16:25:14 -0500 you wrote:
>I'm with Paul here. If you don't like the way my resolver works, use
>another one.
>
>Experience also tells us that if you give users knobs like this, they
>will use them even when (especially when) they have no idea what they
>are doing.  "Someone said DoH pointing to this site in Russia is
>super secure!"

I get the impression that you think this draft introduces knobs that don't
exist at the moment.

At the moment it is only a few clicks in Firefox to configure a custom
resolver.

On most operating systems with a GUI (including phones), it is only a few
clicks to configure a custom DNS resolver. And many systems allow direct
editing of /etc/resolv.conf to specify DNS resolvers.

There have been cases in countries with censorship where people were teaching
each other how to select a public resolver to bypass simple DNS-based
censorship techniques.

I don't understand how creating a protocol where such a policy can be expressed
makes a big difference when users are already routinely selecting DNS
resolvers by hand.



Re: [DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-10 Thread Philip Homburg
In your letter dated Tue, 10 Jan 2023 11:33:57 -0500 (EST) you wrote:
>Why should the application have control over this? If you want to give
>control to the application, what should they control? What if two
>applications disagree? What if they look up the same thing, but the
>first application was okay with in the clear and the second was not.
>
>Will you use any proxy/cache or redo the request over secure transport?
>(assuming if you bother with a proxy, you might as well give the proxy
>a cache)
>
>To me, this seems more like an OS setting, and not an application
>setting.

Why should the application have control? The goal is to reduce the
complexity of the stub resolver that is linked with the application,
without limiting what the application can do.

Should applications control this by default? No. But in my opinion,
it is better if the user can control this per application (in addition
to system-wide defaults) than that we force applications that do want
to have this kind of control work around what the system provides.

If the first application is okay with sending a request in clear text and
attempts to set up an encrypted transport fail, then the request will
be sent in clear text. If the second request then requires an authenticated
transport and such a transport is not available, the second request
may fail.

This behavior is quite similar to what would happen if each application
linked with its own stub resolver and decided locally whether to
set up an encrypted transport or not. On current systems, a web browser
may use DoH where other applications use Do53.

The draft does not require a cache, but obviously adding a cache is
encouraged. The draft specifies that the cache is partitioned per transport
instance to avoid confusion on multi-homed devices, and to avoid the
possibility that a request over an unauthenticated transport pollutes
answers to requests that require authentication.
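The partitioning described above can be sketched as follows. This is an illustrative model only (the class and field names are invented, not taken from the draft): the cache key includes the transport instance, so an answer fetched over one transport can never satisfy a lookup bound to another.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TransportInstance:
    protocol: str       # e.g. "Do53", "DoT", "DoH"
    upstream: str       # upstream resolver address or URL
    authenticated: bool

@dataclass
class PartitionedCache:
    _store: dict = field(default_factory=dict)

    def put(self, transport: TransportInstance, qname: str, qtype: str, answer):
        # The transport instance is part of the cache key.
        self._store[(transport, qname.lower(), qtype)] = answer

    def get(self, transport: TransportInstance, qname: str, qtype: str):
        # Only answers obtained over this exact transport instance are visible.
        return self._store.get((transport, qname.lower(), qtype))

do53 = TransportInstance("Do53", "192.0.2.1", authenticated=False)
doh = TransportInstance("DoH", "https://resolver.example/dns-query", authenticated=True)

cache = PartitionedCache()
cache.put(do53, "example.com", "A", "192.0.2.80")
# A request that requires the authenticated transport misses the cache:
assert cache.get(doh, "example.com", "A") is None
```

The same partitioning also keeps answers from different upstreams apart on multi-homed devices, since each upstream is a distinct transport instance.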

The goal is to make a local proxy safe for applications that do have
requirements with respect to privacy. Currently, applications cannot
assume anything about how a local proxy operates, which encourages
applications to use only their own stub resolvers that set up encrypted
transports, potentially ignoring any system-wide settings.



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] I-D Action: draft-homburg-dnsop-codcp-00.txt

2023-01-10 Thread Philip Homburg



--- Forwarded Message

Subject: I-D Action: draft-homburg-dnsop-codcp-00.txt

A New Internet-Draft is available from the on-line Internet-Drafts 
directories.


 Title   : Control Options For DNS Client Proxies
 Author  : Philip Homburg
 Filename: draft-homburg-dnsop-codcp-00.txt
 Pages   : 20
 Date    : 2023-01-09

Abstract:
The introduction of many new transport protocols for DNS in recent
years (DoT, DoH, DoQ) significantly increases the complexity of DNS
stub resolvers that want to support these protocols.  A practical way
forward is to have a DNS client proxy in the host operating system.
This allows applications to communicate using Do53 and still get the
privacy benefit from using more secure protocols over the internet.
However, such a setup leaves the application with no control over
which transport the proxy uses.  This document introduces EDNS(0)
options that allow a stub resolver to request certain transport and
allow the proxy to report capabilities and actual transports that are
available.


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-homburg-dnsop-codcp/

There is also an HTML version available at:
https://www.ietf.org/archive/id/draft-homburg-dnsop-codcp-00.html


Internet-Drafts are also available by rsync at 
rsync.ietf.org::internet-drafts


___
I-D-Announce mailing list
i-d-annou...@ietf.org
https://www.ietf.org/mailman/listinfo/i-d-announce
Internet-Draft directories: http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt


--- End of Forwarded Message

Based on feedback I redesigned the Proxy Control Option. It is now a 
collection of TLV sub-options. In addition, the flags I used to specify
DNS transports are replaced by pairs of transport protocol identifier and
priority. This makes it possible to specify a preference among protocols
but also among upstreams.
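The general shape of such an encoding can be sketched as below. To be clear, the type codes, field widths, and transport code points here are invented for illustration; they are not the values defined in draft-homburg-dnsop-codcp.

```python
import struct

# Hypothetical transport code points (not the draft's registry values).
TRANSPORT_CODES = {"Do53": 1, "DoT": 2, "DoH": 3, "DoQ": 4}

def encode_transport_suboption(pairs):
    """Encode [(transport_name, priority), ...] as one TLV sub-option:
    a 2-byte type, 2-byte length, then one (code, priority) byte pair
    per requested transport. Lower priority = more preferred."""
    value = b"".join(struct.pack("!BB", TRANSPORT_CODES[t], prio)
                     for t, prio in pairs)
    suboption_type = 1  # hypothetical "requested transports" type code
    return struct.pack("!HH", suboption_type, len(value)) + value

blob = encode_transport_suboption([("DoT", 10), ("DoH", 20), ("Do53", 200)])
# 4-byte TLV header plus one 2-byte pair per transport:
assert len(blob) == 4 + 3 * 2
```

Because each transport carries its own priority byte, the same structure can express a preference among protocols as well as among multiple upstreams of the same protocol.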

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] draft-homburg-add-codcp potential new work for WG?

2022-10-19 Thread Philip Homburg
>That aim doesn't seem consistent with the statement that the
>proxy won't be trusted with DNSSEC validation.  That way you
>still need a rather complex DNS code, ideally in a library.
>And you'll need to query to stub for extra records to form the
>whole chain, so that you even can validate.  Overall I don't
>advise splitting DNSSEC validation away from the other stub work
>- cache in particular.  Also because of the mechanisms that you
>want to happen in case validation fails.

It seems to me that there are basically two cases:
1) The application does not depend on the security of the lookup.
   For example, a  lookup that is used in a TLS connection.
   In this case, the proxy could do DNSSEC validation or leave it to
   the upstream resolver.
2) The DNS lookup is security sensitive, for example a DANE lookup. In
   my opinion, the application should do local DNSSEC validation and not
   trust either the proxy or the upstream resolver. If that requires
   the application to have a cache as well, then that is just part of the
   implementation of local DNSSEC validation. My experience with the
   getdns library, is that local DNSSEC validation can work quite well 
   without a local cache.

In my opinion, the complexity of local DNSSEC validation comes from application
requirements. This complexity will exist with or without a local proxy.

If consensus among application/library developers is that DANE lookups
can rely on DNSSEC validation done by a local proxy, then obviously, we can
change the wording in the draft.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] draft-homburg-add-codcp potential new work for WG?

2022-10-19 Thread Philip Homburg
>OK.  I suppose I'm stuck in the model of (at least) machine-wide
>policies, thinking that it would be really messy if each app
>chooses properties of their DNS separately.  (Which sounds more
>like a job for a library API anyway.)

The goal is to move the implementation of the various DNS upstreams
(DoT, DoH, DoQ, etc) to the proxy while keeping the stub resolver
in the application in control.

The division of labour between the application and the stub resolver is
outside the scope of the draft. 

The draft focusses on mechanism. It allows the stub resolver to default to
the system setting, and for example only require an encrypted transport.

Or the application can take full control and specify exactly which upstreams
should be used.

I don't think every application should choose properties of their DNS.
However, if we don't provide the mechanism, then people will just extend 
stub resolvers to give applications more control. And then we end up
with potentially many different implementations in applications, which
seems worse to me.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] draft-homburg-add-codcp potential new work for WG?

2022-10-19 Thread Philip Homburg
>  The DNSOP WG chairs welcome feedback on the draft
>  draft-homburg-add-codcp, Control Options For DNS Client Proxies
>  ([1]https://datatracker.ietf.org/doc/draft-homburg-add-codcp/).
> 
>I find it a bit weird for a client to *choose* how the proxy/resolver
>might connect to upstream, even choosing an IP address set.
>And for each request separately.

The goal of the draft is to give a stub resolver the same control over a local
proxy as the stub resolver has when it connects to upstream resolvers
directly.

In most cases, stub resolvers just use systems defaults (usually in
/etc/resolv.conf). But in quite a few cases, applications want to deviate
from that. For example, Firefox allows the user to specify which 
DoH provider is used.

To keep the system simple, we opted to include the proxy control option in
every request. We assume that a connection to localhost is high speed and
does not have MTU issues.

The alternative would be to have a stateful session between the stub
resolver and the local proxy. That deviates quite a bit from how
stub resolvers work today.

>Moreover, I fail to understand motivation for the caching part
>- tagging by properties of transport.  If an answer is cached,
>what are the privacy concerns?  No further connection to upstream
>is made for that request anyway.

The assumption is that in general, different upstream resolvers may give
different answers. The obvious case is if one upstream goes out directly
to the internet and another goes through a company VPN.

To avoid confusion, the cache should keep those answers separate.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] New Version Notification for draft-rebs-dnsop-svcb-dane-00.txt

2021-12-12 Thread Philip Homburg
>> There is something I don't understand about this draft.
>
>The main thing to understand is that complex applications like browsers
>allow data retrieved from one endpoint to script interaction with a
>*different* endpoint, and possibly see the retrieved content, subject to
>various CORS (Cross-origin resource sharing) controls.

Indeed this is subject to CORS. Nothing new here. Any browser needs to get
this right.

>If the client uses the same (client) certificate to identify itself to
>both the attacker and victim servers, then if it believes that the
>victim server is the same origin as the attacker's server, it may allow
>cross-origin requests between them.

This is broken. There is no reason why a client certificate should lead the
browser to believe that two sites are the same.

As you describe above, we have CORS headers for that.

In particular, an attacker can always accept a client certificate. If
the mere acceptance of a client cert causes confusion, then the client is
already in deep trouble.

>> Suppose an attacker creates attack[1-3].example.com with 3 different setups:
>>
>> 1) attack1.example.com has a regular PKI cert and the attacker runs a 
>>reverse proxy there that relays traffic to victim.example.com
>
>This does not achieve client cert authentication to victim server.

Indeed. The draft does not mention client certs at all.
Where does that come from?

>> 3) attack3.example.com has a DANE record that refers to the cert of
>>victim.example.com. There the attacker directly relays traffic to 
>>victim.example.com.
>
>This does would achieve client cert authentication to victim server, if
>the client does not perform certificate name checks.

Which is fine, because this connection is still secure. 

Note that the server gets a request asking for attack3.example.com. So
this should immediately return a failure.

Of course, if the server ignores the host header and then sends private
information to client only based on the client cert, this will cause the
client to be confused.

>For applications other than browsers, sure.  Browsers go out of their way to
>run code served by the remote server, and then try to make that somehow
>safe.  It's a miracle they sometimes succeed.

It seems that this is an attempt to solve an insecure server problem at
the client in the context of clients with certificates.

It is a pity that the draft doesn't spell that out at all.

The web security is already complex enough. Maybe we should not fix broken
servers in the client by adding obscure requirements to DANE certificate
processing.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] New Version Notification for draft-rebs-dnsop-svcb-dane-00.txt

2021-12-11 Thread Philip Homburg
> 1.  While DANE certificate validation as described in RFCs 7671,7672
> and 7673
> is fine in SMTP, IMAP, XMPP, ... for HTTP (and perhaps some
> other applications) skipping validation of the target name with
> DANE-EE(3) records introduces a "UKS" (i.e. "Unknown Key Share")
> issue, that would definitely be a concern for "h3".
> 
> https://datatracker.ietf.org/doc/html/draft-barnes-dane-uks-00
> 
> Thus, unless "UKS" is known to not be a concern, applications
> should also validate the target name against the server
> certificate even with DANE-EE(3).

There is something I don't understand about this draft.

Suppose an attacker creates attack[1-3].example.com with 3 different setups:
1) attack1.example.com has a regular PKI cert and the attacker runs a 
   reverse proxy there that relays traffic to victim.example.com
2) attack2.example.com has a self signed cert and DANE with both
   attack2.example.com and victim.example.com in the DNS names of the cert.
   Again the attacker has a reverse proxy and relays to victim.example.com
3) attack3.example.com has a DANE record that refers to the cert of
   victim.example.com. There the attacker directly relays traffic to 
   victim.example.com.

From the point of view of the victim client, how are these three setups
different? What makes it that 1) and 2) are fine, but 3) is not?

The only hint I can find in the draft is that when a client connects to
both attack3.example.com and victim.example.com and finds that both use
the same public key, the client somehow merges them into a single security
context.

How does that work with CDNs? Does every CDN have a unique public key per
hosted client site?

In my experience, a hard part of PKI certs is to make sure that every name
that can be used to reach the cert is in fact listed in the certificate.
It would be way better if, for DANE, no name checks were done at all.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] How Slack didn't turn on DNSSEC

2021-12-08 Thread Philip Homburg
> Also stop hiding this
> breakage. Knot and unbound ignore the NSEC records which trigger
> this when synthesising.  All it does is push the problem down the
> road and makes it harder for others to do proper synthesis based
> on the records returned.

I did some tests with unbound (version 1.13.1-1 on Debian Bullseye). 

For types other than 'A', the behavior is quite simple: if both
DNSSEC validation (auto-trust-anchor-file) and aggressive-nsec are enabled
then unbound will synthesize NODATA based on a cached NSEC record.
Both are off by default.

For A records the situation is more complex. If qname-minimisation is off,
then the same applies to A records. However, if qname-minimisation is on (and
it is on by default) then unbound will internally generate A record
queries. So the A record will be cached before the NSEC record.

So in the case of Slack, anybody who enabled both DNSSEC validation and
aggressive-nsec would probably not have seen a failure due to the
broken NSEC records because qname-minimisation is on by default.
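For reference, the combination described above corresponds to roughly this unbound.conf fragment (option names as in unbound 1.13; the trust-anchor path is just an example):

```
server:
    # enables DNSSEC validation (off unless a trust anchor is configured)
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # synthesize negative answers from cached NSEC records (off by default)
    aggressive-nsec: yes
    # on by default; causes the A record to be cached before the NSEC
    qname-minimisation: yes
```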


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] How Slack didn't turn on DNSSEC

2021-12-01 Thread Philip Homburg
> Also stop hiding this
> breakage. Knot and unbound ignore the NSEC records which trigger
> this when synthesising.  All it does is push the problem down the
> road and makes it harder for others to do proper synthesis based
> on the records returned.

I'm confused what this means. In the report from Slack about the incident
I found that the problem started with a bad NSEC record, shown in their
debug output as:

qqq.slackexperts.com.   2370    IN  NSEC  \000.qqq.slackexperts.com. RRSIG NSEC

This is returned in response to a  query. The intent was that the NSEC
record should have the 'A' bit as well.

What exactly do Knot and Unbound ignore in this case?

Is it that they should have special processing for an NSEC that has only
RRSIG and NSEC and nothing more? 

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] How Slack didn't turn on DNSSEC

2021-11-30 Thread Philip Homburg
>It is clear from the blog post that this is a fairly sophisticated
>group of ops people, who had a reasonable test plan, a bunch of test
>points set up in dnsviz and so forth.  Neither of these bugs seem
>very exotic, and could have been caught by routine tests.

It is not clear whether they did ZSK and KSK key rollovers
on test zones and on minor zones. If they didn't, that's a good way to
get into trouble later on.

The main lesson learned from this incident seems to be to always create
a test zone with content identical to that of the main zone and fully
test that zone.

A common lesson, also not mentioned, is to have low TTLs for stuff you
control. It would not have helped with the DS record. But the discussion
about the ZSK being lost would have been helped with a low TTL in the DNSKEY
RR set.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Call for Adoption: draft-arends-private-use-tld

2020-06-18 Thread Philip Homburg
> The root zone and private-use internal zones that anchor private
> namespaces might all benefit from a robust trust anchor distribution
> strategy. If validators have the ability to be configured elegantly
> with all the trust anchors they need without the attention of a
> knowledgeable administrator (as a validating stub resolver might
> need with the root zone trust anchor) we might find that the DNSSEC
> concerns that led to horrors like home.arpa all disappear.

I think it would be good to have support for more trust anchors. Also 
for public domains. 

However, additional root CAs for X509 certs is quite a mess. DNS would be
slightly better, a trust anchor covers only part of the DNS tree, unlike
installing a root CA. However, ultimately trust in your trust anchor is
limited to the trust in the mechanism used to distribute the trust anchor.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Call for Adoption: draft-arends-private-use-tld

2020-06-18 Thread Philip Homburg
>But that problem is independent of the domain names used. If the CPE
>sends queries to the ISP, the deed has already been done, regardless of
>what the ISP does with the query (send it to the root, to telus.com or
>drops it)

Sending a query to the root, which is considered a collection of neutral
parties that try to respect privacy, sounds better on paper than sending
traffic to a random manufacturer without having the relevant contracts for
data processing in place.

Furthermore, for a non-existing TLD, queries don't have to go further than 
the resolver.

Of course, the ISP can try to filter those queries. But a significant 
fraction of users and/or applications use a public resolver which will not
filter.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Call for Adoption: draft-arends-private-use-tld

2020-06-18 Thread Philip Homburg
> basically all the domains you list here could have used one of
> their own domains (eg local.telus.com instead of .telus, etc)

I wonder how that would interact with EU privacy regulations. In the common
case of an ISP providing the customer with a CPE, the ISP is responsible for
anything that goes wrong.

We can be sure that there will be plenty of queries that leak out. How does
an ISP deal with a report that the ISP provided device leads to traffic
going to the manufacturer of said device?

The obvious next problem is where the manufacturer registers a domain name for
a product line and then forgets to renew the domain when the product line 
is no longer sold.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] I-D Action: draft-ietf-dnsop-server-cookies-01.txt

2019-11-06 Thread Philip Homburg
>Philip Homburg pointed out that, although impractical to determine the
>Client IP before Client Cookie construction, it is feasible for a Client
>to detect it when it learns a Server Cookie from a specific Server.  It
>can subsequently be tried to be reused for the same Server which will
>fail if the Client IP has changed.
>
>This new (and practically implementable) requirement does not only
>enhance privacy and make DNS Cookies work with the IPv6 Privacy
>Extensions (by preventing tracking), it also makes them work in other
>environments where Client source IP can change frequently, such as in
>setups with multiple outgoing gateways.

Note that my preference was a pseudo-random client cookie. 

I can see two issues with the current approach:
1) I'm not sure this actually fixes the IPv6 privacy extensions problem.
   The same client cookie can be used on different addresses if the 
   server doesn't support cookies and the client at some point forgets
   that the server doesn't support cookies (and sends the server the
   same client cookie after a new privacy address is generated).

2) As an extension of the previous, if no server supports cookies, then the
   client will not change the Client Secret and continues to use the same
   client cookie after it moves to new location.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] I-D Action: draft-ietf-dnsop-server-cookies-00.txt

2019-09-09 Thread Philip Homburg
I wrote:
>In Section 4.4, the client IP is added to the hash in the creation of the
>server cookie.

Ah, never mind, that is already in RFC 7873.

So a client that wants to (re-)use a server cookie needs to know the
source address it previously used to communicate with the server.
So if the client maintains that kind of state (and sends follow up traffic
only from the recorded source address), then the client can just as well
use a new pseudo-random client cookie each time the client creates new
state. No need to include the client IP address in the cookie or worry 
about the cookie leaking. Sending the packet will fail if the
source address is no longer available.




___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] I-D Action: draft-ietf-dnsop-server-cookies-00.txt

2019-09-09 Thread Philip Homburg
>When implementing DNS Cookies, several DNS vendors found that
>impractical as the Client Cookie is typically computed before the Client
>IP address is known. Therefore, the requirement to put Client IP address
>as input to was removed, 

In Section 4.4, the client IP is added to the hash in the creation of the
server cookie.

I wonder what happens if a client alternates between different IP addresses,
for example, the client has multiple interfaces, the client has multiple
IPv6 prefixes on a single interface or a CGNAT device regards different DNS
requests as independent UDP flows and assigns them to different parts of
a CGNAT system.

It is possible that in those cases, a server would force a client to
retry for every request.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] I-D Action: draft-ietf-dnsop-server-cookies-00.txt

2019-09-09 Thread Philip Homburg
>This is true.  Including the Client IP in constructing the Client Cookie
>was intended to deal with this, but this operation is impractical with
>UDP; expensive at best and not suitable for high volume recursive to
>authoritative traffic.
>
>We could recommend it for stub to recursive traffic, for which the high
>volume performance requirements are less of an issue... what do you think?

Maybe high volume should be the exception.

I think it is better to specify that all code should include the Client IP
unless explicitly configured to leave it out.

A bit of testing suggests that a naive way of getting the Client IP takes 
about 2 microseconds on modern hardware. So a bit of caching on high 
performance resolvers would be enough.
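The "naive way" of learning the Client IP can be sketched as below, with a small cache on top as suggested. The trick (a common one, not something the draft specifies) is that connect() on a UDP socket only makes a routing decision; nothing is sent on the wire, and getsockname() then reports the source address the kernel would use.

```python
import functools
import socket

@functools.lru_cache(maxsize=1024)
def client_ip_for(server_ip: str) -> str:
    """Return the local source address the kernel would pick for
    this server, cached per server IP."""
    family = socket.AF_INET6 if ":" in server_ip else socket.AF_INET
    with socket.socket(family, socket.SOCK_DGRAM) as s:
        s.connect((server_ip, 53))   # routing lookup only, no packet sent
        return s.getsockname()[0]
```

The lru_cache makes repeat lookups essentially free, so even a high-volume resolver would pay the ~2 microsecond cost only on the first query toward each server.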


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] I-D Action: draft-ietf-dnsop-server-cookies-00.txt

2019-09-09 Thread Philip Homburg
In your letter dated Mon, 9 Sep 2019 14:13:01 +0200 you wrote:
>When implementing DNS Cookies, several DNS vendors found that
>impractical as the Client Cookie is typically computed before the Client
>IP address is known. Therefore, the requirement to put Client IP address
>as input to was removed, and it simply RECOMMENDED to disable the DNS
>Cookies when privacy is required.

I don't quite understand this.

The proposed way of constructing a client cookie:
Client-Cookie = MAC_Algorithm(Server IP Address, Client Secret )

means that if a host moves between networks it is quite likely it will
continue to use the same cookie. This allows a host to be tracked across
networks.

Neither RFC 7873 nor this draft has text that requires the host to change
the Client Secret when moving to a different link. 
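A sketch of the construction quoted above makes the tracking concern concrete. HMAC-SHA256 truncated to 64 bits stands in for the MAC here purely for illustration; it is not necessarily the algorithm the draft specifies.

```python
import hashlib
import hmac
import ipaddress
import os

def make_client_cookie(server_ip: str, client_secret: bytes) -> bytes:
    """Client-Cookie = MAC(Server IP Address, Client Secret), 64 bits."""
    packed = ipaddress.ip_address(server_ip).packed
    return hmac.new(client_secret, packed, hashlib.sha256).digest()[:8]

secret = os.urandom(16)
c1 = make_client_cookie("192.0.2.53", secret)
# Same server and same secret -> same cookie, regardless of which network
# the client is on. This is exactly what allows cross-network tracking:
assert c1 == make_client_cookie("192.0.2.53", secret)
# Rolling the Client Secret (e.g. when moving to a new link) changes it:
assert c1 != make_client_cookie("192.0.2.53", os.urandom(16))
```

Since the client's own address is not an input, only rolling the Client Secret on a link change would break the linkability.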

Most DNS client software is general enough that we cannot rule out that it
will be used on a mobile device.

So we reach the end of Section 3, which says '[...] simply RECOMMENDED
to disable the DNS Cookies when privacy is required'

So it seems that this draft implicitly recommends that DNS client
cookies are by default disabled and should only be enabled on hosts that have
stable IP addresses.

If that's the intention, then maybe this can be stated explicitly in the
introduction.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Proposal: Whois over DNS

2019-07-10 Thread Philip Homburg
> The technical issue with
> whois is that its dark in many places and getting darker with
> minimal to no prospect of coming back (in a usable form).
> 
> While GDPR applies only to EU natural persons because there is no
> way to distinguish between natural persons and legal persons and
> no way to distinguish EU from other countries, many have adopted
> applying strong redaction to all records.

This doesn't make any sense to me.

Previously, information was published in whois without the consent (as
defined in the GDPR) of the subjects.

So obviously, registrars had to stop publishing that kind of information.

However that doesn't say anything about voluntarily providing that information.

Support for voluntary information has a cost to implement. It is possible
that registrars don't want to provide that feature because it would not
make them any money. Of course ICANN could require registrars to support
domain holders adding information in whois voluntarily. 

So I guess the argument then becomes that DNS TXT records have to be used
because it doesn't require additional investments in many cases. Maybe you
should list that argument in the draft. Because that means that as soon
as DNS TXT records are rejected, there is no basis anymore for
the draft.

From a technical point of view, having structured information in TXT
records is bad. Having large RRsets is also bad. So the obvious solution
would be to define a new resource type for this kind of information.
However, that defeats your argument that existing zone editors can be used.

In the end you are reinventing whois to get around policy issues and the
lack of a business case.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Proposal: Whois over DNS

2019-07-10 Thread Philip Homburg
> > As far as I know, there is no issue with whois and the GDPR when it comes
> > to voluntarily publishing information in whois.
> 
> Nope. Its OK for you to publish your Personal Data. For anything
> else, you need to get informed consent first. And be able to prove
> that. And give the Data Subjects the ability to modify those data
> or get them deleted.

When you register a domain, your registrar already has to have your informed
consent to process any PII you supply. And as far as I know,
registrars routinely ask for your name and credit card.

So all GDPR-related processes are already in place.

Looking at it from a technical point of view, whois has a referral mechanism.
So if GDPR compliance were a big issue, then allowing the handful of
people who wish to publish anything in whois to run their own whois server
would also solve the issue.


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Proposal: Whois over DNS

2019-07-10 Thread Philip Homburg
> I'm not sure the point
> aside of illustrating if there is no response for the domain records
> by the auth server that there would also be no response for a _whois
> record. That's true.
> 
> 1) Using _whois is completely optional, like SPF or any other
> record.  2) I can't envision much legitimate need to contact a domain
> owner for something that doesn't exist (aside of domain renewal spam
> or trying to buy the domain).
> 
> Am I missing something?

I read this discussion from the point of view of someone who is very happy
with the result of GDPR in this area.

With that in mind, it seems that this proposal doesn't address any technical
issues with whois.

Where whois allows for querying of contact information associated with a 
domain, this proposal does something similar.

Of course, whois has various technical issues, but it makes sense to first
try to solve those technical issues within the whois system, and only when
it is clear that certain issues cannot be solved look for a different
protocol. (And I mean cannot be solved for technical reasons, not because
of a lack of consensus.)

As far as I know, there is no issue with whois and the GDPR when it comes
to voluntarily publishing information in whois. This draft clearly 
advocates voluntary sharing of this information. 

As the Section 1 suggests, whois works.

So it seems to me that this draft does not solve a technical problem
(or at most a minor one, 'internationalization').


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] [v6ops] New Version Notification for draft-v6ops-xie-network-happyeyeballs-00.txt

2018-09-26 Thread Philip Homburg
In your letter dated Wed, 26 Sep 2018 12:58:30 +1000 you wrote:
>I have said before, but don't know if I still adhere to it, but
>anyways, here's a question: How *long* do people think a biassing
>mechanism like HE is a good idea?
>
>I used to love HE. I now have a sense, I'm more neutral. Maybe, we
>actually don't want modified, better happy eyeballs, because we want
>simpler, more deterministic network stack outcomes with less bias
>hooks?

In my own implementation of HE, I globally (i.e. across all processes) keep
track of the time it takes to establish a TCP connection for individual
addresses, and I compute values for IPv4 and IPv6.

Conceptually I like the idea that if you know (from past measurements) that 
it takes 100ms or less to establish a connection, you don't wait for a
couple of seconds for an attempt to fail. In addition, instead of trying
one address at a time, you can add a new connection attempt when the
previous one is still running.

All of this provides for a better user experience, at the cost of being
less deterministic and masking failures.

Note that in my implementation, I select the best address (giving a 30ms
preference to IPv6), wait for the expected time for the TCP connection
to complete, and only if that timer expires, move to the next address.
So there is no race.
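The ordering step described above can be sketched as follows. The numbers and function names are illustrative (only the 30ms IPv6 preference is from the text): each candidate is ranked by its measured expected connect time, with IPv6 addresses given a 30ms head start.

```python
IPV6_PREFERENCE_MS = 30.0

def order_candidates(addresses, expected_ms):
    """addresses: [(family, addr)] with family 4 or 6;
    expected_ms: past connect-time measurements per address.
    Returns candidates in the order they should be attempted."""
    def effective(entry):
        family, addr = entry
        t = expected_ms.get(addr, 1000.0)  # pessimistic default if unmeasured
        # Give IPv6 a 30 ms preference by discounting its expected time.
        return t - IPV6_PREFERENCE_MS if family == 6 else t
    return sorted(addresses, key=effective)

addrs = [(4, "192.0.2.10"), (6, "2001:db8::10")]
measured = {"192.0.2.10": 40.0, "2001:db8::10": 60.0}
# 60 - 30 = 30 < 40, so the IPv6 address is attempted first:
assert order_candidates(addrs, measured)[0] == (6, "2001:db8::10")
```

The caller would then try the first candidate, arm a timer for its expected connect time, and move to the next candidate only when that timer expires, so attempts never race each other.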

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] [Ext] Re: New draft for helping browsers use the DoH server associated with a resolver

2018-08-24 Thread Philip Homburg
> > Well, if the OS resolver is validating, it will SERVFAIL with such a
> > query.
> 
> The protocol requires special handling of those specific queries,
> so a resolver that understands the protocol will give the desired
> answer. A resolver that doesn't understand the answer will give
> NXDOMAIN even if it is validating because that RRtype is not in
> the root zone.

It seems to go wrong when you have one validating resolver that forwards to a
resolver that supports this mechanism.

I don't really see the point of what you propose. For resolvers obtained by
DHCP it makes more sense to include the URL in the DHCP reply than to have
yet another DNSSEC-violating discovery hack.

For manually configured resolvers, it is likely more convenient for the user
to just enter the URL and let the system figure out the addresses of the
resolvers.

Figuring out what SNI to use via insecure DNS sort of negates any advantage
TLS authentication offers.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Draft for dynamic discovery of secure resolvers

2018-08-21 Thread Philip Homburg
In your letter dated Tue, 21 Aug 2018 18:19:39 +0200 you wrote:
>Ehm, we somehow forgot that this thread is supposed to be about DHCP, so
>that's only the "uninteresting" case where you do trust the ISP and want
>to use their DNS over a secure channel :-D

There are still plenty of use cases. An ISP may not want to run a recursive
resolver and instead refer to a public resolver using DHCP.

Additionally, on an open wifi, encrypting DNS traffic can help against 
snooping. So it is in the ISP's interest to announce that the local
recursive resolvers support DoH.

>Well, DoT has been standardized for some time, and we now have multiple
>open-source implementations for client- and daemon-side, and some large
>public services support it.  DoH is a little later, but it might gather
>more speed eventually.  From *my* point of view the SNI is the biggest
>hindrance ATM; other technical issues don't seem bad, at least not for
>most motivated users.  (Finding a trusted service might be problem for
>some people, I suspect.)

For DNS, code is not enough. You need to get admins of recursive resolvers
to upgrade. And there are lots of those resolvers. Many of them almost
unmanaged.

DNS is for a large part not end-to-end. You have the recursive resolvers
as middle men.

>Defense against changing DNS is something else than privacy - we have
>DNSSEC for that, so you don't even need to trust the server sending you
>the data, but I think we're getting too much off-topic anyway...

DNSSEC is part of the puzzle, but leaves a lot of holes:
- Currently very few systems ship with locally validating resolvers. So
  most systems can be attacked on the last mile.
- Many domains are not signed for one reason or another. 
- Even with DNSSEC, an on path attacker can see the queries and selectively
  mount a denial of service attack.

DoH protects the last mile from all of those attacks.




Re: [DNSOP] Draft for dynamic discovery of secure resolvers

2018-08-21 Thread Philip Homburg
>In fact, roaming wi-fi connections, while still relevant (especially for
>international tourists), are getting less and less used, since everyone now
>gets several gigabytes of EU-wide mobile data per month included with their
>base mobile fee.

I assume that you are aware that with HD video, you can easily burn through a
couple of Gbyte in an hour.

>How many browsers can I choose from? Definitely many less than the possible
>ISPs, and not a single one from the jurisdiction I live in.

Many places have essentially two landline options. Having only three mobile
networks is also quite common.

In addition, two serious browsers are open source. And there are Firefox
forks that try to fix some of the damage done by Mozilla.

>> There are many ISPs that try to do the right thing for their customers.
>> There are quite a few ISPs that have court orders to do things that go
>> against the interests of their customers.
>
>Yes, but that's the law. I still don't get how it is possible that the IETF
>is releasing a technology openly designed to allow people to break the law.
>In my part of the world, this is ethically unacceptable, and possibly also
>illegal.

It is not that black and white. In the Netherlands, a few ISPs are forced to
block access to The Pirate Bay.

That court order applies only to those ISPs, consumers are completely free
to visit The Pirate Bay.


>No, they can't, if the application defaults to its own resolvers, possibly
>not even letting the user choose different resolvers unless they click into
>three-level-deep configuration menus.

Anybody can write an application that does weird stuff. That's not something
an RFC can prevent.

>> The big difference is that when the user does decide to bypass the ISP's
>> resolvers, there will be no way for the ISP to interfere.
>
>Good luck explaining that to several hundred governments that rely on
>mandatory DNS filters to enforce gambling, hate speech and pornography
>regulation.

Governments will figure out that eventually protocols that communicate in
plaintext will die out. Of course, they can mandate the use of plaintext in
their respective countries, at their own economic disadvantage.




Re: [DNSOP] Draft for dynamic discovery of secure resolvers

2018-08-21 Thread Philip Homburg
>Then you have a problem that's not solvable in DNS itself (yet).  That's
>what people usually forget to consider.
>
>The hostnames are clear-text in https handshakes (so far), and it seems
>relatively easy to collect those.  So, by tunneling *only* DNS you don't
>make it much more difficult for the ISP, and in addition you share the
>names with some other party.  That doesn't sound very appealing to me
>personally, from privacy point of view at least.  (On the other hand,
>big resolvers will have lots of cached answers, etc.)

This is to some extent a chicken-and-egg problem. Without encrypted DNS
there is no point in encrypted SNI and vice versa.

I expect that encrypted SNI will be relatively easy to deploy. It can happen
as soon as both endpoints support it.

In contrast, DNS is a very complex ecosystem. So it makes sense to start
deploying encrypted DNS now, under the assumption that encrypted SNI will
follow.

>After SNI encryption gets widely deployed, tracking through IP addresses
>only will be somewhat harder, so there it will start getting
>interesting.

We have already seen that 'domain fronting' can be a very effective way
to bypass filters. For large CDNs or cloud providers, filtering based on 
IP addresses is not going to be effective.

>Until then, IMHO you just need to either trust the ISP or
>tunnel *all* traffic to somewhere, e.g. via tor or VPN to some trusted
>party.

True. But we can take small steps to reduce unwanted interference from ISPs.

From a security point of view, it helps a lot if you can just trust DNS.
Instead of always having to take into account that somebody may interfere 
with DNS replies.




Re: [DNSOP] Draft for dynamic discovery of secure resolvers

2018-08-21 Thread Philip Homburg
> If I got it well, what you are trying to bypass is your ISP's
> security filter that prevents you from connecting to malware or to
> illegal content (e.g. intellectual property violations and the
> likes). 

As a user, I think there is little reason to trust an ISP.

If you take a mobile device, do you trust every hotel, bar, etc. where you
may connect to the wifi? Are they all competent? Are you sure none of them will
violate your privacy?

If you have only a few ISPs to choose from, do you trust that ISP?

There are many ISPs that try to do the right thing for their customers.
There are quite a few ISPs that have court orders to do things that go against
the interests of their customers.
And there are quite a few ISPs that are positively evil.

You need to have options in case you can't trust the ISP.

> build a sort of "nuclear bomb" protocol
> that, if widely adopted, will destroy most of the existing practices
> in the DNS "ecosystem" 

There is no reason why DoH has to be deployed as a 'nuclear bomb'.

Hosts can still default to using the resolvers offered by DHCP, only
switching to public resolvers when directed by the user.

The big difference is that when the user does decide to bypass the ISP's
resolvers, there will be no way for the ISP to interfere.

Of course, an ISP can still try to block encrypted access to 8.8.8.8, etc.
Ultimately, that may result in users routing their requests over tor. In
areas with netneutrality laws, blocking access to public resolvers is probably
not an option.



Re: [DNSOP] One Chair's comments on draft-wessels-dns-zone-digest

2018-07-31 Thread Philip Homburg
In your letter dated Tue, 31 Jul 2018 06:49:04 -0700 you wrote:
>> I think there is a big difference between distributing the root zone and
>> distributing a few 'local' zones.
>> 
>> In the first case you need something that is massively scalable.
>
>I'm afraid I don't see those as different problems like you do.  I'd
>like a massively scalable way of distributing any zone, not just the
>root.  If for no other reason, .arpa and root-servers.net should be
>included too, for example.
>
>Yes, huge zones like .com and similar are not possible.  But there are
>many other TLDs that likely are possible to pre-cache and serve locally.

I'm curious how that is going to be provisioned at a large scale.

We don't really know how to roll the KSK of the root zone. I wonder how
we are going to manage thousands, maybe millions, and if you are unlucky
billions of devices that want to fetch some zone files.

Would we paint ourselves into a corner with respect to TTLs? Currently, if
the root needed lower TTLs, that would require coordination with the root
server operators, but that's it. If many devices are hardwired to fetch the
root at a fixed rate, you can't do that. If you make the rate a parameter,
then the first time you try to lower it you find that some large subset has
accidentally hard-wired the parameter.



Re: [DNSOP] One Chair's comments on draft-wessels-dns-zone-digest

2018-07-31 Thread Philip Homburg
> Are you suggesting that web servers can't be massively scalable?
> I'm not sure I understand your examples.

Yes, you can build massively scalable web servers, but at what price?

What if some popular IoT device starts to fetch the root zone, and at a
high rate?

> You cite overprovisioning in the root server system as a reason
> not to try and supplement it, but I think it makes sense to look
> at it the other way round -- if there were ways to distribute the
> root zone reliably and accurately without presenting the attack
> targets that the root server system does, the need for continued
> investment in the infrastructure could be reduced (or the effective
> benefit to end-users from that investment could be increased).

What if your web servers are not massively overprovisioned? Can we handle
failures there? If you do massively overprovision those web servers, will it
actually be cheaper or better than the current system?

> The bandwidth available at the consumer edge, where a lot of the
> attack sources now live, continues to grow far faster than the
> bandwidth that can be provisioned at the root server edge. The
> observation that "there's enough bandwidth that we're safe" doesn't
> seem future-proof (it doesn't even seem present-proof, really).

From a DDoS point of view there doesn't seem to be a big difference between
how the current DNS root absorbs traffic and what a highly available web
service would have to do.




Re: [DNSOP] One Chair's comments on draft-wessels-dns-zone-digest

2018-07-31 Thread Philip Homburg
> > The draft states in the Motivation section:
> >
> > "The motivation and design of this protocol enhancement is tied to the
> > DNS root zone [InterNIC]."
> 
> That may be a motivation, but as a prospective user I want to use
> it for much more.  My LocalRoot server is already going to be
> serving 3 zones, and I have plans for many more.  It would be
> helpful to know that on the distribution side of things that I had
> indeed grabbed an authentic source before sending it off to all
> the resolvers that want to pre-cache a random zone X.
> 
> Be careful that we don't collectively interpret the sentence you
> quote as meaning 'this is only useful for the root zone' just
> because that was the original motivation.

I think there is a big difference between distributing the root zone and
distributing a few 'local' zones.

In the first case you need something that is massively scalable.

In the second case, just create a tar file with a zone file and a hash, put
it up on a web server and the problem is solved. Verifying the contents of a
file is not exactly a new problem. 

I wonder if there still is a use case for distributing the root zone. With
QNAME minimization and NXDOMAIN based on NSEC records, the major use cases
seem to be gone. Compared to other zones, the root is massively
overprovisioned. So if (from an availability point of view) there is a need
to have a local copy of the root, then you would need a local copy of .com
as well.

Though I'm sure there are people who want to reinvent DNSSEC.

One final remark: maybe it is worth investigating an 'NSDEL' record type,
and possibly 'ADEL' and 'AAAADEL', as the equivalents of NS, A, and AAAA for
delegations/glue. With separate record types, we can define that they are
covered by an RRSIG, solving issues with delegation data not being signed.



Re: [DNSOP] [Driu] [Doh] Resolverless DNS Side Meeting in Montreal

2018-07-10 Thread Philip Homburg
>The ip= modifier would be a great way to arrange for something to look like
>it came from a different source than its actual source.   I'm sure there's
>an attack surface in there somewhere.

That's a rather fundamental issue.

In the context of TLS, and a DNSSEC insecure zone, there are two realistic
attack scenarios:
- an attack on DNS that returns different addresses for a DNS lookup
- a routing attack, that reroutes traffic.

Both types of attacks are realistic and happen quite frequently.

If we decide that TLS is strong enough to defend against these attacks,
then there is no need to secure the DNS lookup, other than to reduce
the risk of denial of service and for privacy reasons. Then such an ip=
modifier would be fine, because the worst thing that can happen is denial
of service.

On the other hand, if we don't trust TLS, then we have a bit of a problem.
Too many people using public resolvers. Route hijacks are quite easy, etc.




Re: [DNSOP] Resolverless DNS Side Meeting in Montreal

2018-07-10 Thread Philip Homburg
>For example www.example.com pushes you an AAAA record for img1.example.com.
>Should you use it? What if it is for img1.img-example.com ? Do the
>relationship between these domains matter? What kind of relationship (i.e.
>it could be a domain relationship, or in the context of a browser it might
>be a first-party tab like relationship, etc..)? What are the implications
>of poison? Trackers? Privacy of requests never made? Speed? Competitive
>shenanigans or DoS attacks?
>
>This was out of scope for DoH.

Assuming that in the context of DoH reply size is not an issue, it seems to
me that this use case is already solved by DNSSEC. Just push all required
signatures, key material and DS records that allow the receiving side to 
validate the additional information.

Are you trying to re-invent DNSSEC for people who don't want to deploy
DNSSEC?




Re: [DNSOP] AD sponsoring draft-cheshire-sudn-ipv4only-dot-arpa

2018-07-09 Thread Philip Homburg
> I think deprecating
> RFC7050 will be a bad idea, there are too many implementations that
> really need that, while updating APIs/libraries to make sure they
> comply with this seems easier.
> 
> For example, we could have a DHCPv6 option, but in the cellular
> world DHCPv6 is not used ... and even in non-cellular, Android is
> not using it either.

You are saying that countless stub resolvers should special case
ipv4only.arpa just because some operators don't want to deploy DHCPv6?

In many cases, there is not even a sensible mechanism to make this work.
A stub resolver often reads /etc/resolv.conf. If that contains just a
public resolver, then there is not much that can be done.



Re: [DNSOP] AD sponsoring draft-cheshire-sudn-ipv4only-dot-arpa

2018-07-06 Thread Philip Homburg
In your letter dated Fri, 6 Jul 2018 18:50:44 +1000 you wrote:
>All it does is ensure that the DNS queries get to the DNS64 server. 

The way RFC 7050 works that you send queries to your local recursive
resolver. The problem there is that if the user manually configured
a public recursive resolver then you don't learn the translation prefix.

In this context I don't see how serving ipv4only.arpa from dedicated addresses
would help. 

We can define a new prefix discovery protocol where the node that needs to
discover the prefix directly queries the authoritative servers for
ipv4only.arpa. That would solve the issue with manually configured
resolvers. But it would also add yet another way of discovering the prefix
that needs to be supported.
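
As a rough illustration of what RFC 7050-style discovery extracts, here is a
minimal sketch (my own, not from the thread) that derives a NAT64 prefix from
an already-synthesized AAAA answer for ipv4only.arpa, assuming the common /96
prefix placement; a full implementation would also try the other prefix
lengths RFC 7050 allows:

```python
import ipaddress

# Well-known IPv4 addresses of ipv4only.arpa (RFC 7050).
WELL_KNOWN = {ipaddress.IPv4Address("192.0.0.170"),
              ipaddress.IPv4Address("192.0.0.171")}

def nat64_prefix(synthesized_aaaa: str):
    """Return the NAT64 /96 prefix embedded in a synthesized AAAA
    record for ipv4only.arpa, or None if none is found."""
    packed = ipaddress.IPv6Address(synthesized_aaaa).packed
    # With a /96 prefix, the IPv4 address sits in the last 4 bytes.
    if ipaddress.IPv4Address(packed[12:16]) not in WELL_KNOWN:
        return None
    network = int.from_bytes(packed[:12] + b"\x00" * 4, "big")
    return ipaddress.IPv6Network((network, 96))

# Example with the well-known prefix from RFC 6052:
print(nat64_prefix("64:ff9b::192.0.0.170"))  # 64:ff9b::/96
```

The same extraction logic applies whichever discovery channel (RFC 7050
query, RFC 7225 PCP option) delivered the synthesized address.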




Re: [DNSOP] AD sponsoring draft-cheshire-sudn-ipv4only-dot-arpa

2018-07-06 Thread Philip Homburg
> Most of the special
> handling could be avoided if IANA was instructed to run the servers
> for ipv4only.arpa on dedicated addresses. Hosts routes could then
> be installed for those address that redirect traffic for ipv4only.arpa
> to the ISPs DNS64/ipv4only.arpa server.
> 
> Perhaps 2 address blocks could be allocated for this purpose. One
> for ipv4 and one for ipv6.

If I understand the implications correctly, that would introduce a
completely new way of discovering the NAT64 prefix. We allready have 3,
do we need a 4th one?




Re: [DNSOP] AD sponsoring draft-cheshire-sudn-ipv4only-dot-arpa

2018-07-05 Thread Philip Homburg
>draft-cheshire-sudn-ipv4only-dot-arpa document

Section 7.1:
"Name resolution APIs and libraries MUST recognize 'ipv4only.arpa' as
"special and MUST give it special treatment."

It seems to me that it is going way too far to require all DNS software to
implement support for a hack that abuses DNS for configuration management of
a rather poor IPv4 transition technology.

I think the more obvious approach is to formally deprecate RFC 7050 and
require nodes that need to do NAT64 address synthesis use one of the other
methods for obtaining the NAT64 prefix.

The only part of the draft that makes sense to me is to make ipv4only.arpa
an insecure delegation. 

Any other problems are better solved by deprecating RFC 7050.



Re: [DNSOP] DoH interaction, sortlist Re: BCP on rrset ordering for round-robin? Also head's up on bind 9.12 bug (sorting rrsets by default)

2018-06-16 Thread Philip Homburg
>At that lunch, we could not figure out who originally required such a
>detailed ordering configuration in BIND, and it might be interesting to find
>out.

What I remember from a very long time ago is the following network setup:
- a collection of NFS servers each with multiple ethernet interface cards
  connecting to different subnets
- a collection of NFS clients that would connect to the first address in 
  the returned RRset (i.e. that would not locally sort the RRset)
- a DNS resolver in a completely different subnet that had incomplete knowledge
  of the network. I think bind treated the 'class B' network as a single
  network, not as a collection of /24s.

Without the sortlist feature the DNS resolver had not enough information
to move the best address to the start of the list. And the router was slow
enough that you didn't want NFS traffic to go through the router.
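
The effect of the sortlist feature in this setup can be sketched as a
reordering step applied to the returned RRset; this is my own illustration
(addresses invented), assuming the sorter knows which subnets the client
prefers:

```python
import ipaddress

def sort_rrset(addresses, preferred_subnets):
    """Reorder an RRset so addresses on preferred subnets come first,
    in the order the subnets are listed (a sortlist-like behaviour)."""
    nets = [ipaddress.ip_network(n) for n in preferred_subnets]
    def rank(addr):
        ip = ipaddress.ip_address(addr)
        for i, net in enumerate(nets):
            if ip in net:
                return i
        return len(nets)  # unmatched addresses rank last
    return sorted(addresses, key=rank)  # sorted() is stable, so ties
                                        # keep their original order

print(sort_rrset(["192.0.2.10", "10.1.2.3"], ["10.0.0.0/8"]))
# ['10.1.2.3', '192.0.2.10']
```

The point of the anecdote above is that this reordering had to happen on the
resolver, because the NFS clients always took the first address as returned.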




Re: [DNSOP] [v6ops] [IANA #989438] ipv4only.arpa's delegation should be insecure.

2018-06-13 Thread Philip Homburg
>https://tools.ietf.org/html/draft-cheshire-sudn-ipv4only-dot-arpa 
>

From Section 6.2:
3.  Name resolution APIs and libraries MUST recognize 'ipv4only.arpa'
   as special and MUST give it special treatment.  Regardless of any
   manual client DNS configuration, DNS overrides configured by VPN
   client software, or any other mechanisms that influence the
   choice of the client's recursive resolver address(es) (including
   client devices that run their own local recursive resolver and
   use the loopback address as their configured recursive resolver
   address) all queries for 'ipv4only.arpa' and any subdomains of
   that name MUST be sent to the recursive resolver learned from the
   network via IPv6 Router Advertisement Options for DNS
   Configuration [RFC6106] or via DNS Configuration options for
   DHCPv6 [RFC3646].

First we introduce ipv4only.arpa as a hack to avoid creating/deploying a
suitable mechanism to communicate the NAT64 translation prefix. That's fine
with me.

But when that hack then requires changes to every possible DNS stub resolver
implementation in the world, there is something seriously wrong.

So if this is indeed required to make RFC 7050 work, then it is better to
formally deprecate RFC7050 and focus on other ways to discover the
translation prefix.

It seems that at least one already exists (RFC 7225), so not much is lost.




Re: [DNSOP] [Ext] Re: Resolver behaviour with multiple trust anchors

2017-11-02 Thread Philip Homburg
>Are there cases of "corrupted" registries that make the threat of "stolen
>zones" a real thing?

I think the most well known example is the US government taking the .org domain
of Rojadirecta.

https://torrentfreak.com/u-s-returns-seized-domains-to-streaming-links-site-after-18-months-120830/

There were two issues in this case: for any organisation outside the US, using
a domain with a registry in the US is risky, because the US government assumes
jurisdiction, even if the company itself doesn't do any business in the US.

The second issue is that the domain was seized by the executive branch. And
not just blocked, but actually redirected to servers of the US government.




Re: [DNSOP] Resolver behaviour with multiple trust anchors

2017-10-31 Thread Philip Homburg
>It sounds like clarification is needed if even one (much less three) 
>systems treat such a signature as Bogus. My reading of RFC 4035 is that 
>any chain that successfully leads to a trust anchor should return 
>Secure, even if a different chain returns Bogus.

If extra trust anchors are configured for security reasons (as opposed to
availability), then I would expect some sort of longest-match selection of
the trust anchor that is to be used.

For example, if I configure a trust anchor for example.com for security
reasons, then that is probably because I don't fully trust the .com zone
or even the root zone.

If then a record fails to validate using the trust anchor that is configured
for example.com, then it would be very bad if the resolver turns around and
suddenly trusts the information from the .com zone.
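
The longest-match selection suggested above can be sketched as follows (my
own illustration, not text from any RFC or implementation):

```python
def best_trust_anchor(anchors, qname):
    """Pick the most specific configured trust anchor covering qname.
    Anchors and qname are domain names; '.' denotes the root anchor."""
    q_labels = [l for l in qname.lower().rstrip(".").split(".") if l]
    best, best_len = None, -1
    for anchor in anchors:
        a_labels = [l for l in anchor.lower().rstrip(".").split(".") if l]
        # The anchor must be qname itself or an ancestor of it.
        if a_labels and q_labels[len(q_labels) - len(a_labels):] != a_labels:
            continue
        if len(a_labels) > best_len:
            best, best_len = anchor, len(a_labels)
    return best

print(best_trust_anchor([".", "example.com"], "www.example.com"))  # example.com
print(best_trust_anchor([".", "example.com"], "www.org"))          # .
```

Under this policy a bogus result under the example.com anchor stays bogus:
the root anchor is never consulted for names below a more specific anchor.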




Re: [DNSOP] I-D Action: draft-ietf-dnsop-isp-ip6rdns-03.txt

2017-05-14 Thread Philip Homburg
>>we will never know, because every v6 end system will have a ptr, either
>>naturally, or machine-generated for it, because v6 providers will not
>>want their rank-and-file v6 endsystems to be excluded from important
>>activities such as transmitting e-mail.
>
>If "v6 provider" includes "residential ISP" (the topic and audience for
>this draft), then the inability to transmit email is by design.
>That is: ISPs commonly prevent residential users from sending email (by
>default). They say this in their Terms of Service, they block port 25, and
>they don't publish PTRs. This is consistent with recommendations by
>M3AAWG[1] and BITAG[2], for instance.

>People who run mail servers generally understand these limitations. The
>BITAG paper does recommend clear disclosure and methods to opt-out. Makes
>sense to me: I want a human decided they want their system to send mail,
>not a bot.

I wonder if, with the EU net neutrality laws, it is possible to have a
blanket block of outbound port 25.

Historically, many ISPs that wanted to upsell business accounts would
actually block port 25 inbound, which does prevent relays but not bots
sending spam.

Of course, having an option where the customer can request the port to be
opened, while it is closed by default, is best. But that may be too
expensive for many ISPs.

But my goal was not to say something about whether port 25 should be
blocked or not, just that, based on today's internet and spam filtering,
if an ISP allows customers to send mail, then the ISP has to provide the
customer with a way of setting up reverse DNS.

I don't really care whether a reverse DNS check is good or bad when it
comes to filtering spam. It is just a reality that enough parties are
using such checks that without reverse DNS you have a serious issue
getting mail delivered.



Re: [DNSOP] I-D Action: draft-ietf-dnsop-isp-ip6rdns-03.txt

2017-05-08 Thread Philip Homburg
In your letter dated Tue, 02 May 2017 15:03:15 -0400 you wrote:
>I agree that people reject mail if there's no PTR; I think this is used in
>fighting spam, based on an inference that if there's no PTR, you're a spam
>bot rather than a legitimate mail server.
>The first case listed in 4.  Considerations and Recommendations says
>
>   There are six common uses for PTR lookups:
>
>   Rejecting mail: A PTR with a certain string or missing may indicate
>   "This host is not a mail server," which may be useful for rejecting
>   probable spam.  The absence of a PTR leads to the desired behavior.

In the case of e-mail, my issue is more with the structure of the document
than the actual text.

To give an example where I'm coming from. The ISP I use for my home internet
connection started out with IPv6 by offering tunnels. Given that tunnels are
only used by tech savvy people, a simple editor where you could provide
name servers for reverse DNS was offered and this worked well.

Now that they offer native IPv6 there is no reverse DNS anymore, for two
reasons. The first is that delegating reverse DNS is deemed too complex for
ordinary users. The second is that the solution they want to implement, an
editor where users can set individual records for hosts, has such a low
priority that it never gets done. It has a low priority because ordinary
users don't really need reverse DNS.

What seems lost is that the IPv6 users that really need reverse DNS at the
moment are the ones that try to have mail delivered and at the same time
all of the reasons why ordinary users can't run DNS servers don't apply
to that group.

Reading the document, those two arguments are hard to find. I.e., in
Section 4, I would say that most points are nice-to-haves, except for the one
related to e-mail. At the same time, most options in Section 2 are
quite tricky, except 2.4.

So an ISP reading this draft may not realise that reverse DNS is really
important for some customers and is relatively easy to provide to those
users.

>I've quoted much of that text above.
>Given that the document is about residential ISPs, and given the above,
>plus the guidance that the information should match if it's possible to
>reconcile them, does this document need an affirmative statement about
>mail servers?

So my preference would be something that really stands out and not grouped
together with, in my opinion, less important benefits.

>That's probably true. Given that I need to update it to match what it says
>in the Privacy Considerations section and the examples, should I just
>remove mention of geolocation? Or should I tweak it to match the rest, and
>add text saying, "But reverse DNS is not a great source for geolocation
>information"?

I'd say that if having a location in reverse DNS serves the operational need
for the ISP itself, then it's a good idea. Third parties may try to use
that information, but it doesn't seem to be essential. By and large
geo-location of eyeballs works fine even without location information in
DNS.

>Proposed new sentence:
>Users of services which are dependent on a successful lookup
>will have a poor experience if they have to wait for a timeout; an
>NXDomain response is faster.

Yes, that's clear and an important issue.

>Proposed new sentence:
>For best user experience, then, it is important to
>return a response, including NXDomain, rather than have a lame delegation.

I think that technically a lame delegation includes getting an error
from the server. So by itself a lame delegation is not a problem. The
problem is getting a timeout.



Re: [DNSOP] on staleness of code points and code (mentions MD5 commentary from IETF98)

2017-03-29 Thread Philip Homburg
In your letter dated Tue, 28 Mar 2017 19:23:16 +0200 you wrote:
>On 28 Mar 2017, at 12:37, Philip Homburg wrote:
>
>> So it would be best if a validator that implements MD5 would still return
>> NXDOMAIN if validation fails, but would keep the AD-bit clear even if
>> validation passes for a domain signed using MD5.
>
>In the interest of nitpick correctness, please return SERVFAIL there, 
>not NXDOMAIN :)

Indeed. Though if somebody is foolish enough to sign with MD5, maybe they should
get a NXDOMAIN :-)




Re: [DNSOP] on staleness of code points and code (mentions MD5 commentary from IETF98)

2017-03-28 Thread Philip Homburg
In your letter dated Tue, 28 Mar 2017 11:01:01 +0100 you wrote:
>Evan Hunt  wrote:
>>
>> MD5 is known to be breakable, but it's not *as* breakable as a zone that
>> hasn't been signed, or a resolver that hasn't turned on validation.
>
>It features Postscript, PDF/JPEG, and GIF MD5 quines (where the MD5 hash
>of the document appears in the text of the document itself) and is itself
>an MD5 quine in two different ways (PDF and NES ROM polyglot).

What makes it bad in the case of DNSSEC is that in various ways DNSSEC
validators indicate to the user that a result is validated without also 
reporting the algorithm(s) used.

So for any piece of client code that takes a security decision based on that
data, allowing weak algorithms or parameters means that either all of DNSSEC
should be treated as insecure, or potentially insecure configurations are
without warning treated as secure.

So it would be best if a validator that implements MD5 would still return
NXDOMAIN if validation fails, but would keep the AD-bit clear even if validation
passes for a domain signed using MD5.
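
The policy proposed here — still validate, but never report a weak-algorithm
chain as secure — can be sketched like this (my own illustration; the "weak"
set is an assumed policy choice using the IANA numbers for RSAMD5 and DSA,
and the failure case returns SERVFAIL per the nitpick in the reply upthread):

```python
# Assumed policy set: algorithms we still validate but refuse to call secure.
WEAK_ALGORITHMS = {1, 3}  # 1 = RSAMD5, 3 = DSA/SHA-1 (IANA numbers)

def response_status(chain_validates: bool, algorithm: int):
    """Return (rcode, ad_bit) for an answer after DNSSEC validation.

    A failed chain is still treated as bogus, but a chain that only
    validates under a weak algorithm gets the AD bit cleared."""
    if not chain_validates:
        return ("SERVFAIL", False)
    return ("NOERROR", algorithm not in WEAK_ALGORITHMS)

print(response_status(True, 1))   # ('NOERROR', False)  -- MD5: no AD bit
print(response_status(True, 8))   # ('NOERROR', True)   -- e.g. RSASHA256
print(response_status(False, 8))  # ('SERVFAIL', False)
```

This keeps the resolver's filtering behaviour intact while ensuring client
code that trusts the AD bit never treats an MD5-signed answer as secure.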



Re: [DNSOP] WG review of draft-ietf-homenet-dot-03

2017-03-21 Thread Philip Homburg
> This .home / .homenet issue has already been going on for a very
> long time. The longer we wait with resolving this issue, the worse
> the deployment situation will be of software mixing .home vs
> .homenet.

Do we really expect homenet to be only ever used in a 'home'? It seems to
me that homenet is an interesting technology that would work in any small
IPv6 network, e.g. a small office.

In that context, using the string 'homenet' would be very confusing to users
outside a home context. Maybe reserving a name that has less context would
be better.



Re: [DNSOP] WG review of draft-ietf-homenet-dot-03

2017-03-21 Thread Philip Homburg
> FWIW, when adding DANE support to Postfix, it was plainly obvious
> that DNSSEC validation belongs in the local resolver, and Postfix
> just needs to trust its "AD" bit.  The only thing missing from the
> traditional libresolv API is some way for the application to specify
> the resolver address list from the application (as "127.0.0.1"
> and/or "::1").  Some systems have a newer stub API (res_nquery,
> ...), but this API is not yet sufficiently universal.

For me (not DANE, but SSHFP, not a lot of difference) it was very clear that
an interface like getdns is a lot better than sending DNS packets to localhost
and hoping that something will do the right thing.

Obviously, getdns could be implemented by talking to a local recursive
resolver. But that's just an implementation detail.




Re: [DNSOP] I-D Action: draft-ietf-dnsop-isp-ip6rdns-03.txt

2017-03-16 Thread Philip Homburg
>1. I do not think there is consensus that having PTRs is or is not a best
>practice, so emphasizing the lack of consensus lets us move on to what an
>ISP can do, if they care to do anything.
>The first paragraph has been overhauled to say "While the need for a PTR
>record and for it to match
>   is debatable as a best practice, some network services [see Section 3]
>still do rely on PTR lookups, and some check the source address of
>incoming connections and verify that the PTR and A records match before
>providing service."

Is it possible to have a separate section about e-mail?

In my experience, without reverse DNS it is essentially impossible to have
mail delivered to the internet at large.

So where most uses of PTR records are a nice to have to can be decided locally,
for e-mail it is other parties on the internet that force the use of PTR
records.

At the same time, if someone is capable of operating a mail server then 
operating an auth. DNS server is not really out of line.

So I'd like some text that describes the importance of reverse DNS for e-mail
and then basically says that if an ISP allows customers to handle their
own outgoing e-mail then that ISP SHOULD provide customers with a way of
setting up PTR records for those mail servers, even if it is just delegating
part of the name space by setting up NS records.

Do you have a reference for the following statement?
Serving ads: "This host is probably in town.province."  An ISP that does not
provide PTR records might affect somebody else's geolocation.

Extracting geo information from reverse DNS is very hard. As far as I know,
geo location services for IPv4 mostly rely on other sources. 

The following is not clear to me:
Some ISP DNS administrators may choose to provide only a NXDomain
response to PTR queries for subscriber addresses. [...]
Providing a negative response in response to PTR
queries does not satisfy the expectation in [RFC1912] for entries to
match.  Users of services which are dependent on a successful lookup
will have a poor experience.  For instance, some web services and SSH
connections wait for a DNS response, even NXDOMAIN, before
responding.

Why would an NXDOMAIN response to a PTR query have a negative impact
on performance? If anything, it would be faster because it saves a forward
lookup.

Maybe you want to say that a PTR lookup has to result in a quick reply,
even if it is an NXDOMAIN. A delegation to a name server that does not respond
will cause a delay in applications that wait for the reverse DNS lookup to
complete.

I don't see a discussion about DNAME. Maybe that's worth adding?



Re: [DNSOP] DNSOP Call for Adoption draft-vixie-dns-rpz

2017-01-10 Thread Philip Homburg
>> 1) If the traveler's laptop/phone uses Heathrow Airport resolvers then Heathrow
>> 4) DNS is not really private so Google may offer their DNS services over HTTPS
>> 5) Governments may force Google to block popular sites, so users switch to
>> other DNS resolvers, again over HTTPS.
>
>See https://developers.google.com/speed/public-dns/docs/dns-over-https
>but I don't know how clients bootstrap that API without classic DNS.

The nice thing about Google is that they are too big to block.

It is not a problem to use plaintext DNS to connect to Google because the
CA system and possibly DANE will protect the TLS connection.

>block child porn, drug, terrorist, and malware web pages as well as
>attempts by corrupted laptops and smart phones to bypass blocks on
>port 53 and reach evil or merely unfiltered DNS/HTTPS servers including
>those run by Google?

I'm not sure what 'HTTPS proxy' means in this context. If a public wifi
at an airport can decode TLS traffic then we have a serious security hole.
(Of course SNI is a problem. Hopefully TLS 1.3 will improve that)

But the bigger problem is that most users don't really care about the
difference between a public wifi operated by an airport and a public wifi
operated by a criminal. And maybe the criminal just took over the wifi
router at a respectable restaurant.

So an ISP can null route traffic to known bad destinations (though people
may switch to Tor to deny an ISP even that option) but anything that
goes beyond that (ISP access to DNS, etc) also provides great opportunities
to attackers.

It doesn't really help much if an airport blocks malware but the bar next door
doesn't.  So for any mobile device, you have to secure the device anyhow.
Then this extra protection by ISP mostly becomes another attack vector.

So at least for mobile devices it makes sense to make sure that all 
traffic on the wifi interface is encrypted and we keep the ISP out of the
loop as much as possible.




Re: [DNSOP] DNSOP Call for Adoption draft-vixie-dns-rpz

2017-01-09 Thread Philip Homburg
>See the recent discovery that Heathrow Airport runs a
>MITM TLS server on torproject.org. Do we want them to run RPZ where they
>can disappear torproject.org alltogether? No. Do we want them to run RPZ
>to prevent travelers from getting malware installed? Yes.

Just my crystal ball:
1) If the traveler's laptop/phone uses Heathrow Airport resolvers then Heathrow
   Airport can mount a denial of service on DNS. So it does not matter if the
   malware zone is signed or not. If Heathrow Airport modifies the reply the
   traveler will be protected.
2) It makes sense to do local validation with something like getdns. If such a
   local validating resolver notices that DNSSEC validation fails ("Roadblock
   Avoidance") it may contact auth. DNS servers directly.
3) Heathrow Airport can move to deep packet inspection and also block
   direct access to malware DNS.
4) DNS is not really private so Google may offer their DNS services over HTTPS.
5) Governments may force Google to block popular sites, so users switch to
   other DNS resolvers, again over HTTPS.

After step 5, any benign malware filtering options are probably lost.

In that sense I don't care that much about the more philosophical arguments
against rpz. If you care about DNS, run a local DNSSEC validating
resolver that does roadblock avoidance and possibly falls back to 
TLS or HTTPS to some trusted resolver.
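That recommendation can be sketched as a simple fallback loop. The strategy names and the `resolve()` callback here are invented for illustration; no real stub resolver exposes this API:

```python
# Hypothetical "roadblock avoidance" order: the local network's resolver
# first, then authoritative servers directly, then DNS over HTTPS/TLS to
# a trusted resolver.
FALLBACKS = ["network-resolver", "direct-to-authoritative", "dns-over-https"]

def resolve_with_fallback(resolve):
    """resolve(strategy) -> (answer, validated); return the first validated answer."""
    for strategy in FALLBACKS:
        answer, validated = resolve(strategy)
        if validated:
            return strategy, answer
    return None, None  # every road blocked: treat the name as bogus
```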




Re: [DNSOP] "let-localhost-be-localhost".

2016-11-23 Thread Philip Homburg
In your letter dated 23 Nov 2016 11:49:28 -0500 you wrote:
>> What if localhost is just inserted in the root as the equivalent of
>> localhost. IN A 127.0.0.1
>> localhost. IN AAAA ::1
>
>Most systems I know special case a plain localhost name in the resolver or 
>cache.  The more interesting bits are .localhost which some of us 
>resolve to other addresses in 127/8.

I'm not sure how having something like 'localhost. IN AAAA ::1' has
any effect on resolving localhost.

I.e., today if you ask the root for .localhost you get
something like:
loans.  86400   IN  NSEC    locker. NS DS RRSIG NSEC
loans.  86400   IN  RRSIG   NSEC 8 1 86400 [...]

proving that localhost doesn't exist. You can add a localhost zone to your
resolver, but that works only if all stub resolvers use your resolver and
you can avoid client side validation.

After adding localhost to the root zone, the only thing that would change
is that asking the root zone for .localhost now results in
localhost. 86400   IN  NSEC    locker. A AAAA
localhost. 86400   IN  RRSIG   NSEC 8 1 86400 [...]

Which still proves that .localhost doesn't exist.

I'd say, no difference for that use case.
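The NSEC reasoning above can be checked mechanically. A minimal sketch of the "covers" test follows; it uses a plain lowercase string comparison, which happens to coincide with RFC 4034 canonical ordering for these single-label names (a real implementation must compare labels from the right, case-insensitively):

```python
def nsec_proves_nonexistence(owner, next_name, qname):
    """True if an NSEC record owner -> next_name proves qname does not exist."""
    owner, next_name, qname = (n.rstrip(".").lower()
                               for n in (owner, next_name, qname))
    # qname must fall strictly between the NSEC owner name and its
    # "next name" field; the owner itself exists, so the bounds are strict.
    return owner < qname < next_name

# The root's answer today: "loans. NSEC locker." covers localhost.
print(nsec_proves_nonexistence("loans.", "locker.", "localhost."))  # True
```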

>Putting A and AAAA records in the root is another thing that is 
>technically simple but would require a rule change at IANA, and I don't 
>think it's worth the hassle.

Does the MoU between the IETF and ICANN really say no A records in the root
zone? Or is there another policy document between IETF and IANA?




Re: [DNSOP] "let-localhost-be-localhost".

2016-11-23 Thread Philip Homburg
>The problem is that the DNSSEC solution here is kind of complicated. 
>What you'd want is an opt-out signature in the root, showing that there 
>might be an insecure delegation to .localhost, but the root is signed with 
>NSEC and there's only opt-out in NSEC3.  Technically it's not complicated 
>to change from NSEC to NSEC3, but any change to the way the root is 
>managed is a big deal since the consequences of screwing it up are so 
>large.

What if localhost is just inserted in the root as the equivalent of
localhost. IN A 127.0.0.1
localhost. IN AAAA ::1

(of course this can be done by directly inserting those entries in the root, or
by using CNAME or DNAME tricks, or even delegating localhost. to something like
as112)

I assume that anyone who wants different values for localhost can edit 
/etc/hosts
or use one of the many dns resolution tricks. This may break local validating
resolvers, but so what?




Re: [DNSOP] [OT][rant-ish] Electronics & business models (was DNSSEC operational issues long term)

2016-11-17 Thread Philip Homburg
>For most electronics equipment (pre-IoT) once you sold it your job as a
>manufacturer was basically done. You don't have to issue security
>patches for the keyboard or firmware upgrades to the monitor because
>the meaning of the wires in the VGA standard has changed out from under
>it.
>
>With anything connected to the Internet it seems the only thing that we
>can do is constantly be patching and fighting against the latest
>exploits of our protocols and implementations. Unless we are going to
>throw away all practical engineering and only use systems that are
>provably correct in a mathematics sense(*), that's probably how it is
>going to stay.
>
>There are several possible models that would be better: subscription,
>open systems (so a 3rd party can sell improvements & upgrades), and
>so on. Unfortunately nobody seems to care about these issues, since the
>vendors are making money by the fistful (a few pennies at a time) and
>policy makers take that is a sign that everything is fine.

Looking at this from a European perspective (regulations vary in
different parts of the world), you can expect the manufacturer to build a 
device that will work correctly for some period of time. I think the
European (default) minimum is 2 years.

So if the device develops a fault during that period, it has to be repaired
or replaced by the seller. Needless to say, security issues are manufacturing
defects that are not exempt from this.

If internet connected devices continue to do damage in the next years, it is
not unreasonable to expect that the manufacturers will at some point be forced
to pay for the damages caused by the abuse of those devices.

So part of selling a device that is intended to be connected to the internet is
to make sure that security issues can be patched.

At the same time, if a device stops working because of a DNS root KSK roll over
then it's reasonable to demand that the seller makes it work again.

Of course, a manufacturer can choose whatever clumsy user interface is cheapest.

My preference for general purpose, autonomous devices is that they check
if firmware updates are available. That way the manufacturer can promptly
fix security issues, but it also allows for changes in key material, etc.
I think this is easy to do. There are some things you can't protect against
(signature keys leaking) but there are plenty of other accidents that happen
already.

I guess anybody can write a BCP for that device class, it doesn't even have
to be the IETF.

A more complex class is a device that should be able to bootstrap even if only
a limited part of the internet is reachable. For example after some kind of
disaster. That may require carefully documenting what protocols are used and in
what way, to reach a stable secure state.



Re: [DNSOP] DNSSEC operational issues long term

2016-11-16 Thread Philip Homburg
>Did you see my original response? Proposals for automatic DNSSEC trust
>anchor updating *do* exist.

Is there any document that deals with the situation where a device has
been in a box for 10 years and then has to bootstrap automatically?

I'm not aware of any. But maybe there is.

Note that by and large such a device has no idea about time. NTP is not 
secure. Any key material stored on the box is no longer valid.

If the answer to DNSSEC bootstrapping is "use TLS", then there is still the
question of time: is the certificate that was stored on the box
10 years ago still usable?

Are there resolvers (and libraries like getdns) that can transition from
not having any trust anchors to full DNSSEC validation? Do other parts of
the same system see either DNSSEC failures or answers that were not
validated?




Re: [DNSOP] Fwd: New Version Notification for draft-dickinson-dnsop-dns-capture-format-00.txt

2016-11-01 Thread Philip Homburg
>We have just published a new draft on a proposed format for DNS packet 
>capture - please see below for details. We would very much appreciate 
>feedback on the overall problem discussed here in addition to the 
>details of the format proposed.

Did you consider not (partially) decoding the DNS payload and instead just
storing DNS payloads directly as binary blobs?

Experience with RIPE Atlas shows that storing the binary DNS data has a number
of advantages:
- future proof
- no maintenance required
- can store anything no matter how broken
- lack of processing equals lack of bugs
- parsers can be based on original DNS standards instead of a new scheme
  plus all DNS standards for the details.
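As a sketch of why the blob approach stays simple: the capture pipeline only copies bytes, and a standard RFC 1035 parser can still decode them later, on demand. The hand-rolled header decoder below is illustrative only, not part of any proposed format:

```python
import struct

# A captured DNS message is stored verbatim; nothing is decoded at capture
# time, so broken or exotic messages are stored just as easily as valid ones.
captured_blob = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # bare query header

def decode_header(blob):
    """Decode the fixed 12-byte header (RFC 1035 section 4.1.1) on demand."""
    ident, flags, qd, an, ns, ar = struct.unpack("!HHHHHH", blob[:12])
    return {
        "ID": ident,
        "QR": bool(flags & 0x8000),
        "RD": bool(flags & 0x0100),
        "QDCOUNT": qd, "ANCOUNT": an, "NSCOUNT": ns, "ARCOUNT": ar,
    }
```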

Another issue is to consider whether the format would benefit from local
extensions, for example enrichment of data according to local specifications.
If so, then BSON would be another format to consider.




Re: [DNSOP] DNS-in-JSON draft

2016-09-06 Thread Philip Homburg
In your letter dated Mon, 5 Sep 2016 15:47:37 +0800 you wrote:
>Finally, I note that the RIPE Atlas system uses a type of DNS JSON
>representation when you use their API to query for DNS measurement
>results. You can get a sample here:
>
>https://atlas.ripe.net/api/v2/measurements/5009360/results?start=1472860800&stop=1472947199&format=txt
>
>The RIPE Atlas results match your proposal pretty well - you can see
>it in the "results" object there - although they use "abuf" instead of
>"messageOctets!".

What you are referring to is sort of the unofficial Atlas format. This is
what Atlas probes provide. The real format is the 'abuf decoder' in Sagan.

Here's an example of the output of the abuf decoder, when converted to JSON (by
itself the abuf decoder results in python objects):

{"HEADER": {"AA": true, "QR": true, "AD": false, "NSCOUNT": 1, "QDCOUNT": 1, 
"ANCOUNT": 0, "TC": false, "RD": false, "ARCOUNT": 1, "CD": false, 
"ReturnCode": "NXDOMAIN", "OpCode": "QUERY", "RA": false, "Z": 0, "ID": 36316}, 
"AuthoritySection": [{"Retry": 900, "Name": ".", "NegativeTtl": 86400, 
"Refresh": 1800, "MasterServerName": "www.yeti-dns.org.", "Expire": 604800, 
"MaintainerName": "hostmaster.yeti-dns.org.", "TTL": 86400, "Serial": 
2016050801, "Type": "SOA", "Class": "IN", "RDlength": 51}], "QuestionSection": 
[{"Qclass": "IN", "Qtype": "SOA", "Qname": "yetiroot."}], "EDNS0": 
{"ExtendedReturnCode": 0, "Option": [{"OptionCode": 3, "OptionName": "NSID", 
"NSID": "dahu1.yeti.eu.org", "OptionLength": 17}], "UDPsize": 4096, "Version": 
0, "Z": 0, "Type": "OPT", "Name": "."}}

Two obvious differences are the use of 'true' and 'false' for boolean bit
fields and the use of names ("Class": "IN") instead of just a number.

I'd like to point out that DNSSEC tends to use base64, so we also use that.
For example:

{"AuthoritySection": [{"Target": "sec3.apnic.net.", "TTL": 2941, "Type": "NS", 
"Class": "IN", "RDlength": 13, "Name": "ripe.net."}, {"Target": 
"pri.authdns.ripe.net.", "TTL": 2941, "Type": "NS", "Class": "IN", "RDlength": 
14, "Name": "ripe.net."}, {"Target": "tinnie.arin.net.", "TTL": 2941, "Type": 
"NS", "Class": "IN", "RDlength": 14, "Name": "ripe.net."}, {"Target": 
"sns-pb.isc.org.", "TTL": 2941, "Type": "NS", "Class": "IN", "RDlength": 16, 
"Name": "ripe.net."}, {"Target": "sec1.apnic.net.", "TTL": 2941, "Type": "NS", 
"Class": "IN", "RDlength": 7, "Name": "ripe.net."}, {"Target": "ns3.nic.fr.", 
"TTL": 2941, "Type": "NS", "Class": "IN", "RDlength": 12, "Name": "ripe.net."}, 
{"KeyTag": 11587, "Name": "ripe.net.", "Algorithm": 5, "SignerName": 
"ripe.net.", "Labels": 2, "Signature": 
"C4WMH46cBWT/hhWvVrStIdXrqHA2fwfphGkx9+6wbss+mHg8mbfKvaFfcg43/MZh/PwdyAQkRN8I+v/OZ1JA3Gt3KvDc00PebtQZBYlXxssZVNtcx45DG5a3M/RGzhqjM5hfuigLmghIEhuvMhtrhmC4WS/7B3KrYOenFQUJmxk=",
 "Class": "IN", "TTL": 2941, "O
 riginalTTL": 3600, "SignatureInception": 1403074827, "SignatureExpiration": 
1405670427, "Type": "RRSIG", "TypeCovered": "NS", "RDlength": 156}], 
"QuestionSection": [{"Qclass": "IN", "Qtype": "A", "Qname": "www.ripe.net."}], 
"AdditionalSection": [{"Name": "pri.authdns.ripe.net.", "TTL": 1688, "Address": 
"193.0.9.5", "Type": "A", "Class": "IN", "RDlength": 4}, {"Name": 
"pri.authdns.ripe.net.", "TTL": 1688, "Address": "2001:67c:e0:0:0:0:0:5", 
"Type": "", "Class": "IN", "RDlength": 16}], "HEADER": {"AA": false, "QR": 
true, "AD": true, "NSCOUNT": 7, "QDCOUNT": 1, "ANCOUNT": 2, "TC": false, "RD": 
true, "ARCOUNT": 5, "CD": false, "ReturnCode": "NOERROR", "OpCode": "QUERY", 
"RA": true, "Z": 0, "ID": 22575}, "ERROR": [["_do_rr", 576, "offset out of 
range: buf size = 576"], ["additional", 574, "_do_rr failed, additional record 
2"]], "AnswerSection": [{"Name": "www.ripe.net.", "TTL": 20941, "Address": 
"193.0.6.139", "Type": "A", "Class": "IN", "RDlength": 4}, {"KeyTag": 11587, 
"Name": "www
 .ripe.net.", "Algorithm": 5, "SignerName": "ripe.net.", "Labels": 3, 
"Signature": 
"I7lQZF9ia3X83KTY01/orh3qRqAS0BYeozB7SZ/juSk0RfeTngWoIIkLzvbBV11ORrmr93FkH5xPrPtT9Wf4c0QAqZRN+RyyP8K5JaMI4TGT9cc2mAS5Gf8elg2c/fI2LvIMjVXKpkxMcEh/bSrbpBiS8tjR8z2p60CWOir0sE0=",
 "Class": "IN", "TTL": 20941, "OriginalTTL": 21600, "SignatureInception": 
1403074827, "SignatureExpiration": 1405670427, "Type": "RRSIG", "TypeCovered": 
"A", "RDlength": 156}]}

Another thing worth pointing out is that getdns has its own set of field names 
for
representing DNS. So it may be worth aligning this document as much as possible
with getdns.



Re: [DNSOP] DNS-in-JSON draft

2016-09-06 Thread Philip Homburg
In your letter dated Tue, 6 Sep 2016 06:26:58 + you wrote:
> I was more commenting on the fact that it is escaping in a format
> that already support escaping. The JSON output would be double
> escaped and implementations would need to unescape it themselves
> rather than let JSON handle it.

JSON strings are defined to be UTF-8. You get all kinds of weird problems
if you try to store 8-bit binary data directly in JSON strings.
(I.e. using the \u escape sequence)

An encoding that only requires US-ASCII to be stored in JSON strings is
a lot safer.
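A quick demonstration of the problem, and of why a US-ASCII encoding such as base64 avoids it:

```python
import base64
import json

raw = bytes(range(256))  # arbitrary binary data, e.g. a DNS message

# Binary data is generally not valid UTF-8, so it cannot be placed in a
# JSON string directly.
try:
    raw.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

# base64 yields pure US-ASCII, which every JSON implementation handles,
# and the original bytes round-trip exactly.
doc = json.dumps({"abuf": base64.b64encode(raw).decode("ascii")})
recovered = base64.b64decode(json.loads(doc)["abuf"])
```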



Re: [DNSOP] Working Group Last Call draft-ietf-dnsop-isp-ip6rdns

2016-04-29 Thread Philip Homburg
In your letter dated Fri, 29 Apr 2016 13:33:29 +0100 you wrote:
>"needed" is rather a strong word historically reverse DNS was a de
>facto requirement for access to some anonymous FTP servers (a use case
>that is now rather long in the tooth) and it was seized on by mail
>systems that were trying to deal with spam because
>
>Mayhap -- but reputation services for IPv6 are still in their infancy
>and so there needs to be some way to distinguish "legitimate" mail
>servers from malware infected end-user devices.

I'm talking about operational experience today.

Not whether it is required by any standards document or not. And of course
I cannot predict the future in this regard.

But operationally, if you are a small party trying to deliver e-mail today,
you have to have reverse DNS for IPv6 otherwise some mail will be rejected or
discarded. And some really big mail providers are right now rejecting mail
if there is no reverse DNS.






Re: [DNSOP] Working Group Last Call draft-ietf-dnsop-isp-ip6rdns

2016-04-29 Thread Philip Homburg
In your letter dated Fri, 29 Apr 2016 14:26:27 +0200 you wrote:
>I see two simple solutions for that. You mention one (ip6.arpa DNS
>delegation), since, as you said, people who want to manage a mail
>server probably can manage a DNS zone.
>
>There is another one, apparently not mentioned by the draft but widely
>used in the VPS / dedicated server world: give the users a Web
>interface / API so they can add PTR themselves. After all, unlike what
>the current draft seems to imply, there is zero need to give a PTR to
>every thermostat or refrigerator in the house. You need PTR for a few
>machines only, such as the mail server. This can be delegated to the
>user.

Indeed. This second option is what my ISP is planning to do. Except that
they don't actually care about customers who need reverse DNS now.

But yes, manually adding a PTR is also a very good solution. Especially,
because it will be very hard for malware to automate that. Making it
hard for malware to send mail from a system with reverse DNS.



Re: [DNSOP] Working Group Last Call draft-ietf-dnsop-isp-ip6rdns

2016-04-29 Thread Philip Homburg
In your letter dated Fri, 29 Apr 2016 13:54:44 +0200 you wrote:
>Disclaimer: Personally I think that the whole notion of reverse IP is
>ridiculous, especially in IPv6. I proposed that we skip the whole
>notion in IPv6, possibly providing some alternate, non-DNS, method to
>get hostname from IPv6 addresses for the rare case where that is useful.
>
>However, administrators seem to really like reverse DNS. I don't know
>why. It's a mystery to me, but they do.
>
>Having said all of that, I don't see any strong requirement that this
>document provide motivation for reverse DNS solutions for IPv6. People
>ask about the problem, and want solutions, and it would be good to have
>a document to point them to with some help.

The problem I have with the current line of thinking (Section 2.4 in this
draft) is that there is a big disconnect between where reverse DNS is
needed and what ISPs are trying to do.

Whether we like it or not, mail admins are adding reverse DNS checks.
In fact, some really big mail providers require reverse DNS. Personally,
I also like to see names in tcpdump output, but that's just a nice to have.

So, ISPs not doing reverse DNS for IPv6, like my current ISP, are making it
impossible to use your own mail server to deliver mail over IPv6. I think
they are doing a serious disservice to the open internet.

In any case, people operating mail servers have to be more technical than
the average consumer. We are at the moment nowhere near just dropping a
box at somebody's home and expecting that to be a well-maintained mail server.

But then, when reverse DNS is discussed, the discussion tries to cater to
non-technical consumers: the group that has no need for reverse DNS at the
moment.

For me, the obvious starting point for reverse DNS is to just delegate
the space of the prefix. Customers that want to run mail servers can also
set up a DNS server.

Instead, there are too many options for automatic reverse DNS that are by and
large not needed.
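For what it's worth, computing the zone an ISP would have to delegate for a customer prefix is mechanical. A sketch, using a documentation prefix; a real delegation would of course use the customer's assigned prefix:

```python
import ipaddress

def ip6_arpa_zone(prefix):
    """Return the ip6.arpa zone that covers an IPv6 prefix."""
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen % 4:
        raise ValueError("reverse delegation needs a nibble-aligned prefix")
    # Take the leading prefixlen/4 hex nibbles and reverse them.
    nibbles = net.network_address.exploded.replace(":", "")
    used = nibbles[: net.prefixlen // 4]
    return ".".join(reversed(used)) + ".ip6.arpa."

# A /48 customer gets a single zone to run their own PTR records in:
print(ip6_arpa_zone("2001:db8:1234::/48"))  # 4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa.
```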


