On 10/05/2010 11:56 AM, Phillip Hallam-Baker wrote:
Clearly if you have two controls, A and B and BOTH must be compromised,
the system is less likely to be compromised than either A or B.
But the design approach taken in the Hoffman et al. proposal is that
publication of a DNSSEC assurance for a cert disables verification of
the PKIX chain unless the 'preferences' flag is set. This flag will be
buried in a base64-encoded sub-field.
In practice only a proportion of clients will deploy this mechanism. So
if A is PKIX and B is DNSSEC, an attacker can succeed if either A or B
is compromised in one configuration, and if A alone (or both A and B)
is compromised in the other.
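The weakest-link point above can be sketched numerically. This is a toy illustration, not from the original post: assuming controls A and B fail independently with hypothetical probabilities, requiring BOTH to be broken (defense in depth) beats accepting EITHER (where the attacker picks the weaker path).

```python
# Toy model (illustrative only): independent compromise probabilities
# for two controls, A and B.

def p_and(p_a: float, p_b: float) -> float:
    """Attacker must break A AND B -- defense in depth."""
    return p_a * p_b

def p_or(p_a: float, p_b: float) -> float:
    """Attacker wins by breaking A OR B -- weakest link."""
    return 1 - (1 - p_a) * (1 - p_b)

p_a, p_b = 0.01, 0.02  # hypothetical per-control compromise odds
print(p_and(p_a, p_b))  # ~0.0002: far below either control alone
print(p_or(p_a, p_b))   # ~0.0298: worse than either control alone
```

The point is qualitative: letting a client accept whichever of the two validation paths succeeds increases the attack surface over either path by itself.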
+1
In practice, only a proportion of servers will deploy this mechanism
until such time as Firefox and Chrome are willing to shut them off.
I've read that because DNSSEC records are large, some resolvers cannot
pass them on. Testing a few months ago indicated that it worked on
something like 4 out of 5 internet connections I tried it on. But it's
quite possible that there are little wifi-router-firewall boxes in
millions of homes that don't expect such large packets on port 53.
So clients will have to continue to operate in the absence of DNSSEC
information for the foreseeable future.
It's also reasonable to suspect that an attacker (who in our analysis is
already presumed to have a fraudulent server cert and the ability to
make use of it, i.e., MitM) would also be able to downgrade DNS to strip
out this "preferences" flag (if some intermediate box isn't doing it
already).
Therefore the attacker may be able to choose to attack via a DNSSEC
weakness (e.g., a website vuln at the site's DNS registrar) or via the
PKI (e.g., a rogue sub-CA). So the attack surface is, in fact, increased.
Unless I'm missing something about how DNSSEC prevents downgrade
attacks, which is quite possible.
People seem to be confusing the security of the cryptographic protocols
with the security of the infrastructures that support them. The weakest
link in any competently designed security scheme is people and
processes.
Perhaps only because security designers have successfully washed their
hands of the harder part of the problem?
The PKIX infrastructure has been operating as a security
infrastructure for 15 years; its flaws are reasonably well understood at
this point.
That's a very nice way of saying it.
I would have said it this way:
IT SUCKS!!!!
The fact that the national telephone company of Qwertystan can delegate
a sub-CA that can delegate a sub-CA that can issue a certificate which
enables the MitM of the connection between my notebook and my company's
VPN server is a scandal. If anyone had ever proposed such a scheme to
people who weren't already steeped in the status quo, they'd be laughed
out of their industry.
DNS on the other hand is a non-security infrastructure that
people appear to want to immediately co-opt to duplicate functions of PKIX.
Rather than wanting to co-opt or duplicate the current PKI scheme, it
may be that people are simply eager to rid themselves of it.
Not saying it's perfect, I honestly haven't looked into it very deeply,
but DNSSEC holds a lot of promise:
1. It builds on top of DNS, which is reasonably mature.
2. It was designed for actual security with knowledge of modern attacks
and the benefit of hindsight.
3. It was designed at all, rather than simply evolving to enable various
sets of business deals.
4. It roots to one entity that is ostensibly neutral and non-profit. They
have a track record which can be evaluated. They largely own DNS anyway.
5. Domain holders already understand the need to handle their domain
name registrations as super-critical assets.
6. It appears to delegate more control to the domain holder to define
and authenticate his own data. He doesn't have to deal with weird "per
server licensing" (whatever that means) agreements.
7. It appears to reduce or eliminate some recurring fees associated with
server authentication.
Of course, this list leaves out anything bad about it. Possibly, it's
worse on balance.
"What could possibly go wrong"
Lots, particularly during the transition period when parallel systems
must be trusted.
Which is why such a proposal needs a lot of thinking and real-world testing.
Which is why it would have been better for it to have been proposed earlier.
Given that the problem that instigated this proposal is mis-issue of a
certificate, it would appear to me that we should look at deploying
controls that reduce the probability of mis-issue of a certificate
It's not a question of probability, it's the option of the attacker. If
you have something worth protecting, you need to be defending against a
targeted attack. The attacker may or may not have the capability to
obtain a usable private key, but if he can, he can be expected to use
it. So it's not a question of "how probable" but of "who", i.e., who
are the attackers I wish to defend against.
There are many users of the technology who do not desire to trust (or
who actively mistrust) many of the entities controlling CAs.
before we rush to deploy a completely new validation scheme for
certificates in the six month timescale being proposed in this charter.
In particular, it would be rather useful to have controls of the form:
* Certificates only valid if issued into a certificate chain with
specified properties
* Obtain additional authentication according to protocol X and key Y.
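A control of the first kind could plausibly be expressed as client-side policy. The following is a hypothetical sketch; all names and the chain representation are invented for illustration, and a real deployment would use X.509 mechanisms such as name constraints rather than string matching.

```python
# Hypothetical client policy: only accept certificates issued into a
# chain with specified properties (here: a whitelist of issuer names
# and a cap on delegation depth). Names are invented for illustration.

ALLOWED_ISSUERS = {"Example Root CA", "Example Corp Issuing CA"}
MAX_CHAIN_DEPTH = 3  # reject deep sub-CA-of-sub-CA delegation chains

def chain_acceptable(chain: list[str]) -> bool:
    """chain lists issuer names from the leaf's issuer up to the root."""
    if len(chain) > MAX_CHAIN_DEPTH:
        return False
    return all(issuer in ALLOWED_ISSUERS for issuer in chain)

# A chain rooted in the expected CA passes; one routed through an
# unexpected national telco sub-CA does not.
print(chain_acceptable(["Example Corp Issuing CA", "Example Root CA"]))
print(chain_acceptable(["Qwertystan Telecom Sub-CA", "Example Root CA"]))
```

The depth cap addresses the sub-CA-of-a-sub-CA scenario complained about earlier in the thread; the issuer whitelist is the "specified properties" idea in its crudest form.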
Anything limiting the number and scope of CAs would be an improvement.
But such improvements could have been implemented at any time
previously, yet they were not. Maybe the prospect of DNSSEC replacing it
will provide motivation for PKI to improve. But by then the train may
already be rolling.
- Marsh
_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop