On Jul 28, 2010, at 9:22:29 AM, Peter Gutmann wrote:

> Steven Bellovin <s...@cs.columbia.edu> writes:
>> For the last issue, I'd note that using pki instead of PKI (i.e., many 
>> different per-realm roots, authorization certificates rather than identity 
>> certificates, etc.) doesn't help: Realtek et al. still have no better way or 
>> better incentive to revoke their own widely-used keys.
> I think the problems go a bit further than just Realtek's motivation, if you 
> look at the way it's supposed to work in all the PKI textbooks it's:
>  Time t: Malware appears signed with a stolen key.
>  Shortly after t: Realtek requests that the issuing CA revoke the cert.
>  Shortly after t': CA revokes the cert.
>  Shortly after t'': Signature is no longer regarded as valid.
> What actually happened was:
>  Time t: Malware appears signed with a stolen key.
>  Shortly after t: Widespread (well, relatively) news coverage of the issue.
>  Time t + 2-3 days: The issuing CA reads about the cert problem in the news.
>  Time t + 4-5 days: The certificate is revoked by the CA.
>  Time t + 2 weeks and counting: The certificate is regarded as still valid by
>    the sig-checking software.
> That's pretty much what you'd expect if you're familiar with the realities of 
> PKI, but definitely not PKI's finest hour.  In addition you have:
>  Time t - lots: Stuxnet malware appears (i.e. is noticed by people other than
>    the victims)
>  Shortly after t - lots: AV vendors add it to their AV databases and push out
>    updates
> (I don't know what "lots" is here, it seems to be anything from weeks to
> months depending on which news reports you go with).
> So if I'm looking for a defence against signed malware, it's not going to be 
> PKI.  That was the point of my previous exchange with Ben, assume that PKI 
> doesn't work and you won't be disappointed, and more importantly, you now have
> the freedom to design around it to try and find mechanisms that do work.

When I look at this, though, little of the problem is inherent to PKI.  Rather, 
there are faulty communications paths.

You note that at t+2-3 days, the CA read the news.  Apart from the question of 
whether or not "2-3 days" is "shortly after" -- the time you suggest the next 
step takes place -- how should the CA or Realtek know about the problem?  Did 
the folks who found the offending key contact either party?  Should they have?  
The AV companies are in the business of looking for malware or reports thereof; 
I think (though I'm not certain) that they have a sharing agreement for new 
samples.  (Btw -- I'm confused by your definition of "t" vs. "t-lots".  The 
first two scenarios appear to be "t == the published report appearing"; the 
third is confusing, but if you change the timeline to "t+lots" it works for "t 
== initial, unnoticed appearance in the wild".  Did the AV companies push 
something out long before the analysis showed the stolen key?)

Suppose, though, that Realtek has some Google profile set up to send them 
reports of malware affecting any of their products.  Even leaving aside false 
positives, once they get the alert they should do something.  What should that 
something be?  Immediately revoke the key?  The initial reports I saw were not 
nearly specific enough to identify which key was involved.  Besides, maybe the 
report was not just bogus but malicious -- a DoS attack on their key.  They 
really need to investigate it; I don't regard 2-3 days as unreasonable to 
establish communications with a malware analysis company you've never heard of 
and which has to verify your bona fides, check it out, and verify that the 
allegedly malsigned code isn't something you actually released N years ago as a 
release for a minor product line you've since discontinued.  At that point, a 
revocation request should go out; delays past that point are not 
justifiable.  The issue of software still accepting it, CRLs notwithstanding, 
is more a sign of buggy code.
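
To make that concrete, here's a minimal sketch of what non-buggy checking would 
look like (the helper names, serial number, and CRL URL are all made up, not any 
real verifier's API): the checker rejects a signature whose cert appears on the 
CRL, and also rejects when it can't get fresh revocation data at all, instead of 
quietly falling back to "still valid".

    # Hypothetical helper: a real verifier would download and parse the issuer's
    # CRL (or make an OCSP query); here it just returns canned data.
    def fetch_revoked_serials(crl_url: str) -> set:
        return {"49:63:placeholder"}      # made-up serial for the stolen cert

    def code_signature_trusted(cert_serial: str, crl_url: str) -> bool:
        # Assumes the cryptographic signature check itself has already passed;
        # this only decides whether the signing cert is still trustworthy.
        try:
            revoked = fetch_revoked_serials(crl_url)
        except OSError:
            # Failing open here ("CRL unreachable, so keep trusting") is exactly
            # the buggy behaviour described above.
            return False
        return cert_serial not in revoked

    if __name__ == "__main__":
        print(code_signature_trusted("49:63:placeholder",
                                     "http://crl.example/ca.crl"))   # -> False

The except branch is the interesting design choice: it's the step the "still 
regarded as valid, two weeks and counting" behaviour above gets wrong.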

The point about the communications delay is that it's inherent to anything 
involving the source company canceling anything -- whether it's a PKI cert, a 
pki cert, a self-validating URL, a KDC, or magic fairies who warn sysadmins not 
to trust certain software.  

What's interesting here is the claim that AV companies could respond much 
faster.  They have three inherent advantages: they're in the business of 
looking for malware; they don't have to complete the analysis to see if a 
stolen key is involved; and they can detect problems after installation, 
whereas certs are checked only at installation time.  Of course, speedy action 
can have its own problems; see 
 for a recent example, but there have been others.  
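
To spell out that third advantage, here's a toy contrast (every helper is a 
hypothetical stub, not any real installer or AV engine) between a check made 
exactly once at install time and a rescan that revisits already-installed files 
whenever the database is updated.

    def signature_acceptable(path: str) -> bool:
        return True                      # stub: the cert looked fine on install day

    def matches_malware_db(path: str, db_version: int) -> bool:
        return db_version >= 2 and path == "suspect_driver.sys"   # stub

    installed = []

    def install(path: str) -> None:
        if signature_acceptable(path):   # checked once at install, then forgotten
            installed.append(path)

    def av_rescan(db_version: int) -> list:
        # every database push gets another look at files accepted long ago
        return [p for p in installed if matches_malware_db(p, db_version)]

    install("suspect_driver.sys")
    print(av_rescan(db_version=1))       # [] -- nothing known yet
    print(av_rescan(db_version=2))       # ['suspect_driver.sys'] -- caught later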

Note that I'm not saying that PKI is a good solution.  But it's important to 
factor out the different contributing factors in order to understand what needs 
to be fixed.  It's also important to understand the failure modes of 
replacements.  (To pick a bad example, Kerberos for the Internet is extremely 
vulnerable to compromise of the KDC, unless you use the public key variants of 
the protocol.)
                --Steve Bellovin, http://www.cs.columbia.edu/~smb
