Hector,

Good analysis. There are many dimensions along which one could rate the attacks, so I just had to pick one (well, two).
I had some trouble interpreting from your message whether some of the attacks should be rated differently (according to the two parameters I used) or not. Do you have any specific suggestions?

-Jim

Hector Santos wrote:
> ----- Original Message -----
> From: "Jim Fenton" <[EMAIL PROTECTED]>
>
>> I do recognize my role on this is as a document editor, and
>> I will adjust the document to group consensus, but I wanted you
>> to know where I was coming from.
>
> I hate going crazy on people's documents. :-) So I will minimize my
> comments to illustrating how I view the Attacks and Ratings.
>
> With my manager hat on, I read this document and ask myself a few
> questions:
>
> "I trust the work. But what is the summary?"
>
> 1) Which attacks do I need to be concerned about?
> 2) Which attacks are addressed by DKIM/SSP?
>
> For the first question, I am interested in the "worst case" scenario.
> To do this, I can associate the ratings with values, let's say:
>
>     High   = 10
>     M/H    = 8
>     Medium = 5
>     Low    = 1
>
> and get a rough rating for each attack by adding impact + likelihood.
>
> When you do this, you will see the top 4 attacks are:
>
> --------------------------------------------+--------+------------+
> Attack Name                                 | Impact | Likelihood |
> --------------------------------------------+--------+------------+
> Denial-of-service attack against verifier   | High   | Medium     |
> Denial-of-service attack against key service| High   | Medium     |
> Display name abuse                          | Medium | High       |
> Compromised system within originator's net  | High   | Medium     |
>
> One might ask, "Is this a true reflection of the high-concern
> attacks?"
>
> So I will also analyze this a different way, by measuring the
> likelihood, i.e., how often do I have to worry about these attacks?
>
> o Ordered by Likelihood/Impact
>
> --------------------------------------------+--------+------------+
> Attack Name                                 | Impact | Likelihood |
> --------------------------------------------+--------+------------+
> Display name abuse                          | Medium | High       |
> Signed message replay                       | Low    | High       |
> Chosen message replay                       | Low    | M/H        |
> Denial-of-service attack against verifier   | High   | Medium     |
> Denial-of-service attack against key service| High   | Medium     |
> Compromised system within originator's net  | High   | Medium     |
> Theft of delegated private key              | Medium | Medium     |
> Body length limit abuse                     | Medium | Medium     |
> Falsification of key service replies        | Medium | Medium     |
> Verification probe attack                   | Medium | Medium     |
> Canonicalization abuse                      | Low    | Medium     |
> Theft of private key for domain             | High   | Low        |
> Private key recovery via side-channel attack| High   | Low        |
> Compromise of key server                    | High   | Low        |
> Publication of bad key records and/or sigs  | High   | Low        |
> Cryptographic weaknesses in sigs            | High   | Low        |
> Use of revoked key                          | Medium | Low        |
> --------------------------------------------+--------+------------+
>
> When viewed from this perspective, one might begin to question the
> level of impact, and also ask whether the attack is detectable and
> whether a system can recover from it.
>
> One might suggest that all theft has a high impact.
>
> One might also suggest that any high-occurrence attack will always
> have a high impact in some form, unless a detection concept is
> implemented to minimize the impact.
>
> For example, one might suggest that if the Signed/Chosen message
> replay attacks have a high likelihood of occurrence, then the impact
> is high, rather than low.
>
> So from a DKIM/SSP perspective, I think what might be useful is a 3rd
> column, called Detection/Recovery, which also rates the effectiveness
> of DKIM/SSP in detecting and/or recovering from the attack.
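The ranking procedure described above (assign each rating a numeric value, add impact and likelihood, and sort descending) can be sketched in Python. The rating values and sample rows are taken from the tables above; the `score` helper and variable names are only illustrative, not part of any DKIM tooling:

```python
# Illustrative sketch of the proposed rating scheme:
# map each rating to a value, score = impact + likelihood, sort descending.
RATING = {"High": 10, "M/H": 8, "Medium": 5, "Low": 1}

# (attack name, impact, likelihood) -- a few rows from the table above
attacks = [
    ("Denial-of-service attack against verifier", "High", "Medium"),
    ("Display name abuse", "Medium", "High"),
    ("Signed message replay", "Low", "High"),
    ("Theft of private key for domain", "High", "Low"),
]

def score(impact, likelihood):
    """Rough combined score: impact value + likelihood value."""
    return RATING[impact] + RATING[likelihood]

# sorted() is stable, so attacks with equal scores keep their table order
ranked = sorted(attacks, key=lambda a: score(a[1], a[2]), reverse=True)
for name, impact, likelihood in ranked:
    print(f"{score(impact, likelihood):2d}  {name}")
```

Running this on the sample rows puts both a High/Medium and a Medium/High attack at the top with a combined score of 15, matching the "worst case" ordering in the first table.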
>
> I know this can be highly subjective, but we can probably use a
> rating like:
>
>     MEDIUM - Detectable using a deterministic non-DKIM/SSP method
>     HIGH   - Detectable using a deterministic DKIM/SSP method
>     LOW    - Not detectable until reported by human/scoring
>
> By doing it this way, we get to see an angle on the effectiveness of
> the DKIM/SSP protocol, how it can be assisted by other
> recommendations, and which attacks we really have to worry about.
>
> This is a rough cut, so it could be wrong:
>
> o LOW - Not detectable until reported by human/scoring
>
> --------------------------------------------+--------+------------+
> Attack Name                                 | Impact | Likelihood |
> --------------------------------------------+--------+------------+
> Compromised system within originator's net  | High   | Medium     |
> Theft of delegated private key              | Medium | Medium     |
> Theft of private key for domain             | High   | Low        |
> Compromise of key server                    | High   | Low        |
> Cryptographic weaknesses in sigs            | High   | Low        |
>
> o MEDIUM - Detectable using a deterministic non-DKIM/SSP method
>
> --------------------------------------------+--------+------------+
> Attack Name                                 | Impact | Likelihood |
> --------------------------------------------+--------+------------+
> Signed message replay                       | Low    | High       |
> Chosen message replay                       | Low    | M/H        |
> Denial-of-service attack against verifier   | High   | Medium     |
> Denial-of-service attack against key service| High   | Medium     |
> Falsification of key service replies        | Medium | Medium     |
> Verification probe attack                   | Medium | Medium     |
> Private key recovery via side-channel attack| High   | Low        |
>
> o HIGH - Detectable using a deterministic DKIM/SSP method
>
> --------------------------------------------+--------+------------+
> Attack Name                                 | Impact | Likelihood |
> --------------------------------------------+--------+------------+
> Display name abuse                          | Medium | High       |
> Body length limit abuse                     | Medium | Medium     |
> Canonicalization abuse                      | Low    | Medium     |
> Use of revoked key                          | Medium | Low        |
> Publication of bad key records and/or sigs  | High   | Low        |
>
> --
> Hector Santos, Santronics Software, Inc.
> http://www.santronics.com

_______________________________________________
ietf-dkim mailing list
http://dkim.org
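The proposed third column amounts to grouping attacks by a Detection/Recovery label. A minimal sketch of that grouping, assuming a hand-assigned mapping drawn from the three tables above (the subset of attacks and the `DETECTION` name are illustrative, not an agreed classification):

```python
from collections import defaultdict

# Hand-assigned Detection/Recovery labels, a subset of the tables above.
# These assignments are Hector's rough cut, not an agreed classification.
DETECTION = {
    "Display name abuse": "HIGH",       # deterministic DKIM/SSP method
    "Body length limit abuse": "HIGH",
    "Signed message replay": "MEDIUM",  # deterministic non-DKIM/SSP method
    "Compromised system within originator's net": "LOW",  # human/scoring
}

# Group attack names under their detection rating, as in the three tables.
groups = defaultdict(list)
for attack, rating in DETECTION.items():
    groups[rating].append(attack)

for rating in ("LOW", "MEDIUM", "HIGH"):
    print(f"o {rating}:")
    for attack in sorted(groups[rating]):
        print(f"    {attack}")
```

Once the full mapping is agreed, the same dictionary could feed the numeric scoring from the earlier tables, making it easy to re-rank attacks whenever a rating changes.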
