On Feb 24, 2010, at 8:54 AM, Mark Delany wrote:

> On Feb 24, 2010, at 5:51 AM, Michael Thomas wrote:
>
>> I'm sort of dubious about this. Unless you're using z=, your chances of
>> figuring out why something broke are slim to none. With z=, your chances
>> of figuring it out are merely slim.
>>
>> Mike, with far too much experience at that
>
> It got a luke-warm response a few years back, but now that a lot more
> people are having to deal with "why did the verify fail?", is it worth
> re-vivifying the DKIM-Trace stuff, or whatever it was called back then?
> We found it very useful in our early days of interop testing.
>
> The idea is pretty simple: the signer adds a header that characterizes
> the content before and after canonicalization. The verifier performs the
> same characterization and compares the differences. The characterizations
> we used at the time were simple character counts represented in a
> relatively compressed form (27 a's, 60 b's, 40 LFs, 50 spaces, etc.).
>
> The form we used is kinda ugly, as
> http://www.ietf.org/mail-archive/web/ietf/current/msg39488.html
> shows, but it was very illuminating: things like mismatched white-space
> counts or case conversions identified what changed and where it changed
> (canonicalization vs. transit).
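A minimal sketch of the character-count characterization described above, in Python. Function names and the label scheme (LF, CR, SP, TAB, hex escapes) are hypothetical illustrations, not the actual DKIM-Trace wire format, which differed (see the linked mail-archive message):

```python
from collections import Counter


def characterize(body: bytes) -> dict:
    """Count occurrences of each byte, keyed by a readable label."""
    special = {0x0A: "LF", 0x0D: "CR", 0x20: "SP", 0x09: "TAB"}
    out = {}
    for byte, n in Counter(body).items():
        if byte in special:
            label = special[byte]
        elif 0x21 <= byte <= 0x7E:      # printable ASCII
            label = chr(byte)
        else:
            label = "x%02X" % byte      # anything else as hex
        out[label] = n
    return out


def compare(signed: dict, received: dict) -> dict:
    """Report only the labels whose counts differ between the two sides."""
    keys = set(signed) | set(received)
    return {k: (signed.get(k, 0), received.get(k, 0))
            for k in keys
            if signed.get(k, 0) != received.get(k, 0)}


# Example: a space doubled and a bare LF rewritten to CRLF in transit.
at_signer = characterize(b"aa b\n")
at_verifier = characterize(b"aa  b\r\n")
print(compare(at_signer, at_verifier))  # {'SP': (1, 2), 'CR': (0, 1)}
```

The point of the compressed form is exactly what the diff above surfaces: white-space count mismatches and similar transformations stand out immediately, without either side needing the other's full message body.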
Neat. A nice way of fingerprinting.

I'm guessing that the verifier typically won't particularly care why it
failed, certainly not to the extent that they'll want to expend much effort
characterizing it mechanically. The signer will already have (or be able to
regenerate) a byte-accurate copy of what was sent, if they care to diagnose
things.

If the verifier simply returns a copy of what they received back to the
signer, is that adequate for any forensics needed, or does more metadata
need to be sent along with the message?

Cheers,
Steve

_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html
