On 02/24/2010 08:54 AM, Mark Delany wrote:
> On Feb 24, 2010, at 5:51 AM, Michael Thomas wrote:
>
>> I'm sort of dubious about this. Unless you're using z=, your chances
>> of figuring out why something broke are slim to none. With z=, your
>> chances of figuring it out are merely slim.
>>
>> Mike, with far too much experience at that
>
> It got a lukewarm response a few years back, but now that a lot more
> people are having to deal with "why did the verify fail?", is it
> worth re-vivifying the DKIM-Trace stuff, or whatever it was called
> back then? We found it very useful in our early days of interop
> testing.
>
> The idea is pretty simple: the signer adds a header that characterizes
> the content before and after canonicalization. The verifier performs
> the same characterization and compares the differences. The
> characterizations we used at the time were simple character counts
> represented in a relatively compressed form (27 a's, 60 b's, 40 LFs,
> 50 spaces, etc.).
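[For readers unfamiliar with the proposal quoted above, a minimal sketch of the characterization idea in Python. The function names and the compact output format are illustrative assumptions, not the actual DKIM-Trace header syntax, which was never standardized.]

```python
from collections import Counter

def characterize(body: str) -> dict:
    """Count occurrences of each character in the (canonicalized) body."""
    return dict(Counter(body))

def render(counts: dict) -> str:
    """Render counts compactly, e.g. '2LF 27a 60b' -- format is hypothetical."""
    names = {"\n": "LF", "\r": "CR", " ": "SP", "\t": "TAB"}
    return " ".join(f"{counts[ch]}{names.get(ch, ch)}" for ch in sorted(counts))

def diff(signer: dict, verifier: dict) -> dict:
    """Report characters whose counts differ between signer and verifier."""
    return {
        ch: (signer.get(ch, 0), verifier.get(ch, 0))
        for ch in set(signer) | set(verifier)
        if signer.get(ch, 0) != verifier.get(ch, 0)
    }

# Example: an intermediate MTA rewrote bare LF line endings to CRLF,
# which would break a "simple"-canonicalized body hash.
sent = "aaa\nbbb\n"
received = "aaa\r\nbbb\r\n"
print(render(characterize(sent)))                        # 2LF 3a 3b
print(diff(characterize(sent), characterize(received)))  # {'\r': (0, 2)}
```

The verifier recomputes the counts over what it received and compares them against the signer's header; a mismatch like the CR count above points directly at the class of in-transit mangling responsible.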
The thing that I'm skeptical about is whether an automaton can be programmed to do this sort of analysis with any sort of accuracy. We're talking about a potential flood of reports coming in, I assume, so I doubt we're all going to be putting out job reqs for "DKIM Signature Breakage Analysis Engineer". Even with tools and hunches there were far too many breakages that were very difficult to figure out, and even then there were lots of mysteries.

And of course, there's an open question about what you do with this sort of forensic data... it can be gamed, after all. So if there's any advantage for bad guys to game it, it probably will be. But I guess this all raises the question of what people intend to do with those breakage stats and/or analysis.

Mike

_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html
