(back from vacation and catching up)

On Tue, Aug 14, 2018 at 8:58 PM, Seth Blank <[email protected]> wrote:
> There are THREE consumers of ARC data (forgive me for the names, they're
> less specific than I'd like):
>
> 1) The ARC Validator. When the Validator sees a cv=fail, processing stops,
> the chain is dead, and shall never be less dead. What is Sealed is
> irrelevant.

Right.

> 2) The Receiver. An initial design decision inherent in the protocol is
> that excess trace information will be collected, as it's unclear what will
> actually be useful to receivers. 11.3.3 calls this out in detail. Without
> Sealing the entire chain when attaching a cv=fail verdict, none of the
> trace information is authenticatable to a receiver (see earlier message in
> this thread as to why), which is the exact opposite of the design decision
> the entire protocol is built on. To guarantee this trace information can
> be authenticated, the Seal that contains cv=fail must include the entire
> chain in its scope. This is where this thread started.

I see two possible workflows here:

(1) The Verifier (to use the DKIM term) detects "cv=fail" and stops, because
there's nothing more to do. But the Receiver now has no ARC information
except a raw "cv=fail" to relay via A-R or whatever. As you point out, this
flies in the face of the notion of giving receivers details about the
message; the Receiver now has to implement ARC itself to get whatever
details might be prudent from the message.

(2) The Verifier sees "cv=fail" but still attempts to verify it, and maybe
extracts other salient details to add to an A-R.

When you say "you see cv=fail and stop", I think of the first case, which is
alarming layer mush, and is also ambiguous: if the Verifier stops dead on
seeing "cv=fail", it doesn't matter at all what content got sealed. So if
you mean the second case, part of my issue goes away.

> 3) The receiver of reports that provide ARC data. For a domain owner to
> get a report with ARC information in it, there needs to be some level of
> trust in the information reported back.
> When a Chain passes, all the intermediaries' header field signatures can
> be authenticated, and the mailflow can be cleanly reported back. When a
> Chain fails, that is important information to a domain owner (where is my
> mailflow failing me; how can I figure this out so I can fix it?). Again,
> without Sealing over the entire Chain when a failure is detected, this
> information is unauthenticatable (and worse, totally forgeable now without
> even needing a valid Chain to replay), and nothing of substance can be
> reported back. Sealing the Chain when a cv=fail is determined blocks
> forgery as a vector to report bogus information, and allows
> authenticatable information to be reported back.

I think we're talking about distinct failure modes. I totally agree with you
in the case where the chain has failed because content was altered. But
doesn't your assertion here presuppose an at least syntactically intact
chain? If the chain is damaged to the point where it cannot be
deterministically interpreted, the sealer adding the "cv=fail" might add a
seal that a downstream verifier cannot correctly interpret.

I understand what you're after, but I also understand the intent behind
5.1.2, which is to produce something unambiguous. My problem with 5.1.2 as
it stands is that a verifier now has to try verifying the "cv=fail" two ways
(once over everything, once over just the last instance), and at least one
of them has to work. We've cornered ourselves here by rejecting
"cv=invalid".

> And to be even clearer: what is Sealed when cv=fail is reached (itself,
> the entire chain, or nothing at all) DOES NOT AFFECT INTEROPERABILITY. But
> it does affect preserving trace information and preventing forged data
> from being reportable.

I disagree, as stated above; a mangled chain cannot be sealed in a way
guaranteed to interoperate. This is my very strong INDIVIDUAL opinion.
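To make the ambiguity concrete, here is a rough sketch (in Python; all names are hypothetical, and `verify_seal` stands in for whatever ARC-Seal signature check an implementation actually performs) of the two verification attempts 5.1.2 as written forces a verifier to make against a "cv=fail" seal:

```python
# Hypothetical sketch of the 5.1.2 ambiguity: when the newest ARC-Seal
# carries cv=fail, a verifier cannot know whether the sealer signed the
# entire chain or only its own ARC set, so it must try both scopes.

def verify_failed_seal(arc_sets, verify_seal):
    """arc_sets: list of ARC header-field sets, oldest first.
    verify_seal: callable(seal, covered_sets) -> bool (hypothetical
    stand-in for the actual cryptographic signature check)."""
    latest = arc_sets[-1]
    assert "cv=fail" in latest["ARC-Seal"]

    # Attempt 1: assume the cv=fail seal covers the entire chain.
    if verify_seal(latest["ARC-Seal"], arc_sets):
        return "fail-seal covers entire chain"

    # Attempt 2: assume the cv=fail seal covers only its own ARC set.
    if verify_seal(latest["ARC-Seal"], [latest]):
        return "fail-seal covers only its own set"

    return "seal cannot be authenticated either way"
```

Either attempt succeeding has to "count", which is exactly the two-ways problem: a single defined scope for the cv=fail seal (or a distinct value like "cv=invalid" for mangled chains) would remove the branch entirely.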
> But I'm fine if the group sees differently, as this could be investigated
> as part of the experiment (i.e., do any of the above points matter in the
> real world? I say they do, hence the strong opinion). As an editor, I'll
> make sure whatever the consensus of the group is is reflected in the
> document.

I've no objection to collecting superfluous trace information to support the
experiment. What I'm concerned about is the introduction of weird protocol
artifacts or ambiguities that could get baked in and be hard to remove
later.

-MSK
_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc
