> Ned Freed writes:

>  > The scenario you propose makes no sense:

> True.

>  > If Yahoo! or whoever does what you describe, their messages would
>  > in effect have no attached signatures until the receiving systems
>  > out there upgrade their software to handle the new critical tags.

> No, they'll have effective signatures.  They'll use "v=1" signatures
> as well during a transition period (they work well enough to be used
> for a while), and then do what they're doing now: tell their users to
> yell at recipients who don't handle vendor-specific "V=2" signatures
> to get their act together and "be part of the solution".

You've now changed the scenario. Previously you seemed to be talking about
switching to the new scheme, not running both in parallel.

Since old agents ignore the v=2 signatures, running both in parallel is
effectively the same, from their point of view, as signing with v=1 alone.
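The behavior being assumed here can be sketched as follows. This is a hypothetical illustration of legacy-verifier logic, not code from any real implementation; the tag syntax follows RFC 6376, and the "v=2" semantics are the proposal under discussion:

```python
# Hypothetical sketch: a legacy agent handling a message that carries
# both v=1 and v=2 DKIM signatures. Unknown versions are ignored,
# not treated as verification failures.

def parse_tags(sig_header):
    """Parse a DKIM-Signature value into a tag=value dict."""
    tags = {}
    for part in sig_header.split(";"):
        name, sep, value = part.strip().partition("=")
        if sep:
            tags[name.strip()] = value.strip()
    return tags

def usable_signatures(sig_headers, supported_versions=("1",)):
    """Keep only signatures whose v= tag the agent recognizes."""
    return [t for t in (parse_tags(h) for h in sig_headers)
            if t.get("v") in supported_versions]

msgs = [
    "v=1; a=rsa-sha256; d=example.com; s=sel1; b=...",
    "v=2; a=rsa-sha256; d=example.com; s=sel2; b=...",
]
# To an old agent, dual-signing is indistinguishable from v=1-only signing.
print([t["v"] for t in usable_signatures(msgs)])  # → ['1']
```

An upgraded agent would simply widen `supported_versions`, at which point the v=2 signature becomes visible to it while nothing changes for anyone else.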

>  > You can't count on much from large players, but I think you can
>  > count on them not intentionally screwing themselves over.

> I don't see the strategy above as screwing a large player more than
> publishing p=reject already does.  They did that.

Again, you're talking about a different strategy now.

>  > > By design, DMARC renders that requirement inoperative, and a
>  > > "p=reject" policy is intended to render messages unprocessable exactly
>  > > when a particular DKIM-signature is invalid.  DKIM may not need to
>  > > worry about it, but we do.
>  >
>  > You're missing the point. We're *changing* the design here so things no
>  > longer work this way by associating this with a version bump. And we've
>  > already confirmed that a significant number of implementations ignore
>  > v=2 signatures.

> Changing the design of what?

DKIM and subsequently DMARC.

> I wish we could change the design of
> DMARC[1], but I don't think that is going to happen.

Then this entire effort is pointless and we might as well leave it all
to the MLMs to deal with as best they can.

> DMARC is a
> private agreement so far completely out of IETF control, it is known
> to suck in some ways for third parties, and the big players are doing
> those sucky things anyway because it accomplishes their goals without
> hurting them very much.  Changing DKIM is not going to change DMARC as
> far as I can see.  DMARC may adopt new features of DKIM, but only as
> it serves the consortium's purposes, and they will surely continue to
> apply the "p=reject" override to any "v=2" DKIM signature that fails
> (generalized) identity alignment or is invalid.  No?

Absolutely not. I would have thought this was obvious, but I guess it isn't, so
let me state it plainly: The goal here is to propose changes to DMARC that
improve its interoperability with lists while maintaining the security it
provides.

None of the present proposals make any sense if DMARC agents continue
to see only v=1 signatures.

> All the evidence I see says that even if the exact scenario I propose
> is unlikely to occur,

How can there possibly be evidence of anything? We have yet to reach
any sort of consensus on a proposal here. So unless you can cite a
case where a proposal along these lines was made to the folks using
DMARC and was subsequently turned down, I fail to see the point you're
trying to make.

> it's possible, maybe even quite probable, that
> the big players will use the possibility of registering values and
> imposing criticality to serve their own purposes. 

Why would they bother? If they want to do things along those lines, they can
already do them by generating and then requiring a different header field and
there's nothing we can do about it. Indeed, we know for a fact that Google
already does this sort of thing for their own purposes.

For that matter, if someone wants to, they can already require that a
DKIM-Signature field be present with various optional fields. And once again
there's nothing we can do about it. Just because DKIM declares some field to be
optional doesn't mean that son-of-DMARC has to. Or that all field values are
acceptable. And so on.
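To make the point concrete, here is a minimal sketch of such a local policy check. The required-tag set is purely illustrative and not drawn from any specification; it shows how a "son-of-DMARC" receiver could demand tags (such as `i=`) that DKIM itself makes optional:

```python
# Hypothetical local policy: require tags that DKIM declares optional.
# The tag set below is illustrative only.

REQUIRED_TAGS = {"v", "a", "d", "s", "b", "i"}  # "i=" is optional in DKIM

def meets_local_policy(sig_tags):
    """Accept only signatures carrying every locally required tag."""
    return REQUIRED_TAGS <= set(sig_tags)

sig = {"v": "1", "a": "rsa-sha256", "d": "example.com",
       "s": "sel", "b": "..."}           # no i= tag present
print(meets_local_policy(sig))            # → False
```

Nothing in DKIM itself prevents a receiver from imposing a check like this, which is the point: criticality-style requirements can already be layered on unilaterally.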

> You describe the
> same kind of thing happening in the past -- I understand that "that
> was then, this is now (and different)", but this "difference" is all
> hypothetical.  The fact is that fragmentation does occur under some
> circumstances.

No, the differences are very real. I really don't want to get deep into the
nitty-gritty of X.400, but suffice it to say that there are various
extensibility areas built into the protocol. The ones defined at the P2
"message" level don't have criticality bits, but one of the ones defined at the
P1 "envelope" level does. And the specifications call for a message to be
outright rejected if there's a criticality bit set on a P1 "envelope" extension
that the MTA doesn't understand.

This mechanism caused problems because some vendors added critical extensions
which would then cause other implementations to bounce those messages. Some
cases were probably the result of incompetence, but others seemed more like
attempts to achieve vendor lock-in. (No doubt there was some marketing BS
somewhere to justify what they were doing, but I never saw it.)

There is nothing comparable in Internet mail, and certainly nothing in the
present proposal, for the simple reason that the proposal calls for
unrecognized material to be ignored, not for the messages carrying it to be
rejected.
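The contrast can be sketched as follows. This is illustrative pseudocode only, not actual X.400 or DKIM processing; the extension names are invented for the example:

```python
# Illustrative contrast: X.400-style P1 criticality forces rejection of
# messages carrying unknown critical extensions, while the ignore-unknown
# model processes what it can and skips the rest.

KNOWN_EXTENSIONS = {"trace-info", "content-length"}   # illustrative names

def x400_style(extensions):
    """Reject the whole message if any unknown extension is critical."""
    for name, critical in extensions:
        if critical and name not in KNOWN_EXTENSIONS:
            raise ValueError(f"rejected: unknown critical extension {name!r}")
    return [name for name, _ in extensions if name in KNOWN_EXTENSIONS]

def ignore_unknown(extensions):
    """Process what we understand; silently skip the rest."""
    return [name for name, _ in extensions if name in KNOWN_EXTENSIONS]

exts = [("trace-info", False), ("vendor-ext", True)]
print(ignore_unknown(exts))   # processes what it can → ['trace-info']
# x400_style(exts) would raise, bouncing the entire message
```

The first model hands any vendor a way to make its messages undeliverable to everyone else's software; the second degrades gracefully, which is why the fragmentation seen with X.400 doesn't carry over.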

                                Ned

_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc
