On Mon, Dec 9, 2019 at 8:44 AM Dave Crocker <[email protected]> wrote:
> The IETF does not typically -- or, as far as I recall, ever -- promote
> specifications known not to scale. (While I think of this concern as
> foundational to the IETF, it's a bit odd that nothing like it is
> included in the IETF's "Mission and principles" statement.[1])

I'm not sure that I would reasonably expect anything labeled
"Experimental" to scale, especially if it were to make very explicit
claims to the contrary. Nothing I've worked on at the IETF with such a
label is something I would necessarily stand behind as
Internet-scalable. But I would expect something at Informational
probably to scale, and anything with "Standard" in it certainly to
scale.

> > Comparing it to the "obs" grammars of days of yore, the PSD proposal
> > is much too expensive to become engrained as-is, whereas the old
> > grammars were relatively easy to carry forward.
>
> I don't quite grok the reference to "obs", and mostly think of the
> introduction of that construct in RFC 2822 as an interesting idea that,
> itself, failed. (I see it as being instructive on the challenges of
> designing for transition from an installed base.)

That was indeed the intended reference.

> All sorts of experimental specs fail. But they aren't /expected/ to
> fail. And they aren't expected to be unable to scale.

This one isn't expected to fail, but its mechanism is not (as far as I
can tell) intended to be permanent, nor could it become so. In terms of
meeting its stated goal, we don't know the outcome yet. We have a
guess, but we need data to confirm it, and everyone participating needs
to agree on how to participate. That seems to me to be what an
Experimental specification is for. The non-scalable component is part
of the means by which participants agree on the operation of the test
itself.

> Mostly, IETF/Experimental is used to check whether a spec is
> operationally viable -- it's expected to be but the community isn't
> quite sure -- or to check for community interest.
> The latter constitutes market research, not technical research.

I would claim it's clear that this is the former. We're trying to
assess whether this extended logic is a reasonable change to the
accuracy of DMARC. Some of the supporting mechanism added in the
experiment is ancillary to that goal, and is discardable. Nothing to do
with market research.

> A pointedly friendly reading of the relevant Guidelines might seem to
> support the publication under IETF/Experimental being proposed here,
> but a more critical one probably doesn't, and I think that this use of
> the label doesn't really match common practice.[2]

The status chosen most closely reflects the intent and quality of the
work, certainly as compared to something aiming for the standards
track. And there's consensus to move forward, or there was when WGLC
ended.

Quite a bit of time has now passed since then, and we are no closer to
getting the answers the working group needs to make progress on the
core issues it's facing. Rightly, there's now a lot of grumbling going
on. Since one of your core assertions is that the IETF shouldn't
publish things like this, I have suggested, as a compromise, that
interested parties proceed with the experiment using the document in
its draft state. Unfortunately, I am also regularly reminded that there
are organizations wishing to participate in this experiment and related
work that simply cannot, as a matter of policy, do so without this
document first being approved for publication. I personally find that
position peculiar -- many things, from DKIM up to QUIC, are implemented
experimentally by very large operators during development without an
approved document -- and it's not really the IETF's responsibility to
acquiesce, but nevertheless it creates some urgency for this community
to find a way forward here.
So: Can you propose any specific restructuring of the document or the
experiment that achieves the same goal as the current version while
also resolving your concerns?

> The real challenge for most IETF specs is community engagement, not
> engineering adequacy.

Interestingly, I would claim we have clearly achieved the former here,
though obviously not the latter.

> Also, any suggestion to rely on a published list ignores the history
> of problems with such lists, as well as at least requiring a careful
> specification for the list and a basis for believing it will be
> maintained well.

The list, as I understand its use in the specification, amounts to a
list of who's participating in the experiment. When the experiment is
done, the list goes away, and the concerns about its maintenance go
with it.

-MSK
_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc
