On 12/3/2019 11:42 PM, Murray S. Kucherawy wrote:
....
> I think there's a healthy
> dose of feeling that the experiment as it's currently designed
> couldn't possibly scale to "the entire domain namespace" and/or "all
> servers on the Internet", so in that sense from where I sit there's a
> built in safeguard against this becoming a permanent wart. Rather,
> it's primed as a possibly useful data collection exercise.
The IETF does not typically -- or, as far as I recall, ever -- promote
specifications known not to scale. (While I think of this concern as
foundational to the IETF, it's a bit odd that nothing like it is
included in the IETF's "Mission and principles" statement.[1])
Perhaps even more importantly, I don't recall the IETF ever promoting a
specification that was /expected/ to be thrown away, in favor of then
doing the 'real' specification. I do believe such work is sometimes
done in the I/R/TF. Note that, for example, this view of throwing a
spec away and starting over is quite different from wanting to let the
market choose between competing specs.
Also, viewing this scaling limitation as a safeguard has recently and
notably proved wrong. Cf. DMARC: it was designed for a very limited
scenario. Then it got re-purposed in the field, by some operators having
significant leverage.
Worse, publishing a spec always carries the likelihood of operational
momentum. If the spec has real utility, it tends to get implemented and
used. That creates pressure against replacing it, because that's
expensive and possibly disruptive.
> Comparing it to the "obs" grammars of days of yore, the PSD proposal
> is much too expensive to become engrained as-is, whereas the old
> grammars were relatively easy to carry forward.
I don't quite grok the reference to "obs", and mostly think of the
introduction of that construct in RFC 2822 as an interesting idea that,
itself, failed. (I see it as being instructive on the challenges of
designing for transition from an installed base.)
Perhaps there are examples of IETF experiments that have permitted
entirely starting over, but mostly those only happen when there is a
complete failure, and those typically are called experiments.
> ATPS (RFC 6541) was Experimental, and it flatly failed. For a more
> visible example, Sender ID was Experimental, and I would argue it did
> too. Should they not have been?
All sorts of experimental specs fail. But they aren't /expected/ to
fail. And they aren't expected to be unable to scale.
Mostly, IETF/Experimental is used to check whether a spec is
operationally viable -- it's expected to be but the community isn't
quite sure -- or to check for community interest. The latter
constitutes market research, not technical research.
A pointedly friendly reading of the relevant Guidelines might seem to
support the IETF/Experimental publication being proposed here, but a
more critical one probably doesn't, and I think this use of the label
doesn't really match common practice.[2]
On 12/7/2019 12:11 PM, Scott Kitterman wrote:
>> Remind me again what the additional work is that might be too much?
>> Isn't it just another DNS lookup for the org domain -1... of which
>> there are maybe a couple thousand and easily cacheable?
>> This seems way less than say the additional work for ARC.
> It's slightly more. There's also a check to see if a LPSD (org -1)
> is a PSD DMARC participant. Exactly how to document that is the major
> unresolved question that we should evaluate experimentally. It might
> be one of three things:
First, this sort of exchange highlights the need for considering basic
operational issues carefully and before publication.
Second, it highlights the challenges of doing that in a way that isn't
myopic. What is easy/cheap for highly motivated, expert, well-resourced
participants might not be all that easy or cheap for the larger Internet
community. (This is the operational side of scalability.)
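For concreteness, the mechanics being debated in that exchange can be
sketched roughly as follows. This is only an illustration of the lookup
sequence under discussion, not the specification's text: the stubbed DNS
table, record contents, and helper names are mine, and exactly how the
PSD participation check is documented is the open question noted above.

```python
# Rough sketch of the extra PSD DMARC lookup under discussion.
# dns_txt() stands in for a real DNS TXT query; here it is a stub table.

RECORDS = {
    "_dmarc.example.gov.uk": "v=DMARC1; p=none",       # registered org domain
    "_dmarc.gov.uk": "v=DMARC1; p=reject; np=reject",  # hypothetical PSD record
}

def dns_txt(name):
    """Stub DNS TXT lookup; returns the record string or None."""
    return RECORDS.get(name)

def dmarc_lookup(from_domain, org_domain):
    """Return (domain, record) for the first DMARC record found."""
    # Standard DMARC: exact From: domain, then the organizational domain.
    for name in (from_domain, org_domain):
        rec = dns_txt("_dmarc." + name)
        if rec:
            return name, rec
    # PSD DMARC addition: one more lookup, at the org domain minus one
    # label (the candidate public suffix / "org -1").
    psd = org_domain.split(".", 1)[1] if "." in org_domain else None
    if psd:
        rec = dns_txt("_dmarc." + psd)
        if rec:
            return psd, rec
    return None, None
```

As the sketch shows, the extra query only fires when neither the exact
domain nor the org domain publishes a record, which is why the cost
argument turns on caching and on how PSD participation gets checked.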
The real challenge for most IETF specs is community engagement, not
engineering adequacy.
Some additional thoughts:
The example that Tim added, of RFC 7706, is of an efficiency mechanism,
not a basic and required addition to the architecture. The difference is
important here.
Also, any suggestion to rely on a published list ignores the history of
problems with such lists, as well as at least requiring a careful
specification for the list and a basis for believing it will be
maintained well.
d/
[1] Mission and principles
[2] https://ietf.org/standards/process/informational-vs-experimental/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc