(I am trying to formulate a response on the higher-level technical and process issues under consideration, but decided to respond now on these particulars, since they are more focused...)

On 2/3/2020 10:47 AM, Murray S. Kucherawy wrote:
Dave,

On Fri, Jan 17, 2020 at 8:44 AM Dave Crocker <[email protected]> wrote:

    Nothing I've worked on at the IETF with such a label is something
    I would necessarily stand behind as Internet-scalable.

    Such as?


RFC 6541 comes to mind.  To the best of my knowledge, it's an experiment that never even ran.


I don't recall that scaling limitation being an embedded and acknowledged fact about that spec.  And with a quick scan, I don't see anything about that in the document.

There is a difference between having some folk be critical of an experiment, versus having its non-scalability be an admitted limit on its future.  That is, you or I or whoever might know a spec sucks and can't succeed, but that's different from having the formal process declare that an experiment is /intended/ not to scale, which seems to be the case here.


Implementations shipped, but its use on the open Internet was never detected or reported.  And I had my doubts about the scalability of the second DNS check that was added to it, but it didn't seem like it could go forward without it.
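(For anyone who hasn't looked at RFC 6541 recently: the DNS check in question has the verifier derive a query name from a hash of the signing domain and look for a TXT record under the Author Domain.  A rough sketch of constructing that query name, assuming the SHA-256 variant and base32 encoding the RFC describes -- this is illustrative, not a full ATPS implementation:)

```python
import base64
import hashlib

def atps_query_name(signer_domain: str, author_domain: str) -> str:
    """Build the ATPS lookup name for a third-party signer, per the
    scheme sketched in RFC 6541 (SHA-256 variant assumed here)."""
    # Hash the case-folded signing domain.
    digest = hashlib.sha256(signer_domain.lower().encode("ascii")).digest()
    # Base32-encode the digest and drop the '=' padding (RFC 4648).
    label = base64.b32encode(digest).decode("ascii").rstrip("=")
    # The verifier then issues a TXT query for this name under the
    # Author Domain's _atps subtree.
    return f"{label}._atps.{author_domain}"
```

(Each verification that wants to check a third-party authorization costs an extra DNS round trip of this form, which is where the scaling doubt comes from.)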

One that wasn't mine: RFC 6210, an experiment to prove how bad something can be.


There is a reasonable argument to be made that little about /any/ security spec actually scales well, but that's such a cheap shot, I wouldn't dream of taking it.

However, yeah, "to find out how bad including hash parameters will be" does seem to provide an existence proof for using IETF Experimental to bench-test something, rather than as a gateway to standardization for that something.  sigh.


    But I would probably expect something at Informational to scale,
    and anything with "Standard" in it certainly to scale.
    Laying any general expectation on an IETF Informational RFC would
    be a mistake, because there is so much variety in their content
    and intent.


Why would the expectations for Experimental be higher than for Informational?  LMTP is Informational, and it certainly needs to succeed.

As a rule -- or certainly a solid pattern -- Experimental means that the document wants to be standards track or BCP but needs some vetting before being permitted that honor.  Informational docs don't have an expectation of making it to standards track.


    So: Can you propose any sort of specific restructuring of the
    document or the experiment that achieves the same goal as the
    current version while also resolving your concerns?

    I'm pretty sure I've raised fundamental concerns about this work
    and that those concerns have not been addressed.  The simple
    summary is that the way to restructure this work is to go back to
    first principles.  But there doesn't seem to be any interest in
    having that sort of discussion.

I thought we were having that sort of a discussion right here.

Your position as I recall is that we have no choice but to take all of this back to first principles and separate DMARC from the determination of Organizational Domain (i.e., make them separate documents) before PSD can proceed.


Unfortunately, that's accurate. At the least, I'd expect to see thoughtful responses and some breadth of support for those responses, countering the fundamental concerns I expressed. I don't recall seeing responses with such substance.

(One of the challenges for me, in trying to formulate the 'thoughtful' response I'm considering, is providing/repeating a concise summary of those fundamental concerns.  As I recall, they were both architectural and operational.)


d/

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net

_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc
