Hi Murray,

Most of your comments appear to be refuting points that I didn't make:

 * I didn't suggest that static documents could change by themselves.
 * I didn't suggest that there was no possible consistent
   interpretation of "rules", "enforcement" etc., just that there was
   considerable scope for confusion because of the other meanings
   that these words have.
 * I didn't suggest that the benefit to an implementer who diverged
   from a documented protocol in order to save time would accrue to
   anyone other than that implementer.
 * I didn't suggest that the existing processes for changing standards
   were failing to do so.


What concerns me is the sense I get that standards are often viewed as a means to instruct implementers rather than to advise them, and that, therefore, when implementers act differently from what standards describe - whether because they perceived a benefit in choosing to do so, found themselves in a dilemma that meant they couldn't do otherwise, took shortcuts or simply acted mistakenly - they should be looked down upon as lazy.

Assuming that you share this view (e.g. "laziness benefits approximately nobody"), I'd suggest that it's a mistaken one and that the opportunity is instead twofold:

 * to devise/amend protocols in such a way that additional
   implementation effort to advance ecosystem interests also delivers
   an observable improvement in the achievement of the
   _*implementer's*_ objectives, particularly for implementers who
   aren't reading standards - which I suspect describes most
   implementers - and
 * to communicate protocol details in standards documents in such a way
   that it is readily apparent to those who do read them what the
   benefits to the implementer are of doing it the way the standard
   describes; not merely "do it this way if you want to do the 'right'
   thing" (implementers motivated by this will already conform their
   implementations to specification documents anyway) but "here is how
   you benefit from implementing this set of pieces this way".

I'd suggest that adopting this stance (rather than looking down upon "lazy" implementers) improves the likelihood that implementers will, in fact, implement in the way that a standard describes and, therefore, improves the likelihood that the intended ecosystem benefits will be achieved.

- Roland



On 08/06/2013 12:36 AM, Murray Kucherawy wrote:
Erm, I'm lost as to how a static document (a map, in your analogy) can do something active like "diverge" from something. It must therefore be the evolving thing, i.e., reality (the territory, in your analogy), that is diverging.

Specification documents like standards track RFCs do define rules. The dictionary defines "protocol" as "a set of conventions governing the treatment and especially the formatting of data in an electronic communications system." That sounds like rules to me. "Enforcement" comes from strict adherence by both ends of a transaction, and rejection of things that don’t also adhere.

Each portion of a standard provides some capability, but you're only certain to get the benefits of that capability if you implement according to the standard. Slack implementations create ambiguities, and I've yet to hear of a case where introducing ambiguity is of benefit other than to the developer or author who got to save some keystrokes.

I am very aware that some standards are hard to follow, either due to poor writing or complexity, or the difficulty of dealing with layers of protocol and legacy implementations. I empathize with people in that position, but this does not change the fact that laziness benefits approximately nobody. Poor implementations create pressure that eventually causes the tightness of the system as a whole to sag, and that's what leads us to the situation we're in today.

Now, if there's a problem with the standards or reality (e.g., shifting priorities) has evolved sufficiently that they need updating, then there exist public processes available to any comer for amending them. If they're broken or obsolete, let's fix them. But if that isn't happening, maybe the blame isn't rightly placed there after all.

-MSK

From: Roland Turner <[email protected]>
Date: Sunday, August 4, 2013 10:54 PM
To: Murray Kucherawy <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [dmarc-discuss] multiple from

Hi Murray,

On 07/18/2013 12:40 AM, Murray Kucherawy wrote:
On 7/16/13 8:46 PM, "Roland Turner" <[email protected]> wrote:
Any time an RFC and reality diverge, it is the RFC that is
reality-ignorant, not reality that is RFC-ignorant.

If it happens that the DMARC specification reflects reality better than
existing RFCs - even standards track ones - then once again, it is those
RFCs that are in error, not the DMARC specification.
I don't agree with the first generalization.  RFCs specifying the format
of an Internet message have existed for a really long time.  It's reality
that decided to diverge,

Erm, if the map and the territory diverge then it's the map that's incorrect. RFCs are the map, not the territory.

RFCs do tend to reflect distilled wisdom, but to the extent that they embody designs that require practitioners to go beyond acting in their own interests, they can pretty reliably be expected to be incorrect, at least in the sense that they present to readers a view that doesn't match actual practice.

largely out of laziness:

Sure. Laziness (/efficiency/different-priorities/...) is part of the reality that Internet protocols operate in. It is desirable that they be designed and documented to deal with this.

Email generating code
would be sloppy and cut corners, and user pressure caused mail submission
agents and other services to tolerate it rather than be strict about it.

Erm, user interests (/pressure) are not a distorting force in protocol design; they are the purpose of it.

We're left with a system where lots of software now supports the lazy
implementations.

This is both common sense, and desirable from a robustness perspective.

There's a draft making its way through the IETF now that describes this
situation, pleads with software to become strict once again, and then goes
into a list of common malformations and provides advice about how to
handle or interpret them.  But even that advice about safe handling
doesn't render those messages compliant; they are still broken.

If it depends for its approach upon pleading then, presumably, it depends for its success upon practitioners doing things that don't actually advance their interests. I'd suggest that the odds of success for such an approach aren't great. (If writing RFCs were the solution, then the problem would already have been solved.)

I've not read the draft (are you talking about a successor to draft-kucherawy-mta-malformed-03?), but I'd hope that it at least points out "you will make your own life better by doing X" in situations where that's possible.
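
To make this concrete with a hypothetical sketch of my own (in
Python, and not drawn from the draft): the "multiple from" case that
started this thread is one where the self-interest case is easy to
state - a receiver that declines to guess which From: field is
authoritative never evaluates sender policy against an identity that
an attacker chose:

    import email

    def authoritative_from(raw_message: bytes):
        """Return the single From: field, or None if the message is
        malformed."""
        msg = email.message_from_bytes(raw_message)
        from_fields = msg.get_all("From") or []
        if len(from_fields) != 1:
            # RFC 5322 requires exactly one From: field; with zero
            # or several, any choice of "the" sender is a guess, so
            # decline rather than guessing.
            return None
        return from_fields[0]

    # e.g. a message bearing two From: fields is handled as malformed:
    raw = (b"From: [email protected]\r\n"
           b"From: [email protected]\r\n"
           b"Subject: hi\r\n\r\nbody\r\n")
    assert authoritative_from(raw) is None

An implementer reading "this is how you avoid being tricked into
checking the wrong identity" has a reason to act that doesn't depend
upon any desire to be compliant.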

DMARC's acknowledgement of reality doesn't make those RFCs wrong,

DMARC doesn't affect the correctness of the underlying protocols at all; their own [mis-]match with reality does.

nor does
it excuse various components' lax enforcement of the rules.

The use of terms like "enforcement" and "rules" is a large part of the problem because of the confusion that it creates with actual rules and actual enforcement (roughly: armed officers turning up to compel you). Granted, I don't have better concise terms to suggest. Outside of those who are contractually or legally bound, no-one needs an excuse for not implementing the actions described by a particular protocol specification. Well-designed protocols describe the ways that practitioners tend to implement them; poorly designed ones describe something else.

I recognise, of course, that these decisions aren't actually being made in quite the way that I describe: Internet protocols aren't [generally] executed by humans, so their execution is controlled by bodies of software which are typically reused and which therefore tend to do things the same way repeatedly. An important consequence is that implementers will often pick up a component that appears to do what they intend and not look too closely at the details, because doing so would be wasteful (the benefit to be gained individually is almost certainly smaller than the resource cost of looking). I suspect that most of the available progress lies in identifying the frequently reused components that are impeding the most progress and working with their maintainers to get the issues resolved:

  * For open-source components this typically means providing patches.
  * For closed-source components this usually means making the
    economic case to the vendor organisation at a considerably higher
    level in the organisation than the implementer of a piece of
    software that generates, processes or consumes email and therefore
    at a higher level than that of anyone who's likely to read RFCs.

More RFCs, particularly those structured around encouraging compliance rather than around making the self-interest case to the maintainers of the relevant code, don't seem relevant to either of these cases.

- Roland

--
  Roland Turner | Director, Labs
  TrustSphere Pte Ltd | 3 Phillip Street #13-03, Singapore 048693
  Mobile: +65 96700022 | Skype: roland.turner
  [email protected] | http://www.trustsphere.com/

_______________________________________________
dmarc-discuss mailing list
[email protected]
http://www.dmarc.org/mailman/listinfo/dmarc-discuss

NOTE: Participating in this list means you agree to the DMARC Note Well terms 
(http://www.dmarc.org/note_well.html)
