On 2/3/17, 6:27 PM, "David Mazieres" <dm-list-tcpcr...@scs.stanford.edu> wrote:

>"Holland, Jake" <jholl...@akamai.com> writes:
>> Should my app set the a-bit? I think this version of the ENO draft
>> says yes, because I have altered my behavior in the presence of
>> encrypted TCP (and it wasn’t practical for me to authenticate, so I
>> qualify as an exception for the first SHOULD from 5.1). I publish my
>> app this way, and it’s downloaded by a few hundred folks with
>> accolades about my security-consciousness.
>
>You definitely should not set the "a" bit. The "a" bit is there if you
>need it, but there is (or should be) no implication that you "SHOULD"
>set it in cases where it is not required. Header bits are a precious
>resource, so if anything applications SHOULD NOT use the "a" bit if
>they do not need to.
This answer has a big part of the insight I needed, thanks.

>So perhaps the clarification is that you SHOULD avoid using the "a"
>bit unless you absolutely need to, and when you do use it, since
>there's only one "a" bit, you SHOULD slot in a mechanism that has
>hooks for future compatibility.

(UIC = Updated-insight Comment)

UIC #1: Yes, something that conveys the above message effectively
would be a key edit, I think.

UIC #2: On a related note, I will point to the initial definition of
the a-bit in section 4.2:

   The application-aware bit "a" is an out-of-band signal indicating
   that the application on the sending host is aware of TCP-ENO and
   has been extended to alter its behavior in the presence of
   encrypted TCP.

I think this initial definition doesn’t convey the right semantics for
the backward compatibility you’ve outlined as the primary purpose of
this bit, and it probably needs to. This is the main reason I thought
the draft said my example app should set the a-bit.

If the a-bit is intended as a backward-compatibility mechanism for
legacy protocols that previously lacked one, then to avoid
misunderstandings I think you’ll need to make that point in a more
specific and deliberate explanation of that purpose. Ideally,
somewhere there will also be good recommendations on how to achieve a
smooth transition, and probably an applicability paragraph or
sub-section that outlines the key characteristics of protocols that
can derive a benefit.

UIC #3: On another related note, I will also add that I think
"application aware" is a misnomer for these semantics. When a
higher-level protocol is extended in a way that makes use of this
feature, the updated protocol specification should define how updated
apps must behave in order to comply with the updated protocol, and
should do so with a definition that permits interoperability with the
prior version of the protocol.
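To make the semantics I have in mind concrete, here is a minimal sketch
of the decision logic as I understand it. All names here are invented
for illustration; they are not from the draft or from any real TCP-ENO
implementation:

```python
def should_set_a_bit(spec_defines_matched_a_semantics):
    # Set "a" only when an updated higher-level protocol specification
    # defines what a matched a-bit means for both endpoints.
    return spec_defines_matched_a_semantics

def choose_wire_behavior(sent_a, received_a):
    # After the TCP-ENO handshake, speak the updated protocol only when
    # both sides signaled the a-bit; otherwise fall back to the legacy
    # protocol version, preserving interoperability.
    if sent_a and received_a:
        return "updated"
    return "legacy"
```

The point being: the bit is a negotiation hook for a revised protocol
spec, not a statement about the application's general awareness.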
This is not about any choices an application makes or any awareness an
application has, except in the sense that the application risks
incompatibility if it sets the bit without reference to an updated
version of the protocol specification that defines semantics for how
each side should handle a matched a-bit.

So this is perhaps something like a "legacy protocol upgraded" bit,
which should be set only when a higher-level protocol has been updated
by a new specification that defines the semantics for how to interpret
the bit within the new version of that protocol.

I know that’s a lot of wide-ranging edits to propose, and I’m sorry
about that. But I do think that making this point much clearer in the
document would go a long way toward avoiding confusion.

Thanks for taking the time to understand my objections fully and to
explain the misunderstandings patiently. I think this is a very good
doc, and I hope this discussion has been helpful in improving it a
little further.

- Jake

_______________________________________________
Tcpinc mailing list
Tcpinc@ietf.org
https://www.ietf.org/mailman/listinfo/tcpinc