Hi, Kathleen. Can you retrieve context and see if version -21 resolves your issues (and restart the discussion if not)? Thanks.
Barry

On Mon, Aug 25, 2014 at 11:18 AM, Kathleen Moriarty <[email protected]> wrote:
> Hello Ryan,
>
> Thank you for your response. Please keep in mind that in most cases, I am trying to help you clear up the language and ensure that the security and privacy concerns are clearly understood in the draft by readers that might include security professionals, CSIRT teams, security administration staff, and others. I do think the draft is good and would like to help progress it, but I do think some language fixes would be beneficial.
>
> I read the draft again and will try to clarify the points below, providing suggested language where possible.
>
> On Mon, Aug 25, 2014 at 1:44 AM, Ryan Sleevi <[email protected]> wrote:
>>
>> On Wed, Aug 6, 2014 at 8:15 PM, Kathleen Moriarty <[email protected]> wrote:
>>>
>>> Kathleen Moriarty has entered the following ballot position for draft-ietf-websec-key-pinning-19: Discuss
>>>
>>> When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.)
>>>
>>> Please refer to http://www.ietf.org/iesg/statement/discuss-criteria.html for more information about IESG DISCUSS and COMMENT positions.
>>>
>>> The document, along with other ballot positions, can be found here:
>>> http://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/
>>>
>>> ----------------------------------------------------------------------
>>> DISCUSS:
>>> ----------------------------------------------------------------------
>>>
>>> Overall the draft is very good; thank you for writing it. I just wanted to discuss some of the security/privacy considerations and see if we could improve the language and make sure the set of concerns is clear.
>>>
>>> The privacy considerations section reads more like possible attack scenarios that would fit into the security considerations. What privacy-related concerns result from these attacks? Having that spelled out, to differentiate the risks as privacy-only, would be helpful (if appropriate), or this could be moved into the security considerations section *IF* it is more generically applicable. If I am missing something and this is only privacy related, it would be good to understand that in this discussion. Adding some text on how these attacks could be used to expose privacy-related information would be helpful too.
>>
>> The first paragraph spells out precisely why this is listed as a privacy consideration:
>>
>>    Hosts can use HSTS or HPKP as a "super-cookie", by setting distinct
>>    policies for a number of subdomains. For example, assume example.com
>>    wishes to track distinct UAs without explicitly setting a cookie, or
>>    if a previously-set cookie is deleted from the UA's cookie store.
>>
>> Neither of these attacks undermines the protections afforded by HPKP. Indeed, they exist precisely BECAUSE HPKP offers a new means of web storage. Essentially, all storage mechanisms introduced when dealing with sites - whether cookies, new APIs like IndexedDB, exposure of persistent storage such as cryptographic keys, or, as in both HSTS and HPKP, remembering data about a previously visited site - represent new and potential ways to uniquely identify a user, which the two examples spell out.
>>
>>> For the first example, it seems it is the possibility that a report goes outside of the intended scope that is the concern.
>>> The mitigation seems to be provided in the last sentence of #4, but couldn't this be other information leakage and not just privacy? Let me know if I am missing something that explains why this is privacy specific.
>>
>> I'm not sure how that was reached, as the description of the risk was explicitly enumerated:
>>
>>    and the ability to pin arbitrary identifiers to distinguish UAs.
>>
>> This is also reiterated in #2 and #3.
>>
>> The same applies to the second example; hopefully the above explanation is sufficient to demonstrate how the spec already highlights, in several ways, that this is a privacy issue (distinguishing the user independent of cookies, aka a "super-cookie").
>
> In reading the draft again, the language issue for me was with the usage of "report" in the text. After looking at this draft again, it seems the only report type discussed is a "pin validation failure report". In reading up on other uses of report-uri (W3C), it seems to be tied to a header, in this case the PKP-RO response header. The term "report" is pretty generic on its own, and this is where my confusion came in, since that wasn't explicitly stated (these are tied together and it's the only report discussed). When you get to the privacy section, it just uses the term "report" on its own and not as a specific report. There is no mention of the report type in that section, and yes, I probably should have realized that it was tied to the response header. I do see that it is the only report discussed in the draft, but I was not well versed in this area, so it wasn't clear to me on first read. That may be fine for most readers, but it wouldn't hurt to state that the use of the term "report" in this draft is specific to "pin validation failure report" in Section 2.1.3, or to mention the report type again in the privacy section. My discuss comments above were all related to the generic term "report". I'll let it go if you feel strongly that this is not necessary, since I was able to figure it out on a second read.
>>
>>> In #3 of the second example, the last sentence includes the following clause:
>>>
>>>    and giving some UAs no
>>>    Valid Pinning Header for other subdomains (causing subsequent
>>>    requests for m.fingerprint.example.com to succeed).
>>>
>>> Does this mean that these subdomains are succeeding when they should fail? It might just be me, but that is not clear in the text (or whether they are supposed to succeed). It sounds like they are not supposed to succeed, and this is the security issue. How is this privacy specific? Again, this may just be me, but an explanation would be helpful.
>>
>> Do the above references to the existing portions of the spec make this clearer?
>
> In this case, I'd like to see clearer language that describes the issue and explains why it is an issue. I am okay on the privacy-specific questions, as those arose from the generic use of the term "report", and I was able to figure out that this was the cause of my concerns from my first read of the draft. Bullet #3 is a run-on sentence, and both fixing that and including the implications would go a long way. Here is the current bullet:
>
>    3. example.com can distinguish 2^N UAs by serving Valid Pinning
>       Headers from an arbitrary number N distinct subdomains, giving
>       some UAs Valid Pinning Headers for some, but not all,
>       subdomains (causing subsequent requests for
>       n.fingerprint.example.com to fail), and giving some UAs no
>       Valid Pinning Header for other subdomains (causing subsequent
>       requests for m.fingerprint.example.com to succeed).
>
> Here is a suggestion; please tweak the language if I didn't get this quite right:
>
>    3. example.com can distinguish 2^N UAs by serving Valid Pinning
>       Headers from an arbitrary number N distinct subdomains. Assume
>       in this example that Valid Pinning Headers are assigned for
>       subdomains n.fingerprint.example.com, and the includeSubDomains
>       directive was intended to cover all subdomains
>       m.fingerprint.example.com. Where Valid Pinning Headers were
>       assigned, some were given to UAs but not for all subdomains,
>       causing subsequent requests for n.fingerprint.example.com to
>       fail. Valid Pinning Headers are not given to some UAs for other
>       subdomains, causing subsequent requests for
>       m.fingerprint.example.com to succeed.
>>
>> As noted above, the attack is about identifying and tracking users through means other than cookies. Such attacks are also known as "super-cookies", which is the term explicitly used in the introduction of these attacks.
>>
>> As noted, the attacker is the site serving the headers itself, which is why it's a privacy issue.
>
> Thanks.
>
>>> In the last sentence of the privacy considerations section, what is meant by the description "forensic attacker"? I find this term confusing. Was this intended to mean that techniques used in forensic analysis could be used by an attacker to discern information that could be of interest? If that's the case, I think it would be clearer to the reader if that were stated instead.
>>
>> This was in response to Alissa Cooper's YES vote, in which a threat model of an attacker with physical access to the machine attempts to recover the state of the user's browsing history, which the UA had otherwise cleared.
>
> Sure, I have no problem with the point, but I would like to see the commonly used terms for this attack type, to avoid confusion for the reader. The term "forensic attacker" isn't one used in the incident response space, so replacing that term with what I think is the intended meaning, "forensic analysis techniques could be used by an attacker to discern this information that could be of interest (or useful)", would help. We don't usually cover attack types that require full exploit of the system or physical access, so if this got dropped, that would be fine too. The use of such tools isn't limited to physical access; it is also possible once the host is compromised and such tools can be installed and used by an attacker (a much higher likelihood than a physical access attack). The current text does not make that distinction, and that is good; I don't think you should be getting too deep here anyway.
>
> Proposed change from:
>
>    A forensic attacker might
>    find this information useful, even if the user has cleared other
>    parts of the UA's state.
>
> To:
>
>    Forensic analysis techniques could be used by an attacker to discern
>    this information, which an attacker may find useful, even if the
>    user has cleared other parts of the UA's state.
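As an aside, and not text from the draft: here is a minimal sketch of the 2^N distinguishing described in bullet #3 above. The numeric tracking subdomains (0.fingerprint.example.com through 3.fingerprint.example.com) and the toy model of Pin Validation are assumptions for illustration only; the point is just that serving or withholding a Valid Pinning Header on each of N subdomains assigns each UA an N-bit identifier, which the host can later read back by observing which pinned requests fail.

    # Illustrative only: a host encodes an N-bit identifier per UA by
    # setting (1) or not setting (0) a Valid Pinning Header on each of N
    # tracking subdomains, then reads the bits back on a later visit.

    N = 4  # 4 subdomains -> 2^4 = 16 distinguishable UAs

    def subdomain(i):
        # hypothetical tracking subdomains, e.g. 0.fingerprint.example.com
        return "%d.fingerprint.example.com" % i

    def assign_identifier(ua_number):
        """Decide, per subdomain, whether to send this UA a pinning header."""
        return {subdomain(i): bool((ua_number >> i) & 1) for i in range(N)}

    def read_identifier(request_succeeded):
        """Recover the identifier from which pinned subdomains later fail.

        request_succeeded maps subdomain -> False when Pin Validation failed,
        e.g. because the host deliberately serves a chain that matches none
        of the pins it previously set for that subdomain.
        """
        ua_number = 0
        for i in range(N):
            if not request_succeeded[subdomain(i)]:
                ua_number |= 1 << i  # a failure reveals a stored pin
        return ua_number

    # The host gave UA number 11 (binary 1011) pinning headers on subdomains
    # 0, 1 and 3 only; later it breaks exactly the pinned connections and
    # recovers the identifier without ever setting a cookie.
    policy = assign_identifier(11)
    later_results = {host: not pinned for host, pinned in policy.items()}
    assert read_identifier(later_results) == 11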
>>
>> Per Eric Lawrence's feedback, this third section will be reworked into its own privacy consideration bullet, and hopefully you'll find the clarification suitable.
>
> Thanks for your work on this section.
>
>>> ----------------------------------------------------------------------
>>> COMMENT:
>>> ----------------------------------------------------------------------
>>>
>>> I agree with Richard's comment that the document is well written and an important document; thank you for writing it. The style changed a little toward the end, and I had some trouble with long sentences in the security & privacy considerations sections. This should be easy enough to fix and may be done with the RFC Editor anyway.
>>>
>>> To Richard's point on operational concerns versus security concerns, are there explicit security attacks that could occur with the max-age variations described?
>>>
>>> In 4.2, I can't see this being more than an operational concern, since it fails when overlapping pin sets are not used. Are we missing a gap that leads to a security concern?
>>
>> I suppose it depends on your threat model and how you view domain authority.
>>
>> In the example given, subdomain.example.com is bypassing the pins set for example.com+includeSubDomains, depending on timing. That is, if example.com is visited first, then subdomain.example.com MUST be equal-to-or-a-subset-of the pins for example.com, by virtue of the Known Pinned Host evaluation.
>>
>> Thus, if example.com wishes to administer pins for their domain (and all subdomains), it's necessary for them to prevent subdomains from setting the header.
>>
>> Now, you can see this as an operational concern if you view example.com and subdomain.example.com as the same administrative entity, but that's sometimes not the case (e.g. shared hosting sites like Amazon AWS, GitHub Pages, or Google App Engine).
>
> Thanks for the explanation; it would be helpful to have something along those lines in the draft. This is a non-blocking comment, so ignore it if you so choose, but here is a suggestion to clarify this as a security concern (in addition to an operational one). Perhaps if you explicitly stated that a denial of service could result from this configuration issue, that would help. It would also help to state that the concern is heightened in a service provider environment, or one where a single domain is used across administratively distinct applications, and to recommend that the includeSubDomains directive not be used in such circumstances to avoid this issue.
>
>>> 4.3 makes sense to me as a security concern that drives operational practices.
>
> Thanks!
>
> --
>
> Best regards,
> Kathleen

_______________________________________________
websec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/websec
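One more aside on the Section 4.2 exchange above (again, not text from the draft): a rough sketch, with placeholder pin values rather than real SPKI hashes, of the timing point Ryan makes. If the UA notes example.com with includeSubDomains before ever visiting subdomain.example.com, then subdomain.example.com is evaluated against example.com's pins, and a certificate chain sharing none of those keys fails Pin Validation before the subdomain can set its own header. This deliberately simplifies the Known Pinned Host matching in the draft.

    # Illustrative only: pins set by example.com with includeSubDomains
    # constrain subdomain.example.com, depending on which host the UA sees
    # first. Pin values below are placeholders, not real SPKI hashes.

    noted_hosts = {}  # the UA's simplified store of Known Pinned Hosts

    def note_pinned_host(host, pins, include_subdomains=False):
        noted_hosts[host] = (set(pins), include_subdomains)

    def pin_validation(host, chain_spki_hashes):
        """True if the presented chain contains a pinned key for every
        Known Pinned Host whose policy covers this host."""
        for known, (pins, include_sub) in noted_hosts.items():
            covered = (host == known or
                       (include_sub and host.endswith("." + known)))
            if covered and not (pins & set(chain_spki_hashes)):
                return False  # no intersection: Pin Validation fails
        return True

    # example.com is visited first and sets pins for itself and all subdomains.
    note_pinned_host("example.com",
                     {"parent-key-hash", "parent-backup-hash"},
                     include_subdomains=True)

    # subdomain.example.com is operated separately and serves different keys,
    # so its first connection already fails against the parent's pins.
    print(pin_validation("subdomain.example.com", {"subdomain-key-hash"}))  # False
    print(pin_validation("example.com", {"parent-key-hash"}))               # True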
