Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Jim Manico

Brian,

I just focus on web security and understand the risk of XSS well. It 
seems to me that many of the designers of OAuth 2 do not have a web 
security background and keep trying to address XSS with add-ons without 
success.


- Jim

On 12/11/20 2:01 PM, Brian Campbell wrote:

I think that puts Jim in the XSS Nihilism camp :)

Implicit type flows are being deprecated/discouraged. But keeping 
tokens out of browsers doesn't seem likely. There is some mention of 
CSP in 
https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7

On Wed, Dec 9, 2020 at 4:10 PM Jim Manico wrote:


The basic theme from the web attacker community is:

1) XSS is a game-over event for web clients. XSS can steal or abuse
(request forgery) tokens, and more.

2) Even if you prevent stolen tokens from being used outside of a
web client, XSS still allows the attacker to force the user to
make any request fraudulently, abusing browser-based tokens as a
form of request forgery.

3) There are advanced measures to stop a token from being stolen
from a web client, like HttpOnly cookies and, to a lesser degree,
JS closures and Web Workers.

4) However, these measures to protect cookies are mostly moot.
Attackers can just force clients to make fraudulent requests.

5) Many recommend the BFF pattern to hide tokens on the back end,
but still, request forgery via XSS allows all kinds of abuse.

XSS is game over no matter how you slice it.

Crypto solutions do not help. Perhaps the world of OAuth can start
suggesting that web clients use CSP 3.0 in specific ways, if you
still plan to support Implicit type flows or tokens in browsers?
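For illustration, a restrictive policy along the lines Jim is suggesting might look like the following. This is a sketch only: the nonce and the `as.example.com` origin are placeholders, and the exact directives any given app needs will differ.

```http
Content-Security-Policy: default-src 'none';
    script-src 'self' 'nonce-rAnd0mPerResponseValue';
    connect-src 'self' https://as.example.com;
    base-uri 'none';
    frame-ancestors 'none'
```

A nonce-based `script-src` with a locked-down `default-src` follows the "strict CSP" guidance in CSP Level 3; it raises the bar for script injection but, as the thread notes, does not eliminate XSS as a threat class.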

Respectfully,

- Jim


On 12/9/20 12:57 PM, Brian Campbell wrote:

Thanks Philippe, I very much concur with your line of reasoning
and the important considerations. The scenario I was thinking of
is: browser based client where XSS is used to exfiltrate the
refresh token along with pre-computed proofs that would allow for
the RT to be exchanged for new access tokens and also
pre-computed proofs that would work with those access tokens for
resource access. With the pre-computed proofs, that would allow
prolonged (as long as the RT is valid) access to protected
resources even when the victim is offline. Is that a concrete
attack scenario? I mean, kind of. It's pretty convoluted/complex.
And while an access token hash would rein it in somewhat (ATs
obtained from the stolen RT wouldn't be usable), it's hard to say
if the cost is worth the benefit.



On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck wrote:

Yeah, browser-based apps are pure fun, aren’t they? :)

The reason I covered a couple of (pessimistic) XSS scenarios
is that the discussion started with an assumption that the
attacker already successfully exploited an XSS vulnerability.
I pointed out how, at that point, fine-tuning DPoP proof
contents will have little to no effect in stopping an attack. I
believe it is important to make this very clear, to avoid
people turning to DPoP as a security mechanism for
browser-based applications.


Specifically, regarding your question on including the hash in the
proof, I think these considerations are important:

1. Does the inclusion of the AT hash stop a concrete attack
scenario?
2. Is the “cost” (implementation, getting it right, …) worth
the benefits?


Here’s my view on these considerations (*specifically for
browser-based apps, not for other types of applications*):

1. The proof precomputation attack is already quite complex,
and short access token lifetimes already reduce the window of
attack. If the attacker can steal a future AT, they could
also precompute new proofs then.
2. For browser-based apps, it seems that doing this
complicates the implementation, without adding much benefit.
Of course, libraries could handle this, which significantly
reduces the cost.


Note that these comments are specifically about complicating the
spec and implementation. DPoP’s ability to sender-constrain
access tokens is still useful to counter various other
scenarios (e.g., middleboxes or APIs abusing access tokens). If
other applications would significantly benefit from having the
hash in the proof, I’m all for it.

On a final note, I would be happy to help clear up the
details on web-based threats and defenses if necessary.

—
*Pragmatic Web Security*
/Security for developers/
https://pragmaticwebsecurity.com/
   

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Neil Madden

> On 13 Dec 2020, at 09:11, Torsten Lodderstedt  wrote:
> [...]
>> 
>>> - generating (self contained) or using (handles) per URL access tokens 
>>> might be rather expensive. Can you sketch out how you wanna cope with that 
>>> challenge?
>> 
>> A decent HMAC implementation takes about 1-2 microseconds for the typical 
>> size of token we’re talking about. 
> 
> The generation of a self contained access token typically requires querying 
> claim values from at least a single data source. That might take more time. 
> For handle based tokens/token introspection, one needs to add the time it 
> takes to obtain the token data, which requires a HTTPS communication. That 
> could be even more time consuming.

This is typically true of identity-based tokens, where access to a resource is 
based on who is accessing it. But in a capability-based model this is not the 
case and the capability itself grants access and is not (usually) tied to an 
individual identity. 

Where you do want to include claims in a token, or tie capabilities to an 
identity, then there are more efficient strategies than looking up those claims 
every time you create a new capability token. For example, in my book I 
implement a variant in which simple capability URIs are used for access but 
these are bound to a traditional identity-based session cookie that can be used 
to look up identity attributes as required. This provides security benefits to 
both the cookie (CSRF protection) and the capability URIs (being linked to an 
HttpOnly cookie makes them harder to steal). 
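The pattern Neil describes can be sketched with only the standard library. This is a hedged illustration, not his book's implementation: the key handling, URI shape, and the `"path|session"` binding format are invented for the example.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; a real deployment would manage and rotate this.
SERVER_KEY = secrets.token_bytes(32)

def mint_capability_uri(path: str, session_id: str) -> str:
    # Bind the capability to both the resource path and the session identifier,
    # so a leaked URI is useless without the matching (HttpOnly) session cookie.
    mac = hmac.new(SERVER_KEY, f"{path}|{session_id}".encode(), hashlib.sha256)
    return f"https://api.example.com{path}?token={mac.hexdigest()}"

def verify(path: str, session_id: str, token: str) -> bool:
    expected = hmac.new(SERVER_KEY, f"{path}|{session_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

uri = mint_capability_uri("/files/report.pdf", "sess-123")
token = uri.split("token=")[1]
assert verify("/files/report.pdf", "sess-123", token)      # same session: accepted
assert not verify("/files/report.pdf", "sess-456", token)  # stolen URI, other session: rejected
```

The second assertion is the point of the design: the capability alone does not grant access, which is what makes exfiltrating the URI much less valuable to an attacker.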

If you use macaroons then typically you’d mint a single token with the claims 
in it and then derive lots of individual tokens from it by appending caveats. 
For example, when generating a directory listing in a Dropbox-like app you’d 
mint a single token with details of the user etc and then derive individual 
tokens to access each file by appending a caveat like “file = 
/path/to/specific/file”. 

— Neil
-- 
ForgeRock values your Privacy 
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Torsten Lodderstedt
Hi Neil,

thanks for your comprehensive answers. Please find my comments inline.

best regards,
Torsten.

> Am 12.12.2020 um 21:11 schrieb Neil Madden :
> 
> 
> Good questions! Answers inline:
> 
>>> On 12 Dec 2020, at 10:07, Torsten Lodderstedt  
>>> wrote:
>>> 
>> 
>> Thanks for sharing, Neil!
>> 
>> I‘ve got some questions:
>> Note: I assume the tokens you are referring to in your article are OAuth access 
>> tokens.
> 
> No, probably not. Just auth tokens more generically. 
> 
>> - carrying tokens in URLs is considered bad practice by the Security BCP 
>> and OAuth 2.1 due to leakage via Referer headers and so on. Why isn’t this 
>> an issue with your approach?
> 
> This is generally safe advice, but it is often over-cautious for three 
> reasons:
> 
> 1. Referer headers (and document.referrer) apply when embedding/linking 
> resources in HTML. But when we’re talking about browser-based apps (eg SPAs), 
> that usually means JavaScript calling some backend API that returns JSON or 
> some other data format. These data formats don’t have links or embedded 
> resources (as far as the browser is concerned), so they don’t leak Referer 
> headers in the same way. When the app loads a resource from a URI in a JSON 
> response the Referer header will contain the URI of the app itself (most 
> likely a generic HTML template page), not the capability URI from which the 
> JSON was loaded. Similar arguments apply to browser history and other typical 
> ways that URIs leak. 
> 
> 2. You can now use the Referrer-Policy header [1] and rel=“noopener 
> noreferrer” to opt out of this leakage, and browsers are moving to doing this 
> by default for cross-origin requests/embeds. (This is already enabled by 
> default in Safari). 
> 
> 3. When you do want to use capability URIs for top-level navigation, there 
> are places in the URI you can put a token that aren’t ever included in 
> Referer headers or document.referrer or ever sent to the server at all - such 
> as the fragment. JavaScript can then extract the token from the fragment (and 
> then wipe it) and send it to the server in an Authorization header or 
> whatever. See [2] for more details and alternatives. 
> 
> [1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
> [2]: 
> https://neilmadden.blog/2019/01/16/can-you-ever-safely-include-credentials-in-a-url/
> 
>> - generating (self contained) or using (handles) per URL access tokens might 
>> be rather expensive. Can you sketch out how you wanna cope with that 
>> challenge?
> 
> A decent HMAC implementation takes about 1-2 microseconds for the typical size 
> of token we’re talking about. 

The generation of a self contained access token typically requires querying 
claim values from at least a single data source. That might take more time. For 
handle based tokens/token introspection, one needs to add the time it takes to 
obtain the token data, which requires a HTTPS communication. That could be even 
more time consuming.
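Both points are compatible: Neil's 1-2 microsecond figure covers only the raw MAC computation, while Torsten's concern is the claim/introspection lookups around it. The MAC cost itself is easy to sanity-check with a stdlib sketch (figures vary by machine; the key and payload sizes are arbitrary stand-ins):

```python
import hashlib
import hmac
import timeit

key = b"k" * 32     # stand-in HMAC key
token = b"t" * 256  # roughly token-sized payload

n = 10_000
seconds = timeit.timeit(
    lambda: hmac.new(key, token, hashlib.sha256).digest(), number=n)
print(f"~{seconds / n * 1e6:.2f} microseconds per HMAC-SHA256")
```

On typical hardware this lands in the low single-digit microseconds, consistent with Neil's estimate; any per-request database or introspection round trip would dominate it by several orders of magnitude.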

> 
>> - per-URL access tokens are a very rigorous form of audience restriction. 
>> How do you wanna signal the audience to the AS?
> 
> As I said, this isn’t OAuth, but for example you can already do this with the 
> macaroon access tokens in ForgeRock AM 7.0 - issue a single access token and 
> then make copies with specific audience restrictions added as caveats, as 
> discussed in [3]. Such audience restrictions are then returned in the token 
> introspection response and the RS can enforce them. 

> 
> My comment in the article about ideas for future OAuth is really just that 
> the token endpoint should be able to issue multiple fine-grained access 
> tokens in one go, each associated with a particular endpoint (or endpoints). 
> You could either return these as separate items like:
> 
> “access_tokens”: [
> { “token”: “abc...”, 
>“aud”: “https://api.example.com/foo” },
> { “token”: “def...”,
>“aud”: “https://api.example.com/bar” }
> ]

I like the idea (and have liked it for a long time: 
https://mailarchive.ietf.org/arch/msg/oauth/JcKGhoKy2S_2gAQ2ilMxCPWbgPw/).

Resource indicators or authorization_details (with locations) could basically 
be used for that purpose, but OAuth 2 lacks multiple-token support at the 
token endpoint.

> 
> Or just go ahead and combine those into capability URIs. (I think I already 
> mentioned this a long time ago when GNAP was first being discussed). 
> 
> Speaking even more wishfully, what I would really love to see is a new URL 
> scheme for these, something like:
> 
>   bearer://@api.example.com/foo
> 
> Which is equivalent to a HTTPS link, but the browser knows about this format 
> and when clicking on/accessing such a URI it sends the token as an 
> Authorization: Bearer header automatically. Ideally the browser would also 
> not allow the token to be accessible from the DOM. 

Interesting. That would allow elevating browser support to the level of HTTP Basic authentication.

> 
> Even without browser support I think such