Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-12-01 Thread Torsten Lodderstedt

Annabelle,

> On 27.11.2019 at 02:46, Richard Backman, Annabelle wrote:
> 
> Torsten,
> 
> I'm not tracking how cookies are relevant to the discussion.

I’m still trying to understand why you and others argue mTLS cannot be used in 
public cloud deployments (and thus focus on application-level PoP).

Session cookies serve the same purpose in web apps as access tokens do for 
APIs, and there are many more web apps than APIs. I use the analogy to 
illustrate that either there are security issues with cloud deployments of web 
apps, or the techniques used to secure web apps are acceptable for APIs as well.

Here are the two main arguments and my conclusions/questions:  

1) mTLS is not end-to-end: although that is true from a connection perspective, 
there are established ways to secure the last hop(s) between the TLS-terminating 
proxy and the service (private network, VPN, TLS). If that works and is 
considered secure enough for (session) cookies, it should be good enough for 
access tokens as well.

2) TLS-terminating proxies do not forward certificate data: if the service 
itself terminates TLS, this is feasible; we do it for our public-cloud-hosted 
mTLS-protected APIs. If TLS termination is provided by a component run by the 
cloud provider, the question is: can this component forward the client 
certificate to the service? If not, web apps using certificates for 
authentication cannot be supported out of the box by the cloud provider. Any 
insights?
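For what it's worth, when the proxy does forward the certificate (for example 
in a header), the service-side check against a certificate-bound access token 
reduces to comparing SHA-256 thumbprints, in the style of the OAuth MTLS draft 
(RFC 8705). A minimal sketch; the header transport and the stand-in certificate 
bytes are illustrative assumptions:

```python
# Sketch: verify that a forwarded client certificate matches the access
# token's "cnf" thumbprint (x5t#S256), as in the OAuth MTLS draft.
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    """Base64url-encoded (unpadded) SHA-256 thumbprint of a DER certificate."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(cnf_claim: dict, forwarded_cert_der: bytes) -> bool:
    """True if the token's cnf thumbprint matches the forwarded certificate."""
    expected = cnf_claim.get("x5t#S256")
    return expected is not None and expected == x5t_s256(forwarded_cert_der)

# Stand-in for the DER bytes a trusted proxy forwarded with the request.
cert = b"illustrative DER certificate bytes"
cnf = {"x5t#S256": x5t_s256(cert)}
print(token_bound_to_cert(cnf, cert))                # True
print(token_bound_to_cert(cnf, b"some other cert"))  # False
```

The open question above remains whether the provider-run terminator can be 
configured to forward the certificate at all.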

> I'm guessing that's because we're not on the same page regarding use cases, 
> so allow me to clearly state mine:

I think we are; we are just focusing on different ends of the TLS tunnel. My 
focus is on the service provider’s side, esp. public cloud hosting, whereas you 
are focusing on client-side TLS-terminating proxies.

> 
> The use case I am concerned with is requests between services where 
> end-to-end TLS cannot be guaranteed. For example, an enterprise service 
> running on-premise, communicating with a service in the cloud, where the 
> enterprise's outbound traffic is routed through a TLS Inspection (TLSI) 
> appliance. The TLSI appliance sits in the middle of the communication, 
> terminating the TLS session established by the on-premise service and 
> establishing a separate TLS connection with the cloud service.
> 
> In this kind of environment, there is no end-to-end TLS connection between 
> on-premise service and cloud service, and it is very unlikely that the TLSI 
> appliance is configurable enough to support TLS-based sender-constraint 
> mechanisms without significantly compromising on the scope of "sender" (e.g., 
> "this service at this enterprise" becomes "this enterprise”).

I’m not familiar with this kind of proxy, but I'm happy to learn more and to 
discuss potential solutions.

Here are some questions:
- Have you seen these proxies intercepting connections from on-premise service 
deployments to service providers? I’m asking because I thought the main use 
case was to intercept employees' PC internet traffic. 
- Are you saying this kind of proxy does not support mutual TLS at all? At 
least in theory, the proxy could combine source and destination to select a 
cert/key pair for outbound TLS client authentication. 

> Even if it is possible, it is likely to require advanced configuration that 
> is non-trivial for administrators to deploy. It's no longer as simple as the 
> developer passing a self-signed certificate to the HTTP stack.

I agree. The certificate binding is established in OAuth protocol messages, 
which would require the appliance to understand the protocol. On the other 
hand, I would expect this kind of proxy to understand a lot about the protocols 
running through it; otherwise it could not fulfil its task of inspecting the 
traffic. 

best regards,
Torsten. 



> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/23/19, 9:50 AM, "Torsten Lodderstedt"  wrote:
> 
> 
> 
> > On 23. Nov 2019, at 00:34, Richard Backman, Annabelle wrote:
> > 
> >> how are cookies protected from leakage, replay, injection in a setup 
> >> like this?
> > They aren’t.
> 
> That's very interesting when compared to what we are discussing with respect 
> to API security. 
> 
> It effectively means anyone able to capture a session cookie, e.g. between 
> TLS termination point and application, by way of an HTML injection, or any 
> other suitable attack is able to impersonate a legitimate user by injecting 
> the cookie(s) in an arbitrary user agent. The impact of such an attack might 
> be even worse than abusing an access token given the (typically) broad scope 
> of a session.
> 
> TLS-based methods for sender constrained access tokens, in contrast, prevent 
> this type of replay, even if the requests are protected between client and 
> TLS terminating proxy, only. Ensuring the authenticity of the client 
> certificate when forwarded from TLS terminating proxy to service, e.g. 
> through another authenticated TLS connection, will even prevent injection 
> within the data center/cloud environment. 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-30 Thread Neil Madden
I think that is probably secure, although I’d like to see a formal proof of 
correctness like the ones for macaroons in 
https://cs.nyu.edu/media/publications/TR2013-962.pdf. There are often subtle 
details to these things. 

Using chained public key signatures in this way goes back to SDSI 
(http://people.csail.mit.edu/rivest/sdsi11.html#secoverview) and has more 
recently been used in Vanadium under the name “blessings”: 
https://vanadium.github.io/concepts/security.html

It has some nice properties, but it is very expensive in terms of CPU cost and 
token size. It also requires every hop to add a signature even if it is not 
adding any new caveats. This is very secure but adds a lot of overhead, and of 
course means that all parties have to be aware of this format/protocol. 

I still think it’s much easier and more efficient to just use HMAC-based 
macaroons in most cases. You can upgrade them to PoP tokens by appending a 
“cnf” caveat (eg for mTLS). And you can still do things like the “phantom 
token” pattern where a macaroon-based “by ref” token is received by an API 
gateway, introspected, and then replaced with an equivalent (but short-lived) 
signed JWT for consumption by backend microservices. 
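For illustration, the chained-HMAC construction underlying macaroons can be 
sketched in a few lines of Python. This is a toy model of the idea, not the 
real serialized macaroon format, and names are illustrative; use a proper 
macaroon library in practice:

```python
# Toy chained-HMAC "macaroon": the signature over each caveat is keyed by the
# previous signature, so caveats can be appended by anyone but removed by no
# one; only the AS (holder of the root key) can verify the chain.
import hashlib
import hmac

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: bytes) -> dict:
    # AS mints the token: initial signature is HMAC(root_key, identifier).
    return {"id": identifier, "caveats": [], "sig": mac(root_key, identifier)}

def add_caveat(m: dict, caveat: bytes) -> dict:
    # Any holder can attenuate: the old signature keys the next HMAC.
    return {"id": m["id"], "caveats": m["caveats"] + [caveat],
            "sig": mac(m["sig"], caveat)}

def verify(root_key: bytes, m: dict) -> bool:
    # AS recomputes the whole chain from the root key.
    sig = mac(root_key, m["id"])
    for caveat in m["caveats"]:
        sig = mac(sig, caveat)
    return hmac.compare_digest(sig, m["sig"])
```

A client can treat the minted token as a plain bearer token, or append caveats 
(say, b"exp = now+5s" or a cnf thumbprint) before each request; tampering with 
or stripping a caveat invalidates the signature.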

— Neil

>> On 29 Nov 2019, at 22:13, Richard Backman, Annabelle  
>> wrote:
> 
> > That is the easiest way to let the RS verify the macaroon on the assumption 
> > that the RS is trusted. I’m not aware of an alternative for asymmetric 
> > crypto when the RS is untrusted other than using the signature-based 
> > macaroon variant or having per-RS keys. 
>  
> It occurred to me that my previous example of how to do layering with JWTs 
> was needlessly complicated. You can prevent removal of layered constraints by 
> constraining each inner layer to require a wrapper. Consider a "foobar" claim 
> that specifies a public key and indicates that the token must be presented 
> wrapped within a JWS signed with the corresponding private key. The wrapper 
> JWS may introduce additional constraints, and may or may not permit the 
> recipient to present the access token to others, depending on the value of 
> the "foobar" claim. For example:
>  
> The AS generates an access token, with a public key registered by the client 
> as the value of the "foobar" claim. This registration could’ve happened via a 
> dev console, dynamic client reg., or as part of the token request.
> <access_token> = JWS(<AS private key>, {
> "iss": "as.example.com",
> "client_id": "...",
> "user_id": "...",
> "scope": "a b",
> "exp": <timestamp>,
> "foobar": <client public key>
> })
>  
> To call rs1.example.com, the client wraps the token in a JWS signed with the 
> client’s private key. They further restrict the scope and expiration time, 
> and authorize the RS to use the token with other RSes by setting the value of 
> the "foobar" claim to the RS’s public key.
> <wrapped_token_rs1> = JWS(<client private key>, {
> "token": <access_token>,
> "aud": "rs1.example.com",
> "exp": <timestamp>,
> "scope": "a b",
> "foobar": <rs1 public key>
> })
>  
> To call rs2.example.com, rs1.example.com wraps the token in a JWS signed with 
> the RS’s private key. The RS prohibits rs2.example.com from further use of 
> the token by setting the "foobar" claim to null.
> <wrapped_token_rs2> = JWS(<rs1 private key>, {
> "token": <wrapped_token_rs1>,
> "aud": "rs2.example.com",
> "scope": "b",
> "foobar": null
> })
>  
> Similarly, the client can call rs2.example.com with a token that is 
> restricted from further use.
> <wrapped_token> = JWS(<client private key>, {
> "token": <access_token>,
> "aud": "rs2.example.com",
> "exp": <timestamp>,
> "scope": "b",
> "foobar": null
> })
>  
>  
> This pattern allows for layered constraints, local introspection, and local 
> validation. The requirements (that I’ve identified) are that:
> - The client must register a public key with the AS (this could be done 
> during the token request).
> - The AS must know whether or not to give the client a plain bearer token or 
> a token with the "foobar" claim (presentation of a key possession proof in 
> the token request could be enough).
> - Any recipient that wishes to validate the token must have the public key 
> for the AS.
> - Any recipient that wishes to add a layer must have a public key that is 
> known to its callers.
> - Any recipient that performs local validation must understand the meaning 
> of the "foobar" claim.
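To make the layering concrete, here is a toy sketch of the wrap-and-validate 
flow described above, with HMAC standing in for the example's public-key JWS 
signatures purely to keep the sketch self-contained and runnable; all key 
names and claim values are illustrative:

```python
# Toy wrap-and-validate flow for the "foobar" layering pattern.
# HMAC replaces the RS256 signatures of the example for brevity only.
import base64, hashlib, hmac, json

def sign(key: bytes, claims: dict) -> str:
    # Minimal stand-in for JWS: base64url(JSON) + "." + HMAC tag.
    body = base64.urlsafe_b64encode(
        json.dumps(claims, sort_keys=True).encode()).decode()
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{tag}"

def open_token(key: bytes, token: str) -> dict:
    body, tag = token.rsplit(".", 1)
    expect = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    assert hmac.compare_digest(tag, expect), "bad signature"
    return json.loads(base64.urlsafe_b64decode(body))

as_key, client_key = b"as-key", b"client-key"

# AS issues the inner token, naming the client's key in "foobar".
access_token = sign(as_key, {"iss": "as.example.com", "scope": "a b",
                             "foobar": "client-key-id"})
# Client wraps it for an RS, narrows the scope, and forbids further
# delegation by setting "foobar" to null (None).
wrapped = sign(client_key, {"token": access_token, "aud": "rs2.example.com",
                            "scope": "b", "foobar": None})

# RS validates the wrapper with the client's key, the inner token with the
# AS key, and takes the intersection of the two scopes.
outer = open_token(client_key, wrapped)
inner = open_token(as_key, outer["token"])
effective_scope = set(inner["scope"].split()) & set(outer["scope"].split())
print(sorted(effective_scope))  # ['b']
```

The inner layer cannot be presented bare: stripping the wrapper leaves a token 
whose "foobar" claim demands a wrapper signed by the client's key.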
>  
> I haven’t thought too deeply on this so I wouldn’t consider the idea fully 
> baked, but I’m curious to he

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-27 Thread Neil Madden

> On 27 Nov 2019, at 20:30, Richard Backman, Annabelle  
> wrote:
> 
> > That is true, but is IMO more of a hindrance than an advantage for a PoP 
> > scheme. The very fact that the signature is valid at every RS is why you 
> > need additional measures to prevent cross-RS token reuse.

> The other methods you mention require their own additional measures in the 
> form of key exchanges/handshakes. And you still need to prove possession of 
> that shared key somehow.

This is true. The difference is that the derived key can then be reused for 
many requests. Because the key derivation is cryptographically tied to this 
context, the RS can’t replay these symmetric tokens anywhere else. 

> In some cases, “derive a shared key and encrypt this blob” is easier; in some 
> cases “sign this blob declaring your audience” is easier.

The ECDH scheme does challenge-response to ensure freshness. This was designed 
to match the anti-replay measures in the DPoP draft but without requiring the 
server store any state. If you don’t need replay protection (if TLS is enough) 
then you can indeed just sign the audience, or for ECDH you can do completely 
static ECDH between the client’s private key and the RS’s public key to derive 
a shared key that is the same for all time (until key rotation). But in that 
case you may as well just return a symmetric key directly from the AS... 
attached to a macaroon, say. 

>  
> > The easiest way to use macaroons with asymmetric crypto is to make the 
> > macaroon identifier be an encrypted random HMAC key that the RS can decrypt 
> > (or a derived key using diffie-hellman). You can concatenate multiple 
> > encrypted keys for multiple RSes. Alternatively in a closed ecosystem you 
> > can encrypt the random HMAC with a key stored in a KMS (such as AWS KMS) 
> > and grant each RS decrypt permissions for that KMS key.
>  
> Is the “random HMAC key that the RS can decrypt” the root key used to 
> generate the macaroon? If so, how would you prevent one targeted RS from 
> using the root key and macaroon identifier to construct an arbitrary macaroon 
> for replay against another targeted RS? If not, how does the targeted RS use 
> the decrypted “random HMAC key” to validate the macaroon? Is there a paper on 
> this approach?

That is the easiest way to let the RS verify the macaroon on the assumption 
that the RS is trusted. I’m not aware of an alternative for asymmetric crypto 
when the RS is untrusted other than using the signature-based macaroon variant 
or having per-RS keys. 

I’m not really a fan of purely signature-based JWT access tokens because those 
tokens often contain PII and so should really be encrypted to avoid leaking 
details to the client (or anyone else if the token does leak). This came up in 
the discussion of the JWT-based access tokens draft, which is why I proposed 
https://tools.ietf.org/html/draft-madden-jose-ecdh-1pu-02 for use in that 
draft. But if you’re doing encryption then you’re already down the path of 
having per-RS access tokens (and keys) - the compact encoding of JWE only 
allows a single recipient. 

>  
> The KMS approach is just symmetric crypto mediated through a third party (and 
> has the same centralization problem as validation at the AS).
>  
> > Clients can then later start adding caveats…, while RSes still don't have 
> > to make any changes….
> > DPoP only effectively prevents cross-RS replay if all RSes implement it, 
> > otherwise the ones that don't are still vulnerable.
> This is because macaroons bake the proof into the “bearer” token (which is no 
> longer really a bearer token) in the Authorization header, whereas DPoP puts 
> it in a separate header.

That’s not the only difference. The other is that the AS does the validation. 
If the client appended the DPoP claims to the access token and signed the whole 
thing, and then the RS took that and sent it to the AS introspection endpoint 
to validate it, then that would have the same advantage of not requiring any 
changes at the RS. 

But if you do this then there’s no longer any reason to use public key 
signatures because the client and AS may as well agree a shared secret. (The AS 
can always impersonate a client anyway). At which point we’re basically back 
using macaroons. 

> draft-ietf-oauth-signed-http-request is another way to do this that doesn’t 
> rely on macaroons.
>  
> > Your previous point was that they require "non-trivial work to use ... and 
> > require developers to learn a new token format".
> By “non-trivial work to use” I was referring to work required from the 
> working group, that I did not feel was being acknowledged.

Do you believe it’s a disproportionate amount of work compared to any other 
draft the WG works on?

> Looking back over the thread, I think my objection stems from you referring 
> to macaroons as an “access token format” when they’re really an applied 
> cryptography pattern. The “format” part would need to be defined by the 
> 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-27 Thread Neil Madden

> On 27 Nov 2019, at 19:19, Brian Campbell  wrote:
> 
>> On Wed, Nov 27, 2019 at 3:31 AM Neil Madden  
>> wrote:
>> 
>> That is true, but is IMO more of a hindrance than an advantage for a PoP 
>> scheme. The very fact that the signature is valid at every RS is why you 
>> need additional measures to prevent cross-RS token reuse. This downside of 
>> signatures for authentication was pointed out by djb 18 years ago 
>> (https://groups.google.com/forum/m/#!msg/sci.crypt/73yb5a9pz2Y/LNgRO7IYXOwJ),
>>  which is why most modern crypto protocols either use Diffie-Hellman for 
>> authN (https://noiseprotocol.org) or sign a hash of an interactive handshake 
>> transcript (TLS 1.3 - https://tools.ietf.org/html/rfc8446#section-4.4.3) so 
>> that the signature is tightly bound to a specific interactive protocol run.
>> 
> 
> Mostly for my own edification -  using Diffie-Hellman for authN (that a key 
> was held) was effectively at the heart of the "tentative suggestion for an 
> alternative design" that you had much early in this thread?

Yes, exactly.

— Neil

_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-27 Thread Brian Campbell
On Tue, Nov 26, 2019 at 6:26 PM Richard Backman, Annabelle <
richa...@amazon.com> wrote:

> > That’s not directly attached to the access token. This means that every
> RS has to know about DPoP.
>
> True, but you could avoid that by embedding the access token in the DPoP
> proof (similar to draft-ietf-oauth-signed-http-request) and sending that as
> the sole token. Technically, that’s no longer a bearer token so sending it
> as “Authorization: bearer <token>” would be wrong, but DPoP already commits
> that sin.
>

To clarify, FWIW, the current DPoP draft doesn't commit that sin. It uses
“Authorization: dpop <access token>” and "DPoP: <proof JWT>" headers.
There were some examples attempting to illustrate how all the pieces of the
proposal worked, including this particular part, in the slides I had for
Singapore. But unfortunately I never made it past slide #6.
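For reference, a request under that scheme would carry the two headers along 
these lines (illustrative placeholders, not real token or proof values):

```http
GET /resource HTTP/1.1
Host: rs.example.com
Authorization: dpop <access token bound to the client key>
DPoP: <proof JWT signed with the client key>
```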

On the other hand the OAuth MTLS draft does commit said sin. But it was
intentional with the aim of easing adoption/migration to it.

-- 
_CONFIDENTIALITY NOTICE: This email may contain confidential and privileged 
material for the sole use of the intended recipient(s). Any review, use, 
distribution or disclosure by others is strictly prohibited.  If you have 
received this communication in error, please notify the sender immediately 
by e-mail and delete the message and any file attachments from your 
computer. Thank you._


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-27 Thread Brian Campbell
On Wed, Nov 27, 2019 at 3:31 AM Neil Madden 
wrote:

>
> That is true, but is IMO more of a hindrance than an advantage for a PoP
> scheme. The very fact that the signature is valid at every RS is why you
> need additional measures to prevent cross-RS token reuse. This downside of
> signatures for authentication was pointed out by djb 18 years ago (
> https://groups.google.com/forum/m/#!msg/sci.crypt/73yb5a9pz2Y/LNgRO7IYXOwJ),
> which is why most modern crypto protocols either use Diffie-Hellman for
> authN (https://noiseprotocol.org) or sign a hash of an interactive
> handshake transcript (TLS 1.3 -
> https://tools.ietf.org/html/rfc8446#section-4.4.3) so that the signature
> is tightly bound to a specific interactive protocol run.
>
>
Mostly for my own edification -  using Diffie-Hellman for authN (that a key
was held) was effectively at the heart of the "tentative suggestion for an
alternative design" that you had much early in this thread?



Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-27 Thread Neil Madden
On 27 Nov 2019, at 01:26, Richard Backman, Annabelle  
wrote:
> 
> 
> > That’s not proof of possession, that’s just verifying a MAC. PoP requires 
> > the other party (client) to provide a fresh proof that they control a key. 
> > The client isn’t using any key in this case. 
>  
> I think we’re operating with slightly different definitions for PoP. My 
> definition is something along the lines of “a possessor of a key generated 
> (or was in possession of) this data blob at some point.” You can probably see 
> why we’re disagreeing over whether or not PoP is fundamental. I don’t think 
> there is any point in continuing this semantic debate. 

See https://tools.ietf.org/html/rfc7800 for a definition (section 3.6 in 
particular).

>  
> > That’s not directly attached to the access token. This means that every RS 
> > has to know about DPoP.
> True, but you could avoid that by embedding the access token in the DPoP 
> proof (similar to draft-ietf-oauth-signed-http-request) and sending that as 
> the sole token. Technically, that’s no longer a bearer token so sending it as 
> “Authorization: bearer ” would be wrong, but DPoP already commits that 
> sin.
>  
> Also, if the AS is doing all authentication checks, then in a lot of cases 
> the RS will need to provide the AS with additional request metadata along 
> with the macaroon, such as the POST method used, origin (if it’s not 
> inferable from whatever credentials the RS uses when calling the AS), request 
> path, sender IP, client TLS certificate, token binding ID, etc. Obviously 
> there are some caveats that don’t require this (e.g., timestamp). It remains 
> to be seen whether the caveats required to meet DPoP’s use case fall into the 
> former or latter category.

That’s true - and the RS being able to send more contextual info to the token 
introspection endpoint would be useful regardless of token format. 

The current model is that the AS validates the token and checks basic things 
like the expiry time or audience and then returns any other constraints to the 
RS such as the scope, any confirmation key, etc. This model can be followed 
with macaroons - eg the scope returned should be the intersection of the 
original token scope and any scope caveats on the token.

But for many of the things discussed in this thread, the AS can validate by 
itself. For example, if the client appends an audience restricting a token to 
one RS then the AS can validate that because the RS authenticates when it calls 
the introspection endpoint. If the client appends something like a “jti” caveat 
(probably renamed), then the AS can centrally record that to prevent replay - 
this has the same caveats on scalability, but at least can be done once at the 
AS rather than for each RS. 
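A sketch of those AS-side introspection checks — scope intersection plus a 
once-only "jti"-style caveat — follows; the in-memory replay store and all 
names are illustrative:

```python
# Sketch: AS-side introspection for an attenuated token. The returned scope
# is the intersection of the original token scope and any scope caveats, and
# a "jti"-style caveat is recorded centrally to prevent replay.
seen_jtis = set()

def introspect(token_scope: str, caveats: list) -> dict:
    scope = set(token_scope.split())
    for caveat in caveats:
        if "scope" in caveat:
            scope &= set(caveat["scope"].split())
        if "jti" in caveat:
            if caveat["jti"] in seen_jtis:  # replayed proof
                return {"active": False}
            seen_jtis.add(caveat["jti"])
    return {"active": True, "scope": " ".join(sorted(scope))}

print(introspect("a b c", [{"scope": "a b"}, {"jti": "n-1"}]))
# {'active': True, 'scope': 'a b'}
print(introspect("a b c", [{"jti": "n-1"}]))  # second use of the same jti
# {'active': False}
```

The replay store here has the same scalability caveats discussed above, but it 
lives once at the AS rather than at every RS.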

>  
> > Please explain how to achieve the examples I gave of layered attenuation 
> > without using macaroons.
> > 1. The client adds caveats (eg exp = now+5s) to an access token and sends 
> > it to the RS. The RS creates four copies of the token with different scope 
> > constraints and sends them to four individual microservices.
> 
> For my example below:
> Let <AT> be the access token obtained by the client from the AS.
> Let JWE be a function that generates a JWE given a key and payload.
> Let <AS_pub> be the public encryption key for the AS.
>  
> Client:
> <AT'> = JWE(<AS_pub>, { at: <AT>, exp: … })
> 
> RS:
> <AT_a> = JWE(<AS_pub>, { at: <AT'>, scope: scope_a })
> <AT_b> = JWE(<AS_pub>, { at: <AT'>, scope: scope_b })
> <AT_c> = JWE(<AS_pub>, { at: <AT'>, scope: scope_c })
> <AT_d> = JWE(<AS_pub>, { at: <AT'>, scope: scope_d })
> 
> 

Assuming you can only append caveats here, not new claims, this is functionally 
equivalent to macaroons. But only the AS can decrypt these layers, so the RS is 
still forced to call the AS's token introspection endpoint to validate them. 
You've therefore gained nothing over HMAC, while adding considerable CPU and 
size overhead and reducing security.

This is also only secure if the encryption scheme is non-malleable, which (if 
you want provable security) requires IND-CCA2. Not all JWE encryption schemes 
provide this, e.g. RSA1_5 would not be secure for this. The ones that are 
secure largely achieve that by the use of HMAC or another MAC in the 
authenticated content encryption because they are hybrid encryption schemes - 
effectively this is equivalent to using a macaroon where the identifier is an 
encrypted HMAC key, which you can already do with macaroons.

> This pattern can be applied to the other scenarios you provided. The 
> difference between macaroons and the above is that the former relies on 
> chained HMACs and the latter on asymmetric crypto. You also lose the ability 
> to inspect caveats or context that are already in the token, which may or may 
> not be important. This is an interesting property of the macaroon pattern 
> that I’m not sure you could replicate without basically implementing the 
> macaroon pattern in a JWT format.
>  
> > Validation at the AS is an advantage in most cases…
> 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-26 Thread Richard Backman, Annabelle
Torsten,

I'm not tracking how cookies are relevant to the discussion. I'm guessing 
that's because we're not on the same page regarding use cases, so allow me to 
clearly state mine:

The use case I am concerned with is requests between services where end-to-end 
TLS cannot be guaranteed. For example, an enterprise service running 
on-premise, communicating with a service in the cloud, where the enterprise's 
outbound traffic is routed through a TLS Inspection (TLSI) appliance. The TLSI 
appliance sits in the middle of the communication, terminating the TLS session 
established by the on-premise service and establishing a separate TLS 
connection with the cloud service.

In this kind of environment, there is no end-to-end TLS connection between 
on-premise service and cloud service, and it is very unlikely that the TLSI 
appliance is configurable enough to support TLS-based sender-constraint 
mechanisms without significantly compromising on the scope of "sender" (e.g., 
"this service at this enterprise" becomes "this enterprise"). Even if it is 
possible, it is likely to require advanced configuration that is non-trivial 
for administrators to deploy. It's no longer as simple as the developer passing 
a self-signed certificate to the HTTP stack.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/23/19, 9:50 AM, "Torsten Lodderstedt"  wrote:



> On 23. Nov 2019, at 00:34, Richard Backman, Annabelle wrote:
> 
>> how are cookies protected from leakage, replay, injection in a setup 
>> like this?
> They aren’t.

That's very interesting when compared to what we are discussing with respect 
to API security. 

It effectively means anyone able to capture a session cookie, e.g. between 
TLS termination point and application, by way of an HTML injection, or any 
other suitable attack is able to impersonate a legitimate user by injecting the 
cookie(s) in an arbitrary user agent. The impact of such an attack might be 
even worse than abusing an access token given the (typically) broad scope of a 
session.

TLS-based methods for sender constrained access tokens, in contrast, 
prevent this type of replay, even if the requests are protected between client 
and TLS terminating proxy, only. Ensuring the authenticity of the client 
certificate when forwarded from TLS terminating proxy to service, e.g. through 
another authenticated TLS connection, will even prevent injection within the 
data center/cloud environment. 

I come to the conclusion that we already have the mechanism at hand to 
implement APIs with a considerable higher security level than what is accepted 
today for web applications. So what problem do we want to solve?

> But my primary concern here isn't web browser traffic, it's calls from 
services/apps running inside a corporate network to services outside a 
corporate network (e.g., service-to-service API calls that pass through a 
corporate TLS gateway).

Can you please describe the challenges arising in these settings? I assume 
those proxies won’t support CONNECT-style pass-through; otherwise we wouldn’t 
be talking about them.

> 
>> That’s a totally valid point. But again, such a solution makes the life 
of client developers harder. 
>> I personally think, we as a community need to understand the pros and 
cons of both approaches. I also think we have not even come close to this 
point, which, in my opinion, is the prerequisite for making informed decisions.
> 
> Agreed. It's clear that there are a number of parties coming at this from 
a number of different directions, and that's coloring our perceptions. That's 
why I think we need to nail down the scope of what we're trying to solve with 
DPoP before we can have a productive conversation how it should work.

We will do so.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 10:51 PM, "Torsten Lodderstedt"  
wrote:
> 
> 
> 
>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
 wrote:
>> 
>> The service provider doesn't own the entire connection. They have no 
control over corporate or government TLS gateways, or other terminators that 
might exist on the client's side. In larger organizations, or when cloud 
hosting is involved, the service team may not even own all the hops on their 
side.
> 
>how are cookies protected from leakage, replay, injection in a setup 
like this?
> 
>> While presumably they have some trust in them, protection against leaked 
bearer tokens is an attractive defense-in-depth measure.
> 
>That’s a totally valid point. But again, such a solution makes the 
life of client developers harder. 
> 
>I personally think, we as a community need to understand the pros and 
cons of both approaches. I also think we have not even come close to this 
point, which, in my opinion, is the prerequisite for making informed decisions.

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-26 Thread Neil Madden
> …validation at the AS is viable, or where RSes can be trusted with symmetric 
> keys.

See above. 

> The value provided by macaroons (e.g., sender-constrained tokens without 
> client key negotiation/registration/distribution) is worth the cost of 
> defining the format of a DPoP macaroon, specification of algorithms used, 
> etc., and the cognitive load on developers who now have to learn a new token 
> format (instead of JWT, which they might already work with).

There are plenty of existing interoperable macaroon libraries - see the links 
from http://macaroons.io. HMAC-SHA256 is very widely (and usually securely) 
implemented. That’s all you need. 

And as I said before, one of the key advantages is that clients and RSes only 
need to care about the format when they want to take advantage of it. They can 
happily treat them as pure bearer tokens until then. 

Contrast with DPoP, where the RS potentially needs to support 10 different 
public key JWS algorithms, or otherwise have some way of negotiating algorithm 
support with the client and/or AS (in which case they can negotiate a key). 
And the client, AS, and *every* RS need to be simultaneously upgraded to 
support it. (Otherwise a rogue RS can replay the access token at an RS that 
hasn’t been upgraded yet. Not possible with macaroons.) 

Neil


>  
> – 
> Annabelle Richard Backman
> AWS Identity
>  
>  
> From: Neil Madden 
> Date: Sunday, November 24, 2019 at 12:56 AM
> To: "Richard Backman, Annabelle" 
> Cc: Brian Campbell , oauth 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> On 22 Nov 2019, at 12:26, Richard Backman, Annabelle  
> wrote:
> > Yes of course. But this is the HMAC *tag* not the original key.
> Sure. And if the client attenuates the macaroon, it is used as a key that the 
> client proves possession of by presenting the chained HMAC. Clients doing 
> DPoP aren’t proving possession of the “original key” (i.e., a key used to 
> generate the access token) either.
> A way to think of this is that macaroons bridge the gap between bearer tokens 
> and proof of possession tokens. A client can receive a macaroon and use it 
> like a pure bearer token if they want. On the other hand they can append 
> contextual caveats that tightly constrain a token at the point of use, like a 
> PoP token. You can even do a full challenge-response protocol where the RS 
> sends a challenge and the client appends it as a caveat. 
>  
> > Well, you don’t have to return a key from the token endpoint for a start.
> Yes, that’s what I meant by saying that it eliminates key negotiation. Though 
> I suppose it’s more correct to say that it inlines it. The AS still provides 
> a key, it just happens to be part of the access token.
> Which helps a lot with backwards compat. 
> Macaroons are an interesting pattern, but not because they’re not doing PoP. 
> Proof of possession is pretty core to the whole idea of digital signatures 
> and HMACs.
> I would argue that third party verifiability and non-repudiation are also 
> core to digital signatures, but aren’t required or used by DPoP (and actually 
> cause problems). 
>  
> I also don’t think PoP is core to HMAC. Many ASes issue HMAC-signed access 
> tokens already without the client doing any kind of proof of possession. They 
> are a convenient way of minting bearer tokens. 
> What makes them interesting is the way they inline key distribution. Whether 
> or not they’re applicable to DPoP depends, ultimately, on the use cases DPoP 
> is targeting and the threats it is trying to mitigate.
> There are many more interesting things than the key being inline for 
> macaroons. For example:
>  
> - the attenuations (caveats) are attached directly to the access token and 
> are verified by the AS. Contrast this to DPoP where every RS has to correctly 
> validate the proof token - if any don’t then the security is significantly 
> reduced. The AS is responsible for all security-critical checks with 
> macaroons.
>  
> - macaroon caveats can be layered. The initial client can add some 
> restrictions and then pass the token to an RS. That RS can then add its own 
> restrictions when passing the token to backend services. This is a big deal 
> for microservice architectures. 
>  
> - you can add caveats at a gateway or proxy and know these will be enforced 
> without having to inspect incoming traffic. 
>  
> Even when used in combination with PoP, macaroons add unique capabilities. 
> For example, a client can retrieve a plain bearer token from the AS and then 
> after-the-fact bind it to its TLS client certificate by appending a x5t#S256 
> caveat and use that new access token for all API calls. But that client still 
> has the original access token so 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Torsten Lodderstedt


> On 25. Nov 2019, at 17:06, Aaron Parecki  wrote:
> 
> I agree, the Facebook issue had nothing to do with extracting access tokens 
> via a hack, it was entirely Facebook’s fault for issuing access tokens 
> improperly in the first place. They posted some amazing details on what 
> happened on their website.
> 
> https://about.fb.com/news/2018/09/security-update/
> 
> If they couldn’t even get this right, it’s unlikely a sender constrained 
> token would have helped here, and may have been bypassed just like the other 
> three issues that led to the breach.
> 
> > I tend to agree with your assessment. The simplest way with current OAuth 
> > is use of code+pkce+refresh tokens, narrowly scoped access tokens, and 
> > resource indicators to mint RS-specific, privilege restricted, short lived 
> > access tokens. 
> > 
> > Do you think we should spell this out in the SPA BCP?
> 
> I agree that this is probably the best advice we can give. Ultimately people 
> will still make mistakes like the ones that led to the Facebook issue, so all 
> we can do is point people in the right direction.

It also softens the requirements, since it would not require SPAs to have a 
PoP mechanism. 

I think that’s ok, since SPAs with all API and token handling in the browser 
will in any case not be the security-critical ones. SPAs serving as the 
frontend for security-sensitive applications can handle API interactions in 
the backend (where we have practical measures available for sender-constrained 
access tokens). 

> 
> Aaron
> 
> 
> 
> On Mon, Nov 25, 2019 at 6:08 AM Neil Madden  wrote:
> On 25 Nov 2019, at 12:09, Torsten Lodderstedt  wrote:
> > 
> > Hi Neil, 
> > 
> >> On 25. Nov 2019, at 12:38, Neil Madden  wrote:
> >> 
> >> But for web-based SPAs and so on, I'm not sure the cost/benefit trade off 
> >> is really that good. The biggest threat for tokens being stolen/misused is 
> >> still XSS, and DPoP does nothing to protect against that. It also doesn't 
> >> protect against many other ways that tokens leak in browsers - e.g. if a 
> >> token leaks in your browser history then the threat is that the attacker 
> >> is physically using your device, in which case they also have access to 
> >> your DPoP keys. In the cases like the Facebook breach, where highly 
> >> automated mass compromise was achieved, I think we're lacking evidence 
> >> that PoP would help there either.
> >> 
> >> The single most important thing we can do to protect web-based apps is to 
> >> encourage the principle of least privilege. Every access token should be 
> >> as tightly constrained as possible - in scope, in audience, and in expiry 
> >> time. Ideally at the point of being issued ...
> > 
> > I tend to agree with your assessment. The simplest way with current OAuth 
> > is use of code+pkce+refresh tokens, narrowly scoped access tokens, and 
> > resource indicators to mint RS-specific, privilege restricted, short lived 
> > access tokens. 
> > 
> > Do you think we should spell this out in the SPA BCP?
> 
> I think that would certainly be a great start.
> 
> -- Neil
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
> -- 
> 
> Aaron Parecki
> aaronparecki.com
> @aaronpk
> 





Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Aaron Parecki
I agree, the Facebook issue had nothing to do with extracting access tokens
via a hack, it was entirely Facebook’s fault for issuing access tokens
improperly in the first place. They posted some amazing details on what
happened on their website.

https://about.fb.com/news/2018/09/security-update/

If they couldn’t even get this right, it’s unlikely a sender constrained
token would have helped here, and may have been bypassed just like the
other three issues that led to the breach.

> I tend to agree with your assessment. The simplest way with current OAuth
is use of code+pkce+refresh tokens, narrowly scoped access tokens, and
resource indicators to mint RS-specific, privilege restricted, short lived
access tokens.
>
> Do you think we should spell this out in the SPA BCP?

I agree that this is probably the best advice we can give. Ultimately
people will still make mistakes like the ones that led to the Facebook
issue, so all we can do is point people in the right direction.

Aaron



On Mon, Nov 25, 2019 at 6:08 AM Neil Madden 
wrote:

> On 25 Nov 2019, at 12:09, Torsten Lodderstedt 
> wrote:
> >
> > Hi Neil,
> >
> >> On 25. Nov 2019, at 12:38, Neil Madden 
> wrote:
> >>
> >> But for web-based SPAs and so on, I'm not sure the cost/benefit trade
> off is really that good. The biggest threat for tokens being stolen/misused
> is still XSS, and DPoP does nothing to protect against that. It also
> doesn't protect against many other ways that tokens leak in browsers - e.g.
> if a token leaks in your browser history then the threat is that the
> attacker is physically using your device, in which case they also have
> access to your DPoP keys. In the cases like the Facebook breach, where
> highly automated mass compromise was achieved, I think we're lacking
> evidence that PoP would help there either.
> >>
> >> The single most important thing we can do to protect web-based apps is
> to encourage the principle of least privilege. Every access token should be
> as tightly constrained as possible - in scope, in audience, and in expiry
> time. Ideally at the point of being issued ...
> >
> > I tend to agree with your assessment. The simplest way with current
> OAuth is use of code+pkce+refresh tokens, narrowly scoped access tokens,
> and resource indicators to mint RS-specific, privilege restricted, short
> lived access tokens.
> >
> > Do you think we should spell this out in the SPA BCP?
>
> I think that would certainly be a great start.
>
> -- Neil
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
>
-- 

Aaron Parecki
aaronparecki.com
@aaronpk 


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Jared Jennings
+1

-Jared
Skype:jaredljennings
Signal:+1 816.730.9540
WhatsApp: +1 816.678.4152


On Mon, Nov 25, 2019 at 8:08 AM Neil Madden 
wrote:

> On 25 Nov 2019, at 12:09, Torsten Lodderstedt 
> wrote:
> >
> > Hi Neil,
> >
> >> On 25. Nov 2019, at 12:38, Neil Madden 
> wrote:
> >>
> > Do you think we should spell this out in the SPA BCP?
>
> I think that would certainly be a great start.
>
> -- Neil
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
>


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Neil Madden
On 25 Nov 2019, at 12:09, Torsten Lodderstedt  wrote:
> 
> Hi Neil, 
> 
>> On 25. Nov 2019, at 12:38, Neil Madden  wrote:
>> 
>> But for web-based SPAs and so on, I'm not sure the cost/benefit trade off is 
>> really that good. The biggest threat for tokens being stolen/misused is 
>> still XSS, and DPoP does nothing to protect against that. It also doesn't 
>> protect against many other ways that tokens leak in browsers - e.g. if a 
>> token leaks in your browser history then the threat is that the attacker is 
>> physically using your device, in which case they also have access to your 
>> DPoP keys. In the cases like the Facebook breach, where highly automated 
>> mass compromise was achieved, I think we're lacking evidence that PoP would 
>> help there either.
>> 
>> The single most important thing we can do to protect web-based apps is to 
>> encourage the principle of least privilege. Every access token should be as 
>> tightly constrained as possible - in scope, in audience, and in expiry time. 
>> Ideally at the point of being issued ...
> 
> I tend to agree with your assessment. The simplest way with current OAuth is 
> use of code+pkce+refresh tokens, narrowly scoped access tokens, and resource 
> indicators to mint RS-specific, privilege restricted, short lived access 
> tokens. 
> 
> Do you think we should spell this out in the SPA BCP?

I think that would certainly be a great start.

-- Neil


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Torsten Lodderstedt
Hi Neil, 

> On 25. Nov 2019, at 12:38, Neil Madden  wrote:
> 
> But for web-based SPAs and so on, I'm not sure the cost/benefit trade off is 
> really that good. The biggest threat for tokens being stolen/misused is still 
> XSS, and DPoP does nothing to protect against that. It also doesn't protect 
> against many other ways that tokens leak in browsers - e.g. if a token leaks 
> in your browser history then the threat is that the attacker is physically 
> using your device, in which case they also have access to your DPoP keys. In 
> the cases like the Facebook breach, where highly automated mass compromise 
> was achieved, I think we're lacking evidence that PoP would help there either.
> 
> The single most important thing we can do to protect web-based apps is to 
> encourage the principle of least privilege. Every access token should be as 
> tightly constrained as possible - in scope, in audience, and in expiry time. 
> Ideally at the point of being issued ...

I tend to agree with your assessment. The simplest way with current OAuth is 
use of code+pkce+refresh tokens, narrowly scoped access tokens, and resource 
indicators to mint RS-specific, privilege restricted, short lived access 
tokens. 

Do you think we should spell this out in the SPA BCP?

best regards,
Torsten. 



Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Neil Madden
Hi Dave,

> On 25 Nov 2019, at 08:28, Dave Tonge  wrote:
> 
> Hi Neil and Torsten
> 
> I agree that the risk is about token theft / leakage. My understanding is 
> that we should assume that at some point access tokens will be leaked, 
> e.g. Facebook: 
> https://auth0.com/blog/facebook-access-token-data-breach-early-look/ 
> 
I think this example is interesting because DPoP (or mTLS) wouldn't have 
prevented it. The access tokens in this case were deliberately issued by 
Facebook to the wrong user to implement the "View As" feature, so PoP wouldn't 
prevent this, as the tokens weren't stolen/leaked; they were mis-issued and 
incorrectly scoped. (As I understand it the incorrect access token "had the 
permissions of the mobile app" - i.e. incorrect scope. It wasn't actually a 
token issued to the mobile app).

> If access tokens were cryptographically sender-constrained, then 
> leaked/stolen access tokens would be useless.
> I take your point that some of the ways in which an access token would leak, 
> would also leak the dPOP headers, this is why section 9.1 has the 
> recommendations around `iat` and `jti`. While this doesn't eliminate the 
> risk, it does reduce it.
> 
> So my perspective is that dPOP allows sender-constrained access tokens in 
> scenarios where mutual tls / token binding is not possible. This is a good 
> protection against token leakage / theft.

My perspective is that it's the claims that are doing the heavy lifting here. 
The signature is, by definition, valid for all RSes. Given that the claims 
(restrictions really) are the important bit there are simpler ways to achieve 
this - macaroons being my preference.

Some broader points about the uses and costs of PoP tokens:

In a backend microservice architecture, service to service calls are often 
authorized by service account tokens. These tokens often have significantly 
higher privileges compared to normal users because the same token is used for 
every request. So PoP-binding these tokens makes a lot of sense because 
compromise of one of these tokens has a large blast radius. It's also much 
easier to achieve PoP-bound tokens in a closed ecosystem - e.g., just spin up a 
service mesh with automatic mTLS between all service instances and bind your 
access tokens to those certs.

For some deployment models like IoT that have much riskier threat profiles, it 
can also make sense to do PoP because tokens might pass through various 
protocol-translating proxies and over riskier communication channels. In this 
case you're probably willing to accept a bit of extra complexity because you 
accept that as part of the cost of operating securely in these environments. 
(Or you don't and your internet-enabled lightbulbs become a botnet). But you 
almost certainly have power and resource budgets that you need to keep within, 
so amortizing the cost of any public key crypto over many requests is crucial.

But for web-based SPAs and so on, I'm not sure the cost/benefit trade off is 
really that good. The biggest threat for tokens being stolen/misused is still 
XSS, and DPoP does nothing to protect against that. It also doesn't protect 
against many other ways that tokens leak in browsers - e.g. if a token leaks in 
your browser history then the threat is that the attacker is physically using 
your device, in which case they also have access to your DPoP keys. In the 
cases like the Facebook breach, where highly automated mass compromise was 
achieved, I think we're lacking evidence that PoP would help there either.

The single most important thing we can do to protect web-based apps is to 
encourage the principle of least privilege. Every access token should be as 
tightly constrained as possible - in scope, in audience, and in expiry time. 
Ideally at the point of being issued - which is why I think any next-gen OAuth 
must support issuing multiple fine-grained access tokens. Where tokens can't be 
constrained at the point of issue, then the client should be able to constrain 
them afterwards at the point of use. They could do this via DPoP, but for all 
the reasons I've mentioned before I think macaroons make more sense here.

For mobile apps however, where the situation is much better than for SPAs, DPoP 
may have real value. A mobile app can realistically generate keys within a 
secure enclave that requires local user authentication to access (enforced by 
the hardware), and there's typically no risk of XSS. Mobile phones are at risk 
of attacks by physically present attackers (e.g., being left unlocked while you 
go for a bathroom break), so DPoP could add real value here by making it much 
harder to use those apps without the user's consent - against even quite 
determined and sophisticated attackers.

If there is support in the WG to move the draft forward, then I'm happy that 
I've made the points I wanted to make. I would still like to see a much 
expanded rationale 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-25 Thread Dave Tonge
Hi Neil and Torsten

I agree that the risk is about token theft / leakage. My understanding is
that we should assume that at some point access tokens will be leaked,
e.g. Facebook:
https://auth0.com/blog/facebook-access-token-data-breach-early-look/

If access tokens were cryptographically sender-constrained, then
leaked/stolen access tokens would be useless.
I take your point that some of the ways in which an access token would
leak, would also leak the dPOP headers, this is why section 9.1 has the
recommendations around `iat` and `jti`. While this doesn't eliminate the
risk, it does reduce it.

So my perspective is that dPOP allows sender-constrained access tokens in
scenarios where mutual tls / token binding is not possible. This is a good
protection against token leakage / theft.
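The `iat`/`jti` recommendations mentioned above can be sketched as follows. This is an illustrative sketch only, not code from the draft: the claim names (`htm`, `htu`, `iat`, `jti`) follow the DPoP proof layout, the JWS signing of the proof (ES256 in the draft) is elided, and the class and parameter names are invented for this example.

```python
import time
import uuid

def make_proof_claims(http_method, http_uri):
    """Claims a client would sign into each DPoP proof: fresh jti, current iat."""
    return {
        "htm": http_method,        # HTTP method of the request
        "htu": http_uri,           # HTTP URI of the request
        "iat": int(time.time()),   # issued-at, checked for freshness by the RS
        "jti": str(uuid.uuid4()),  # unique ID, checked against replay by the RS
    }

class ReplayChecker:
    """RS-side check per the section 9.1 advice: reject stale or reused proofs."""
    def __init__(self, max_age_seconds=60):
        self.max_age = max_age_seconds
        self.seen_jtis = set()

    def accept(self, claims, now=None):
        now = int(time.time()) if now is None else now
        if now - claims["iat"] > self.max_age:
            return False  # proof too old: likely captured and replayed later
        if claims["jti"] in self.seen_jtis:
            return False  # jti already seen: replay of an earlier proof
        self.seen_jtis.add(claims["jti"])
        return True

claims = make_proof_claims("POST", "https://rs.example.com/resource")
checker = ReplayChecker()
assert checker.accept(claims) is True    # first presentation accepted
assert checker.accept(claims) is False   # replay of the same proof rejected
```

As the paragraph above says, this reduces but does not eliminate the risk: a leaked proof can still be replayed within the freshness window at any RS that has not yet seen its `jti`.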

Dave


On Sun, 24 Nov 2019 at 10:43, Neil Madden  wrote:

> On 24 Nov 2019, at 07:59, Torsten Lodderstedt 
> wrote:
>
>
> Hi Neil,
>
> I would like to summarize what I believe to have understood is your
> opinion before commenting:
> 1) audience restricted access tokens is the way to cope with replay
> attempts between RSs
>
>
> It’s one way, but yes that is sufficient.
>
> 2) TLS prevents replay at the same RS
>
> re 1) that works as long as ASs support audience restrictions and the
> audience restriction is the actual resource server URL, otherwise a staged
> RS can obtain access tokens audience restricted for a different RS and
> replay it there
>
>
> Yes, audience restrictions only work if the AS supports it. DPoP only
> works if the AS, client, and *all* RSes all support it, right?
>
> I’m not sure of your second point. Obviously an audience restriction
> needs to be unambiguous if it is to have any effect.
>
> re 2) it seems you look onto that threat from the inside of a TLS
> connection. Let’s assume the attacker obtains the access tokens at the
> application layer, e.g. through a log file, referrer header, mix-up,
> browser history and then sends it through a new TLS connection to the same
> RS. How does TLS help to detect this replay?
>
>
> These are token leakage/theft not replay -
> https://en.m.wikipedia.org/wiki/Replay_attack
>
> And TLS has done a lot to protect against even these threats. For example,
> leaking credentials in logs was much more of a threat when you had to
> consider all kinds of proxies and middleboxes along the route. TLS has
> completely eliminated that threat, leaving just the logs at the RS itself.
> And the others are largely protected against by not putting access tokens
> in URLs, and things like Referrer-Policy/rel=no-referrer.
>
> Leaking an audience-restricted access token into the logs of the RS itself
> seems a relatively minor threat to worry about. If you’re not managing logs
> securely then you’re probably already leaking all kinds of PII and other
> sensitive data that the access token grants access to.
>
> If the client and RS can’t get these things right then I would question
> whether public key signatures and associated key management is more likely
> to be done right.
>
> With macaroons the complexity is reduced and the AS performs all the
> checks.
>
> With ECDH, although complex, the critical security checks are encoded into
> the key derivation process - leading to the very desirable property that
> security failures become interoperability failures and so are more likely
> to be found and fixed in testing. (See the work done on using implicit
> nonces in TLS for an example of this principle -
> https://blog.cloudflare.com/tls-nonce-nse/)
>
> — Neil
>
>
> Am 24.11.2019 um 08:40 schrieb Neil Madden :
>
>
> On 22 Nov 2019, at 13:33, Torsten Lodderstedt 
> wrote:
>
>
> Hi Neil,
>
>
> On 22. Nov 2019, at 20:50, Neil Madden  wrote:
>
>
> Hi Torsten,
>
>
> On 22 Nov 2019, at 12:15, Torsten Lodderstedt 
> wrote:
>
>
> Hi Neil,
>
>
> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>
>
> I think the phrase "token replay" is ambiguous. Traditionally it refers to
> an attacker being able to capture a token (or whole requests) in use and
> then replay it against the same RS. This is already protected against by
> the use of normal TLS on the connection between the client and the RS. I
> think instead you are referring to a malicious/compromised RS replaying the
> token to a different RS - which has more of the flavour of a man in the
> middle attack (of the phishing kind).
>
>
> I would argue TLS basically prevents leakage and not replay.
>
>
> It also protects against replay. If you capture TLS-encrypted packets with
> Wireshark you not only cannot decipher them but also cannot replay them
> because they include specific anti-replay measures at the record level in
> the form of unique session keys and record sequence numbers included in the
> MAC calculations. This is essential to the security of TLS.
>
>
> I understand. I was looking onto TLS from an application perspective, that
> might explain differing perception.
>
>
>
> The threats we try to cope with can be found in the 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-24 Thread Neil Madden
On 24 Nov 2019, at 07:59, Torsten Lodderstedt  wrote:
> 
> Hi Neil,
> 
> I would like to summarize what I believe to have understood is your opinion 
> before commenting:
> 1) audience restricted access tokens is the way to cope with replay attempts 
> between RSs

It’s one way, but yes that is sufficient. 

> 2) TLS prevents replay at the same RS
> 
> re 1) that works as long as ASs support audience restrictions and the 
> audience restriction is the actual resource server URL, otherwise a staged RS 
> can obtain access tokens audience restricted for a different RS and replay it 
> there

Yes, audience restrictions only work if the AS supports it. DPoP only works if 
the AS, client, and *all* RSes all support it, right?

I’m not sure of your second point. Obviously an audience restriction needs to 
be unambiguous if it is to have any effect. 
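The point about unambiguous audience restrictions comes down to a simple RS-side check: the RS must compare the token's `aud` against its own canonical resource URL, so a token minted for a different RS fails. A minimal sketch (the claim layout follows the JWT `aud` convention of a string or array of strings; the URLs are hypothetical):

```python
def audience_ok(token_claims, my_resource_url):
    """RS-side check: accept only tokens whose aud names this exact RS."""
    aud = token_claims.get("aud")
    if aud is None:
        return False  # no audience restriction at all: reject
    audiences = aud if isinstance(aud, list) else [aud]
    return my_resource_url in audiences

claims = {"aud": "https://rs1.example.com"}
assert audience_ok(claims, "https://rs1.example.com")      # intended RS
assert not audience_ok(claims, "https://rs2.example.com")  # replay at another RS fails
```

This is the "staged RS" scenario from the summary above: the check only helps if the `aud` value is the actual resource server URL, not some broader identifier that several RSes share.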

> re 2) it seems you look onto that threat from the inside of a TLS connection. 
> Let’s assume the attacker obtains the access tokens at the application layer, 
> e.g. through a log file, referrer header, mix-up, browser history and then 
> sends it through a new TLS connection to the same RS. How does TLS help to 
> detect this replay?

These are token leakage/theft not replay - 
https://en.m.wikipedia.org/wiki/Replay_attack

And TLS has done a lot to protect against even these threats. For example, 
leaking credentials in logs was much more of a threat when you had to consider 
all kinds of proxies and middleboxes along the route. TLS has completely 
eliminated that threat, leaving just the logs at the RS itself. And the others 
are largely protected against by not putting access tokens in URLs, and things 
like Referrer-Policy/rel=no-referrer.

Leaking an audience-restricted access token into the logs of the RS itself 
seems a relatively minor threat to worry about. If you’re not managing logs 
securely then you’re probably already leaking all kinds of PII and other 
sensitive data that the access token grants access to. 

If the client and RS can’t get these things right then I would question whether 
public key signatures and associated key management is more likely to be done 
right. 

With macaroons the complexity is reduced and the AS performs all the checks. 

With ECDH, although complex, the critical security checks are encoded into the 
key derivation process - leading to the very desirable property that security 
failures become interoperability failures and so are more likely to be found 
and fixed in testing. (See the work done on using implicit nonces in TLS for an 
example of this principle - https://blog.cloudflare.com/tls-nonce-nse/)

— Neil

> 
>>> Am 24.11.2019 um 08:40 schrieb Neil Madden :
>>> 
>>> On 22 Nov 2019, at 13:33, Torsten Lodderstedt  
>>> wrote:
>>> 
>>> Hi Neil,
>>> 
> On 22. Nov 2019, at 20:50, Neil Madden  wrote:
 
 Hi Torsten,
 
> On 22 Nov 2019, at 12:15, Torsten Lodderstedt  
> wrote:
> 
> Hi Neil,
> 
>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>> 
>> I think the phrase "token replay" is ambiguous. Traditionally it refers 
>> to an attacker being able to capture a token (or whole requests) in use 
>> and then replay it against the same RS. This is already protected 
>> against by the use of normal TLS on the connection between the client 
>> and the RS. I think instead you are referring to a malicious/compromised 
>> RS replaying the token to a different RS - which has more of the flavour 
>> of a man in the middle attack (of the phishing kind).
> 
> I would argue TLS basically prevents leakage and not replay.
 
 It also protects against replay. If you capture TLS-encrypted packets with 
 Wireshark you not only cannot decipher them but also cannot replay them 
 because they include specific anti-replay measures at the record level in 
 the form of unique session keys and record sequence numbers included in 
 the MAC calculations. This is essential to the security of TLS.
>>> 
>>> I understand. I was looking onto TLS from an application perspective, that 
>>> might explain differing perception.
>>> 
 
> The threats we try to cope with can be found in the Security BCP. There 
> are multiple ways access tokens can leak, including referrer headers, 
> mix-up, open redirection, browser history, and all sorts of access token 
> leakage at the resource server
> 
> Please have a look at 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
> 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8
>  also has an extensive discussion of potential counter measures, 
> including audience restricted access tokens and a conclusion to recommend 
> sender constrained access tokens over other mechanisms.
 
 OK, good - these are threats beyond token replay (at least as I understand 
 that term). It would be good to 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-24 Thread Neil Madden
On 22 Nov 2019, at 12:26, Richard Backman, Annabelle  
wrote:
> > Yes of course. But this is the HMAC *tag* not the original key.
> 
> Sure. And if the client attenuates the macaroon, it is used as a key that the 
> client proves possession of by presenting the chained HMAC. Clients doing 
> DPoP aren’t proving possession of the “original key” (i.e., a key used to 
> generate the access token) either.
> 
A way to think of this is that macaroons bridge the gap between bearer tokens 
and proof of possession tokens. A client can receive a macaroon and use it like 
a pure bearer token if they want. On the other hand they can append contextual 
caveats that tightly constrain a token at the point of use, like a PoP token. 
You can even do a full challenge-response protocol where the RS sends a 
challenge and the client appends it as a caveat. 
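The HMAC chaining behind this is small enough to sketch. This is an illustrative toy, not any particular macaroon library: the function names and caveat byte strings are invented, and verification of the caveat *predicates* (expiry, audience, challenge matching) is omitted — only the integrity of the chain is checked.

```python
import hashlib
import hmac

def _chain(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key, identifier):
    """AS mints a macaroon: (identifier, caveats, tag), tag = HMAC(root_key, id)."""
    return (identifier, [], _chain(root_key, identifier))

def add_caveat(macaroon, caveat):
    """Anyone holding the macaroon can append a caveat; the tag is re-chained."""
    identifier, caveats, tag = macaroon
    return (identifier, caveats + [caveat], _chain(tag, caveat))

def verify(root_key, macaroon):
    """Only the AS (holding root_key) can recompute and check the chain."""
    identifier, caveats, tag = macaroon
    t = _chain(root_key, identifier)
    for caveat in caveats:
        t = _chain(t, caveat)
    return hmac.compare_digest(t, tag)

root_key = b"\x00" * 32                 # AS's secret (placeholder value)
m = mint(root_key, b"token-id-123")
assert verify(root_key, m)              # usable as a plain bearer token

# Client appends an RS-supplied challenge as a caveat (challenge-response).
m2 = add_caveat(m, b"challenge=nonce-xyz")
assert verify(root_key, m2)

# Caveats cannot be stripped without the root key: dropping one breaks the tag.
forged = (m2[0], m2[1][:-1], m2[2])
assert not verify(root_key, forged)
```

This is the bearer-to-PoP bridge described above: the unmodified macaroon is a bearer token, while each appended caveat narrows it, and only the AS can tell a valid chain from a forged one.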

> > Well, you don’t have to return a key from the token endpoint for a start.
> 
> Yes, that’s what I meant by saying that it eliminates key negotiation. Though 
> I suppose it’s more correct to say that it inlines it. The AS still provides 
> a key, it just happens to be part of the access token.
> 
Which helps a lot with backwards compat. 
> Macaroons are an interesting pattern, but not because they’re not doing PoP. 
> Proof of possession is pretty core to the whole idea of digital signatures 
> and HMACs.
> 
I would argue that third party verifiability and non-repudiation are also core 
to digital signatures, but aren’t required or used by DPoP (and actually cause 
problems). 

I also don’t think PoP is core to HMAC. Many ASes issue HMAC-signed access 
tokens already without the client doing any kind of proof of possession. They 
are a convenient way of minting bearer tokens. 
> What makes them interesting is the way they inline key distribution. Whether 
> or not they’re applicable to DPoP depends, ultimately, on the use cases DPoP 
> is targeting and the threats it is trying to mitigate.
> 
There are many more interesting things than the key being inline for macaroons. 
For example:

- the attenuations (caveats) are attached directly to the access token and are 
verified by the AS. Contrast this to DPoP where every RS has to correctly 
validate the proof token - if any don’t then the security is significantly 
reduced. The AS is responsible for all security-critical checks with macaroons.

- macaroon caveats can be layered. The initial client can add some restrictions 
and then pass the token to an RS. That RS can then add its own restrictions 
when passing the token to backend services. This is a big deal for microservice 
architectures. 

- you can add caveats at a gateway or proxy and know these will be enforced 
without having to inspect incoming traffic. 

Even when used in combination with PoP, macaroons add unique capabilities. For 
example, a client can retrieve a plain bearer token from the AS and then 
after-the-fact bind it to its TLS client certificate by appending a x5t#S256 
caveat and use that new access token for all API calls. But that client still 
has the original access token so they can get the certificate for a different 
client (eg another microservice) and create a new copy of the access token 
bound to that client’s certificate. It can then safely send this access token 
to the other client, even over a completely insecure connection. It can do this 
for every microservice it needs to talk to, effectively providing transfer of 
ownership for PoP tokens without needing to call a central token exchange 
service.
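The after-the-fact binding described above can be sketched as follows. The thumbprint encoding follows the `x5t#S256` convention (base64url-encoded SHA-256 of the DER certificate); the caveat syntax, the placeholder certificate bytes, and the tag chaining are illustrative assumptions, not a specified wire format.

```python
import base64
import hashlib
import hmac

def x5t_s256(der_cert):
    """base64url-encoded (unpadded) SHA-256 hash of a DER certificate."""
    digest = hashlib.sha256(der_cert).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def append_caveat(tag, caveat):
    """Macaroon-style chaining: new tag = HMAC(old tag, caveat)."""
    return hmac.new(tag, caveat, hashlib.sha256).digest()

# Placeholders: in practice these are real DER certs and the AS-issued tag.
client_a_cert = b"DER bytes of client A's TLS certificate"
client_b_cert = b"DER bytes of client B's TLS certificate"
bearer_tag = bytes.fromhex("aa" * 32)

# Bind one copy of the token per downstream client; each bound copy is only
# usable over a TLS connection presenting the matching certificate.
tag_a = append_caveat(bearer_tag, b"x5t#S256=" + x5t_s256(client_a_cert).encode())
tag_b = append_caveat(bearer_tag, b"x5t#S256=" + x5t_s256(client_b_cert).encode())
assert tag_a != tag_b                         # distinct bound copies
assert len(x5t_s256(client_a_cert)) == 43     # 32 bytes -> 43 base64url chars
```

The key point from the paragraph above: because the holder of the original (unbound) tag can derive a bound copy for any certificate, ownership transfer needs no central token exchange service, and the bound copy can even be sent over an insecure channel.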

All this and I haven’t even begun talking about 3rd party caveats. 

So the really interesting thing about macaroons is that they enable all kinds 
of new authorization patterns to be built without requiring a new spec for each 
one. 

Neil

> From: Neil Madden 
> Date: Friday, November 22, 2019 at 3:09 PM
> To: "Richard Backman, Annabelle" 
> Cc: Brian Campbell , oauth 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
> 
>  
> 
> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
> wrote:
> 
>  
> 
> Macaroons are built on proof of possession. In order to add a caveat to a 
> macaroon, the sender has to have the HMAC of the macaroon without their 
> caveat.
> 
>  
> 
> Yes of course. But this is the HMAC *tag* not the original key. They can’t 
> change anything the AS originally signed. 
> 
> 
> 
> 
> The distinctive property of macaroons as I see it is that they eliminate the 
> need for key negotiation with the bearer. How much value this has over the AS 
> just returning a symmetric key alongside the access token in the token 
> request, I’m not sure.
> 
>  
> 
> Well, you don’t have to return a key from the token endpoint for a start. The 
> client doesn’t need to create and send any additional token. 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-23 Thread Torsten Lodderstedt
Hi Neil,

I would like to summarize what I understand your opinion to be before 
commenting:
1) audience-restricted access tokens are the way to cope with replay attempts 
between RSs
2) TLS prevents replay at the same RS

re 1) that works as long as ASs support audience restrictions and the audience 
restriction is the actual resource server URL; otherwise a staged RS can obtain 
access tokens audience-restricted for a different RS and replay them there
re 2) it seems you look at that threat from inside a TLS connection. Let’s 
assume the attacker obtains the access token at the application layer, e.g. 
through a log file, referrer header, mix-up, or browser history, and then 
sends it through a new TLS connection to the same RS. How does TLS help to 
detect this replay?

best regards,
Torsten.

> Am 24.11.2019 um 08:40 schrieb Neil Madden :
> 
> On 22 Nov 2019, at 13:33, Torsten Lodderstedt  
> wrote:
>> 
>> Hi Neil,
>> 
 On 22. Nov 2019, at 20:50, Neil Madden  wrote:
>>> 
>>> Hi Torsten,
>>> 
 On 22 Nov 2019, at 12:15, Torsten Lodderstedt  
 wrote:
 
 Hi Neil,
 
> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
> 
> I think the phrase "token replay" is ambiguous. Traditionally it refers 
> to an attacker being able to capture a token (or whole requests) in use 
> and then replay it against the same RS. This is already protected against 
> by the use of normal TLS on the connection between the client and the RS. 
> I think instead you are referring to a malicious/compromised RS replaying 
> the token to a different RS - which has more of the flavour of a man in 
> the middle attack (of the phishing kind).
 
 I would argue TLS basically prevents leakage and not replay.
>>> 
>>> It also protects against replay. If you capture TLS-encrypted packets with 
>>> Wireshark you not only cannot decipher them but also cannot replay them 
>>> because they include specific anti-replay measures at the record level in 
>>> the form of unique session keys and record sequence numbers included in the 
>>> MAC calculations. This is essential to the security of TLS.
>> 
>> I understand. I was looking at TLS from an application perspective; that 
>> might explain the differing perception.
>> 
>>> 
 The threats we try to cope with can be found in the Security BCP. There 
 are multiple ways access tokens can leak, including referrer headers, 
 mix-up, open redirection, browser history, and all sorts of access token 
 leakage at the resource server
 
 Please have a look at 
 https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
 
 https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8
  also has an extensive discussion of potential counter measures, including 
 audience restricted access tokens and a conclusion to recommend sender 
 constrained access tokens over other mechanisms.
>>> 
>>> OK, good - these are threats beyond token replay (at least as I understand 
>>> that term). It would be good to explicitly add them to the DPoP document 
>>> motivation.
>>> 
>>> Note that most of these ways that an access token can leak also apply 
>>> equally to leak of the DPoP JWT, so the protection afforded by DPoP boils 
>>> down to how well the restrictions encoded into the JWT prevent it from 
>>> being reused in this case - e.g., restricting the expiry time, audience, 
>>> scope, linking it to a specific request (htm/htu) etc. 
>>> 
>>> Every single one of those restrictions can be equally well encoded as 
>>> caveats on a macaroon access token without any need for public key 
>>> signatures or additional tokens and headers.
>>> 
> But if that's the case then there are much simpler defences than those 
> proposed in the current draft:
> 
> 1. Get separate access tokens for each RS with correct audience and 
> scopes. The consensus appears to be that this is hard to do in some 
> cases, hence the draft.
 
 How many deployments do you know that today are able to issue RS-specific 
 access tokens?
 BTW: how would you identify the RS?
 
 I agree that would be an alternative and I’m a great fan of such tokens 
 (and used them a lot at Deutsche Telekom), but in my perception this 
 pattern still needs to be established in the market. Moreover, they 
 basically protect against a rogue RS (if the URL is used as audience) 
 replaying the token someplace else, but they do not protect against all other 
 kinds of leakage/replay (e.g. log files).
>>> 
>>> Many services already do this. For example, Google encodes the intended RS 
>>> into the scopes on GCP 
>>> (https://developers.google.com/identity/protocols/googlescopes). A client 
>>> can do a single authorization flow to authorize all the scopes it needs and 
>>> then use repeated calls to the refresh token endpoint to obtain individual 
>>> access tokens with 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-23 Thread Neil Madden
On 22 Nov 2019, at 13:33, Torsten Lodderstedt  wrote:
> 
> Hi Neil,
> 
>> On 22. Nov 2019, at 20:50, Neil Madden  wrote:
>> 
>> Hi Torsten,
>> 
>>> On 22 Nov 2019, at 12:15, Torsten Lodderstedt  
>>> wrote:
>>> 
>>> Hi Neil,
>>> 
 On 22. Nov 2019, at 18:08, Neil Madden  wrote:
 
 I think the phrase "token replay" is ambiguous. Traditionally it refers to 
 an attacker being able to capture a token (or whole requests) in use and 
 then replay it against the same RS. This is already protected against by 
 the use of normal TLS on the connection between the client and the RS. I 
 think instead you are referring to a malicious/compromised RS replaying 
 the token to a different RS - which has more of the flavour of a man in 
 the middle attack (of the phishing kind).
>>> 
>>> I would argue TLS basically prevents leakage and not replay.
>> 
>> It also protects against replay. If you capture TLS-encrypted packets with 
>> Wireshark you not only cannot decipher them but also cannot replay them 
>> because they include specific anti-replay measures at the record level in 
>> the form of unique session keys and record sequence numbers included in the 
>> MAC calculations. This is essential to the security of TLS.
> 
> I understand. I was looking at TLS from an application perspective; that 
> might explain the differing perception.
> 
>> 
>>> The threats we try to cope with can be found in the Security BCP. There are 
>>> multiple ways access tokens can leak, including referrer headers, mix-up, 
>>> open redirection, browser history, and all sorts of access token leakage at 
>>> the resource server
>>> 
>>> Please have a look at 
>>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
>>> 
>>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
>>> also has an extensive discussion of potential counter measures, including 
>>> audience restricted access tokens and a conclusion to recommend sender 
>>> constrained access tokens over other mechanisms.
>> 
>> OK, good - these are threats beyond token replay (at least as I understand 
>> that term). It would be good to explicitly add them to the DPoP document 
>> motivation.
>> 
>> Note that most of these ways that an access token can leak also apply 
>> equally to leak of the DPoP JWT, so the protection afforded by DPoP boils 
>> down to how well the restrictions encoded into the JWT prevent it from being 
>> reused in this case - e.g., restricting the expiry time, audience, scope, 
>> linking it to a specific request (htm/htu) etc. 
>> 
>> Every single one of those restrictions can be equally well encoded as 
>> caveats on a macaroon access token without any need for public key 
>> signatures or additional tokens and headers.
>> 
 But if that's the case then there are much simpler defences than those 
 proposed in the current draft:
 
 1. Get separate access tokens for each RS with correct audience and 
 scopes. The consensus appears to be that this is hard to do in some cases, 
 hence the draft.
>>> 
>>> How many deployments do you know that today are able to issue RS-specific 
>>> access tokens?
>>> BTW: how would you identify the RS?
>>> 
>>> I agree that would be an alternative and I’m a great fan of such tokens 
>>> (and used them a lot at Deutsche Telekom), but in my perception this pattern 
>>> still needs to be established in the market. Moreover, they basically 
>>> protect against a rogue RS (if the URL is used as audience) replaying the 
>>> token someplace else, but they do not protect against all other kinds of 
>>> leakage/replay (e.g. log files).
>> 
>> Many services already do this. For example, Google encodes the intended RS 
>> into the scopes on GCP 
>> (https://developers.google.com/identity/protocols/googlescopes). A client 
>> can do a single authorization flow to authorize all the scopes it needs and 
>> then use repeated calls to the refresh token endpoint to obtain individual 
>> access tokens with subsets of the authorized scopes for each endpoint.
> 
> And that works at Google? How does the client indicate the RS at which it 
> wants to use the first access token (the one it obtains in the course of the 
> code exchange)?

It doesn’t. The initial access token would be for all scopes and the client 
simply discards that one (or revokes it if the AS supports revoking individual 
tokens). 
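
The pattern Neil describes — one authorization for all scopes, then a 
down-scoped token per RS via the refresh grant — relies on the optional 
`scope` parameter of the refresh-token request (RFC 6749 §6). A sketch of 
the request body only; the token value is a placeholder:

```python
from urllib.parse import urlencode

# RFC 6749 §6: a refresh-token request MAY include "scope" to request an
# access token narrowed to a subset of the originally granted scopes.
body = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "8xLOxBtZp8",  # placeholder value
    "scope": "https://www.googleapis.com/auth/drive.readonly",
})
assert "grant_type=refresh_token" in body
```

Repeating this call with a different `scope` subset yields one RS-specific 
access token per resource.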

>> (I think Brian also mentioned this pattern at OSW, but it might have been 
>> somebody else).
> 
> I know the pattern and we used this at Deutsche Telekom, but I don’t know any 
> other deployment utilising this pattern. In my observation, most people treat 
> access tokens as cookies and use them across RSs. Another reason might be 
> that, before resource indicators, there was no interoperable way to ask for a 
> token for a certain RS.

I don’t know anybody using DPoP either. The point is that you can do this kind 
of thing right now, so DPoP needs to have a stronger 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-23 Thread Torsten Lodderstedt


> On 23. Nov 2019, at 00:34, Richard Backman, Annabelle  
> wrote:
> 
>> how are cookies protected from leakage, replay, injection in a setup like 
>> this?
> They aren’t.

That's very interesting when compared to what we are discussing with respect 
to API security. 

It effectively means anyone able to capture a session cookie, e.g. between the 
TLS termination point and the application, by way of an HTML injection, or via 
any other suitable attack, is able to impersonate a legitimate user by 
injecting the cookie(s) into an arbitrary user agent. The impact of such an 
attack might be even worse than abusing an access token, given the (typically) 
broad scope of a session.

TLS-based methods for sender-constrained access tokens, in contrast, prevent 
this type of replay, even if the requests are only protected between the client 
and the TLS terminating proxy. Ensuring the authenticity of the client 
certificate when forwarded from the TLS terminating proxy to the service, e.g. 
through another authenticated TLS connection, will even prevent injection 
within the data center/cloud environment. 
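
The forwarded-certificate check referred to here is the RFC 8705 confirmation 
check: the RS hashes the certificate handed to it by the proxy and compares it 
to the token's `x5t#S256` confirmation claim. A sketch with dummy certificate 
bytes and a hypothetical forwarding header:

```python
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    # x5t#S256: base64url-encoded SHA-256 hash of the DER certificate
    # (RFC 8705), with base64 padding stripped.
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# In practice cert_der would come from a header set by the authenticated
# TLS-terminating proxy (e.g. a forwarded client-cert header); dummy here.
cert_der = b"0\x82\x01\x0a_dummy_certificate_bytes"
cnf = {"x5t#S256": x5t_s256(cert_der)}

# RS-side check on each request: token confirmation must match the
# certificate the caller actually presented.
assert cnf["x5t#S256"] == x5t_s256(cert_der)
```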

I come to the conclusion that we already have the mechanisms at hand to 
implement APIs with a considerably higher security level than what is accepted 
today for web applications. So what problem do we want to solve?

> But my primary concern here isn't web browser traffic, it's calls from 
> services/apps running inside a corporate network to services outside a 
> corporate network (e.g., service-to-service API calls that pass through a 
> corporate TLS gateway).

Can you please describe the challenges arising in these settings? I assume 
those proxies won’t support CONNECT-style pass-through; otherwise we wouldn’t 
be talking about them.

> 
>> That’s a totally valid point. But again, such a solution makes the life of 
>> client developers harder. 
>> I personally think, we as a community need to understand the pros and cons 
>> of both approaches. I also think we have not even come close to this point, 
>> which, in my opinion, is the prerequisite for making informed decisions.
> 
> Agreed. It's clear that there are a number of parties coming at this from a 
> number of different directions, and that's coloring our perceptions. That's 
> why I think we need to nail down the scope of what we're trying to solve with 
> DPoP before we can have a productive conversation about how it should work.

We will do so.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 10:51 PM, "Torsten Lodderstedt"  wrote:
> 
> 
> 
>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
>>  wrote:
>> 
>> The service provider doesn't own the entire connection. They have no control 
>> over corporate or government TLS gateways, or other terminators that might 
>> exist on the client's side. In larger organizations, or when cloud hosting 
>> is involved, the service team may not even own all the hops on their side.
> 
>how are cookies protected from leakage, replay, injection in a setup like 
> this?
> 
>> While presumably they have some trust in them, protection against leaked 
>> bearer tokens is an attractive defense-in-depth measure.
> 
>That’s a totally valid point. But again, such a solution makes the life of 
> client developers harder. 
> 
>I personally think, we as a community need to understand the pros and cons 
> of both approaches. I also think we have not even come close to this point, 
> which, in my opinion, is the prerequisite for making informed decisions.
> 
>> 
>> – 
>> Annabelle Richard Backman
>> AWS Identity
>> 
>> 
>> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
>> > torsten=40lodderstedt@dmarc.ietf.org> wrote:
>> 
>> 
>> 
>>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
>>>  wrote:
>>> 
>>> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
>>> TLS connection. In non-end-to-end TLS environments, each TLS terminator 
>>> between client and RS introduces additional token leakage/exfiltration 
>>> risk, irrespective of the quality of the TLS connections themselves. Each 
>>> terminator also introduces complexity for implementing mTLS, Token Binding, 
>>> or any other TLS-based sender constraint solution, which means developers 
>>> with non-end-to-end TLS use cases will be more likely to turn to DPoP.
>> 
>>   The point is we are talking about different developers here. The client 
>> developer does not need to care about the connection between proxy and 
>> service. She relies on the service provider to get it right. So the 
>> developers (or DevOps or admins) of the service provider need to ensure end 
>> to end security. And if the path is secured once, it will work for all 
>> clients. 
>> 
>>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
>>> Binding are available" [1], then it should address this risk of token 
>>> leakage between client and RS. If on the other hand DPoP is only intended 
>>> to support the SPA use case and assumes 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
> how are cookies protected from leakage, replay, injection in a setup like 
> this?
They aren't. But my primary concern here isn't web browser traffic, it's calls 
from services/apps running inside a corporate network to services outside a 
corporate network (e.g., service-to-service API calls that pass through a 
corporate TLS gateway).

> That’s a totally valid point. But again, such a solution makes the life of 
> client developers harder. 
> I personally think, we as a community need to understand the pros and cons of 
> both approaches. I also think we have not even come close to this point, 
> which, in my opinion, is the prerequisite for making informed decisions.

Agreed. It's clear that there are a number of parties coming at this from a 
number of different directions, and that's coloring our perceptions. That's why 
I think we need to nail down the scope of what we're trying to solve with DPoP 
before we can have a productive conversation about how it should work.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/22/19, 10:51 PM, "Torsten Lodderstedt"  wrote:



> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
 wrote:
> 
> The service provider doesn't own the entire connection. They have no 
control over corporate or government TLS gateways, or other terminators that 
might exist on the client's side. In larger organizations, or when cloud 
hosting is involved, the service team may not even own all the hops on their 
side.

how are cookies protected from leakage, replay, injection in a setup like 
this?

> While presumably they have some trust in them, protection against leaked 
bearer tokens is an attractive defense-in-depth measure.

That’s a totally valid point. But again, such a solution makes the life of 
client developers harder. 

I personally think, we as a community need to understand the pros and cons 
of both approaches. I also think we have not even come close to this point, 
which, in my opinion, is the prerequisite for making informed decisions.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
> 
> 
> 
>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
 wrote:
>> 
>> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
TLS connection. In non-end-to-end TLS environments, each TLS terminator between 
client and RS introduces additional token leakage/exfiltration risk, 
irrespective of the quality of the TLS connections themselves. Each terminator 
also introduces complexity for implementing mTLS, Token Binding, or any other 
TLS-based sender constraint solution, which means developers with 
non-end-to-end TLS use cases will be more likely to turn to DPoP.
> 
>The point is we are talking about different developers here. The 
client developer does not need to care about the connection between proxy and 
service. She relies on the service provider to get it right. So the developers 
(or DevOps or admins) of the service provider need to ensure end to end 
security. And if the path is secured once, it will work for all clients. 
> 
>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
Binding are available" [1], then it should address this risk of token leakage 
between client and RS. If on the other hand DPoP is only intended to support 
the SPA use case and assumes the use of end-to-end TLS, then the document 
should be updated to reflect that.
> 
>I agree. 
> 
>> 
>> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
>> 
>> – 
>> Annabelle Richard Backman
>> AWS Identity
>> 
>> 
>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
>> 
>>   Hi Neil,
>> 
>>> On 22. Nov 2019, at 18:08, Neil Madden  
wrote:
>>> 
>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
 wrote:
 
 
 
> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
> 
> I’m going to +1 Dick and Annabelle’s question about the scope here. 
That was the one major thing that struck me during the DPoP discussions in 
Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
(including the authors, it seems) see it as a quick point-solution to a 
specific use case. Others see it as a general PoP mechanism. 
> 
> If it’s the former, then it should be explicitly tied to one specific 
set of things. If it’s the latter, then it needs to be expanded. 
 
 as a co-author of the DPoP draft I state again what I said yesterday: 
DPoP is a mechanism for sender-constraining access tokens sent from SPAs only. 
The threat to be prevented is token replay.
>>> 
>>> I think the phrase "token replay" is ambiguous. Traditionally it refers 
to an attacker being able to capture 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt


> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
>  wrote:
> 
> The service provider doesn't own the entire connection. They have no control 
> over corporate or government TLS gateways, or other terminators that might 
> exist on the client's side. In larger organizations, or when cloud hosting is 
> involved, the service team may not even own all the hops on their side.

how are cookies protected from leakage, replay, injection in a setup like this?

> While presumably they have some trust in them, protection against leaked 
> bearer tokens is an attractive defense-in-depth measure.

That’s a totally valid point. But again, such a solution makes the life of 
client developers harder. 

I personally think, we as a community need to understand the pros and cons of 
both approaches. I also think we have not even come close to this point, which, 
in my opinion, is the prerequisite for making informed decisions.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
>  torsten=40lodderstedt@dmarc.ietf.org> wrote:
> 
> 
> 
>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
>>  wrote:
>> 
>> The dichotomy of "TLS working" and "TLS failed" only applies to a single TLS 
>> connection. In non-end-to-end TLS environments, each TLS terminator between 
>> client and RS introduces additional token leakage/exfiltration risk, 
>> irrespective of the quality of the TLS connections themselves. Each 
>> terminator also introduces complexity for implementing mTLS, Token Binding, 
>> or any other TLS-based sender constraint solution, which means developers 
>> with non-end-to-end TLS use cases will be more likely to turn to DPoP.
> 
>The point is we are talking about different developers here. The client 
> developer does not need to care about the connection between proxy and 
> service. She relies on the service provider to get it right. So the 
> developers (or DevOps or admins) of the service provider need to ensure end 
> to end security. And if the path is secured once, it will work for all 
> clients. 
> 
>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
>> Binding are available" [1], then it should address this risk of token 
>> leakage between client and RS. If on the other hand DPoP is only intended to 
>> support the SPA use case and assumes the use of end-to-end TLS, then the 
>> document should be updated to reflect that.
> 
>I agree. 
> 
>> 
>> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
>> 
>> – 
>> Annabelle Richard Backman
>> AWS Identity
>> 
>> 
>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
>> > torsten=40lodderstedt@dmarc.ietf.org> wrote:
>> 
>>   Hi Neil,
>> 
>>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>>> 
>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
>>>  wrote:
 
 
 
> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
> 
> I’m going to +1 Dick and Annabelle’s question about the scope here. That 
> was the one major thing that struck me during the DPoP discussions in 
> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
> (including the authors, it seems) see it as a quick point-solution to a 
> specific use case. Others see it as a general PoP mechanism. 
> 
> If it’s the former, then it should be explicitly tied to one specific set 
> of things. If it’s the latter, then it needs to be expanded. 
 
 as a co-author of the DPoP draft I state again what I said yesterday: DPoP 
 is a mechanism for sender-constraining access tokens sent from SPAs only. 
 The threat to be prevented is token replay.
>>> 
>>> I think the phrase "token replay" is ambiguous. Traditionally it refers to 
>>> an attacker being able to capture a token (or whole requests) in use and 
>>> then replay it against the same RS. This is already protected against by 
>>> the use of normal TLS on the connection between the client and the RS. I 
>>> think instead you are referring to a malicious/compromised RS replaying the 
>>> token to a different RS - which has more of the flavour of a man in the 
>>> middle attack (of the phishing kind).
>> 
>>   I would argue TLS basically prevents leakage and not replay. The threats 
>> we try to cope with can be found in the Security BCP. There are multiple 
>> ways access tokens can leak, including referrer headers, mix-up, open 
>> redirection, browser history, and all sorts of access token leakage at the 
>> resource server
>> 
>>   Please have a look at 
>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
>> 
>>   
>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
>> also has an extensive discussion of potential counter measures, including 
>> audience restricted access tokens and a conclusion to recommend sender 
>> constrained access tokens over other mechanisms.
>> 
>>> 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
The service provider doesn't own the entire connection. They have no control 
over corporate or government TLS gateways, or other terminators that might 
exist on the client's side. In larger organizations, or when cloud hosting is 
involved, the service team may not even own all the hops on their side. While 
presumably they have some trust in them, protection against leaked bearer 
tokens is an attractive defense-in-depth measure.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:



> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
 wrote:
> 
> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
TLS connection. In non-end-to-end TLS environments, each TLS terminator between 
client and RS introduces additional token leakage/exfiltration risk, 
irrespective of the quality of the TLS connections themselves. Each terminator 
also introduces complexity for implementing mTLS, Token Binding, or any other 
TLS-based sender constraint solution, which means developers with 
non-end-to-end TLS use cases will be more likely to turn to DPoP.

The point is we are talking about different developers here. The client 
developer does not need to care about the connection between proxy and service. 
She relies on the service provider to get it right. So the developers (or 
DevOps or admins) of the service provider need to ensure end to end security. 
And if the path is secured once, it will work for all clients. 

> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
Binding are available" [1], then it should address this risk of token leakage 
between client and RS. If on the other hand DPoP is only intended to support 
the SPA use case and assumes the use of end-to-end TLS, then the document 
should be updated to reflect that.

I agree. 

> 
> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
> 
>Hi Neil,
> 
>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>> 
>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
 wrote:
>>> 
>>> 
>>> 
 On 22. Nov 2019, at 15:24, Justin Richer  wrote:
 
 I’m going to +1 Dick and Annabelle’s question about the scope here. 
That was the one major thing that struck me during the DPoP discussions in 
Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
(including the authors, it seems) see it as a quick point-solution to a 
specific use case. Others see it as a general PoP mechanism. 
 
 If it’s the former, then it should be explicitly tied to one specific 
set of things. If it’s the latter, then it needs to be expanded. 
>>> 
>>> as a co-author of the DPoP draft I state again what I said yesterday: 
DPoP is a mechanism for sender-constraining access tokens sent from SPAs only. 
The threat to be prevented is token replay.
>> 
>> I think the phrase "token replay" is ambiguous. Traditionally it refers 
to an attacker being able to capture a token (or whole requests) in use and 
then replay it against the same RS. This is already protected against by the 
use of normal TLS on the connection between the client and the RS. I think 
instead you are referring to a malicious/compromised RS replaying the token to 
a different RS - which has more of the flavour of a man in the middle attack 
(of the phishing kind).
> 
>I would argue TLS basically prevents leakage and not replay. The 
threats we try to cope with can be found in the Security BCP. There are 
multiple ways access tokens can leak, including referrer headers, mix-up, open 
redirection, browser history, and all sorts of access token leakage at the 
resource server
> 
>Please have a look at 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
> 
>
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
also has an extensive discussion of potential counter measures, including 
audience restricted access tokens and a conclusion to recommend sender 
constrained access tokens over other mechanisms.
> 
>> 
>> But if that's the case then there are much simpler defences than those 
proposed in the current draft:
>> 
>> 1. Get separate access tokens for each RS with correct audience and 
scopes. The consensus appears to be that this is hard to do in some cases, 
hence the draft.
> 
>How many deployments do you know that today are able to issue 
RS-specific access tokens?
>BTW: how would you identify the RS?
> 
>I agree that would be an alternative and I’m a great fan of such 
tokens (and used them a lot at Deutsche Telekom) but in my perception this 
pattern 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt


> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
>  wrote:
> 
> The dichotomy of "TLS working" and "TLS failed" only applies to a single TLS 
> connection. In non-end-to-end TLS environments, each TLS terminator between 
> client and RS introduces additional token leakage/exfiltration risk, 
> irrespective of the quality of the TLS connections themselves. Each 
> terminator also introduces complexity for implementing mTLS, Token Binding, 
> or any other TLS-based sender constraint solution, which means developers 
> with non-end-to-end TLS use cases will be more likely to turn to DPoP.

The point is we are talking about different developers here. The client 
developer does not need to care about the connection between proxy and service. 
She relies on the service provider to get it right. So the developers (or 
DevOps or admins) of the service provider need to ensure end to end security. 
And if the path is secured once, it will work for all clients. 

> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
> Binding are available" [1], then it should address this risk of token leakage 
> between client and RS. If on the other hand DPoP is only intended to support 
> the SPA use case and assumes the use of end-to-end TLS, then the document 
> should be updated to reflect that.

I agree. 

> 
> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
>  torsten=40lodderstedt@dmarc.ietf.org> wrote:
> 
>Hi Neil,
> 
>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>> 
>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
>>  wrote:
>>> 
>>> 
>>> 
 On 22. Nov 2019, at 15:24, Justin Richer  wrote:
 
 I’m going to +1 Dick and Annabelle’s question about the scope here. That 
 was the one major thing that struck me during the DPoP discussions in 
 Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
 (including the authors, it seems) see it as a quick point-solution to a 
 specific use case. Others see it as a general PoP mechanism. 
 
 If it’s the former, then it should be explicitly tied to one specific set 
 of things. If it’s the latter, then it needs to be expanded. 
>>> 
>>> as a co-author of the DPoP draft I state again what I said yesterday: DPoP 
>>> is a mechanism for sender-constraining access tokens sent from SPAs only. 
>>> The threat to be prevented is token replay.
>> 
>> I think the phrase "token replay" is ambiguous. Traditionally it refers to 
>> an attacker being able to capture a token (or whole requests) in use and 
>> then replay it against the same RS. This is already protected against by the 
>> use of normal TLS on the connection between the client and the RS. I think 
>> instead you are referring to a malicious/compromised RS replaying the token 
>> to a different RS - which has more of the flavour of a man in the middle 
>> attack (of the phishing kind).
> 
>I would argue TLS basically prevents leakage, not replay. The threats 
> we try to cope with can be found in the Security BCP. There are multiple ways 
> access tokens can leak, including referrer headers, mix-up, open redirection, 
> browser history, and all sorts of access token leakage at the resource server.
> 
>Please have a look at 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
> 
>
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
> also has an extensive discussion of potential countermeasures, including 
> audience-restricted access tokens and a conclusion to recommend 
> sender-constrained access tokens over other mechanisms.
> 
>> 
>> But if that's the case then there are much simpler defences than those 
>> proposed in the current draft:
>> 
>> 1. Get separate access tokens for each RS with correct audience and scopes. 
>> The consensus appears to be that this is hard to do in some cases, hence the 
>> draft.
> 
>How many deployments do you know that today are able to issue RS-specific 
> access tokens?
>BTW: how would you identify the RS?
> 
>I agree that would be an alternative and I’m a great fan of such tokens 
> (and used them a lot at Deutsche Telekom), but in my perception this pattern 
> still needs to be established in the market. Moreover, they basically protect 
> against a rogue RS (if the URL is used as audience) replaying the token 
> someplace else, but they do not protect against all other kinds of 
> leakage/replay (e.g. log files). 
> 
>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the 
>> RS. This stops the token being reused elsewhere but the client can reuse it 
>> (replay it) for many requests.
>> 3. Issue a macaroon-based access token and the client can add a correct 
>> audience and scope restrictions at the point of use.
> 
>Why is this needed if the access token is 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
Hi Neil,

> On 22. Nov 2019, at 20:50, Neil Madden  wrote:
> 
> Hi Torsten,
> 
> On 22 Nov 2019, at 12:15, Torsten Lodderstedt  wrote:
>> 
>> Hi Neil,
>> 
>>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>>> 
>>> I think the phrase "token replay" is ambiguous. Traditionally it refers to 
>>> an attacker being able to capture a token (or whole requests) in use and 
>>> then replay it against the same RS. This is already protected against by 
>>> the use of normal TLS on the connection between the client and the RS. I 
>>> think instead you are referring to a malicious/compromised RS replaying the 
>>> token to a different RS - which has more of the flavour of a man in the 
>>> middle attack (of the phishing kind).
>> 
>> I would argue TLS basically prevents leakage, not replay.
> 
> It also protects against replay. If you capture TLS-encrypted packets with 
> Wireshark you not only cannot decipher them but also cannot replay them 
> because they include specific anti-replay measures at the record level in the 
> form of unique session keys and record sequence numbers included in the MAC 
> calculations. This is essential to the security of TLS.

I understand. I was looking at TLS from an application perspective; that might 
explain our differing perceptions.

> 
>> The threats we try to cope with can be found in the Security BCP. There are 
>> multiple ways access tokens can leak, including referrer headers, mix-up, 
>> open redirection, browser history, and all sorts of access token leakage at 
>> the resource server.
>> 
>> Please have a look at 
>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
>> 
>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
>> also has an extensive discussion of potential countermeasures, including 
>> audience-restricted access tokens and a conclusion to recommend 
>> sender-constrained access tokens over other mechanisms.
> 
> OK, good - these are threats beyond token replay (at least as I understand 
> that term). It would be good to explicitly add them to the DPoP document 
> motivation.
> 
> Note that most of these ways that an access token can leak also apply equally 
> to leak of the DPoP JWT, so the protection afforded by DPoP boils down to how 
> well the restrictions encoded into the JWT prevent it from being reused in 
> this case - e.g., restricting the expiry time, audience, scope, linking it to 
> a specific request (htm/htu) etc. 
> 
> Every single one of those restrictions can be equally well encoded as caveats 
> on a macaroon access token without any need for public key signatures or 
> additional tokens and headers.
> 
>>> But if that's the case then there are much simpler defences than those 
>>> proposed in the current draft:
>>> 
>>> 1. Get separate access tokens for each RS with correct audience and scopes. 
>>> The consensus appears to be that this is hard to do in some cases, hence 
>>> the draft.
>> 
>> How many deployments do you know that today are able to issue RS-specific 
>> access tokens?
>> BTW: how would you identify the RS?
>> 
>> I agree that would be an alternative and I’m a great fan of such tokens (and 
>> used them a lot at Deutsche Telekom), but in my perception this pattern 
>> still needs to be established in the market. Moreover, they basically 
>> protect against a rogue RS (if the URL is used as audience) replaying the 
>> token someplace else, but they do not protect against all other kinds of 
>> leakage/replay (e.g. log files).
> 
> Many services already do this. For example, Google encodes the intended RS 
> into the scopes on GCP 
> (https://developers.google.com/identity/protocols/googlescopes). A client can 
> do a single authorization flow to authorize all the scopes it needs and then 
> use repeated calls to the refresh token endpoint to obtain individual access 
> tokens with subsets of the authorized scopes for each endpoint.

And that works at Google? How does the client indicate the RS at which it wants 
to use the first access token (the one it obtains in the course of the code 
exchange)?

> 
> (I think Brian also mentioned this pattern at OSW, but it might have been 
> somebody else).

I know the pattern and we used this at Deutsche Telekom, but I don’t know any 
other deployment utilising this pattern. In my observation, most people treat 
access tokens as cookies and use them across RSs. Another reason might be that, 
before resource indicators, there was no interoperable way to ask for a token 
for a certain RS.
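The interoperable way that now exists is the "resource" parameter from RFC 8707 
(Resource Indicators for OAuth 2.0). A minimal sketch of a token request 
restricted to one RS; the endpoint, token value, RS URL and scope are made up 
for illustration:

```python
from urllib.parse import urlencode

# Token request body asking for an access token usable only at one RS,
# using the "resource" parameter defined in RFC 8707. All concrete values
# here (refresh token, RS URL, scope) are hypothetical.
params = {
    "grant_type": "refresh_token",
    "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA",
    "resource": "https://api.example.com/photos",
    "scope": "photos.read",
}
body = urlencode(params)
```

The AS can then set the token's audience to the indicated resource, which is 
exactly the RS identification question raised above.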

> 
>> 
>>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the 
>>> RS. This stops the token being reused elsewhere but the client can reuse it 
>>> (replay it) for many requests.
>>> 3. Issue a macaroon-based access token and the client can add a correct 
>>> audience and scope restrictions at the point of use.
>> 
>> Why is this needed if the access token is already audience restricted? Or do 
>> you propose this as alternative? 
> 
> These 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Neil Madden
Hi Torsten,

On 22 Nov 2019, at 12:15, Torsten Lodderstedt  wrote:
> 
> Hi Neil,
> 
>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>> 
>> I think the phrase "token replay" is ambiguous. Traditionally it refers to 
>> an attacker being able to capture a token (or whole requests) in use and 
>> then replay it against the same RS. This is already protected against by the 
>> use of normal TLS on the connection between the client and the RS. I think 
>> instead you are referring to a malicious/compromised RS replaying the token 
>> to a different RS - which has more of the flavour of a man in the middle 
>> attack (of the phishing kind).
> 
> I would argue TLS basically prevents leakage, not replay.

It also protects against replay. If you capture TLS-encrypted packets with 
Wireshark you not only cannot decipher them but also cannot replay them because 
they include specific anti-replay measures at the record level in the form of 
unique session keys and record sequence numbers included in the MAC 
calculations. This is essential to the security of TLS.
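A minimal sketch of that anti-replay property (TLS 1.2-style record MACs; the 
function and key names are illustrative, not a real TLS implementation):

```python
import hmac
import hashlib

def record_mac(mac_key: bytes, seq_num: int, record: bytes) -> bytes:
    """Compute a TLS-1.2-style record MAC that covers the implicit
    64-bit record sequence number as well as the record itself."""
    return hmac.new(mac_key,
                    seq_num.to_bytes(8, "big") + record,
                    hashlib.sha256).digest()

key = b"per-session key from the TLS handshake"
record = b"GET /resource HTTP/1.1"

# The same record injected at a different point in the stream carries a
# different implicit sequence number, so a captured record replayed later
# fails MAC verification at the receiver.
assert record_mac(key, 0, record) != record_mac(key, 1, record)
```

The per-session key additionally means records from one connection cannot be 
replayed into another.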

> The threats we try to cope with can be found in the Security BCP. There are 
> multiple ways access tokens can leak, including referrer headers, mix-up, 
> open redirection, browser history, and all sorts of access token leakage at 
> the resource server.
> 
> Please have a look at 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
> 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
> also has an extensive discussion of potential countermeasures, including 
> audience-restricted access tokens and a conclusion to recommend 
> sender-constrained access tokens over other mechanisms.

OK, good - these are threats beyond token replay (at least as I understand that 
term). It would be good to explicitly add them to the DPoP document motivation.

Note that most of these ways that an access token can leak also apply equally 
to leak of the DPoP JWT, so the protection afforded by DPoP boils down to how 
well the restrictions encoded into the JWT prevent it from being reused in this 
case - e.g., restricting the expiry time, audience, scope, linking it to a 
specific request (htm/htu) etc. 

Every single one of those restrictions can be equally well encoded as caveats 
on a macaroon access token without any need for public key signatures or 
additional tokens and headers.

>> But if that's the case then there are much simpler defences than those 
>> proposed in the current draft:
>> 
>> 1. Get separate access tokens for each RS with correct audience and scopes. 
>> The consensus appears to be that this is hard to do in some cases, hence the 
>> draft.
> 
> How many deployments do you know that today are able to issue RS-specific 
> access tokens?
> BTW: how would you identify the RS?
> 
> I agree that would be an alternative and I’m a great fan of such tokens (and 
> used them a lot at Deutsche Telekom), but in my perception this pattern 
> still needs to be established in the market. Moreover, they basically protect 
> against a rogue RS (if the URL is used as audience) replaying the token 
> someplace else, but they do not protect against all other kinds of 
> leakage/replay (e.g. log files).

Many services already do this. For example, Google encodes the intended RS into 
the scopes on GCP 
(https://developers.google.com/identity/protocols/googlescopes). A client can 
do a single authorization flow to authorize all the scopes it needs and then 
use repeated calls to the refresh token endpoint to obtain individual access 
tokens with subsets of the authorized scopes for each endpoint.
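The pattern described above can be sketched as follows; the scope values and 
refresh token are illustrative, and the request is shown only as the form 
parameters a client would POST to the token endpoint:

```python
# One authorization flow grants the union of scopes; each refresh-token
# call then asks for a narrower access token for a single RS by passing a
# subset of the authorized scopes. Scope names here are hypothetical.
authorized_scopes = {"calendar.readonly", "drive.readonly", "gmail.send"}

def refresh_request(refresh_token: str, wanted: set[str]) -> dict:
    subset = wanted & authorized_scopes   # a client can only narrow, never widen
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "scope": " ".join(sorted(subset)),
    }

# A per-RS token request: only the Drive-related scope is asked for.
req = refresh_request("tGzv3JOkF0XG5Qx2TlKWIA", {"drive.readonly"})
assert req["scope"] == "drive.readonly"
```

The resulting access token is then only useful at the RS whose scopes it 
carries, which is the audience-restriction effect discussed in this thread.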

(I think Brian also mentioned this pattern at OSW, but it might have been 
somebody else).

> 
>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the 
>> RS. This stops the token being reused elsewhere but the client can reuse it 
>> (replay it) for many requests.
>> 3. Issue a macaroon-based access token and the client can add a correct 
>> audience and scope restrictions at the point of use.
> 
> Why is this needed if the access token is already audience restricted? Or do 
> you propose this as alternative? 

These are all alternatives. Any one of them prevents the specific attack of 
replay by the RS to another RS.

-- Neil

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Jim Manico
> I would argue TLS basically prevents leakage, not replay

Doesn’t token binding, which is essentially a TLS extension, prevent some forms 
of token replay?

--
Jim Manico
@Manicode
Secure Coding Education
+1 (808) 652-3805

> On Nov 22, 2019, at 7:26 AM, Richard Backman, Annabelle 
>  wrote:
> 
> 
> > Yes of course. But this is the HMAC *tag* not the original key.
> Sure. And if the client attenuates the macaroon, it is used as a key that the 
> client proves possession of by presenting the chained HMAC. Clients doing 
> DPoP aren’t proving possession of the “original key” (i.e., a key used to 
> generate the access token) either.
>  
> > Well, you don’t have to return a key from the token endpoint for a start.
> Yes, that’s what I meant by saying that it eliminates key negotiation. Though 
> I suppose it’s more correct to say that it inlines it. The AS still provides 
> a key, it just happens to be part of the access token.
>  
> Macaroons are an interesting pattern, but not because they’re not doing PoP. 
> Proof of possession is pretty core to the whole idea of digital signatures 
> and HMACs. What makes them interesting is the way they inline key 
> distribution. Whether or not they’re applicable to DPoP depends, ultimately, 
> on the use cases DPoP is targeting and the threats it is trying to mitigate.
>  
> – 
> Annabelle Richard Backman
> AWS Identity
>  
>  
> From: Neil Madden 
> Date: Friday, November 22, 2019 at 3:09 PM
> To: "Richard Backman, Annabelle" 
> Cc: Brian Campbell , oauth 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
> wrote:
>  
> Macaroons are built on proof of possession. In order to add a caveat to a 
> macaroon, the sender has to have the HMAC of the macaroon without their 
> caveat.
>  
> Yes of course. But this is the HMAC *tag* not the original key. They can’t 
> change anything the AS originally signed. 
> 
> 
> The distinctive property of macaroons as I see it is that they eliminate the 
> need for key negotiation with the bearer. How much value this has over the AS 
> just returning a symmetric key alongside the access token in the token 
> request, I’m not sure.
>  
> Well, you don’t have to return a key from the token endpoint for a start. The 
> client doesn’t need to create and send any additional token. The whole thing 
> works with existing standards and technologies and can be incrementally 
> adopted as required. If RSes do token introspection already then they need 
> zero changes to support this.
> 
> 
> There are key distribution challenges with that if you are doing validation 
> at the RS, but validation at the RS using either approach means you’ve lost 
> protection against replay by the RS. This brings us back to a core question: 
> what threats are in scope for DPoP, and in what contexts?
>  
> Agreed, but validation at the RS is premature optimisation in many cases. And 
> if you do need protection against that the client can even append a 
> confirmation key as a caveat and retrospectively upgrade a bearer token to a 
> pop token. They can even do transfer of ownership by creating copies of the 
> original token bound to other certificates/public keys. 
>  
> Neil
>  
> 
> 
>  
> – 
> Annabelle Richard Backman
> AWS Identity
>  
>  
> From: OAuth  on behalf of Neil Madden 
> 
> Date: Friday, November 22, 2019 at 4:40 AM
> To: Brian Campbell 
> Cc: oauth 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> At the end of my previous email I mentioned that you can achieve some of the 
> same aims as DPoP without needing a PoP mechanism at all. This email is that 
> follow-up.
>  
> OAuth is agnostic about the format of access tokens and many vendors support 
> either random string database tokens or JWTs. But there are other choices for 
> access token format, some of which have more interesting properties. In 
> particular, Google proposed Macaroons a few years ago as a "better cookie" 
> [1] and I think they systematically address many of these issues when used as 
> an access token format.
>  
> For those who aren't familiar with them, Macaroons are a bit like a HS256 
> JWT. They have a location (a bit like the audience in a JWT) and an 
> identifier (an arbitrary string) and then are signed with HMAC-SHA256 using a 
> secret key. (There's no claims set or headers - they are very minimal). In 
> this case the secret key would be owned by the AS and used to sign 
> macaroon-based access tokens. Validating the token would be done via token 
> introspection at the AS.

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
> Yes of course. But this is the HMAC *tag* not the original key.
Sure. And if the client attenuates the macaroon, it is used as a key that the 
client proves possession of by presenting the chained HMAC. Clients doing DPoP 
aren’t proving possession of the “original key” (i.e., a key used to generate 
the access token) either.

> Well, you don’t have to return a key from the token endpoint for a start.
Yes, that’s what I meant by saying that it eliminates key negotiation. Though I 
suppose it’s more correct to say that it inlines it. The AS still provides a 
key, it just happens to be part of the access token.

Macaroons are an interesting pattern, but not because they’re not doing PoP. 
Proof of possession is pretty core to the whole idea of digital signatures and 
HMACs. What makes them interesting is the way they inline key distribution. 
Whether or not they’re applicable to DPoP depends, ultimately, on the use cases 
DPoP is targeting and the threats it is trying to mitigate.

–
Annabelle Richard Backman
AWS Identity


From: Neil Madden 
Date: Friday, November 22, 2019 at 3:09 PM
To: "Richard Backman, Annabelle" 
Cc: Brian Campbell , oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
wrote:

Macaroons are built on proof of possession. In order to add a caveat to a 
macaroon, the sender has to have the HMAC of the macaroon without their caveat.

Yes of course. But this is the HMAC *tag* not the original key. They can’t 
change anything the AS originally signed.


The distinctive property of macaroons as I see it is that they eliminate the 
need for key negotiation with the bearer. How much value this has over the AS 
just returning a symmetric key alongside the access token in the token request, 
I’m not sure.

Well, you don’t have to return a key from the token endpoint for a start. The 
client doesn’t need to create and send any additional token. The whole thing 
works with existing standards and technologies and can be incrementally adopted 
as required. If RSes do token introspection already then they need zero changes 
to support this.


There are key distribution challenges with that if you are doing validation at 
the RS, but validation at the RS using either approach means you’ve lost 
protection against replay by the RS. This brings us back to a core question: 
what threats are in scope for DPoP, and in what contexts?

Agreed, but validation at the RS is premature optimisation in many cases. And 
if you do need protection against that the client can even append a 
confirmation key as a caveat and retrospectively upgrade a bearer token to a 
pop token. They can even do transfer of ownership by creating copies of the 
original token bound to other certificates/public keys.

Neil




–
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of Neil Madden 

Date: Friday, November 22, 2019 at 4:40 AM
To: Brian Campbell 
Cc: oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

At the end of my previous email I mentioned that you can achieve some of the 
same aims as DPoP without needing a PoP mechanism at all. This email is that 
follow-up.

OAuth is agnostic about the format of access tokens and many vendors support 
either random string database tokens or JWTs. But there are other choices for 
access token format, some of which have more interesting properties. In 
particular, Google proposed Macaroons a few years ago as a "better cookie" [1] 
and I think they systematically address many of these issues when used as an 
access token format.

For those who aren't familiar with them, Macaroons are a bit like a HS256 JWT. 
They have a location (a bit like the audience in a JWT) and an identifier (an 
arbitrary string) and then are signed with HMAC-SHA256 using a secret key. 
(There's no claims set or headers - they are very minimal). In this case the 
secret key would be owned by the AS and used to sign macaroon-based access 
tokens. Validating the token would be done via token introspection at the AS.

The clever bit is that anybody at all can append "caveats" to a macaroon at any 
time, but nobody can remove one once added. Caveats are restrictions on the use 
of a token - they only ever reduce the authority granted by the token, never 
expand it. The AS can validate the token and all the caveats with its secret 
key. So, for example, if an access token was a macaroon then the client could 
append a caveat to reduce the scope, or reduce the expiry time, or reduce the 
audience, and so on.

The really clever bit is that the client can keep a copy of the original token 
and create restricted versions to send to different resource servers. Because 
HMAC is very cheap, the client can even do this before each and every request. 
(This is what the original paper refers to as "contextual caveats"). This means 
that a c
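The caveat chaining described above can be sketched with nothing but 
HMAC-SHA256; serialization, locations, and real macaroon libraries are omitted, 
and the key, identifier and caveat strings are illustrative:

```python
import hmac
import hashlib

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# The AS mints a macaroon: the tag chains the identifier under a root key
# known only to the AS.
root_key = b"as-secret-key"
identifier = b"token-id-42"
tag = mac(root_key, identifier)

# Anyone holding the macaroon can append a caveat: the new tag is the HMAC
# of the caveat keyed by the previous tag. Caveats can be added (e.g. by
# the client before each request) but never removed, because earlier tags
# are not recoverable from later ones.
caveats = [b"aud = https://rs.example.com", b"time < 2019-11-23T00:00:00Z"]
for c in caveats:
    tag = mac(tag, c)

def verify(root_key: bytes, identifier: bytes, caveats: list, tag: bytes) -> bool:
    """The AS re-derives the whole chain from its root key and compares tags."""
    t = mac(root_key, identifier)
    for c in caveats:
        t = mac(t, c)
    return hmac.compare_digest(t, tag)

assert verify(root_key, identifier, caveats, tag)
# Dropping a caveat (trying to widen authority) breaks verification.
assert not verify(root_key, identifier, caveats[:1], tag)
```

Because appending a caveat is one HMAC call, deriving a per-request restricted 
copy from a kept original is cheap, as the message above argues.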

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
I would love to see this happen!
Note: you would also need to create a cert.

> On 22. Nov 2019, at 19:38, Petteri Stenius  
> wrote:
> 
> Hi all,
> 
> For browser-based apps, it is basically the limitations of the Fetch API that 
> prevent MTLS binding, as Fetch uses the browser’s client certificate dialogs 
> and stores. Does it make sense to suggest browser vendors fix the Fetch API 
> to better support MTLS?
> 
> For example, if the Fetch API allowed setting up an MTLS request with a 
> WebCrypto-generated and -managed key, it would be sufficient for MTLS binding. 
> 
> Petteri
> 
> Fetch API - https://fetch.spec.whatwg.org/ 
> 
> -Original Message-
> From: OAuth  On Behalf Of Torsten Lodderstedt
> Sent: perjantai 22. marraskuuta 2019 10.54
> To: Mike Jones 
> Cc: oauth ; Torsten Lodderstedt 
> ; Rob Otto 
> 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
> 
> I couldn't agree more. I think we should, again, try to find a way to utilise 
> TLS in the browser as well. 
> 
>> On 22. Nov 2019, at 16:50, Mike Jones 
>>  wrote:
>> 
>> I hear you about the difference between Web apps and native apps, Torsten.  
>> But using different mechanisms for different application types is a cost in 
>> and of itself.
>> 
>> It's good to understand the tradeoffs.
>> 
>> -- Mike
>> 
>> 
>> From: OAuth  on behalf of Torsten Lodderstedt 
>> 
>> Sent: Friday, November 22, 2019 4:20:58 PM
>> To: Rob Otto 
>> Cc: oauth 
>> Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
>> draft-fett-oauth-dpop-03.txt
>> 
>> Hi Rob,
>> 
>>> On 22. Nov 2019, at 16:10, Rob Otto 
>>>  wrote:
>>> 
>>> Hi Torsten - thanks for the reply..
>>> 
>>> Responses in line.
>>> 
>>> Grüsse
>>> Rob
>>> 
>>> On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt 
>>>  wrote:
>>> Hi Rob, 
>>> 
>>>> On 22. Nov 2019, at 15:52, Rob Otto 
>>>>  wrote:
>>>> 
>>>> Hi everyone
>>>> 
>>>> I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
>>>> simpler way to accomplish what we can already do with MTLS-bound Access 
>>>> Tokens, for use cases such as the ones we address in Open Banking; these 
>>>> are API transactions that demand a high level of assurance and as such we 
>>>> absolutely must have a mechanism to constrain those tokens to the intended 
>>>> bearer. Requiring MTLS across the ecosystem, however, adds significant 
>>>> overhead in terms of infrastructural complexity and is always going to 
>>>> limit the extent to which such a model can scale.
>>> 
>>> I would like to understand why mTLS adds “significant overhead in terms of 
>>> infrastructural complexity”. Can you please dig into details?
>>> 
>>> I guess it's mostly that every RS-endpoint (or what sits in front of it) 
>>> has to have a mechanism for accepting/terminating mTLS, managing roots of 
>>> trust, validating/OCSP, etc
>> 
>> You use a PKI then. We use mTLS with self-signed certs. That requires the RS 
>> to not check the X.509 trust chain, which requires a special setting 
>> (optionalNoCA). 
>> 
>>> and then passing the certificates downstream as headers. None of it is 
>>> necessarily difficult or impossible to do in isolation, but I meet many 
>>> many people every week who simply don't know how to do any of this stuff. 
>>> And these are typically "network people", for want of a better word. There 
>>> are quite a few SaaS API management and edge solutions out there that don't 
>>> even support mTLS at all. You also have the difficulty in handling a 
>>> combination of MTLS and non-MTLS traffic to the same endpoints.
>> 
>> yep. You better split them, especially if that’s a user facing endpoint.
>> 
>>> Again, it's possible to do, but far from straightforward. 
>>> 
>>> 
>>> 
>>> Our experience so far: it can be a headache to set up in a microservice 
>>> architecture with TLS-terminating proxies, but once it runs it’s ok. On the 
>>> other hand, it’s easy to use for client developers, and it combines client 
>>> authentication and sender constraining nicely.  
>>> 
>>> I do think its an elegant solution, don't get me wrong. It's just that 
>>> there are plenty of moving parts that you need to get right and that can be 
>>> a challenge, particularly in l

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
Hi Neil,

> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
> 
> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
>  wrote:
>> 
>> 
>> 
>>> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
>>> 
>>> I’m going to +1 Dick and Annabelle’s question about the scope here. That 
>>> was the one major thing that struck me during the DPoP discussions in 
>>> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
>>> (including the authors, it seems) see it as a quick point-solution to a 
>>> specific use case. Others see it as a general PoP mechanism. 
>>> 
>>> If it’s the former, then it should be explicitly tied to one specific set 
>>> of things. If it’s the latter, then it needs to be expanded. 
>> 
>> as a co-author of the DPoP draft I state again what I said yesterday: DPoP 
>> is a mechanism for sender-constraining access tokens sent from SPAs only. 
>> The threat to be prevented is token replay.
> 
> I think the phrase "token replay" is ambiguous. Traditionally it refers to an 
> attacker being able to capture a token (or whole requests) in use and then 
> replay it against the same RS. This is already protected against by the use 
> of normal TLS on the connection between the client and the RS. I think 
> instead you are referring to a malicious/compromised RS replaying the token 
> to a different RS - which has more of the flavour of a man in the middle 
> attack (of the phishing kind).

I would argue TLS basically prevents leakage, not replay. The threats we try to 
cope with can be found in the Security BCP. There are multiple ways access 
tokens can leak, including referrer headers, mix-up, open redirection, browser 
history, and all sorts of access token leakage at the resource server.

Please have a look at 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.

https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
also has an extensive discussion of potential countermeasures, including 
audience-restricted access tokens and a conclusion to recommend 
sender-constrained access tokens over other mechanisms.

> 
> But if that's the case then there are much simpler defences than those 
> proposed in the current draft:
> 
> 1. Get separate access tokens for each RS with correct audience and scopes. 
> The consensus appears to be that this is hard to do in some cases, hence the 
> draft.

How many deployments do you know that today are able to issue RS-specific 
access tokens?
BTW: how would you identify the RS?

I agree that would be an alternative and I’m a great fan of such tokens (and 
used them a lot at Deutsche Telekom), but in my perception this pattern still 
needs to be established in the market. Moreover, they basically protect against 
a rogue RS (if the URL is used as audience) replaying the token someplace else, 
but they do not protect against all other kinds of leakage/replay (e.g. log 
files).

> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the 
> RS. This stops the token being reused elsewhere but the client can reuse it 
> (replay it) for many requests.
> 3. Issue a macaroon-based access token and the client can add a correct 
> audience and scope restrictions at the point of use.

Why is this needed if the access token is already audience restricted? Or do 
you propose this as alternative? 

> 
> Protecting against the first kind of replay attacks only becomes an issue if 
> we assume the protections in TLS have failed. But if DPoP is only intended 
> for cases where mTLS can't be used, it shouldn't have to protect against a 
> stronger threat model in which we assume that TLS security has been lost.

I agree. 

best regards,
Torsten. 

> 
> -- Neil





Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Petteri Stenius
Hi all,

For browser-based apps, it is basically the limitations of the Fetch API that 
prevent MTLS binding, as Fetch uses the browser’s client certificate dialogs 
and stores. Does it make sense to suggest browser vendors fix the Fetch API to 
better support MTLS?

For example, if the Fetch API allowed setting up an MTLS request with a 
WebCrypto-generated and -managed key, it would be sufficient for MTLS binding. 

Petteri

Fetch API - https://fetch.spec.whatwg.org/ 

-Original Message-
From: OAuth  On Behalf Of Torsten Lodderstedt
Sent: perjantai 22. marraskuuta 2019 10.54
To: Mike Jones 
Cc: oauth ; Torsten Lodderstedt 
; Rob Otto 

Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

I couldn't agree more. I think we should, again, try to find a way to utilise 
TLS in the browser as well. 

> On 22. Nov 2019, at 16:50, Mike Jones 
>  wrote:
> 
> I hear you about the difference between Web apps and native apps, Torsten.  
> But using different mechanisms for different application types is a cost in 
> and of itself.
> 
> It's good to understand the tradeoffs.
> 
> -- Mike
> 
> 
> From: OAuth  on behalf of Torsten Lodderstedt 
> 
> Sent: Friday, November 22, 2019 4:20:58 PM
> To: Rob Otto 
> Cc: oauth 
> Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> Hi Rob,
> 
> > On 22. Nov 2019, at 16:10, Rob Otto 
> >  wrote:
> > 
> > Hi Torsten - thanks for the reply..
> > 
> > Responses in line.
> > 
> > Grüsse
> > Rob
> > 
> > On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt 
> >  wrote:
> > Hi Rob, 
> > 
> > > On 22. Nov 2019, at 15:52, Rob Otto 
> > >  wrote:
> > > 
> > > Hi everyone
> > > 
> > > I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
> > > simpler way to accomplish what we can already do with MTLS-bound Access 
> > > Tokens, for use cases such as the ones we address in Open Banking; these 
> > > are API transactions that demand a high level of assurance and as such we 
> > > absolutely must have a mechanism to constrain those tokens to the 
> > > intended bearer. Requiring MTLS across the ecosystem, however, adds 
> > > significant overhead in terms of infrastructural complexity and is always 
> > > going to limit the extent to which such a model can scale.
> > 
> > I would like to understand why mTLS adds “significant overhead in terms of 
> > infrastructural complexity”. Can you please dig into details?
> > 
> > I guess it's mostly that every RS-endpoint (or what sits in front of it) 
> > has to have a mechanism for accepting/terminating mTLS, managing roots of 
> > trust, validating/OCSP, etc
> 
> You use a PKI then. We use mTLS with self-signed certs. That requires the RS 
> to not check the X.509 trust chain, which requires a special setting 
> (optionalNoCA). 
> 
> > and then passing the certificates downstream as headers. None of it is 
> > necessarily difficult or impossible to do in isolation, but I meet many 
> > many people every week who simply don't know how to do any of this stuff. 
> > And these are typically "network people", for want of a better word. There 
> > are quite a few SaaS API management and edge solutions out there that don't 
> > even support mTLS at all. You also have the difficulty in handling a 
> > combination of MTLS and non-MTLS traffic to the same endpoints.
> 
> yep. You better split them, especially if that’s a user facing endpoint.
> 
> > Again, it's possible to do, but far from straightforward. 
> > 
> >  
> > 
> > Our experience so far: It can be a headache to set up in a microservice 
> > architecture with TLS terminating proxies but once it runs it’s ok. On the 
> > other hand, it’s easy to use for client developers and it combines client 
> > authentication and sender constraining nicely.  
> > 
> > I do think its an elegant solution, don't get me wrong. It's just that 
> > there are plenty of moving parts that you need to get right and that can be 
> > a challenge, particularly in large, complex environments. 
> 
> I agree. I also see there is a tendency to think Client TLS authentication 
> is bad. I understand that from historical and recent experience with PKI. 
> 
> But anybody considering using an application-level signing solution based on 
> _raw_ public keys should directly move towards self-signed certificates. That 
> brings you all the benefits of TLS without the (PKI) headache. 
> 
> > 
> >  
> > 
> > > 
> > > DPOP, to me, appears 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Neil Madden
It's not a different threat profile. This is the same assumption people made 
when introducing HttpOnly cookies, which just led to attackers switching to 
proxying everything through the browser, using tools like 
https://beefproject.com. (This is actually nicer for the attacker, as their 
requests then appear to come from the legitimate user, masking their true 
origin and allowing them to carry out attacks bypassing the corporate 
firewall.) DPoP is not a protection against XSS and shouldn't be sold as such.

-- Neil

> On 22 Nov 2019, at 10:19, Aaron Parecki  wrote:
> 
> The main concern about token replay in a SPA is that the access token may be 
> extracted from the app, such as via XSS. Using the Web Crypto API has the 
> advantage of being able to generate a public private key pair where the JS 
> code can't access the private key at all, it can only be used to sign things, 
> making it impossible for an attacker to extract an access token and use it 
> for anything. You might then say that if a JS app is vulnerable to XSS then 
> the attacker could just call the signing API anyway, which is a concern, but 
> that's a different threat profile. 
> 
> Aaron
> 
> 
> 
> 
> On Fri, Nov 22, 2019 at 6:08 PM Neil Madden wrote:
> On 22 Nov 2019, at 07:53, Torsten Lodderstedt wrote:
> > 
> > 
> > 
> >> On 22. Nov 2019, at 15:24, Justin Richer wrote:
> >> 
> >> I’m going to +1 Dick and Annabelle’s question about the scope here. That 
> >> was the one major thing that struck me during the DPoP discussions in 
> >> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
> >> (including the authors, it seems) see it as a quick point-solution to a 
> >> specific use case. Others see it as a general PoP mechanism. 
> >> 
> >> If it’s the former, then it should be explicitly tied to one specific set 
> >> of things. If it’s the latter, then it needs to be expanded. 
> > 
> > as a co-author of the DPoP draft I state again what I said yesterday: DPoP 
> > is a mechanism for sender-constraining access tokens sent from SPAs only. 
> > The threat to be prevented is token replay.
> 
> I think the phrase "token replay" is ambiguous. Traditionally it refers to an 
> attacker being able to capture a token (or whole requests) in use and then 
> replay it against the same RS. This is already protected against by the use 
> of normal TLS on the connection between the client and the RS. I think 
> instead you are referring to a malicious/compromised RS replaying the token 
> to a different RS - which has more of the flavour of a man in the middle 
> attack (of the phishing kind).
> 
> But if that's the case then there are much simpler defences than those 
> proposed in the current draft:
> 
> 1. Get separate access tokens for each RS with correct audience and scopes. 
> The consensus appears to be that this is hard to do in some cases, hence the 
> draft.
> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the 
> RS. This stops the token being reused elsewhere but the client can reuse it 
> (replay it) for many requests.
> 3. Issue a macaroon-based access token and the client can add a correct 
> audience and scope restrictions at the point of use.
> 
> Protecting against the first kind of replay attacks only becomes an issue if 
> we assume the protections in TLS have failed. But if DPoP is only intended 
> for cases where mTLS can't be used, it shouldn't have to protect against a 
> stronger threat model in which we assume that TLS security has been lost.
> 
> -- Neil
> 
> -- 
> 
> Aaron Parecki
> aaronparecki.com 
> @aaronpk 
> 



Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Aaron Parecki
The main concern about token replay in a SPA is that the access token may
be extracted from the app, such as via XSS. Using the Web Crypto API has
the advantage of being able to generate a public private key pair where the
JS code can't access the private key at all, it can only be used to sign
things, making it impossible for an attacker to extract an access token and
use it for anything. You might then say that if a JS app is vulnerable to
XSS then the attacker could just call the signing API anyway, which is a
concern, but that's a different threat profile.

Aaron




On Fri, Nov 22, 2019 at 6:08 PM Neil Madden 
wrote:

> On 22 Nov 2019, at 07:53, Torsten Lodderstedt wrote:
> >
> >
> >
> >> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
> >>
> >> I’m going to +1 Dick and Annabelle’s question about the scope here.
> That was the one major thing that struck me during the DPoP discussions in
> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some
> (including the authors, it seems) see it as a quick point-solution to a
> specific use case. Others see it as a general PoP mechanism.
> >>
> >> If it’s the former, then it should be explicitly tied to one specific
> set of things. If it’s the latter, then it needs to be expanded.
> >
> > as a co-author of the DPoP draft I state again what I said yesterday:
> DPoP is a mechanism for sender-constraining access tokens sent from SPAs
> only. The threat to be prevented is token replay.
>
> I think the phrase "token replay" is ambiguous. Traditionally it refers to
> an attacker being able to capture a token (or whole requests) in use and
> then replay it against the same RS. This is already protected against by
> the use of normal TLS on the connection between the client and the RS. I
> think instead you are referring to a malicious/compromised RS replaying the
> token to a different RS - which has more of the flavour of a man in the
> middle attack (of the phishing kind).
>
> But if that's the case then there are much simpler defences than those
> proposed in the current draft:
>
> 1. Get separate access tokens for each RS with correct audience and
> scopes. The consensus appears to be that this is hard to do in some cases,
> hence the draft.
> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the
> RS. This stops the token being reused elsewhere but the client can reuse it
> (replay it) for many requests.
> 3. Issue a macaroon-based access token and the client can add a correct
> audience and scope restrictions at the point of use.
>
> Protecting against the first kind of replay attacks only becomes an issue
> if we assume the protections in TLS have failed. But if DPoP is only
> intended for cases where mTLS can't be used, it shouldn't have to protect
> against a stronger threat model in which we assume that TLS security has
> been lost.
>
> -- Neil
-- 

Aaron Parecki
aaronparecki.com
@aaronpk 


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Neil Madden
On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
 wrote:
> 
> 
> 
>> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
>> 
>> I’m going to +1 Dick and Annabelle’s question about the scope here. That was 
>> the one major thing that struck me during the DPoP discussions in Singapore 
>> yesterday: we don’t seem to agree on what DPoP is for. Some (including the 
>> authors, it seems) see it as a quick point-solution to a specific use case. 
>> Others see it as a general PoP mechanism. 
>> 
>> If it’s the former, then it should be explicitly tied to one specific set of 
>> things. If it’s the latter, then it needs to be expanded. 
> 
> as a co-author of the DPoP draft I state again what I said yesterday: DPoP is 
> a mechanism for sender-constraining access tokens sent from SPAs only. The 
> threat to be prevented is token replay.

I think the phrase "token replay" is ambiguous. Traditionally it refers to an 
attacker being able to capture a token (or whole requests) in use and then 
replay it against the same RS. This is already protected against by the use of 
normal TLS on the connection between the client and the RS. I think instead you 
are referring to a malicious/compromised RS replaying the token to a different 
RS - which has more of the flavour of a man in the middle attack (of the 
phishing kind).

But if that's the case then there are much simpler defences than those proposed 
in the current draft:

1. Get separate access tokens for each RS with correct audience and scopes. The 
consensus appears to be that this is hard to do in some cases, hence the draft.
2. Make the DPoP token be a simple JWT with an "iat" and the origin of the RS. 
This stops the token being reused elsewhere but the client can reuse it (replay 
it) for many requests.
3. Issue a macaroon-based access token and the client can add a correct 
audience and scope restrictions at the point of use.

Protecting against the first kind of replay attacks only becomes an issue if we 
assume the protections in TLS have failed. But if DPoP is only intended for 
cases where mTLS can't be used, it shouldn't have to protect against a stronger 
threat model in which we assume that TLS security has been lost.

-- Neil


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Neil Madden
On 22 Nov 2019, at 07:13, Dick Hardt  wrote:
> 
> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden wrote:
> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle wrote:
>> There are key distribution challenges with that if you are doing validation 
>> at the RS, but validation at the RS using either approach means you’ve lost 
>> protection against replay by the RS. This brings us back to a core question: 
>> what threats are in scope for DPoP, and in what contexts?
> 
> 
> Agreed, but validation at the RS is premature optimisation in many cases. And 
> if you do need protection against that the client can even append a 
> confirmation key as a caveat and retrospectively upgrade a bearer token to a 
> pop token. They can even do transfer of ownership by creating copies of the 
> original token bound to other certificates/public keys. 
> 
> While validation at the RS may be an optimization in many cases, it is still 
> a requirement for deployments.

It's a pattern currently used in some deployments. But as Brian (I believe) 
mentioned at the last OSW in Trento, you often really want to setup a shared 
key between the AS and the RS and use authenticated encryption instead for 
performance and PII protection reasons. And if you do that then (a) replay by 
the RS is not possible because each RS has a different key and (b) you can use 
the shared key for macaroons too.

(This is why I proposed adding public key authenticated encryption to JOSE [1] 
after OSW, and why the initial version of the draft included a simple two-way 
handshake to derive a symmetric session key that could be used for subsequent 
messages. That handshake had perfect forward secrecy and key compromise 
impersonation protection as well, which is overkill for DPoP hence my later 
simplified challenge-response version).

> 
> I echo Annabelle's last question: what threats are in scope (and out of 
> scope) for DPoP?

I agree this is the crucial question as per my original post a week ago asking 
what the intended threat model is [2].

[1]: https://tools.ietf.org/html/draft-madden-jose-ecdh-1pu-02 
 
[2]: https://mailarchive.ietf.org/arch/msg/oauth/1Zltt75p5taPw0DRmhoKLbavu9s 


-- Neil


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Dick Hardt
Another dimension on SPA is that lots of 1P deployments use only SPA. For
them, there is only one type of deployment.

On Fri, Nov 22, 2019 at 4:50 PM Mike Jones  wrote:

> I hear you about the difference between Web apps and native apps,
> Torsten.  But using different mechanisms for different application types is
> a cost in and of itself.
>
> It's good to understand the tradeoffs.
>
> -- Mike
>
>
> --
> *From:* OAuth  on behalf of Torsten Lodderstedt
> 
> *Sent:* Friday, November 22, 2019 4:20:58 PM
> *To:* Rob Otto 
> *Cc:* oauth 
> *Subject:* [EXTERNAL] Re: [OAUTH-WG] New Version Notification for
> draft-fett-oauth-dpop-03.txt
>
> Hi Rob,
>
> > On 22. Nov 2019, at 16:10, Rob Otto wrote:
> >
> > Hi Torsten - thanks for the reply..
> >
> > Responses in line.
> >
> > Grüsse
> > Rob
> >
> > On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt wrote:
> > Hi Rob,
> >
> > > On 22. Nov 2019, at 15:52, Rob Otto wrote:
> > >
> > > Hi everyone
> > >
> > > I'd agree with this. I'm looking at DPOP as an alternative and
> ultimately simpler way to accomplish what we can already do with MTLS-bound
> Access Tokens, for use cases such as the ones we address in Open Banking;
> these are API transactions that demand a high level of assurance and as
> such we absolutely must have a mechanism to constrain those tokens to the
> intended bearer. Requiring MTLS across the ecosystem, however, adds
> significant overhead in terms of infrastructural complexity and is always
> going to limit the extent to which such a model can scale.
> >
> > I would like to understand why mTLS adds “significant overhead in terms
> of infrastructural complexity”. Can you please dig into details?
> >
> > I guess it's mostly that every RS-endpoint (or what sits in front of it)
> has to have a mechanism for accepting/terminating mTLS, managing roots of
> trust, validating/OCSP, etc
>
> You use a PKI then. We use mTLS with self-signed certs. That requires the
> RS to not check the X.509 trust chain, which requires a special setting
> (optionalNoCA).
>
> > and then passing the certificates downstream as headers. None of it is
> necessarily difficult or impossible to do in isolation, but I meet many
> many people every week who simply don't know how to do any of this stuff.
> And these are typically "network people", for want of a better word. There
> are quite a few SaaS API management and edge solutions out there that don't
> even support mTLS at all. You also have the difficulty in handling a
> combination of MTLS and non-MTLS traffic to the same endpoints.
>
> yep. You better split them, especially if that’s a user facing endpoint.
>
> > Again, it's possible to do, but far from straightforward.
> >
> >
> >
> > Our experience so far: It can be a headache to set up in a microservice
> architecture with TLS terminating proxies but once it runs it’s ok. On the
> other hand, it’s easy to use for client developers and it combines client
> authentication and sender constraining nicely.
> >
> > I do think its an elegant solution, don't get me wrong. It's just that
> there are plenty of moving parts that you need to get right and that can be
> a challenge, particularly in large, complex environments.
>
> I agree. I also see there is a tendency to think Client TLS
> authentication is bad. I understand that from historical and recent
> experience with PKI.
>
> But anybody considering using an application-level signing solution based
> on _raw_ public keys should directly move towards self-signed certificates.
> That brings you all the benefits of TLS without the (PKI) headache.
>
> >
> >
> >
> > >
> > > DPOP, to me, appears to be a rather more elegant way of solving the
> same problem, with the benefit of significantly reducing the complexity of
> (and dependency on) the transport layer. I would not argue, however, that
> it is meant to be a solution intended for ubiquitous adoption across all
> OAuth-protected API traffic. Clients still need to manage private keys
> under this model and my experience is that there is typically a steep
> learning curve for developers to negotiate any time you introduce a
> requirement to hold and use keys within  an application.
> >
> My experience is most developers don’t even get the URL right (in the 
> signature and the value used on the receiving end). So the total cost of
> ownership is increased by numerous support inquiries.
> > I'll not comment, at t

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
I couldn't agree more. I think we should, again, try to find a way to utilise 
TLS in the browser as well. 

> On 22. Nov 2019, at 16:50, Mike Jones 
>  wrote:
> 
> I hear you about the difference between Web apps and native apps, Torsten.  
> But using different mechanisms for different application types is a cost in 
> and of itself.
> 
> It's good to understand the tradeoffs.
> 
> -- Mike
> 
> 
> From: OAuth  on behalf of Torsten Lodderstedt 
> 
> Sent: Friday, November 22, 2019 4:20:58 PM
> To: Rob Otto 
> Cc: oauth 
> Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> Hi Rob,
> 
> > On 22. Nov 2019, at 16:10, Rob Otto 
> >  wrote:
> > 
> > Hi Torsten - thanks for the reply..
> > 
> > Responses in line.
> > 
> > Grüsse
> > Rob
> > 
> > On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt 
> >  wrote:
> > Hi Rob, 
> > 
> > > On 22. Nov 2019, at 15:52, Rob Otto 
> > >  wrote:
> > > 
> > > Hi everyone
> > > 
> > > I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
> > > simpler way to accomplish what we can already do with MTLS-bound Access 
> > > Tokens, for use cases such as the ones we address in Open Banking; these 
> > > are API transactions that demand a high level of assurance and as such we 
> > > absolutely must have a mechanism to constrain those tokens to the 
> > > intended bearer. Requiring MTLS across the ecosystem, however, adds 
> > > significant overhead in terms of infrastructural complexity and is always 
> > > going to limit the extent to which such a model can scale.
> > 
> > I would like to understand why mTLS adds “significant overhead in terms of 
> > infrastructural complexity”. Can you please dig into details?
> > 
> > I guess it's mostly that every RS-endpoint (or what sits in front of it) 
> > has to have a mechanism for accepting/terminating mTLS, managing roots of 
> > trust, validating/OCSP, etc
> 
> You use a PKI then. We use mTLS with self-signed certs. That requires the RS 
> to not check the X.509 trust chain, which requires a special setting 
> (optionalNoCA). 
> 
> > and then passing the certificates downstream as headers. None of it is 
> > necessarily difficult or impossible to do in isolation, but I meet many 
> > many people every week who simply don't know how to do any of this stuff. 
> > And these are typically "network people", for want of a better word. There 
> > are quite a few SaaS API management and edge solutions out there that don't 
> > even support mTLS at all. You also have the difficulty in handling a 
> > combination of MTLS and non-MTLS traffic to the same endpoints.
> 
> yep. You better split them, especially if that’s a user facing endpoint.
> 
> > Again, it's possible to do, but far from straightforward. 
> > 
> >  
> > 
> > Our experience so far: It can be a headache to set up in a microservice 
> > architecture with TLS terminating proxies but once it runs it’s ok. On the 
> > other hand, it’s easy to use for client developers and it combines client 
> > authentication and sender constraining nicely.  
> > 
> > I do think its an elegant solution, don't get me wrong. It's just that 
> > there are plenty of moving parts that you need to get right and that can be 
> > a challenge, particularly in large, complex environments. 
> 
> I agree. I also see there is a tendency to think Client TLS authentication 
> is bad. I understand that from historical and recent experience with PKI. 
> 
> But anybody considering using an application-level signing solution based on 
> _raw_ public keys should directly move towards self-signed certificates. That 
> brings you all the benefits of TLS without the (PKI) headache. 
> 
> > 
> >  
> > 
> > > 
> > > DPOP, to me, appears to be a rather more elegant way of solving the same 
> > > problem, with the benefit of significantly reducing the complexity of 
> > > (and dependency on) the transport layer. I would not argue, however, that 
> > > it is meant to be a solution intended for ubiquitous adoption across all 
> > > OAuth-protected API traffic. Clients still need to manage private keys 
> > > under this model and my experience is that there is typically a steep 
> > > learning curve for developers to negotiate any time you introduce a 
> > > requirement to hold and use keys within  an application. 
> > 
> > My experience is most developer don’t ev

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Mike Jones
I hear you about the difference between Web apps and native apps, Torsten.  But 
using different mechanisms for different application types is a cost in and of 
itself.

It's good to understand the tradeoffs.

-- Mike



From: OAuth  on behalf of Torsten Lodderstedt 

Sent: Friday, November 22, 2019 4:20:58 PM
To: Rob Otto 
Cc: oauth 
Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

Hi Rob,

> On 22. Nov 2019, at 16:10, Rob Otto 
>  wrote:
>
> Hi Torsten - thanks for the reply..
>
> Responses in line.
>
> Grüsse
> Rob
>
> On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt 
>  wrote:
> Hi Rob,
>
> > On 22. Nov 2019, at 15:52, Rob Otto 
> >  wrote:
> >
> > Hi everyone
> >
> > I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
> > simpler way to accomplish what we can already do with MTLS-bound Access 
> > Tokens, for use cases such as the ones we address in Open Banking; these 
> > are API transactions that demand a high level of assurance and as such we 
> > absolutely must have a mechanism to constrain those tokens to the intended 
> > bearer. Requiring MTLS across the ecosystem, however, adds significant 
> > overhead in terms of infrastructural complexity and is always going to 
> > limit the extent to which such a model can scale.
>
> I would like to understand why mTLS adds “significant overhead in terms of 
> infrastructural complexity”. Can you please dig into details?
>
> I guess it's mostly that every RS-endpoint (or what sits in front of it) has 
> to have a mechanism for accepting/terminating mTLS, managing roots of trust, 
> validating/OCSP, etc

You use a PKI then. We use mTLS with self-signed certs. That requires the RS to 
not check the X.509 trust chain, which requires a special setting 
(optionalNoCA).

> and then passing the certificates downstream as headers. None of it is 
> necessarily difficult or impossible to do in isolation, but I meet many many 
> people every week who simply don't know how to do any of this stuff. And 
> these are typically "network people", for want of a better word. There are 
> quite a few SaaS API management and edge solutions out there that don't even 
> support mTLS at all. You also have the difficulty in handling a combination 
> of MTLS and non-MTLS traffic to the same endpoints.

yep. You better split them, especially if that’s a user facing endpoint.

> Again, it's possible to do, but far from straightforward.
>
>
>
> Our experience so far: It can be a headache to set up in a microservice 
> architecture with TLS terminating proxies but once it runs it’s ok. On the 
> other hand, it’s easy to use for client developers and it combines client 
> authentication and sender constraining nicely.
>
> I do think its an elegant solution, don't get me wrong. It's just that there 
> are plenty of moving parts that you need to get right and that can be a 
> challenge, particularly in large, complex environments.

I agree. I also see there is a tendency to think Client TLS authentication is 
bad. I understand that from historical and recent experience with PKI.

But anybody considering using an application-level signing solution based on 
_raw_ public keys should directly move towards self-signed certificates. That 
brings you all the benefits of TLS without the (PKI) headache.

>
>
>
> >
> > DPOP, to me, appears to be a rather more elegant way of solving the same 
> > problem, with the benefit of significantly reducing the complexity of (and 
> > dependency on) the transport layer. I would not argue, however, that it is 
> > meant to be a solution intended for ubiquitous adoption across all 
> > OAuth-protected API traffic. Clients still need to manage private keys 
> > under this model and my experience is that there is typically a steep 
> > learning curve for developers to negotiate any time you introduce a 
> > requirement to hold and use keys within  an application.
>
> My experience is most developers don’t even get the URL right (in the 
> signature and the value used on the receiving end). So the total cost of 
> ownership is increased by numerous support inquiries.
> I'll not comment, at the risk of offending developers :)

Alright. Ultimately, I just want to get in touch with those who respond :-)

best regards,
Torsten.

>
> best regards,
> Torsten.
>
> >
> > I guess I'm with Justin - let's look at DPOP as an alternative to 
> > MTLS-bound tokens for high-assurance use cases, at least initially, without 
> > trying to make it solve every problem.
> >
> > Best regards
> > Rob
> >
> >
> > On Fri, 22 Nov 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
Hi Rob,

> On 22. Nov 2019, at 16:10, Rob Otto 
>  wrote:
> 
> Hi Torsten - thanks for the reply..
> 
> Responses in line.
> 
> Grüsse
> Rob
> 
> On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt 
>  wrote:
> Hi Rob, 
> 
> > On 22. Nov 2019, at 15:52, Rob Otto 
> >  wrote:
> > 
> > Hi everyone
> > 
> > I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
> > simpler way to accomplish what we can already do with MTLS-bound Access 
> > Tokens, for use cases such as the ones we address in Open Banking; these 
> > are API transactions that demand a high level of assurance and as such we 
> > absolutely must have a mechanism to constrain those tokens to the intended 
> > bearer. Requiring MTLS across the ecosystem, however, adds significant 
> > overhead in terms of infrastructural complexity and is always going to 
> > limit the extent to which such a model can scale.
> 
> I would like to understand why mTLS adds “significant overhead in terms of 
> infrastructural complexity”. Can you please dig into details?
> 
> I guess it's mostly that every RS-endpoint (or what sits in front of it) has 
> to have a mechanism for accepting/terminating mTLS, managing roots of trust, 
> validating/OCSP, etc

You use a PKI then. We use mTLS with self-signed certs. That requires the RS to 
not check the X.509 trust chain, which requires a special setting 
(optionalNoCA). 

> and then passing the certificates downstream as headers. None of it is 
> necessarily difficult or impossible to do in isolation, but I meet many many 
> people every week who simply don't know how to do any of this stuff. And 
> these are typically "network people", for want of a better word. There are 
> quite a few SaaS API management and edge solutions out there that don't even 
> support mTLS at all. You also have the difficulty in handling a combination 
> of MTLS and non-MTLS traffic to the same endpoints.

yep. You better split them, especially if that’s a user facing endpoint.

> Again, it's possible to do, but far from straightforward. 
> 
>  
> 
> Our experience so far: It can be a headache to set up in a microservice 
> architecture with TLS terminating proxies but once it runs it’s ok. On the 
> other hand, it’s easy to use for client developers and it combines client 
> authentication and sender constraining nicely.  
> 
> I do think its an elegant solution, don't get me wrong. It's just that there 
> are plenty of moving parts that you need to get right and that can be a 
> challenge, particularly in large, complex environments. 

I agree. I also see there is a tendency to think Client TLS authentication is 
bad. I understand that from historical and recent experience with PKI. 

But anybody considering using an application-level signing solution based on 
_raw_ public keys should directly move towards self-signed certificates. That 
brings you all the benefits of TLS without the (PKI) headache. 

> 
>  
> 
> > 
> > DPOP, to me, appears to be a rather more elegant way of solving the same 
> > problem, with the benefit of significantly reducing the complexity of (and 
> > dependency on) the transport layer. I would not argue, however, that it is 
> > meant to be a solution intended for ubiquitous adoption across all 
> > OAuth-protected API traffic. Clients still need to manage private keys 
> > under this model and my experience is that there is typically a steep 
> > learning curve for developers to negotiate any time you introduce a 
> > requirement to hold and use keys within  an application. 
> 
> My experience is that most developers don’t even get the URL right (in the 
> signature and the value used on the receiving end). So the total cost of 
> ownership is increased by numerous support inquiries.
> I'll not comment, at the risk of offending developers :)  

Alright. Ultimately, I just want to get in touch with those who respond :-)

best regards,
Torsten. 

> 
> best regards,
> Torsten. 
> 
> > 
> > I guess I'm with Justin - let's look at DPOP as an alternative to 
> > MTLS-bound tokens for high-assurance use cases, at least initially, without 
> > trying to make it solve every problem. 
> > 
> > Best regards
> > Rob
> > 
> > 
> > On Fri, 22 Nov 2019 at 07:24, Justin Richer  wrote:
> > I’m going to +1 Dick and Annabelle’s question about the scope here. That 
> > was the one major thing that struck me during the DPoP discussions in 
> > Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
> > (including the authors, it seems) see it as a quick point-solution to a 
> > specific use case. Others see it as a general PoP mechanism. 
> > 
> > If it’s the former, then it should be explicitly tied to one specific set 
> > of things. If it’s the latter, then it needs to be expanded. 
> > 
> > I’ll repeat what I said at the mic line: My take is that we should 
> > explicitly narrow down DPoP so that it does exactly one thing and solves 
> > one narrow use case. And for a general solution? Let’s move that 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Filip Skokan
Rob, I agree that managing roots of trust, validation/OCSP etc. is not
"easy" per se, but the mTLS setup gets really simple with the Self-Signed
Certificate Mutual-TLS Method, and we made sure combined traffic is simple
to signal by the AS and simple to detect and use by clients using the
mtls_endpoint_aliases discovery metadata.
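For illustration, a client that authenticates with mTLS would resolve endpoints from the AS discovery document roughly like this (function name and metadata shapes are mine; "mtls_endpoint_aliases" is the real metadata parameter):

```python
def resolve_endpoint(as_metadata: dict, endpoint: str, using_mtls: bool) -> str:
    # mTLS clients prefer the alias the AS publishes under
    # "mtls_endpoint_aliases"; everyone else uses the plain endpoint.
    # This is how combined mTLS/non-mTLS traffic stays on separate hosts.
    if using_mtls:
        alias = as_metadata.get("mtls_endpoint_aliases", {}).get(endpoint)
        if alias is not None:
            return alias
    return as_metadata[endpoint]
```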

Best regards,
*Filip Skokan*


On Fri, 22 Nov 2019 at 09:10, Rob Otto  wrote:

> Hi Torsten - thanks for the reply.
>
> Responses in line.
>
> Regards
> Rob
>
> On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt wrote:
>
>> Hi Rob,
>>
>> > On 22. Nov 2019, at 15:52, Rob Otto wrote:
>> >
>> > Hi everyone
>> >
>> > I'd agree with this. I'm looking at DPOP as an alternative and
>> ultimately simpler way to accomplish what we can already do with MTLS-bound
>> Access Tokens, for use cases such as the ones we address in Open Banking;
>> these are API transactions that demand a high level of assurance and as
>> such we absolutely must have a mechanism to constrain those tokens to the
>> intended bearer. Requiring MTLS across the ecosystem, however, adds
>> significant overhead in terms of infrastructural complexity and is always
>> going to limit the extent to which such a model can scale.
>>
>> I would like to unterstand why mTLS adds “significant overhead in terms
>> of infrastructural complexity”. Can you please dig into details?
>>
>
> I guess it's mostly that every RS-endpoint (or what sits in front of it)
> has to have a mechanism for accepting/terminating mTLS, managing roots of
> trust, validating/OCSP, etc and then passing the certificates downstream as
> headers. None of it is necessarily difficult or impossible to do in
> isolation, but I meet many many people every week who simply don't know how
> to do any of this stuff. And these are typically "network people", for want
> of a better word. There are quite a few SaaS API management and edge
> solutions out there that don't even support mTLS at all. You also have the
> difficulty in handling a combination of MTLS and non-MTLS traffic to the
> same endpoints. Again, it's possible to do, but far from straightforward.
>
>
>
>>
>> Our experience so far: It can be a headache to set up in a microservice
>> architecture with TLS terminating proxies but once it runs it’s ok. On the
>> other side, it’s easy to use for client developers and it combines client
>> authentication and sender constraining nicely.
>>
>
> I do think it's an elegant solution, don't get me wrong. It's just that
> there are plenty of moving parts that you need to get right and that can be
> a challenge, particularly in large, complex environments.
>
>
>
>>
>> >
>> > DPOP, to me, appears to be a rather more elegant way of solving the
>> same problem, with the benefit of significantly reducing the complexity of
>> (and dependency on) the transport layer. I would not argue, however, that
>> it is meant to be a solution intended for ubiquitous adoption across all
>> OAuth-protected API traffic. Clients still need to manage private keys
>> under this model and my experience is that there is typically a steep
>> learning curve for developers to negotiate any time you introduce a
>> requirement to hold and use keys within  an application.
>>
>> My experience is that most developers don’t even get the URL right (in the
>> signature and the value used on the receiving end). So the total cost of
>> ownership is increased by numerous support inquiries.
>>
> I'll not comment, at the risk of offending developers :)
>
>>
>> best regards,
>> Torsten.
>>
>> >
>> > I guess I'm with Justin - let's look at DPOP as an alternative to
>> MTLS-bound tokens for high-assurance use cases, at least initially, without
>> trying to make it solve every problem.
>> >
>> > Best regards
>> > Rob
>> >
>> >
>> > On Fri, 22 Nov 2019 at 07:24, Justin Richer  wrote:
>> > I’m going to +1 Dick and Annabelle’s question about the scope here.
>> That was the one major thing that struck me during the DPoP discussions in
>> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some
>> (including the authors, it seems) see it as a quick point-solution to a
>> specific use case. Others see it as a general PoP mechanism.
>> >
>> > If it’s the former, then it should be explicitly tied to one specific
>> set of things. If it’s the latter, then it needs to be expanded.
>> >
>> > I’ll repeat what I said at the mic line: My take is that we should
>> explicitly narrow down DPoP so that it does exactly one thing and solves
>> one narrow use case. And for a general solution? Let’s move that discussion
>> into the next major revision of the protocol where we’ll have a bit more
>> running room to figure things out.
>> >
>> >  — Justin
>> >
>> >> On Nov 22, 2019, at 3:13 PM, Dick Hardt wrote:
>> >>
>> >>
>> >>
>> >> On Fri, Nov 22, 2019 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Rob Otto
Hi Torsten - thanks for the reply.

Responses in line.

Regards
Rob

On Fri, 22 Nov 2019 at 07:59, Torsten Lodderstedt  wrote:

> Hi Rob,
>
> > On 22. Nov 2019, at 15:52, Rob Otto wrote:
> >
> > Hi everyone
> >
> > I'd agree with this. I'm looking at DPOP as an alternative and
> ultimately simpler way to accomplish what we can already do with MTLS-bound
> Access Tokens, for use cases such as the ones we address in Open Banking;
> these are API transactions that demand a high level of assurance and as
> such we absolutely must have a mechanism to constrain those tokens to the
> intended bearer. Requiring MTLS across the ecosystem, however, adds
> significant overhead in terms of infrastructural complexity and is always
> going to limit the extent to which such a model can scale.
>
> I would like to unterstand why mTLS adds “significant overhead in terms of
> infrastructural complexity”. Can you please dig into details?
>

I guess it's mostly that every RS-endpoint (or what sits in front of it)
has to have a mechanism for accepting/terminating mTLS, managing roots of
trust, validating/OCSP, etc and then passing the certificates downstream as
headers. None of it is necessarily difficult or impossible to do in
isolation, but I meet many many people every week who simply don't know how
to do any of this stuff. And these are typically "network people", for want
of a better word. There are quite a few SaaS API management and edge
solutions out there that don't even support mTLS at all. You also have the
difficulty in handling a combination of MTLS and non-MTLS traffic to the
same endpoints. Again, it's possible to do, but far from straightforward.



>
> Our experience so far: It can be a headache to set up in a microservice
> architecture with TLS terminating proxies but once it runs it’s ok. On the
> other side, it’s easy to use for client developers and it combines client
> authentication and sender constraining nicely.
>

I do think it's an elegant solution, don't get me wrong. It's just that
there are plenty of moving parts that you need to get right and that can be
a challenge, particularly in large, complex environments.



>
> >
> > DPOP, to me, appears to be a rather more elegant way of solving the same
> problem, with the benefit of significantly reducing the complexity of (and
> dependency on) the transport layer. I would not argue, however, that it is
> meant to be a solution intended for ubiquitous adoption across all
> OAuth-protected API traffic. Clients still need to manage private keys
> under this model and my experience is that there is typically a steep
> learning curve for developers to negotiate any time you introduce a
> requirement to hold and use keys within  an application.
>
> My experience is that most developers don’t even get the URL right (in the
> signature and the value used on the receiving end). So the total cost of
> ownership is increased by numerous support inquiries.
>
I'll not comment, at the risk of offending developers :)

>
> best regards,
> Torsten.
>
> >
> > I guess I'm with Justin - let's look at DPOP as an alternative to
> MTLS-bound tokens for high-assurance use cases, at least initially, without
> trying to make it solve every problem.
> >
> > Best regards
> > Rob
> >
> >
> > On Fri, 22 Nov 2019 at 07:24, Justin Richer  wrote:
> > I’m going to +1 Dick and Annabelle’s question about the scope here. That
> was the one major thing that struck me during the DPoP discussions in
> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some
> (including the authors, it seems) see it as a quick point-solution to a
> specific use case. Others see it as a general PoP mechanism.
> >
> > If it’s the former, then it should be explicitly tied to one specific
> set of things. If it’s the latter, then it needs to be expanded.
> >
> > I’ll repeat what I said at the mic line: My take is that we should
> explicitly narrow down DPoP so that it does exactly one thing and solves
> one narrow use case. And for a general solution? Let’s move that discussion
> into the next major revision of the protocol where we’ll have a bit more
> running room to figure things out.
> >
> >  — Justin
> >
> >> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
> >>
> >>
> >>
> >> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden 
> wrote:
> >> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle <
> richa...@amazon.com> wrote:
> >>> There are key distribution challenges with that if you are doing
> validation at the RS, but validation at the RS using either approach means
> you’ve lost protection against replay by the RS. This brings us back to a
> core question: what threats are in scope for DPoP, and in what contexts?
> >>
> >> Agreed, but validation at the RS is premature optimisation in many
> cases. And if you do need protection against that the client can even
> append a confirmation key as a caveat and retrospectively upgrade a bearer
> token to a pop token. They can even do transfer of ownership by creating 
> copies of the original token bound to other certificates/public keys.

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Filip Skokan
I agree with Torsten,

plus we're getting sender-constrained refresh tokens for said public
clients and SPAs, so that the AS doesn't have to rotate them (as the
browser-based apps draft would otherwise require); we all know the pain SPA
developers have with those.

Best regards,
*Filip Skokan*


On Fri, 22 Nov 2019 at 08:54, Torsten Lodderstedt  wrote:

>
>
> > On 22. Nov 2019, at 15:24, Justin Richer  wrote:
> >
> > I’m going to +1 Dick and Annabelle’s question about the scope here. That
> was the one major thing that struck me during the DPoP discussions in
> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some
> (including the authors, it seems) see it as a quick point-solution to a
> specific use case. Others see it as a general PoP mechanism.
> >
> > If it’s the former, then it should be explicitly tied to one specific
> set of things. If it’s the latter, then it needs to be expanded.
>
> as a co-author of the DPoP draft I state again what I said yesterday: DPoP
> is a mechanism for sender-constraining access tokens sent from SPAs only.
> The threat to be prevented is token replay.
>
> The general mechanism for sender constrained access token should be
> TLS-based as recommended by the Security BCP (see
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-3.2
> ).
>
> Why: that’s the easiest way from a client developer's perspective.
>
> Application level signatures, on the other hand, are inherently more
> fragile as illustrated by the OAuth 1 experience. They also require
> additional effort (and state) on the server side to implement replay
> detection.
>
> As kind of an entertaining read I added two posts/threads from 2010, when
> this WG discussed whether TLS/SSL should be the primary OAuth 2.0 security
> mechanism.
>
> https://mailarchive.ietf.org/arch/msg/oauth/crVvDNtbdN0E0ccmk5fLdNS66v0
>
> https://mailarchive.ietf.org/arch/browse/oauth/?gbt=1&index=xvlxuly1DjQiZgWZpHwgj7q2k0g
>
> The decision to go with TLS only was, in my opinion, one of the key
> success factors that made OAuth 2 so incredibly successful.
>
> To re-state: From my perspective, DPoP is intended to be used by SPA
> developers only for token replay detection (or better put to provide RSs
> with the pre-requisites to do so).
>
> Why? Because we unfortunately currently lack a TLS-based mechanism for
> sender-constraining.
>
> Building it on asymmetrical crypto only makes it easier to implement and
> to handle than methods based on shared secrets.
>
> I also think we must look for alternative methods to enable TLS-based
> methods in the browser.
>
>
> >
> > I’ll repeat what I said at the mic line: My take is that we should
> explicitly narrow down DPoP so that it does exactly one thing and solves
> one narrow use case. And for a general solution? Let’s move that discussion
> into the next major revision of the protocol where we’ll have a bit more
> running room to figure things out.
> >
> >  — Justin
> >
> >> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
> >>
> >>
> >>
> >> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden 
> wrote:
> >> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle <
> richa...@amazon.com> wrote:
> >>> There are key distribution challenges with that if you are doing
> validation at the RS, but validation at the RS using either approach means
> you’ve lost protection against replay by the RS. This brings us back to a
> core question: what threats are in scope for DPoP, and in what contexts?
> >>
> >> Agreed, but validation at the RS is premature optimisation in many
> cases. And if you do need protection against that the client can even
> append a confirmation key as a caveat and retrospectively upgrade a bearer
> token to a pop token. They can even do transfer of ownership by creating
> copies of the original token bound to other certificates/public keys.
> >>
> >> While validation at the RS may be an optimization in many cases, it is
> still a requirement for deployments.
> >>
> >> I echo Annabelle's last question: what threats are in scope (and out of
> scope) for DPoP?
> >>
> >>
> >> ___
> >> OAuth mailing list
> >> OAuth@ietf.org
> >> https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Torsten Lodderstedt
Hi Mike, 

> On 22. Nov 2019, at 16:00, Mike Jones 
>  wrote:
> 
> TLS on Web Servers is nearly ubiquitous now and works great.  Trying to use 
> mutual TLS on many platforms results in a nearly intractable user experience, 
> where the end-users are asked to install certificates into certificate 
> stores.  Success rates for those UXs are very low.
> 
> And it's even worse than that.  If you use multiple browsers, you'll have to 
> get the person to install the client certificates into multiple certificate 
> stores.  For instance, on Windows, Edge, Firefox, and Chrome all use 
> different certificate stores.
> 
> Server-side TLS works because end-users don't have to do anything difficult 
> to use it.  That can't be said for client-side TLS.

That’s true for the user experience in a browser. That’s why we currently need 
an alternative for exactly this client type. 

It’s completely different for mobile apps and server-side web applications 
since mTLS does not have any impact on the user experience at all. 

Instead, the developer just needs to drop the key pair/cert into the HTTP stack 
and is done with both client authentication and sender constrained tokens. 
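A rough sketch of what "drop the key pair/cert into the HTTP stack" looks like for such a client (file names and host are placeholders):

```python
import http.client
import ssl

def mtls_connection(host: str, certfile: str, keyfile: str) -> http.client.HTTPSConnection:
    # For a mobile app or server-side web app this is the whole story:
    # load the client key pair into the TLS context, and every request is
    # both client-authenticated and sender-constrained -- no user-visible UX.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return http.client.HTTPSConnection(host, context=ctx)
```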

best regards,
Torsten. 

> 
>   -- Mike
> 
> -Original Message-
> From: OAuth  On Behalf Of Torsten Lodderstedt
> Sent: Thursday, November 21, 2019 11:54 PM
> To: Justin Richer 
> Cc: oauth 
> Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
> 
> 
> 
>> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
>> 
>> I’m going to +1 Dick and Annabelle’s question about the scope here. That was 
>> the one major thing that struck me during the DPoP discussions in Singapore 
>> yesterday: we don’t seem to agree on what DPoP is for. Some (including the 
>> authors, it seems) see it as a quick point-solution to a specific use case. 
>> Others see it as a general PoP mechanism. 
>> 
>> If it’s the former, then it should be explicitly tied to one specific set of 
>> things. If it’s the latter, then it needs to be expanded. 
> 
> as a co-author of the DPoP draft I state again what I said yesterday: DPoP is 
> a mechanism for sender-constraining access tokens sent from SPAs only. The 
> threat to be prevented is token replay.
> 
> The general mechanism for sender constrained access token should be TLS-based 
> as recommended by the Security BCP (see 
> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-3.2).
> 
> Why: that’s the easiest way from a client developer's perspective. 
> 
> Application level signatures, on the other hand, are inherently more fragile 
> as illustrated by the OAuth 1 experience. They also require additional effort 
> (and state) on the server side to implement replay detection. 
> 
> As kind of an entertaining read I added two posts/threads from 2010, when 
> this WG discussed whether TLS/SSL should be the primary OAuth 2.0 security 
> mechanism.
> 
> https://mailarchive.ietf.org/arch/msg/oauth/crVvDNtbdN0E0ccmk5fLdNS66v0
> https://mailarchive.ietf.org/arch/browse/oauth/?gbt=1&index=xvlxuly1DjQiZgWZpHwgj7q2k0g
> 
> The decision to go with TLS only was, in my opinion, one of the key success 
> factors that made OAuth 2 so incredibly successful.
> 
> To re-state: From my perspective, DPoP is intended to be used by SPA 
> developers only for token replay detection (or better put to provide RSs with 
> the pre-requisites to do so).  
> 
> Why? Because we unfortunately currently lack a TLS-based mechanism for 
> sender-constraining.
> 
> Building it on asymmetrical crypto only makes it easier to implement and to 
> handle than methods based on shared secrets.
> 
> I also think we must look for alternative methods to enable TLS-based methods 
> in the browser. 
> 
> 
>> 
>> I’ll repeat what I said at the mic line: My take is that we should 
>> explicitly narrow down DPoP so that it does exactly one thing and solves 
>> one narrow use case. And for a general solution? Let’s move that discussion 
>> into the next major revision of the protocol where we’ll have a bit more 
>> running room to figure things out.

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Mike Jones
TLS on Web Servers is nearly ubiquitous now and works great.  Trying to use 
mutual TLS on many platforms results in a nearly intractable user experience, 
where the end-users are asked to install certificates into certificate stores.  
Success rates for those UXs are very low.

And it's even worse than that.  If you use multiple browsers, you'll have to 
get the person to install the client certificates into multiple certificate 
stores.  For instance, on Windows, Edge, Firefox, and Chrome all use different 
certificate stores.

Server-side TLS works because end-users don't have to do anything difficult to 
use it.  That can't be said for client-side TLS.

-- Mike

-Original Message-
From: OAuth  On Behalf Of Torsten Lodderstedt
Sent: Thursday, November 21, 2019 11:54 PM
To: Justin Richer 
Cc: oauth 
Subject: [EXTERNAL] Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt



> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
> 
> I’m going to +1 Dick and Annabelle’s question about the scope here. That was 
> the one major thing that struck me during the DPoP discussions in Singapore 
> yesterday: we don’t seem to agree on what DPoP is for. Some (including the 
> authors, it seems) see it as a quick point-solution to a specific use case. 
> Others see it as a general PoP mechanism. 
> 
> If it’s the former, then it should be explicitly tied to one specific set of 
> things. If it’s the latter, then it needs to be expanded. 

as a co-author of the DPoP draft I state again what I said yesterday: DPoP is a 
mechanism for sender-constraining access tokens sent from SPAs only. The threat 
to be prevented is token replay.
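The SPA-side mechanism is deliberately small: each request carries a proof JWT whose claims bind it to one HTTP call. A sketch of the proof payload as in draft-fett-oauth-dpop-03 (signing with the client's key is omitted; function name is mine):

```python
import time
import uuid

def dpop_proof_claims(http_method: str, http_uri: str) -> dict:
    # "jti" gives the server a handle for replay detection, "htm"/"htu"
    # pin the proof to one request, "iat" bounds its lifetime. The result
    # becomes the payload of a JWT signed with the client's private key.
    return {
        "jti": str(uuid.uuid4()),
        "htm": http_method,
        "htu": http_uri,
        "iat": int(time.time()),
    }
```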

The general mechanism for sender constrained access token should be TLS-based 
as recommended by the Security BCP (see 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-3.2).

Why: that’s the easiest way from a client developer's perspective. 

Application level signatures, on the other hand, are inherently more fragile as 
illustrated by the OAuth 1 experience. They also require additional effort (and 
state) on the server side to implement replay detection. 
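That server-side state can be as simple as a bounded jti cache (a hypothetical helper, not from the draft; the window size is arbitrary):

```python
import time

class ReplayDetector:
    # Remembers each proof's "jti" for its validity window and rejects
    # duplicates -- the extra per-request state application-level signatures
    # need, which TLS-based sender constraining gets for free.

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self._seen = {}  # jti -> expiry timestamp

    def accept(self, jti, now=None):
        now = time.time() if now is None else now
        # drop entries whose validity window has passed
        self._seen = {j: exp for j, exp in self._seen.items() if exp > now}
        if jti in self._seen:
            return False  # replayed proof
        self._seen[jti] = now + self.window
        return True
```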

As kind of an entertaining read I added two posts/threads from 2010, when this 
WG discussed whether TLS/SSL should be the primary OAuth 2.0 security mechanism.

https://mailarchive.ietf.org/arch/msg/oauth/crVvDNtbdN0E0ccmk5fLdNS66v0
https://mailarchive.ietf.org/arch/browse/oauth/?gbt=1&index=xvlxuly1DjQiZgWZpHwgj7q2k0g

The decision to go with TLS only was, in my opinion, one of the key success 
factors that made OAuth 2 so incredibly successful.

To re-state: From my perspective, DPoP is intended to be used by SPA developers 
only for token replay detection (or better put to provide RSs with the 
pre-requisites to do so).  

Why? Because we unfortunately currently lack a TLS-based mechanism for 
sender-constraining.

Building it on asymmetrical crypto only makes it easier to implement and to 
handle than methods based on shared secrets.

I also think we must look for alternative methods to enable TLS-based methods 
in the browser. 


> 
> I’ll repeat what I said at the mic line: My take is that we should explicitly 
> narrow down DPoP so that it does exactly one thing and solves one narrow use 
> case. And for a general solution? Let’s move that discussion into the next 
> major revision of the protocol where we’ll have a bit more running room to 
> figure things out.
> 
>  — Justin
> 
>> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
>> 
>> 
>> 
>> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden  
>> wrote:
>> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
>> wrote:
>>> There are key distribution challenges with that if you are doing validation 
>>> at the RS, but validation at the RS using either approach means you’ve lost 
>>> protection against replay by the RS. This brings us back to a core 
>>> question: what threats are in scope for DPoP, and in what contexts?

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Torsten Lodderstedt
Hi Rob, 

> On 22. Nov 2019, at 15:52, Rob Otto 
>  wrote:
> 
> Hi everyone
> 
> I'd agree with this. I'm looking at DPOP as an alternative and ultimately 
> simpler way to accomplish what we can already do with MTLS-bound Access 
> Tokens, for use cases such as the ones we address in Open Banking; these are 
> API transactions that demand a high level of assurance and as such we 
> absolutely must have a mechanism to constrain those tokens to the intended 
> bearer. Requiring MTLS across the ecosystem, however, adds significant 
> overhead in terms of infrastructural complexity and is always going to limit 
> the extent to which such a model can scale.

I would like to unterstand why mTLS adds “significant overhead in terms of 
infrastructural complexity”. Can you please dig into details?

Our experience so far: It can be a headache to set up in a microservice 
architecture with TLS terminating proxies but once it runs it’s ok. On the 
other side, it’s easy to use for client developers and it combines client 
authentication and sender constraining nicely.  

> 
> DPOP, to me, appears to be a rather more elegant way of solving the same 
> problem, with the benefit of significantly reducing the complexity of (and 
> dependency on) the transport layer. I would not argue, however, that it is 
> meant to be a solution intended for ubiquitous adoption across all 
> OAuth-protected API traffic. Clients still need to manage private keys under 
> this model and my experience is that there is typically a steep learning 
> curve for developers to negotiate any time you introduce a requirement to 
> hold and use keys within  an application. 

My experience is that most developers don’t even get the URL right (in the signature 
and the value used on the receiving end). So the total cost of ownership is 
increased by numerous support inquiries.
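The kind of mismatch meant here: the client signs one rendering of the URL and the RS reconstructs another, so a naive string comparison fails. An illustrative normalizer (my own, not from any spec) shows what both ends would have to agree on:

```python
from urllib.parse import urlsplit

def normalize_url(url: str) -> str:
    # Collapse the differences developers typically trip over:
    # scheme/host case, explicit default ports, and a missing path.
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    default_port = {"https": 443, "http": 80}.get(parts.scheme)
    netloc = host if parts.port in (None, default_port) else f"{host}:{parts.port}"
    return f"{parts.scheme}://{netloc}{parts.path or '/'}"
```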

best regards,
Torsten. 

> 
> I guess I'm with Justin - let's look at DPOP as an alternative to MTLS-bound 
> tokens for high-assurance use cases, at least initially, without trying to 
> make it solve every problem. 
> 
> Best regards
> Rob
> 
> 
> On Fri, 22 Nov 2019 at 07:24, Justin Richer  wrote:
> I’m going to +1 Dick and Annabelle’s question about the scope here. That was 
> the one major thing that struck me during the DPoP discussions in Singapore 
> yesterday: we don’t seem to agree on what DPoP is for. Some (including the 
> authors, it seems) see it as a quick point-solution to a specific use case. 
> Others see it as a general PoP mechanism. 
> 
> If it’s the former, then it should be explicitly tied to one specific set of 
> things. If it’s the latter, then it needs to be expanded. 
> 
> I’ll repeat what I said at the mic line: My take is that we should explicitly 
> narrow down DPoP so that it does exactly one thing and solves one narrow use 
> case. And for a general solution? Let’s move that discussion into the next 
> major revision of the protocol where we’ll have a bit more running room to 
> figure things out.
> 
>  — Justin
> 
>> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
>> 
>> 
>> 
>> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden  
>> wrote:
>> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
>> wrote:
>>> There are key distribution challenges with that if you are doing validation 
>>> at the RS, but validation at the RS using either approach means you’ve lost 
>>> protection against replay by the RS. This brings us back to a core 
>>> question: what threats are in scope for DPoP, and in what contexts?
>> 
>> Agreed, but validation at the RS is premature optimisation in many cases. 
>> And if you do need protection against that the client can even append a 
>> confirmation key as a caveat and retrospectively upgrade a bearer token to a 
>> pop token. They can even do transfer of ownership by creating copies of the 
>> original token bound to other certificates/public keys. 
>> 
>> While validation at the RS may be an optimization in many cases, it is still 
>> a requirement for deployments.
>> 
>> I echo Annabelle's last question: what threats are in scope (and out of 
>> scope) for DPoP?
>> 
>> 
> 
> 
> -- 
>   
> Rob Otto  
> EMEA Field CTO/Solutions Architect
> roberto...@pingidentity.com   
>   
> c: +44 (0) 777 135 6092

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Rob Otto
Hi everyone

I'd agree with this. I'm looking at DPOP as an alternative and
ultimately simpler way to accomplish what we can already do with MTLS-bound
Access Tokens, for use cases such as the ones we address in Open Banking;
these are API transactions that demand a high level of assurance and as
such we absolutely must have a mechanism to constrain those tokens to the
intended bearer. Requiring MTLS across the ecosystem, however, adds
significant overhead in terms of infrastructural complexity and is always
going to limit the extent to which such a model can scale.

DPOP, to me, appears to be a rather more elegant way of solving the same
problem, with the benefit of significantly reducing the complexity of (and
dependency on) the transport layer. I would not argue, however, that it is
meant to be a solution intended for ubiquitous adoption across all
OAuth-protected API traffic. Clients still need to manage private keys
under this model and my experience is that there is typically a steep
learning curve for developers to negotiate any time you introduce a
requirement to hold and use keys within an application.

I guess I'm with Justin - let's look at DPOP as an alternative to
MTLS-bound tokens for high-assurance use cases, at least initially, without
trying to make it solve every problem.

Best regards
Rob


On Fri, 22 Nov 2019 at 07:24, Justin Richer  wrote:

> I’m going to +1 Dick and Annabelle’s question about the scope here. That
> was the one major thing that struck me during the DPoP discussions in
> Singapore yesterday: we don’t seem to agree on what DPoP is for. Some
> (including the authors, it seems) see it as a quick point-solution to a
> specific use case. Others see it as a general PoP mechanism.
>
> If it’s the former, then it should be explicitly tied to one specific set
> of things. If it’s the latter, then it needs to be expanded.
>
> I’ll repeat what I said at the mic line: My take is that we should
> explicitly narrow down DPoP so that it does exactly one thing and solves
> one narrow use case. And for a general solution? Let’s move that discussion
> into the next major revision of the protocol where we’ll have a bit more
> running room to figure things out.
>
>  — Justin
>
> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
>
>
>
> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden 
> wrote:
>
>> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle 
>> wrote:
>>
>> There are key distribution challenges with that if you are doing
>> validation at the RS, but validation at the RS using either approach means
>> you’ve lost protection against replay by the RS. This brings us back to a
>> core question: what threats are in scope for DPoP, and in what contexts?
>>
>>
>> Agreed, but validation at the RS is premature optimisation in many cases.
>> And if you do need protection against that the client can even append a
>> confirmation key as a caveat and retrospectively upgrade a bearer token to
>> a pop token. They can even do transfer of ownership by creating copies of
>> the original token bound to other certificates/public keys.
>>
>
> While validation at the RS may be an optimization in many cases, it is
> still a requirement for deployments.
>
> I echo Annabelle's last question: what threats are in scope (and out of
> scope) for DPoP?
>
>
>


-- 

Rob Otto
EMEA Field CTO/Solutions Architect
roberto...@pingidentity.com

c: +44 (0) 777 135 6092




-- 
CONFIDENTIALITY NOTICE: This email may contain confidential and privileged 
material for the sole use of the intended recipient(s). Any review, use, 
distribution or disclosure by others is strictly prohibited. If you have 
received this communication in error, please notify the sender immediately 
by e-mail and delete the message and any file attachments from your 
computer. Thank you.

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Justin Richer
I’m going to +1 Dick and Annabelle’s question about the scope here. That was 
the one major thing that struck me during the DPoP discussions in Singapore 
yesterday: we don’t seem to agree on what DPoP is for. Some (including the 
authors, it seems) see it as a quick point-solution to a specific use case. 
Others see it as a general PoP mechanism. 

If it’s the former, then it should be explicitly tied to one specific set of 
things. If it’s the latter, then it needs to be expanded. 

I’ll repeat what I said at the mic line: My take is that we should explicitly 
narrow down DPoP so that it does exactly one thing and solves one narrow use 
case. And for a general solution? Let’s move that discussion into the next 
major revision of the protocol where we’ll have a bit more running room to 
figure things out.

 — Justin

> On Nov 22, 2019, at 3:13 PM, Dick Hardt  wrote:
> 
> 
> 
> On Fri, Nov 22, 2019 at 3:08 PM Neil Madden wrote:
> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle wrote:
>> There are key distribution challenges with that if you are doing validation 
>> at the RS, but validation at the RS using either approach means you’ve lost 
>> protection against replay by the RS. This brings us back to a core question: 
>> what threats are in scope for DPoP, and in what contexts?
> 
> 
> Agreed, but validation at the RS is premature optimisation in many cases. And 
> if you do need protection against that the client can even append a 
> confirmation key as a caveat and retrospectively upgrade a bearer token to a 
> pop token. They can even do transfer of ownership by creating copies of the 
> original token bound to other certificates/public keys. 
> 
> While validation at the RS may be an optimization in many cases, it is still 
> a requirement for deployments.
> 
> I echo Annabelle's last question: what threats are in scope (and out of 
> scope) for DPoP?
> 
> 


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Dick Hardt
On Fri, Nov 22, 2019 at 3:08 PM Neil Madden 
wrote:

> On 22 Nov 2019, at 01:42, Richard Backman, Annabelle 
> wrote:
>
> There are key distribution challenges with that if you are doing
> validation at the RS, but validation at the RS using either approach means
> you’ve lost protection against replay by the RS. This brings us back to a
> core question: what threats are in scope for DPoP, and in what contexts?
>
>
> Agreed, but validation at the RS is premature optimisation in many cases.
> And if you do need protection against that the client can even append a
> confirmation key as a caveat and retrospectively upgrade a bearer token to
> a pop token. They can even do transfer of ownership by creating copies of
> the original token bound to other certificates/public keys.
>

While validation at the RS may be an optimization in many cases, it is
still a requirement for deployments.

I echo Annabelle's last question: what threats are in scope (and out of
scope) for DPoP?


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Neil Madden
On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
wrote:
> 
> 
> Macaroons are built on proof of possession. In order to add a caveat to a 
> macaroon, the sender has to have the HMAC of the macaroon without their 
> caveat.

Yes of course. But this is the HMAC *tag* not the original key. They can’t 
change anything the AS originally signed. 

> The distinctive property of macaroons as I see it is that they eliminate the 
> need for key negotiation with the bearer. How much value this has over the AS 
> just returning a symmetric key alongside the access token in the token 
> request, I’m not sure.

Well, you don’t have to return a key from the token endpoint for a start. The 
client doesn’t need to create and send any additional token. The whole thing 
works with existing standards and technologies and can be incrementally adopted 
as required. If RSes do token introspection already then they need zero changes 
to support this.

> There are key distribution challenges with that if you are doing validation 
> at the RS, but validation at the RS using either approach means you’ve lost 
> protection against replay by the RS. This brings us back to a core question: 
> what threats are in scope for DPoP, and in what contexts?

Agreed, but validation at the RS is premature optimisation in many cases. And 
if you do need protection against that the client can even append a 
confirmation key as a caveat and retrospectively upgrade a bearer token to a 
pop token. They can even do transfer of ownership by creating copies of the 
original token bound to other certificates/public keys. 

Neil


>  
> – 
> Annabelle Richard Backman
> AWS Identity
>  
>  
> From: OAuth  on behalf of Neil Madden 
> 
> Date: Friday, November 22, 2019 at 4:40 AM
> To: Brian Campbell 
> Cc: oauth 
> Subject: Re: [OAUTH-WG] New Version Notification for 
> draft-fett-oauth-dpop-03.txt
>  
> At the end of my previous email I mentioned that you can achieve some of the 
> same aims as DPoP without needing a PoP mechanism at all. This email is that 
> follow-up.
>  
> OAuth is agnostic about the format of access tokens and many vendors support 
> either random string database tokens or JWTs. But there are other choices for 
> access token format, some of which have more interesting properties. In 
> particular, Google proposed Macaroons a few years ago as a "better cookie" 
> [1] and I think they systematically address many of these issues when used as 
> an access token format.
>  
> For those who aren't familiar with them, Macaroons are a bit like a HS256 
> JWT. They have a location (a bit like the audience in a JWT) and an 
> identifier (an arbitrary string) and then are signed with HMAC-SHA256 using a 
> secret key. (There's no claims set or headers - they are very minimal). In 
> this case the secret key would be owned by the AS and used to sign 
> macaroon-based access tokens. Validating the token would be done via token 
> introspection at the AS.
>  
> The clever bit is that anybody at all can append "caveats" to a macaroon at 
> any time, but nobody can remove one once added. Caveats are restrictions on 
> the use of a token - they only ever reduce the authority granted by the 
> token, never expand it. The AS can validate the token and all the caveats 
> with its secret key. So, for example, if an access token was a macaroon then 
> the client could append a caveat to reduce the scope, or reduce the expiry 
> time, or reduce the audience, and so on.
>  
> The really clever bit is that the client can keep a copy of the original 
> token and create restricted versions to send to different resource servers. 
> Because HMAC is very cheap, the client can even do this before each and every 
> request. (This is what the original paper refers to as "contextual caveats"). 
> This means that a client can be issued a single access token from the AS with 
> broad scope and applicable to many different RS and can then locally create 
> restricted copies for each individual RS.
>  
> The relevance to DPoP is that the client could even append caveats equivalent 
> to "htm" and "htu" just before sending the access token to the RS, and maybe 
> add an "exp" for 5 seconds in the future, reduce the scope, and so on:
>  
>   newAccessToken = accessToken.withCaveats({
>     exp: now + 5seconds,
>     scope: "a b",
>     htm: "POST",
>     ...
>   });
>   httpClient.post(data, Authorization: Bearer newAccessToken);
>  
> Note that the client doesn't need anything extra here - no keys, extra tokens 
> etc. They just have the access token and a macaroon library.
>  
> The RS will see an opaque access token, send it to the AS

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Richard Backman, Annabelle
Macaroons are built on proof of possession. In order to add a caveat to a 
macaroon, the sender has to have the HMAC of the macaroon without their caveat. 
The distinctive property of macaroons as I see it is that they eliminate the 
need for key negotiation with the bearer. How much value this has over the AS 
just returning a symmetric key alongside the access token in the token request, 
I’m not sure. There are key distribution challenges with that if you are doing 
validation at the RS, but validation at the RS using either approach means 
you’ve lost protection against replay by the RS. This brings us back to a core 
question: what threats are in scope for DPoP, and in what contexts?
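For readers less familiar with the construction, the chaining property works roughly as follows. This is a minimal Python sketch of the HMAC chain only, not the actual macaroon serialization format:

```python
import hmac
import hashlib

def mint(root_key: bytes, identifier: bytes) -> bytes:
    # The AS derives the initial tag from its secret root key.
    return hmac.new(root_key, identifier, hashlib.sha256).digest()

def add_caveat(tag: bytes, caveat: bytes) -> bytes:
    # Anyone holding the current tag can append a caveat. The old tag
    # is consumed as the key, so caveats can be added but never removed.
    return hmac.new(tag, caveat, hashlib.sha256).digest()

def verify(root_key: bytes, identifier: bytes,
           caveats: list[bytes], tag: bytes) -> bool:
    # Only the root key holder (the AS) can recompute the full chain.
    t = mint(root_key, identifier)
    for c in caveats:
        t = add_caveat(t, c)
    return hmac.compare_digest(t, tag)
```

A holder of the caveated tag cannot recover the pre-caveat tag, which is exactly the point above: appending requires possession of the current HMAC, and caveats can only ever narrow the token.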

–
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of Neil Madden 

Date: Friday, November 22, 2019 at 4:40 AM
To: Brian Campbell 
Cc: oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

At the end of my previous email I mentioned that you can achieve some of the 
same aims as DPoP without needing a PoP mechanism at all. This email is that 
follow-up.

OAuth is agnostic about the format of access tokens and many vendors support 
either random string database tokens or JWTs. But there are other choices for 
access token format, some of which have more interesting properties. In 
particular, Google proposed Macaroons a few years ago as a "better cookie" [1] 
and I think they systematically address many of these issues when used as an 
access token format.

For those who aren't familiar with them, Macaroons are a bit like a HS256 JWT. 
They have a location (a bit like the audience in a JWT) and an identifier (an 
arbitrary string) and then are signed with HMAC-SHA256 using a secret key. 
(There's no claims set or headers - they are very minimal). In this case the 
secret key would be owned by the AS and used to sign macaroon-based access 
tokens. Validating the token would be done via token introspection at the AS.

The clever bit is that anybody at all can append "caveats" to a macaroon at any 
time, but nobody can remove one once added. Caveats are restrictions on the use 
of a token - they only ever reduce the authority granted by the token, never 
expand it. The AS can validate the token and all the caveats with its secret 
key. So, for example, if an access token was a macaroon then the client could 
append a caveat to reduce the scope, or reduce the expiry time, or reduce the 
audience, and so on.

The really clever bit is that the client can keep a copy of the original token 
and create restricted versions to send to different resource servers. Because 
HMAC is very cheap, the client can even do this before each and every request. 
(This is what the original paper refers to as "contextual caveats"). This means 
that a client can be issued a single access token from the AS with broad scope 
and applicable to many different RS and can then locally create restricted 
copies for each individual RS.

The relevance to DPoP is that the client could even append caveats equivalent 
to "htm" and "htu" just before sending the access token to the RS, and maybe 
add an "exp" for 5 seconds in the future, reduce the scope, and so on:

  newAccessToken = accessToken.withCaveats({
    exp: now + 5seconds,
    scope: "a b",
    htm: "POST",
    ...
  });
  httpClient.post(data, Authorization: Bearer newAccessToken);

Note that the client doesn't need anything extra here - no keys, extra tokens 
etc. They just have the access token and a macaroon library.

The RS will see an opaque access token, send it to the AS for introspection. 
The AS however, will see and validate the new caveats on the token and return 
an introspection response with the restricted scope and expiry time, and return 
the htm/htu restrictions that the RS can then enforce.

For clients this is transparent until they want to take advantage of it and 
then they can just use an off-the-shelf macaroon library. For the RS it is also 
completely transparent. All the (relatively small) complexity lives in the AS, 
which just has to be able to produce and verify macaroons and take caveats into 
account when performing token introspection - e.g. the returned scope should be 
the intersection of the original token scope and any scope caveats. But I don't 
think this would be too much effort.

[1]: https://ai.google/research/pubs/pub41892

-- Neil


On 21 Nov 2019, at 06:23, Brian Campbell <bcampb...@pingidentity.com> wrote:

Yeah, suggestions and/or an MTI about algorithm support would probably be 
worthwhile. Perhaps also some defined means of signaling when an unsupported 
algorithm is used along with any other reason a DPoP is invalid or rejected.

There are a lot of tradeoffs in what claims are required and what protections 
are provided etc. The aim of what was chosen was to do just enough to provide 
some reasonable pr

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Neil Madden
At the end of my previous email I mentioned that you can achieve some of the 
same aims as DPoP without needing a PoP mechanism at all. This email is that 
follow-up.

OAuth is agnostic about the format of access tokens and many vendors support 
either random string database tokens or JWTs. But there are other choices for 
access token format, some of which have more interesting properties. In 
particular, Google proposed Macaroons a few years ago as a "better cookie" [1] 
and I think they systematically address many of these issues when used as an 
access token format.

For those who aren't familiar with them, Macaroons are a bit like a HS256 JWT. 
They have a location (a bit like the audience in a JWT) and an identifier (an 
arbitrary string) and then are signed with HMAC-SHA256 using a secret key. 
(There's no claims set or headers - they are very minimal). In this case the 
secret key would be owned by the AS and used to sign macaroon-based access 
tokens. Validating the token would be done via token introspection at the AS.

The clever bit is that anybody at all can append "caveats" to a macaroon at any 
time, but nobody can remove one once added. Caveats are restrictions on the use 
of a token - they only ever reduce the authority granted by the token, never 
expand it. The AS can validate the token and all the caveats with its secret 
key. So, for example, if an access token was a macaroon then the client could 
append a caveat to reduce the scope, or reduce the expiry time, or reduce the 
audience, and so on.

The really clever bit is that the client can keep a copy of the original token 
and create restricted versions to send to different resource servers. Because 
HMAC is very cheap, the client can even do this before each and every request. 
(This is what the original paper refers to as "contextual caveats"). This means 
that a client can be issued a single access token from the AS with broad scope 
and applicable to many different RS and can then locally create restricted 
copies for each individual RS.

The relevance to DPoP is that the client could even append caveats equivalent 
to "htm" and "htu" just before sending the access token to the RS, and maybe 
add an "exp" for 5 seconds in the future, reduce the scope, and so on:

  newAccessToken = accessToken.withCaveats({
    exp: now + 5seconds,
    scope: "a b",
    htm: "POST",
    ...
  });
  httpClient.post(data, Authorization: Bearer newAccessToken);

Note that the client doesn't need anything extra here - no keys, extra tokens 
etc. They just have the access token and a macaroon library.

The RS will see an opaque access token, send it to the AS for introspection. 
The AS however, will see and validate the new caveats on the token and return 
an introspection response with the restricted scope and expiry time, and return 
the htm/htu restrictions that the RS can then enforce. 

For clients this is transparent until they want to take advantage of it and 
then they can just use an off-the-shelf macaroon library. For the RS it is also 
completely transparent. All the (relatively small) complexity lives in the AS, 
which just has to be able to produce and verify macaroons and take caveats into 
account when performing token introspection - e.g. the returned scope should be 
the intersection of the original token scope and any scope caveats. But I don't 
think this would be too much effort.
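As a rough illustration of what that AS-side logic might look like (the caveat names and the "key=value" string format here are invented for the example, not taken from any spec):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Introspection:
    active: bool
    scope: str = ""
    exp: int = 0
    htm: Optional[str] = None
    htu: Optional[str] = None

def introspect(token_scope: str, token_exp: int,
               caveats: list, now: int) -> Introspection:
    # Fold first-party caveats into the introspection response.
    scope = set(token_scope.split())
    exp, htm, htu = token_exp, None, None
    for c in caveats:
        key, _, value = c.partition("=")
        if key == "scope":
            scope &= set(value.split())   # caveats only ever narrow scope
        elif key == "exp":
            exp = min(exp, int(value))    # caveats only ever shorten lifetime
        elif key == "htm":
            htm = value                   # returned for the RS to enforce
        elif key == "htu":
            htu = value
    if now >= exp or not scope:
        return Introspection(active=False)
    return Introspection(True, " ".join(sorted(scope)), exp, htm, htu)
```

The returned scope is the intersection of the original scope and any scope caveats, and the expiry is the minimum, so a caveat can never expand what the AS originally granted.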

[1]: https://ai.google/research/pubs/pub41892 


-- Neil

> On 21 Nov 2019, at 06:23, Brian Campbell  wrote:
> 
> Yeah, suggestions and/or an MTI about algorithm support would probably be 
> worthwhile. Perhaps also some defined means of signaling when an unsupported 
> algorithm is used along with any other reason a DPoP is invalid or rejected.  
> 
> There are a lot of tradeoffs in what claims are required and what protections 
> are provided etc. The aim of what was chosen was to do just enough to provide 
> some reasonable protections against reuse or use in a different context while 
> being simple to implement and deploy.
> 
> 
> On Wed, Nov 20, 2019 at 6:34 AM Neil Madden wrote:
> Thanks for the reply, Brian. 
> 
> Collecting my thoughts up here rather than responding blow by blow.
> 
> Public key signatures are simpler in some respects, more complex in others. 
> There are currently 10 public key JWS signature schemes defined 
> (ES256/384/512, RS256/384/512, PS256/384/512, EdDSA) - does an RS potentially 
> have to support them all? If not, how do they negotiate algorithm support 
> with the client?
> 
> On the other hand, the ECDH scheme I proposed can be implemented by adapting 
> an existing ECDH-ES encryption support in a JWT library. For example, I 
> discovered while playing with this that our own internal library can 
> implement the full flow I described entirely via the existing public API [1], 
> so it's not necessarily as complex as it first looks. I even 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-20 Thread Brian Campbell
Yeah, suggestions and/or an MTI about algorithm support would probably be
worthwhile. Perhaps also some defined means of signaling when an
unsupported algorithm is used along with any other reason a DPoP is invalid
or rejected.

There are a lot of tradeoffs in what claims are required and what
protections are provided etc. The aim of what was chosen was to do just
enough to provide some reasonable protections against reuse or use in a
different context while being simple to implement and deploy.


On Wed, Nov 20, 2019 at 6:34 AM Neil Madden 
wrote:

> Thanks for the reply, Brian.
>
> Collecting my thoughts up here rather than responding blow by blow.
>
> Public key signatures are simpler in some respects, more complex in
> others. There are currently 10 public key JWS signature schemes defined
> (ES256/384/512, RS256/384/512, PS256/384/512, EdDSA) - does an RS
> potentially have to support them all? If not, how do they negotiate
> algorithm support with the client?
>
> On the other hand, the ECDH scheme I proposed can be implemented by
> adapting an existing ECDH-ES encryption support in a JWT library. For
> example, I discovered while playing with this that our own internal library
> can implement the full flow I described entirely via the existing public
> API [1], so it's not necessarily as complex as it first looks. I even
> knocked up a from-scratch implementation in WebCrypto (JavaScript) without
> too much code [2].
>
> But I admit that using an existing JWT library to sign a JWT with an
> existing algorithm is even easier, and that counts for a lot. Perhaps we
> can make concrete suggestions/requirements about algorithm support? e.g.
> "The RS MUST support RS256 and SHOULD support EdDSA. Other algorithms MAY
> be supported."
>
> With regards to replay protection, I think there are at least two
> reasonable positions:
>
> 1. We assume that TLS is secure and don't try to defend against any
> compromise at that level. (Clearly none of the TLS-based PoP mechanisms
> survive if TLS is compromised, by definition). In this case the main attack
> to defend against is a malicious RS replaying the access token elsewhere.
> Simply signing the origin of the RS would be enough to prevent this attack,
> while letting the client reuse the same JWT for many requests (and the RS
> to cache the JWT validation). None of "jti", "htu", or "htm" seem relevant
> to this model.
>
> 2. We don't assume that TLS is secure (or it's not fully end-to-end) and
> try to provide some defense in depth against a MitM attacker replaying a
> token against the same RS. There is a graduated series of steps you can
> take here, depending on how much you want to prevent this:
>a. The DPoP token can be replayed for arbitrary requests to the same RS
> but has a short time limit (e.g., exp claim or RS-enforced max lifetime
> from iat)
>b. The DPoP token can be replayed for the same request (htu/htm claims)
>c. The DPoP token can't be replayed at all - either because of jti
> blacklisting on the RS or a challenge-response protocol on each request.
>
> (There are also variants such as including a hash of the request
> body/headers, or encoding an ETag into the JWT).
>
> I think either are reasonable design goals, but aiming for 2 adds more
> value. I think aiming for 2a is a reasonable default baseline that allows
> the client to reuse a DPoP token for a few requests, reducing the cost of
> the signature (and the RS can cache the validated JWT). Support for 2b or
> 2c can then be listed as optional additions.
>
> PS - 2a/2b can be achieved without PoP. I'll save that for another email
> in the next few days though.
>
> [1]: https://gist.github.com/NeilMadden/685ea66fb79d37a50c2310f853bd9496
> [2]: https://gist.github.com/NeilMadden/70e1b232a3b273de02ed731eb36ec4a7
>
>
> -- Neil
>
> On 19 Nov 2019, at 07:43, Brian Campbell 
> wrote:
>
>
>
> On Thu, Nov 14, 2019 at 7:20 PM Neil Madden 
> wrote:
>
>> I can't attend Singapore either in person or remotely due to other
>> commitments. I broadly support adoption of this draft, but I have some
>> comments/suggestions about it.
>>
>
> Thanks Neil. And sorry to hear that you won't be in Singapore. This kind
> of stuff is definitely more easily discussed in person (for me anyway). But
> I'll try and comment on your comments here as best I can. I also plan to
> also mention them in the Wednesday and/or Thursday presentation.
>
>
>
>> Section 2 lists the main objective as being to harden against
>> compromised/malicious AS or RS, which may attempt to replay captured tokens
>> elsewhere. While this is a good idea, a casual reader might wonder why a
>> simple audience claim in the access token/introspection response is not
>> sufficient to prevent this. Because interactions between the client and RS
>> are supposed to be over TLS, is the intended threat model one in which
>> these protections have broken down? ("counterfeit" in the description
>> suggests this). Or is the motivation that clients 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-19 Thread Neil Madden
Thanks for the reply, Brian. 

Collecting my thoughts up here rather than responding blow by blow.

Public key signatures are simpler in some respects, more complex in others. 
There are currently 10 public key JWS signature schemes defined (ES256/384/512, 
RS256/384/512, PS256/384/512, EdDSA) - does an RS potentially have to support 
them all? If not, how do they negotiate algorithm support with the client?

On the other hand, the ECDH scheme I proposed can be implemented by adapting an 
existing ECDH-ES encryption support in a JWT library. For example, I discovered 
while playing with this that our own internal library can implement the full 
flow I described entirely via the existing public API [1], so it's not 
necessarily as complex as it first looks. I even knocked up a from-scratch 
implementation in WebCrypto (JavaScript) without too much code [2].

But I admit that using an existing JWT library to sign a JWT with an existing 
algorithm is even easier, and that counts for a lot. Perhaps we can make 
concrete suggestions/requirements about algorithm support? e.g. "The RS MUST 
support RS256 and SHOULD support EdDSA. Other algorithms MAY be supported." 
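For example, an RS could enforce such a policy by inspecting the protected header before any signature processing. This is only a sketch: the allowlist contents are hypothetical, and a real deployment would then verify the signature with a proper JWS library.

```python
import base64
import json

ALLOWED_ALGS = {"RS256", "EdDSA"}  # hypothetical per-RS policy

def alg_acceptable(jws: str) -> bool:
    # Decode only the protected header; reject "none" and any algorithm
    # outside the allowlist before touching the signature.
    header_b64 = jws.split(".", 1)[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64url padding
    try:
        header = json.loads(base64.urlsafe_b64decode(header_b64))
    except ValueError:
        return False
    return header.get("alg") in ALLOWED_ALGS
```

Rejecting up front also gives a natural hook for the error signaling mentioned above: the RS knows, before verification, that the algorithm is unsupported.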

With regards to replay protection, I think there are at least two reasonable 
positions:

1. We assume that TLS is secure and don't try to defend against any compromise 
at that level. (Clearly none of the TLS-based PoP mechanisms survive if TLS is 
compromised, by definition). In this case the main attack to defend against is 
a malicious RS replaying the access token elsewhere. Simply signing the origin 
of the RS would be enough to prevent this attack, while letting the client 
reuse the same JWT for many requests (and the RS to cache the JWT validation). 
None of "jti", "htu", or "htm" seem relevant to this model.

2. We don't assume that TLS is secure (or it's not fully end-to-end) and try to 
provide some defense in depth against a MitM attacker replaying a token against 
the same RS. There is a graduated series of steps you can take here, depending 
on how much you want to prevent this:
   a. The DPoP token can be replayed for arbitrary requests to the same RS but 
has a short time limit (e.g., exp claim or RS-enforced max lifetime from iat)
   b. The DPoP token can be replayed for the same request (htu/htm claims)
   c. The DPoP token can't be replayed at all - either because of jti 
blacklisting on the RS or a challenge-response protocol on each request.

(There are also variants such as including a hash of the request body/headers, 
or encoding an ETag into the JWT).

I think either are reasonable design goals, but aiming for 2 adds more value. I 
think aiming for 2a is a reasonable default baseline that allows the client to 
reuse a DPoP token for a few requests, reducing the cost of the signature (and 
the RS can cache the validated JWT). Support for 2b or 2c can then be listed as 
optional additions.
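To sketch what 2a combined with 2b might look like on the RS side (the claim names follow the draft; the 60-second maximum lifetime and the exact-match URI comparison are arbitrary illustrative policy, and signature verification is assumed to have already happened):

```python
import time

MAX_LIFETIME = 60  # seconds the RS accepts a proof after iat (option 2a)

def check_proof(claims: dict, method: str, uri: str,
                now: float = None) -> bool:
    now = time.time() if now is None else now
    # Option 2a: RS-enforced max lifetime from iat.
    iat = claims.get("iat")
    if iat is None or not (iat <= now < iat + MAX_LIFETIME):
        return False
    # Option 2b: bind the proof to this exact request via htm/htu.
    if claims.get("htm") != method or claims.get("htu") != uri:
        return False
    return True
```

Within the lifetime window the client can reuse the same proof for repeated identical requests (and the RS can cache the validated JWT); full 2c would additionally require jti tracking or a challenge-response exchange.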

PS - 2a/2b can be achieved without PoP. I'll save that for another email in the 
next few days though.

[1]: https://gist.github.com/NeilMadden/685ea66fb79d37a50c2310f853bd9496
[2]: https://gist.github.com/NeilMadden/70e1b232a3b273de02ed731eb36ec4a7


-- Neil

> On 19 Nov 2019, at 07:43, Brian Campbell  wrote:
> 
> 
> 
> On Thu, Nov 14, 2019 at 7:20 PM Neil Madden wrote:
> I can't attend Singapore either in person or remotely due to other 
> commitments. I broadly support adoption of this draft, but I have some 
> comments/suggestions about it.
> 
> Thanks Neil. And sorry to hear that you won't be in Singapore. This kind of 
> stuff is definitely more easily discussed in person (for me anyway). But I'll 
> try and comment on your comments here as best I can. I also plan to 
> mention them in the Wednesday and/or Thursday presentation. 
>  
> Section 2 lists the main objective as being to harden against 
> compromised/malicious AS or RS, which may attempt to replay captured tokens 
> elsewhere. While this is a good idea, a casual reader might wonder why a 
> simple audience claim in the access token/introspection response is not 
> sufficient to prevent this. Because interactions between the client and RS 
> are supposed to be over TLS, is the intended threat model one in which these 
> protections have broken down? ("counterfeit" in the description suggests 
> this). Or is the motivation that clients want to get a single broad-scoped 
> access token (for usability/performance reasons) and use it to access 
> multiple resource servers without giving each of them the ability to replay 
> the token to the other servers? Or are we thinking of a phishing-type 
> vulnerability where a general-purpose client might accidentally visit a 
> malicious site which prompts for an access token that the client then blindly 
> goes off and gets? (UMA?) It's not clear to me which of these scenarios is 
> being considered, so it would be good to tighten up this section.
> 
> It is admittedly a bit loose and I agree 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-18 Thread Brian Campbell
On Thu, Nov 14, 2019 at 7:20 PM Neil Madden 
wrote:

> I can't attend Singapore either in person or remotely due to other
> commitments. I broadly support adoption of this draft, but I have some
> comments/suggestions about it.
>

Thanks Neil. And sorry to hear that you won't be in Singapore. This kind of
stuff is definitely more easily discussed in person (for me anyway). But
I'll try and comment on your comments here as best I can. I also plan to
also mention them in the Wednesday and/or Thursday presentation.


> Section 2 lists the main objective as being to harden against
> compromised/malicious AS or RS, which may attempt to replay captured tokens
> elsewhere. While this is a good idea, a casual reader might wonder why a
> simple audience claim in the access token/introspection response is not
> sufficient to prevent this. Because interactions between the client and RS
> are supposed to be over TLS, is the intended threat model one in which
> these protections have broken down? ("counterfeit" in the description
> suggests this). Or is the motivation that clients want to get a single
> broad-scoped access token (for usability/performance reasons) and use it to
> access multiple resource servers without giving each of them the ability to
> replay the token to the other servers? Or are we thinking of a
> phishing-type vulnerability where a general-purpose client might
> accidentally visit a malicious site which prompts for an access token that
> the client then blindly goes off and gets? (UMA?) It's not clear to me
> which of these scenarios is being considered, so it would be good to
> tighten up this section.
>

It is admittedly a bit loose and I agree it'd be good to tighten it up. But
part of why it's loose is that it tries to offer some protections for all
those scenarios and more such as a general lost/stolen token. It's
effectively trying to provide as many of the protections/assurances you'd
get with TLS-based PoP mechanisms (like OAuth MTLS or Token Binding) as can
be done at the HTTP application layer. That can't realistically be exactly
the same but can
maybe be kinda close while actually being accessible and implementable
because it's all done at the application layer. There are trade-offs, of
course, and the document writers have endeavored to find a good balance in
the trade-off decisions we've made. But that doesn't mean they are
necessarily the right decisions or are closed to discussion. To the casual
reader I would say that it turns out that getting an appropriate simple
audience claim into an access token isn't nearly as simple as it might
seem. And while it will prevent RS to RS replay (as long as both RSs aren't
legit audiences) it doesn't help with preventing the use of tokens stolen
or leaked by other means (including for refresh tokens issued to public
clients).
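Brian's point about audience restriction can be made concrete. Below is a minimal, stdlib-only sketch (illustrative names, not from the draft; a real RS would use a JOSE library) of an RS checking that its own identifier appears in the `aud` claim of an HS256-signed access token:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_token(claims: dict, key: bytes) -> str:
    # HS256 JWT, for illustration only
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def rs_accepts(token: str, key: bytes, my_audience: str) -> bool:
    """Verify the signature, then check that this RS appears in `aud`."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return False
    aud = json.loads(b64url_decode(payload_b64)).get("aud", [])
    aud = [aud] if isinstance(aud, str) else aud
    return my_audience in aud
```

An RS at `https://rs2.example` rejects a token minted for `https://rs1.example` - but, as noted above, this does nothing against a token stolen or leaked on its way to a legitimate audience.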



> Another potential motivation is for mobile apps. Some customers of ours
> would like to tie access/refresh tokens to private key material generated
> on a secure element in the device, that can only be accessed after local
> biometric authentication (e.g. TouchID/FaceID on iOS). I have suggested
> using mTLS cert-bound tokens for this, but have heard some pushback due to
> the difficulty of configuring support for client certs across diverse
> infrastructure. A simple JWT-based solution like DPoP could fill this need.
>

It's maybe not stated in the draft but this kind of thing is among the
objectives (in my mind anyway).



> My main concerns with the draft though are about efficiency and
> scalability of the proposed approach:
>
> 1. The requirement to use public key signatures, along with the
> anti-replay nonce, means that the RS is required to perform an expensive
> signature verification check on every request. That is not going to scale
> up well. While there are more efficient schemes like Ed25519 now, these are
> still typically an order of magnitude slower than HMAC and the latency and
> CPU overhead is likely to be a non-starter for many APIs (especially when
> you're billed by CPU usage). Public key signatures are also notoriously
> fragile (see e.g. the history of nonce reuse/leakage vulnerabilities in
> ECDSA or
>

Yes, asymmetric is more processing intensive than symmetric. But if you
take away the distributed replay check (see next response), it will scale
out just fine. I'm not so sure latency is a real issue here - while these
operations are an order of magnitude slower we're still talking about times
that are not perceptible to a human. CPU usage/cost is a part of a
trade-off for the simplicity afforded by public/private keys.  And it is
significantly simpler. The design you sketched out is admittedly quite
clever but it's not even in the same ballpark with respect to complexity.
And, as you pointed out, the other suggestion around symmetric keys has
rather different security properties while still adding complexity. Adding
symmetric key support isn't something 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-17 Thread Torsten Lodderstedt


> Am 17.11.2019 um 04:06 schrieb David Waite :
> 
> You’ll be audience-scoping either way, so it may make sense to use a 
> symmetric algorithm for both. It starts to look like kerberos in HTTP and 
> JSON when you squint.

Even if audience restriction is a recommended practice, I'm not fully sure it 
is broadly established.

As you pointed out, symmetric keys require RS-specific access tokens, i.e. the 
client needs to tell the AS which RS it is going to use the token at. Using 
resource indicators or RAR?

This reminds me of the simplicity of the approach based on asymmetric crypto 
with regard to programming model and key management.

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-16 Thread David Waite
On Nov 15, 2019, at 8:32 AM, Paul Querna  wrote:
> Supporting `HS256` or similar signing of the proof would be one way to
> reduce the CPU usage concerns.

There are a number of other potential asymmetrically signed messages, such as 
the access token. Is the assumption that these are also symmetrically 
protected, or that the cost here is amortized by caching?

If you are changing either your access tokens or DPoP proofs to use symmetric 
keys, you want to limit the number of parties who know that secret to the 
client, AS, and a single resource server. You’ll be audience-scoping either 
way, so it may make sense to use a symmetric algorithm for both. It starts to 
look like Kerberos in HTTP and JSON when you squint.

> 
> The challenge seems to be getting the symmetric key to the RS in a
> distributed manner.

Yes, you need the same infrastructure for HMAC and AEAD in this case.

> 
> This use case could be scoped as a separate specification if that
> makes the most sense, building upon DPoP.
> 
> Throwing out a potential scheme here:
> 
> - **5.  Token Request (Binding Tokens to a Public Key)**: The request
> from the client is unchanged. If the AS decides this access token
> should use a symmetric key it:
> 1) Returns the `token_type` as `DPoP+symmetric`
> 2) Adds a new field to the token response: `token_key`.  This should
> be a symmetric key in JWK format, encrypted to the client's DPoP-bound
> asymmetric key using JWE.  This means the client still must be able to
> decrypt this JWE before proceeding using its private key.

If you encrypt the key to the resource, then there is a risk that the key is 
retained while unprotected in memory. ECDH may be better here, although then we 
are making assumptions on the types of keys being used.

> - **6.  Resource Access (Proof of Possession for Access Tokens)**: The
> DPoP Proof from the client would use the `token_key` issued by the AS.
> 
> - **7.  Public Key Confirmation**: Instead of the `jkt` claim, add a
> new `cnf` claim type: JSON Encrypted Key or  `jek`.  The `jek` claim
> would be a JWE-encrypted value, containing the symmetric key used for
> signing the `DPoP` proof header in the RS request.   The JWE
> relationship between the AS and RS would be outside the scope of the
> specification -- many AS's have registries of RS and their
> capabilities, and might agree upon a symmetric key distribution system
> ahead of time, in order to decrypt the `jek` confirmation.

If you are negotiating a symmetric key with the RS for access tokens (again, 
why not at this point, just call it a JOSE Service Ticket) you can just use 
AEAD and not bother with wrapping/encrypting the client-negotiated key within 
the access token.

> I think this scheme would change RS validation of a DPoP-bound proof
> from one asymmetric key verify, into two symmetric key operations: one
> signature verify on the DPoP token, and potentially one symmetric
> decrypt on the `jek` claim.

-DW



Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-15 Thread Neil Madden
A few comments below.

On 15 Nov 2019, at 15:32, Paul Querna  wrote:
> 
> Echoing Neil's concerns, I posted this to the issue tracker:
> https://github.com/danielfett/draft-dpop/issues/56
> 
> I've been talking to several large scale API operators about DPoP.  A
> consistent concern is the CPU cost of doing an asymmetric key
> validation on every HTTP Request at the RS.
> 
> Micro-benchmarks on this are easy to make, and lower in the
> protocol stack, e.g. TLS, there is only one asymmetric operation before
> a symmetric key is exchanged, so maybe DPoP as it stands would be hard
> to deploy.

Right, which was the intention of my proposed alternative scheme: the client 
and RS do a single ECDH operation each and then can reuse the derived HMAC key 
for many requests.

> 
> I think the primary concern is at the RS level of validation.
> Depending on the RS, the "work" of a request can be highly variable,
> so adding a single asymmetric key operation could be a significant
> portion of CPU usage at scale.
> 
> In my discussions, at the AS layer, there is a general belief that the
> request rate and overhead of validating a DPoP signature can be OK.
> (I work at Okta -- the AS CPU usage is important too, but we already
> do a bunch of "other" expensive work on token requests, such that
> adding one more EdDSA validate is a rounding error in the short term).
> 
> Supporting `HS256` or similar signing of the proof would be one way to
> reduce the CPU usage concerns.
> 
> The challenge seems to be getting the symmetric key to the RS in a
> distributed manner.
> 
> This use case could be scoped as a separate specification if that
> makes the most sense, building upon DPoP.
> 
> Throwing out a potential scheme here:
> 
> - **5.  Token Request (Binding Tokens to a Public Key)**: The request
> from the client is unchanged. If the AS decides this access token
> should use a symmetric key it:
> 1) Returns the `token_type` as `DPoP+symmetric`
> 2) Adds a new field to the token response: `token_key`.  This should
> be a symmetric key in JWK format, encrypted to the client's DPoP-bound
> asymmetric key using JWE.  This means the client still must be able to
> decrypt this JWE before proceeding using its private key.
> 
> - **6.  Resource Access (Proof of Possession for Access Tokens)**: The
> DPoP Proof from the client would use the `token_key` issued by the AS.
> 
> - **7.  Public Key Confirmation**: Instead of the `jkt` claim, add a
> new `cnf` claim type: JSON Encrypted Key or  `jek`.  The `jek` claim
> would be a JWE-encrypted value, containing the symmetric key used for
> signing the `DPoP` proof header in the RS request.   The JWE
> relationship between the AS and RS would be outside the scope of the
> specification -- many AS's have registries of RS and their
> capabilities, and might agree upon a symmetric key distribution system
> ahead of time, in order to decrypt the `jek` confirmation.

If the RS has a client secret to access the token introspection endpoint, this 
could be reused in this case.

Whether this scheme is acceptable depends on clarifying the threat model that 
DPoP is intended to address. If we are only concerned about a fake RS (without 
any genuine relationship with the AS) and a client being tricked into sending 
it an access token, then this approach would be fine. The fake RS is unable to 
obtain the HMAC key and so all it can do is try and replay the access token and 
DPoP token somewhere else (which should be prevented by the claims in the DPoP 
token).

But if we are concerned about a potentially malicious RS that *does* have valid 
credentials (but is not the RS that the client thinks it is, or is a genuine RS 
that has been compromised), then this scheme fails because the malicious RS 
learns the HMAC key and then can use it to create any DPoP proofs that it 
wants, so the access token is completely compromised.

In the ECDH solution I proposed, a unique key is derived for each RS and the 
hostname of that RS is included in the key derivation. Assuming the client 
doesn't make a mistake when including this information then this means that the 
RS does not ever learn a HMAC key that is valid for any other server and so is 
unable to make any forgeries.
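The domain-separation argument above can be sketched with a stdlib HKDF (RFC 5869). The ECDH exchange itself is assumed to have already produced `shared_secret`; the function names and labels are illustrative, not from any draft:

```python
import hashlib, hmac

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 extract-then-expand; one expand block suffices for <= 32 bytes
    prk = hmac.new(salt, secret, hashlib.sha256).digest()
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:length]

def derive_rs_hmac_key(shared_secret: bytes, rs_hostname: str) -> bytes:
    """Per-RS HMAC key: mixing the hostname into the KDF means an RS only
    ever learns a key bound to itself, so it cannot forge proofs for any
    other server."""
    return hkdf_sha256(shared_secret, salt=b"dpop-ecdh-hmac",
                       info=rs_hostname.encode())
```

Two resource servers given the same ECDH output end up with unrelated HMAC keys, which is the property that makes a compromised RS harmless to its peers.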

> 
> I think this scheme would change RS validation of a DPoP-bound proof
> from one asymmetric key verify, into two symmetric key operations: one
> signature verify on the DPoP token, and potentially one symmetric
> decrypt on the `jek` claim.

This does mean that the AS either needs to know ahead of time which RS will 
receive the access token (so that it can pre-encrypt the key for them), or else 
it needs to keep the symmetric DPoP keys around in a recoverable form - which 
creates a potential risk if clients reuse keys for multiple access tokens.

-- Neil


Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-15 Thread Paul Querna
Echoing Neil's concerns, I posted this to the issue tracker:
https://github.com/danielfett/draft-dpop/issues/56

I've been talking to several large scale API operators about DPoP.  A
consistent concern is the CPU cost of doing an asymmetric key
validation on every HTTP Request at the RS.

Micro-benchmarks on this are easy to make, and lower in the
protocol stack, e.g. TLS, there is only one asymmetric operation before
a symmetric key is exchanged, so maybe DPoP as it stands would be hard
to deploy.

I think the primary concern is at the RS level of validation.
Depending on the RS, the "work" of a request can be highly variable,
so adding a single asymmetric key operation could be a significant
portion of CPU usage at scale.

In my discussions, at the AS layer, there is a general belief that the
request rate and overhead of validating a DPoP signature can be OK.
(I work at Okta -- the AS CPU usage is important too, but we already
do a bunch of "other" expensive work on token requests, such that
adding one more EdDSA validate is a rounding error in the short term).

Supporting `HS256` or similar signing of the proof would be one way to
reduce the CPU usage concerns.

The challenge seems to be getting the symmetric key to the RS in a
distributed manner.

This use case could be scoped as a separate specification if that
makes the most sense, building upon DPoP.

Throwing out a potential scheme here:

- **5.  Token Request (Binding Tokens to a Public Key)**: The request
from the client is unchanged. If the AS decides this access token
should use a symmetric key it:
1) Returns the `token_type` as `DPoP+symmetric`
2) Adds a new field to the token response: `token_key`.  This should
be a symmetric key in JWK format, encrypted to the client's DPoP-bound
asymmetric key using JWE.  This means the client still must be able to
decrypt this JWE before proceeding using its private key.

- **6.  Resource Access (Proof of Possession for Access Tokens)**: The
DPoP Proof from the client would use the `token_key` issued by the AS.

- **7.  Public Key Confirmation**: Instead of the `jkt` claim, add a
new `cnf` claim type: JSON Encrypted Key or  `jek`.  The `jek` claim
would be a JWE-encrypted value, containing the symmetric key used for
signing the `DPoP` proof header in the RS request.   The JWE
relationship between the AS and RS would be outside the scope of the
specification -- many AS's have registries of RS and their
capabilities, and might agree upon a symmetric key distribution system
ahead of time, in order to decrypt the `jek` confirmation.

I think this scheme would change RS validation of a DPoP-bound proof
from one asymmetric key verify, into two symmetric key operations: one
signature verify on the DPoP token, and potentially one symmetric
decrypt on the `jek` claim.
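For concreteness, the token response in the sketch above might look like the following. All field names (`DPoP+symmetric`, `token_key`, `jek`) are from this proposal only, not the DPoP draft, and the JWE values are placeholders:

```python
# Hypothetical AS token response for a symmetric DPoP binding,
# following the sketched scheme above (not any published draft).
token_response = {
    "token_type": "DPoP+symmetric",
    # symmetric key (JWK, kty "oct"), JWE-encrypted to the client's
    # DPoP-bound asymmetric key:
    "token_key": "<JWE encrypted to client DPoP public key>",
    # the access token would carry the same key for the RS in its cnf
    # claim, e.g. cnf: {"jek": "<JWE decryptable by the RS>"}, under a
    # pre-agreed AS<->RS key distribution scheme:
    "access_token": "<JWT with cnf.jek>",
    "expires_in": 3600,
}
```

The client decrypts `token_key` with its private key and then signs each DPoP proof with HS256 using that key; the RS recovers the same key from `cnf.jek`.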

On Thu, Nov 14, 2019 at 3:20 AM Neil Madden  wrote:
>
> I can't attend Singapore either in person or remotely due to other 
> commitments. I broadly support adoption of this draft, but I have some 
> comments/suggestions about it.
>
> Section 2 lists the main objective as being to harden against 
> compromised/malicious AS or RS, which may attempt to replay captured tokens 
> elsewhere. While this is a good idea, a casual reader might wonder why a 
> simple audience claim in the access token/introspection response is not 
> sufficient to prevent this. Because interactions between the client and RS 
> are supposed to be over TLS, is the intended threat model one in which these 
> protections have broken down? ("counterfeit" in the description suggests 
> this). Or is the motivation that clients want to get a single broad-scoped 
> access token (for usability/performance reasons) and use it to access 
> multiple resource servers without giving each of them the ability to replay 
> the token to the other servers? Or are we thinking of a phishing-type 
> vulnerability where a general-purpose client might accidentally visit a 
> malicious site which prompts for an access token that the client then blindly 
> goes off and gets? (UMA?) It's not clear to me which of these scenarios is 
> being considered, so it would be good to tighten up this section.
>
> Another potential motivation is for mobile apps. Some customers of ours would 
> like to tie access/refresh tokens to private key material generated on a 
> secure element in the device, that can only be accessed after local biometric 
> authentication (e.g. TouchID/FaceID on iOS). I have suggested using mTLS 
> cert-bound tokens for this, but have heard some pushback due to the 
> difficulty of configuring support for client certs across diverse 
> infrastructure. A simple JWT-based solution like DPoP could fill this need.
>
> My main concerns with the draft though are about efficiency and scalability 
> of the proposed approach:
>
> 1. The requirement to use public key signatures, along with the anti-replay 
> nonce, means that the RS is required to perform an expensive signature 
> verification check 

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-14 Thread Neil Madden
I can't attend Singapore either in person or remotely due to other commitments. 
I broadly support adoption of this draft, but I have some comments/suggestions 
about it.

Section 2 lists the main objective as being to harden against 
compromised/malicious AS or RS, which may attempt to replay captured tokens 
elsewhere. While this is a good idea, a casual reader might wonder why a simple 
audience claim in the access token/introspection response is not sufficient to 
prevent this. Because interactions between the client and RS are supposed to be 
over TLS, is the intended threat model one in which these protections have 
broken down? ("counterfeit" in the description suggests this). Or is the 
motivation that clients want to get a single broad-scoped access token (for 
usability/performance reasons) and use it to access multiple resource servers 
without giving each of them the ability to replay the token to the other 
servers? Or are we thinking of a phishing-type vulnerability where a 
general-purpose client might accidentally visit a malicious site which prompts 
for an access token that the client then blindly goes off and gets? (UMA?) It's 
not clear to me which of these scenarios is being considered, so it would be 
good to tighten up this section.

Another potential motivation is for mobile apps. Some customers of ours would 
like to tie access/refresh tokens to private key material generated on a secure 
element in the device, that can only be accessed after local biometric 
authentication (e.g. TouchID/FaceID on iOS). I have suggested using mTLS 
cert-bound tokens for this, but have heard some pushback due to the difficulty 
of configuring support for client certs across diverse infrastructure. A simple 
JWT-based solution like DPoP could fill this need.

My main concerns with the draft though are about efficiency and scalability of 
the proposed approach:

1. The requirement to use public key signatures, along with the anti-replay 
nonce, means that the RS is required to perform an expensive signature 
verification check on every request. That is not going to scale up well. While 
there are more efficient schemes like Ed25519 now, these are still typically an 
order of magnitude slower than HMAC and the latency and CPU overhead is likely 
to be a non-starter for many APIs (especially when you're billed by CPU usage). 
Public key signatures are also notoriously fragile (see e.g. the history of 
nonce reuse/leakage vulnerabilities in ECDSA or 

2. The advice for the RS to store a set of previously used nonces to prevent 
replay will also hamper scalability, especially in large deployments where such 
state would need to be replicated to all servers (or use sticky load balancing, 
which comes with its own problems). This violates the statelessness of HTTP, 
and it also potentially breaks idempotency of operations: Think of the case 
where the JWT validation and replay protection is done at an API gateway but 
then the call to the backend API server fails for a transient reason. The 
client (or a proxy/library) cannot simply replay the (idempotent) request in 
this case because it will be rejected by the gateway. It must instead recreate 
the DPoP JWT, incurring additional overheads.

3. Minor: The use of a custom header for communicating the DPoP proof will 
require additional CORS configuration on top of that already done for the 
Authorization header, and so adds a small amount of additional friction for 
adoption. Given that CORS configuration changes often require approval by a 
security team, this may make more of an impact than you'd expect.

It's also not clear to me exactly what threat the anti-replay nonce is 
protecting against. It does nothing against the replay scenario discussed in 
section 2, as I understand it - which really seems to be more of a MitM 
scenario. Given that the connection between the client and the RS is supposed 
to be over TLS, and TLS is already protected against replay attacks, I think 
this part needs to be better motivated given the obvious costs of implementing 
it.
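Point 2 is easy to see in code: the replay check forces each RS (or gateway) to keep mutable state keyed by `jti`. A single-node sketch follows (the injectable clock is just to make the TTL testable); a fleet would need this dictionary replicated or centralized, which is exactly the scaling problem described:

```python
import time

class ReplayCache:
    """Tracks seen DPoP `jti` values for a TTL window on ONE node.
    Sharing this state across a fleet is the hard part."""

    def __init__(self, ttl_seconds: float = 300.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._seen: dict[str, float] = {}

    def check_and_store(self, jti: str) -> bool:
        """Return True if this proof is fresh; False if it is a replay."""
        now = self.clock()
        # evict entries older than the TTL window
        self._seen = {j: t for j, t in self._seen.items()
                      if now - t < self.ttl}
        if jti in self._seen:
            return False
        self._seen[jti] = now
        return True
```

Note also the idempotency problem: once a gateway has recorded a `jti`, a retried (otherwise idempotent) request with the same proof is rejected, so the client must mint a fresh proof for every retry.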

I have a tentative suggestion for an alternative design which avoids these 
problems, but at a cost of potentially more complexity elsewhere. I'll 
summarise it here for consideration:

1. The client obtains an access token in the normal way. When calling the token 
endpoint it provides an EC/OKP public key as the confirmation key to be 
associated with the access/refresh tokens.

2. The first time the client calls an RS it passes its access token in the 
Authorization: Bearer header as normal. (If the RS doesn't support DPoP then 
this would just succeed and no further action is required by the client - 
allowing clients to opportunistically ask for DPoP without needing a priori 
knowledge of RS capabilities).

3. The RS introspects the access token and learns the EC public key associated 
with the access token. As there is no DPoP proof with the access token, the RS 
will generate a challenge in