Re: [OAUTH-WG] [UNVERIFIED SENDER] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-13 Thread Richard Backman, Annabelle
Agreed with keeping DPoP simple, which was why I was asking if the proposal 
could indicate it was targeting some of these other use cases.

It's clear from the feedback that the current draft does not clearly express 
these use cases. There is overlap with DPoP – on a technical level, Message 
Signatures ought to be able to handle DPoP's use cases, albeit with complexity 
in different places. But not vice versa.


I believe the current draft being proposed for adoption is fixed to the same 
HTTP properties that DPoP leverages, and thus appears to target the same use 
cases with a different proof expression.

This is a misconception that should get clarified in the draft. One of the core 
concepts behind Message Signatures is that it defines a standard mechanism for 
signature components of an HTTP message and for communicating which components 
are signed, but it does not dictate which components need to be signed. That 
will vary from deployment to deployment, based on which components are 
semantically meaningful. E.g., an RS that accepts POST requests and receives 
parameters in the request body may require that the `Digest` header field be 
signed, whereas one that only accepts GET requests may only need the URL to be 
signed. The draft defines an `Accept-Signature` header field that can be 
included in a message to indicate which components need to be signed.

Justin's draft introduces a requirement that signatures intended to provide PoP 
for OAuth MUST sign the `Authorization` header field, since that's where the 
access token is. Beyond that, RSes can indicate their API-specific signing 
requirements via `Accept-Signature`. (And/or through out-of-band documentation)
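
To make the component-selection idea concrete, here is a rough sketch of how a signature base is built in the style of the Message Signatures draft. All names, values, and the key are illustrative, and an HMAC stands in for the real asymmetric signature; this is not the draft's normative algorithm, just the shape of it.

```python
import base64, hashlib, hmac

def signature_base(components, sig_params_suffix):
    """Build a Message Signatures-style signature base: one line per
    covered component, then an @signature-params line naming them all."""
    lines = [f'"{name}": {value}' for name, value in components]
    covered = " ".join(f'"{name}"' for name, _ in components)
    lines.append(f'"@signature-params": ({covered}){sig_params_suffix}')
    return "\n".join(lines)

# A deployment that only needs the target URI signed can cover just those
# components; the PoP draft additionally requires the Authorization field.
base = signature_base(
    [
        ("@method", "GET"),
        ("@authority", "rs.example.com"),
        ("@path", "/resource"),
        ("authorization", "Bearer eyJ..."),  # access token (truncated placeholder)
    ],
    ';created=1634156473;keyid="client-key"',
)
sig = base64.b64encode(hmac.new(b"demo-key", base.encode(), hashlib.sha256).digest())
```

The point is that the set of covered components is a per-deployment choice expressed in the signature itself, rather than being fixed by the spec.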

The duplication within the token is also a trade-off: it allows an 
implementation to have a white list of acceptable internal values, if say the 
host and path are rewritten by reverse proxies.

I agree, there are trade-offs. For some, especially those already using JWTs 
all over the place, the simplicity of using a format and libraries they are 
already familiar with may outweigh the risk I described.

FWIW, the Message Signatures approach to the situation you described would be 
for the reverse proxy to add an `X-Forwarded-Host` or similar header field, and 
add a signature over that, the `Host` header field, and the client's signature. 
(Message Signatures supports multiple signatures on one message) I like this 
approach as the signatures are guaranteed to be validated against the same data 
the application uses. (Since there's only one copy of the data)
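
For illustration, a toy sketch of the proxy countersigning idea, with purely hypothetical header values (real messages would carry properly encoded Signature-Input/Signature structured fields and real signature bytes):

```python
# Hypothetical inbound request as seen by the reverse proxy.
request = {
    "Host": "internal.example.net",
    "Signature-Input": 'client=("@method" "@authority" "@path");keyid="client-key"',
    "Signature": "client=:Y2xpZW50LXNpZw==:",
}

def proxy_countersign(req, external_host):
    # The proxy records the externally visible host, then adds its own
    # signature covering X-Forwarded-Host, Host, and the client's
    # signature (Message Signatures can cover another signature by
    # referencing it as a component, e.g. "signature";key="client").
    req = dict(req)
    req["X-Forwarded-Host"] = external_host
    req["Signature-Input"] += ', proxy=("x-forwarded-host" "host" "signature";key="client");keyid="proxy-key"'
    req["Signature"] += ", proxy=:cHJveHktc2ln:"
    return req
```

The application behind the proxy validates both signatures against the one copy of the data it actually processes.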

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 13, 2021, at 11:41 AM, David Waite 
mailto:da...@alkaline-solutions.com>> wrote:

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



On Oct 13, 2021, at 12:26 PM, Richard Backman, Annabelle 
mailto:richa...@amazon.com>> wrote:

Those issues that could be addressed without completely redesigning DPoP have 
been discussed within the Working Group multiple times. (See quotes and meeting 
notes references in my previous message) The authors have pushed back on 
extending DPoP to cover additional use cases due to a desire to keep DPoP 
simple and lightweight. I don't begrudge them that. I think it's reasonable to 
have a "dirt simple" solution, particularly for SPAs given the relative 
limitations of the browser environment.

Other issues are inherent to fundamental design choices, such as the use of JWS 
to prove possession of the key. E.g., you cannot avoid the data duplication 
issue since a JWS signature only covers a specific serialization of the JWT 
header and body.
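
A minimal sketch of why the duplication is structural (HS256 only to keep it stdlib-only; a real DPoP proof uses an asymmetric key):

```python
import base64, hashlib, hmac, json

def b64url(b):
    return base64.urlsafe_b64encode(b).rstrip(b"=")

# A DPoP-style proof duplicates request data (htm/htu) inside the JWT,
# because the JWS signature can only cover this one serialization of
# header.payload -- it cannot cover the HTTP message itself.
header = {"typ": "dpop+jwt", "alg": "HS256"}
payload = {"htm": "GET", "htu": "https://rs.example.com/resource", "iat": 1634156473}
signing_input = b64url(json.dumps(header).encode()) + b"." + b64url(json.dumps(payload).encode())
tag = hmac.new(b"demo-key", signing_input, hashlib.sha256).digest()
# The RS must separately compare htm/htu against the request it actually
# received; that comparison is where the duplicated copies can diverge.
```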

Agreed with keeping DPoP simple, which was why I was asking if the proposal 
could indicate it was targeting some of these other use cases. I believe the 
current draft being proposed for adoption is fixed to the same HTTP properties 
that DPoP leverages, and thus appears to target the same use cases with a 
different proof expression.

The duplication within the token is also a trade-off: it allows an 
implementation to have a white list of acceptable internal values, if say the 
host and path are rewritten by reverse proxies. It also allows an 
implementation to give richer diagnostic information when receiving 
unacceptable DPoP tokens, which may very well come at runtime from an 
independently-operating portion of an organization reconfiguring intermediaries.

-DW

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Authorization code reuse and OAuth 2.1

2021-10-13 Thread Richard Backman, Annabelle
Even a distributed system with server-side state can still encounter challenges 
maintaining one-time usage, particularly if server-side state itself is 
distributed and eventually consistent. Given all the redirects and user actions 
in OAuth 2.0, a storage layer may reach consistency fast enough for the 
protocol to work while still being slow enough to leave open a window of replay 
opportunity.
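
The replay window can be modeled with a toy two-replica store (all names and the lag value are illustrative):

```python
class EventuallyConsistentStore:
    """Toy two-replica store: a write is visible immediately on the
    replica that took it, and on the other replica only after `lag`."""
    def __init__(self, lag):
        self.lag = lag
        self.used_at = {}  # auth code -> time it was consumed

    def mark_used(self, code, now):
        self.used_at[code] = now

    def is_used(self, code, replica, now):
        t = self.used_at.get(code)
        if t is None:
            return False
        # The lagging replica doesn't see the write until it replicates.
        return replica == "primary" or now >= t + self.lag

store = EventuallyConsistentStore(lag=0.5)
store.mark_used("authcode-1", now=0.0)
# A replay routed to the lagging replica inside the window is accepted:
assert store.is_used("authcode-1", "primary", now=0.1)
assert not store.is_used("authcode-1", "replica", now=0.1)  # replay window
assert store.is_used("authcode-1", "replica", now=1.0)      # consistent now
```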

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 13, 2021, at 1:56 PM, Aaron Parecki 
mailto:aa...@parecki.com>> wrote:


The PKCE spec actually says "Typically, the "code_challenge" and 
"code_challenge_method" values are stored in encrypted form in the "code" 
itself" which I feel like might be a stretch to say that's typical, but this 
scenario was clearly thought of ahead of time. Doing that would enable an AS to 
avoid storing server-side state.
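
A rough sketch of that stateless approach, with an invented key and format (the spec suggests encryption; an HMAC-protected payload makes the same point here):

```python
import base64, hashlib, hmac, json

AS_KEY = b"as-secret-key"  # hypothetical AS-held key

def s256_challenge(verifier):
    # RFC 7636 S256: BASE64URL(SHA256(code_verifier)), unpadded.
    digest = hashlib.sha256(verifier.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def issue_code(code_challenge):
    # Stateless authorization code: the PKCE challenge rides inside the
    # code itself, integrity-protected by the AS.
    body = base64.urlsafe_b64encode(json.dumps({"cc": code_challenge}).encode()).decode()
    tag = hmac.new(AS_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{tag}"

def redeem(code, verifier):
    body, tag = code.rsplit(".", 1)
    if not hmac.compare_digest(tag, hmac.new(AS_KEY, body.encode(), hashlib.sha256).hexdigest()):
        return False
    cc = json.loads(base64.urlsafe_b64decode(body))["cc"]
    return hmac.compare_digest(cc, s256_challenge(verifier))
```

Note that a purely stateless code like this cannot, on its own, enforce one-time use.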

On Wed, Oct 13, 2021 at 1:50 PM Sascha Preibisch 
mailto:saschapreibi...@gmail.com>> wrote:
If the challenge is based on distributed authorization server configurations, 
how would they handle PKCE? I imagine that managing the state for PKCE is not 
less challenging than managing authorization codes on the server side, 
preventing reuse of them.
With that in mind I am not sure if I follow the given argument. I would prefer 
to keep MUST as it is today.


On Wed, 13 Oct 2021 at 13:37, Aaron Parecki 
mailto:aa...@parecki.com>> wrote:
HTTPS, because if that's broken then the rest of OAuth falls apart too.

On Wed, Oct 13, 2021 at 1:36 PM Warren Parad 
mailto:wpa...@rhosys.ch>> wrote:
I feel like I'm missing something: what stops just plain old network sniffing 
and replaying the whole encrypted payload to the AS and getting back a valid 
token?

Warren Parad
Founder, CTO

Secure your user data with IAM authorization as a service. Implement 
Authress.


On Wed, Oct 13, 2021 at 10:33 PM Aaron Parecki 
mailto:aa...@parecki.com>> wrote:
Aside from the "plain" method, the PKCE code verifier never leaves the client 
until it's sent along with the authorization code in the POST request to the 
token endpoint. The only place it can leak at that point is if the 
authorization server itself leaks it. If you have things leaking from the 
authorization server log, you likely have much bigger problems than 
authorization code replays.

Keep in mind that even with the proposed change to drop the requirement of 
authorization codes being one time use, authorization servers are free to 
enforce this still if they want. Authorization code lifetimes are still 
expected to be short lived as well.

Aaron


On Wed, Oct 13, 2021 at 1:25 PM Pieter Kasselman 
mailto:pieter.kassel...@microsoft.com>> wrote:
Aaron, I was curious what prevents an attacker from presenting an Authorization 
Code and a PKCE Code Verifier for a second time if the one time use requirement 
is removed. Is there another countermeasure in PKCE that would prevent it? For 
example, an attacker may obtain the Authorization Code and the Code Verifier 
from a log and replay it.

Cheers

Pieter

From: OAuth mailto:oauth-boun...@ietf.org>> On Behalf 
Of Aaron Parecki
Sent: Wednesday 13 October 2021 18:40
To: Warren Parad 
mailto:40rhosys...@dmarc.ietf.org>>
Cc: Mike Jones 
mailto:40microsoft@dmarc.ietf.org>>;
 oauth@ietf.org
Subject: [EXTERNAL] Re: [OAUTH-WG] Authorization code reuse and OAuth 2.1

Warren, I didn't see you on the interim call, so you might be missing some 
context.

The issue that was discussed is that using PKCE already provides all the 
security benefit that is gained by enforcing single-use authorization codes. 
Therefore, requiring that they are single-use isn't necessary as it doesn't 
provide any additional benefit.

If anyone can think of a possible attack by allowing authorization codes to be 
reused *even with a valid PKCE code verifier* then that would warrant keeping 
this requirement.

---
Aaron Parecki


On Wed, Oct 13, 2021 at 10:27 AM Warren Parad 
mailto:40rhosys...@dmarc.ietf.org>> wrote:
Isn't it better for it to be worded as we want it to be, with the implication 
being that of course it might be difficult to do that, but that AS devs will 
think long and hard about sometimes not denying the request? Even with MUST, 
some AS will still allow reuse of auth codes. Isn't that better than flat out 
saying: sure, there's a valid reason

In other words, how do we think about RFCs? Do they exist to be followed to the 
letter or not at all? Or do they exist to stipulate "this is the way" while 
acknowledging that not everyone will build a solution that holds them as law?


Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-08 Thread Richard Backman, Annabelle
IE, if the success of HTTP Signing is tied to the OAuth WG adopting the draft, 
then Mike's arguments about the WG already doing this work are valid.

It's not the success of HTTP Message Signatures that concerns me here; that 
draft will reach RFC regardless of what the OAuth WG does. But I and others 
would like to use Message Signatures with OAuth 2.0, and would like to have 
some confidence that there will be a standard, interoperable way to do that.

There are other, non-OAuth 2.0 use cases for HTTP Message Signatures. I don't 
see the rationale behind waiting for implementations targeting completely 
unrelated use cases, or from parties that aren't using OAuth 2.0 for 
authorization. How are they relevant?

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 8, 2021, at 1:33 PM, Dick Hardt 
mailto:dick.ha...@gmail.com>> wrote:


On Fri, Oct 8, 2021 at 12:39 PM Richard Backman, Annabelle 
mailto:richa...@amazon.com>> wrote:

Blocking WG development of an OAuth 2.0 profile of Message Signatures behind 
widespread deployment of Message Signatures risks creating a deadlock where the 
WG is waiting for implementations from would-be implementers who are waiting 
for guidance from the WG. Worse, rejecting the draft is likely to further 
discourage these parties from implementing Message Signatures, as it suggests 
the WG is not interested in standardizing its usage with OAuth 2.0.

If the main use case for HTTP Signing is the OAuth WG, then effectively the 
OAuth WG is developing HTTP Signing and it is not really a general purpose 
standard.

IE, if the success of HTTP Signing is tied to the OAuth WG adopting the draft, 
then Mike's arguments about the WG already doing this work are valid.





Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-08 Thread Richard Backman, Annabelle
We need to come up with a better argument such as: This allows a client to 
reduce use of the token to a smaller window to where the signature is also 
valid.

We have one: prevent unauthorized parties from using access tokens. Since a 
client's signing key never needs to leave the client's system, requiring a 
signature over the `Authorization` header raises the bar from "read an access 
token out of a server log" to "compromise the client itself." Additionally, 
Message Signatures defines an optional `nonce` parameter that can be used by 
the recipient of the message to enforce one-time usage of a signature, meaning 
that a signature cannot be replayed in another request, even if the signed 
message components are the same as in the original request.
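
RS-side enforcement of that one-time-use property is a simple replay cache; a minimal sketch (a real deployment would bound and expire the cache):

```python
seen_nonces = set()  # RS-side replay cache

def accept(nonce):
    # One-time use: a replayed signature carries the same nonce and is
    # rejected even when every covered component matches the original.
    if nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True
```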


I reject the use of an additional header to transmit additional authorization 
information. Due to its nature, this information may be conveyed and stored 
throughout a number of systems, all of which would need to handle this 
complexity. As headers and authorization evolve, the introduction and 
replacement of existing headers introduces additional security vulnerabilities.

I'm not sure I follow the concern here. I agree that this adds complexity. 
While there are many use cases for which this complexity/security tradeoff is 
worth it, this will not be the case for every OAuth 2.0 deployment out there.


The draft does not support multiple clients with access to the access token who 
all should be able to provide different client signatures and all be verifiable 
by the RS.

Access tokens are not intended to be used by multiple clients. They may be used 
by multiple hosts/devices, for example if the client is a distributed system 
running on multiple hosts. Different hosts within such a client could use 
different keys, provided they are all identified in the JWKS the client 
registers with the AS.
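
A sketch of the verifier side, with a purely hypothetical JWKS (one key per host, coordinates elided):

```python
# Hypothetical JWKS registered by a distributed client: one key per host.
client_jwks = {"keys": [
    {"kid": "host-a", "kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
    {"kid": "host-b", "kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
]}

def key_for(jwks, kid):
    # Verifier side: any registered key is acceptable, so each host can
    # sign with its own key as long as it sets a matching `kid`.
    return next((k for k in jwks["keys"] if k["kid"] == kid), None)
```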


This forces the RS to lookup two pieces of information in the case of signed 
JWTs, the JWKs from the AS and the JWKs from the client

Yes, however this could be changed. For example, the AS could embed the 
client's public key in the access token JWT, so the RS only has to retrieve the 
AS key. That comes with its own set of tradeoffs, but this is likely not the 
only solution – it's just the one I came up with while writing this email. :)
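
One existing hook for embedding the key is RFC 7800's "cnf" (confirmation) claim; a sketch with illustrative values:

```python
# Access token claims with the client's public key embedded via "cnf",
# so the RS only needs the AS key to validate the token itself.
at_claims = {
    "iss": "https://as.example.com",
    "aud": "https://rs.example.com",
    "exp": 1634160073,
    "cnf": {"jwk": {"kty": "EC", "crv": "P-256", "kid": "client-host-a",
                    "x": "...", "y": "..."}},
}
# The RS would verify the message signature against at_claims["cnf"]["jwk"].
```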

That's the thing about adopting a draft: once the Working Group adopts it, the 
Working Group gets to change it to cover all the use cases and requirements 
that the original author didn't think of. "The draft doesn't cover my use case" 
is really an argument for adoption.

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 7, 2021, at 3:08 AM, Warren Parad 
mailto:wparad=40rhosys...@dmarc.ietf.org>> 
wrote:


I do not support adoption.

While I love the idea to be able to restrict the usage of the access token by 
the client, enabling longer expiry access tokens, and preventing usage to 
perform unexpected actions with that token, the reasons I do not support the 
adoption are as follows:

* The draft depends on another draft and personally not one that I even agree 
with. Saying that the "draft is officially adopted" doesn't justify depending 
on it.
* I reject the use of an additional header to transmit additional authorization 
information. Due to its nature, this information may be conveyed and stored 
throughout a number of systems, all of which would need to handle this 
complexity. As headers and authorization evolve, the introduction and 
replacement of existing headers introduces additional security vulnerabilities.
* The draft does not support multiple clients with access to the access token 
who all should be able to provide different client signatures and all be 
verifiable by the RS.
* This forces the RS to lookup two pieces of information in the case of signed 
JWTs, the JWKs from the AS and the JWKs from the client
* The argument in the introduction is flawed

Bearer tokens are simple to implement but also have the significant security 
downside of allowing anyone who sees the access token to use that token.

This is not a complete argument because I can replace the words in the sentence 
and justify that HTTPSig should never be used by the same argument:
HTTPSig tokens are incredibly complex to implement but also have the 
significant security downside of allowing anyone who sees the access token and 
signature to use that token.

We need to come up with a better argument such as: This allows a client to 
reduce use of the token to a smaller window to where the signature is also 
valid.
That would allow us to better focus on the value that the RFC would provide, 
rather than getting stuck with arbitrary implementation of another RFC draft as 
it would apply to OAuth.




Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-08 Thread Richard Backman, Annabelle
I agree that Token Binding is not an experience we want to repeat, and I 
understand having a "once bitten, twice shy" reaction to this. However, the 
circumstances that led to Token Binding's failure do not apply to Message 
Signatures. Token Binding required changes in the user agent, meaning that an 
AS/RS/client could only use Token Binding if an unrelated party – the browser 
vendor – had already deployed support for it. This is not the case for Message 
Signatures. Any service can use Message Signatures, as long as they can 
convince their clients to use it. (And vice versa)

Regarding adoption, bear in mind that Message Signatures is not a complete 
security solution (by design) – it is a building block, intended to be used 
within some higher level authorization protocol. (e.g., OAuth 2.0) I'd expect 
that most parties that use OAuth 2.0 today and are interested in Message 
Signatures will hold off on implementing the latter until the WG tells them how 
to use it with their existing OAuth 2.0 deployment. Blocking WG development of 
an OAuth 2.0 profile of Message Signatures behind widespread deployment of 
Message Signatures risks creating a deadlock where the WG is waiting for 
implementations from would-be implementers who are waiting for guidance from 
the WG. Worse, rejecting the draft is likely to further discourage these 
parties from implementing Message Signatures, as it suggests the WG is not 
interested in standardizing its usage with OAuth 2.0.


As I mentioned, I think Justin and Annabelle (and anyone else interested) can 
influence HTTP Sig to cover OAuth use cases.

While I appreciate your faith in our abilities, it is difficult to advocate for 
requirements that have not been defined, and harder still when advocating on 
behalf of a Working Group that has said it is not interested in Message 
Signatures.

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 6, 2021, at 2:55 PM, Dick Hardt 
mailto:dick.ha...@gmail.com>> wrote:


Remember token binding? It was a stable draft. The OAuth WG spent a bunch of 
cycles building on top of token binding, but token binding did not get 
deployed, so no token binding for OAuth.

As I mentioned, I think Justin and Annabelle (and anyone else interested) can 
influence HTTP Sig to cover OAuth use cases.

/Dick






On Wed, Oct 6, 2021 at 2:48 PM Aaron Parecki 
mailto:aa...@parecki.com>> wrote:
This actually seems like a great time for the OAuth group to start working on 
this more closely given the relative stability of this draft as well as the 
fact that it is not yet an RFC. This is a perfect time to be able to influence 
the draft if needed, rather than wait for it to be finalized and then have to 
find a less-than-ideal workaround for something unforeseen.

Aaron

On Wed, Oct 6, 2021 at 2:25 PM Dick Hardt 
mailto:dick.ha...@gmail.com>> wrote:
I meant it is not yet adopted as an RFC.

To be clear, I think you are doing great work on the HTTP Sig doc, and a number 
of concerns I have with HTTP signing have been addressed => I just think that 
doing work in the OAuth WG on a moving and unproven draft in the HTTP WG is not 
a good use of resources in the OAuth WG at this time.


On Wed, Oct 6, 2021 at 2:20 PM Justin Richer 
mailto:jric...@mit.edu>> wrote:
> HTTP Sig looks very promising, but it has not been adopted as a draft

Just to be clear, the HTTP Sig draft is an official adopted document of the 
HTTP Working Group since about a year ago. I would not have suggested we depend 
on it for a document within this WG otherwise.

 — Justin

On Oct 6, 2021, at 5:08 PM, Dick Hardt 
mailto:dick.ha...@gmail.com>> wrote:

I am not supportive of adoption of this document at this time.

I am supportive of the concepts in the document. Building upon existing, widely 
used, proven security mechanisms gives us better security.

HTTP Sig looks very promising, but it has not been adopted as a draft, and as 
far as I know, it is not widely deployed.

We should wait to do work on extending HTTP Sig for OAuth until it has 
stabilized and proven itself in the field. We have more than enough work to do 
in the WG now, and having yet-another PoP mechanism is more likely to confuse 
the community at this time.

An argument to adopt the draft would be to ensure HTTP Sig can be used in OAuth.
Given Justin and Annabelle are also part of the OAuth community, I'm sure they 
will be considering how HTTP Sig can apply to OAuth, so the overlap is serving 
us already.

/Dick



Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-08 Thread Richard Backman, Annabelle
I support adoption of this draft as a working group document.

I have seen a few objections due in part to the draft not addressing this or 
that topic. Bear in mind that this is a call for adoption of a draft; this is 
not WGLC. It is generally expected that a draft may be far from complete before 
being adopted. One advantage of adopting an incomplete draft is that it can be 
further developed within the Working Group, with changes subject to the IETF 
consensus process.

—
Annabelle Backman (she/her)
richa...@amazon.com




On Oct 6, 2021, at 2:01 PM, Rifaat Shekh-Yusef 
mailto:rifaat.s.i...@gmail.com>> wrote:


All,

As a followup on the interim meeting today, this is a call for adoption for the 
OAuth Proof of Possession Tokens with HTTP Message Signature draft as a WG 
document:
https://datatracker.ietf.org/doc/draft-richer-oauth-httpsig/

Please, provide your feedback on the mailing list by October 20th.

Regards,
 Rifaat & Hannes



Re: [OAUTH-WG] A proposal for OAuth WG Interim Meetings in place of IETF108

2020-06-23 Thread Richard Backman, Annabelle
+1

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/


From: OAuth  on behalf of Phillip Hunt 

Date: Monday, June 22, 2020 at 5:45 PM
To: Mike Jones 
Cc: Torsten Lodderstedt , oauth 

Subject: RE: [EXTERNAL] [OAUTH-WG] A proposal for OAuth WG Interim Meetings in 
place of IETF108


+1
Phil


On Jun 22, 2020, at 3:16 PM, Mike Jones 
 wrote:
+1 from me too

From: OAuth  On Behalf Of Torsten Lodderstedt
Sent: Sunday, June 21, 2020 2:42 PM
To: Falk Andreas 
Cc: oauth 
Subject: Re: [OAUTH-WG] A proposal for OAuth WG Interim Meetings in place of 
IETF108

+1



On 21.06.2020 at 22:39, Falk Andreas 
mailto:andreas.f...@novatec-gmbh.de>> wrote:
+1

From: OAuth mailto:oauth-boun...@ietf.org>> on behalf 
of Dick Hardt mailto:dick.ha...@gmail.com>>
Sent: Sunday, 21 June 2020 20:42
To: Rifaat Shekh-Yusef mailto:rifaat.s.i...@gmail.com>>
Cc: oauth mailto:oauth@ietf.org>>
Subject: Re: [OAUTH-WG] A proposal for OAuth WG Interim Meetings in place of 
IETF108

+1

On Sat, Jun 20, 2020 at 12:34 PM Rifaat Shekh-Yusef 
mailto:rifaat.s.i...@gmail.com>> wrote:
All,

As you know, IETF108 will be online, and based on the discussion during the 
last interim meeting series, our plan is to schedule another series of these 
meetings for the OAuth WG.
Based on the WG feedback, the plan is to have a series of one hour meetings, 
and to discuss one document per meeting.

Based on the above, we would like to get the WG opinion on the following 
proposal:

1. Meeting day/time would be Monday @ 12:00pm Eastern Time (same as the last 
interim series)

2. Have around 9 meetings spread over 3 months:
· July meetings, before IETF108: July 6, 13, and 20
· August meetings: August 3, 10, 17
· September meetings: September 7, 14, 21

Thoughts?

Regards,
 Rifaat & Hannes



Re: [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens"

2020-04-13 Thread Richard Backman, Annabelle
There are use cases where the AS can be expected to know (and in fact needs to 
know) which RSes a token will be used with, and use cases where there is value 
in obscuring this fact. This spec should not be limited to only one or the 
other. The work you suggest to support obscuring the RS identity is not unique 
to any particular access token format, and is therefore out of scope for this 
document.

Likewise, token format negotiation is an interesting concept, but by its nature 
is not bound to any specific format. If you are interested in pursuing that 
work, I suggest submitting it as a separate I-D. A JWT access token profile 
provides significant value without this addition, so I don’t see any benefit in 
trying to incorporate it into this draft.

Note that permitting the client to inspect the access token does not prevent 
the AS and RS from exchanging information about the client or resource owner, 
as this could be done via proprietary claims with encrypted values or via 
separate service-to-service interactions between the RS and AS. It should also 
be noted that in many cases (notably ones where the AS and RS are closely 
affiliated) encrypting the access token preserves the resource owner’s privacy 
by preventing the client from seeing information that the resource owner has 
shared with the AS and RS but not the client.

Lastly I am against adding a mechanism for requesting arbitrary claims to be 
included in the access token. The purpose of the access token is to serve as 
credentials that the client can use to access protected resources. If the 
client wants claims, they should request the appropriate scopes and retrieve 
them from a profile or userinfo endpoint, or use OIDC. The information required 
by the resource server should be understood by the AS based on the resource 
indicators and scopes in the request. If this is not the case, then that 
suggests there is a trust boundary between the AS and RS and therefore needs to 
make its own authorization request. Since OAuth 2.0 access tokens represent an 
authorization grant to the client, it would be inappropriate to bundle the 
resource server’s authorization request into the client’s authorization request.

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/


From: OAuth  on behalf of Denis 
Date: Sunday, April 12, 2020 at 8:48 AM
To: Benjamin Kaduk 
Cc: Vittorio Bertocci , oauth 
Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for 
OAuth 2.0 Access Tokens"


Hi Benjamin,

Since there is tomorrow a virtual meeting and additionally I am locked down, by 
exception, I reply today.

We both agree that when the authorization server and resource server are not 
co-located, are not run by the same entity,
or are otherwise separated by some boundary" this does indeed present some 
consequences for the privacy considerations
of the protocol that will need to be addressed in some manner.

OAuth 2.0 was not constructed with this case in mind, i.e. it was not designed 
following "Privacy by Design" principles. The fact is that OAuth 2.0 contains 
nothing specific to take care of privacy.

There would be a simple way to solve some of my concerns: to remove in the 
Introduction (Section 1) lines 7 to 10 :

   The approach is particularly common in
   topologies where the authorization server and resource server are not
   co-located, are not ran by the same entity, or are otherwise
   separated by some boundary.

Let us now assume that privacy concerns would be addressed in either OAuth 2.1 
or OAuth 3.0.

This spec. is supposed to be targeted to OAuth 2.0 but the header at the top of 
the page omits to mention it. Currently, it is :
Internet-Draft   OAuth Access Token JWT Profile   March 2020
It should rather be:
Internet-Draft   OAuth 2.0 Access Token JWT Profile   March 2020
There is however a problem to be solved now: allowing an OAuth 2.0 JWT to be 
easily distinguished from future JWT versions.

In addition, there will be the need to be able to carry unsigned parameters 
associated with an OAuth 2.1. or OAuth 3.0 JWT.

An historical example was to be able to distinguish the version of a X.509 
public key certificate using a version parameter.
Something similar should be done.
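
For what it's worth, the draft already has a hook of this kind: the JOSE `typ` header is set to the profile's `at+jwt` media type, playing a role similar to the X.509 version field. A sketch (values illustrative):

```python
import base64, json

def b64url(b):
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

# Explicit typing: a future profile revision could mint a new media type
# (e.g. a hypothetical versioned variant) and set it here the same way.
header = {"typ": "at+jwt", "alg": "RS256"}
protected = b64url(json.dumps(header).encode())
```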

Section 6 (Privacy Considerations) should be modified.

The MUST NOT (inspect the content of the access token) should be removed.

Some text, like the following text, should be added:
In topologies where the authorization server and resource server are not 
co-located, are not ran by the same entity, or are otherwise
separated by some boundary, some privacy concerns arise. For security reasons, 
some clients may want to target their access tokens
but, for privacy reasons, may be unwilling 

Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens"

2020-03-25 Thread Richard Backman, Annabelle
Yes, there isn’t a clear solution to this problem. My main concern at this 
point is that we don’t give the impression that an AS can establish security 
boundaries or prevent token mix up by using different keys. The text changes 
you suggest would address that.

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/


From: Vittorio Bertocci 
Date: Wednesday, March 25, 2020 at 5:10 PM
To: "Richard Backman, Annabelle" , 
"vittorio.bertocci=40auth0@dmarc.ietf.org" , 
'George Fletcher' , 'Brian Campbell' 

Cc: 'oauth' 
Subject: [EXTERNAL] [UNVERIFIED SENDER] Re: [OAUTH-WG] WGLC on "JSON Web Token 
(JWT) Profile for OAuth 2.0 Access Tokens"

OK, I caught up with the discussion. Very interesting.
It seems that the conclusion is that there’s no simple mechanism we can add at 
this point that would easily gel with existing deployments, hence either we tell 
people to STOP using multiple keys, or we make them aware of the futility of 
doing so as a way of enforcing security boundaries.
Is that the correct conclusion? If yes, I would suggest we use the language I 
suggested in Brian’s thread (“The RS should expect the AS to use any of the 
keys published in the JWKS doc to sign JWT ATs”) to warn the RS developer that 
the AS could do that, and in the security section we warn the AS developer that 
using multiple keys won’t help much given that the RS won’t differentiate 
between tokens signed with keys from the same metadata collection anyway, hence 
it’s enough to compromise one key to generate tokens that will be accepted 
regardless of type or any other classification.
WDYT?

From: Vittorio Bertocci 
Date: Wednesday, March 25, 2020 at 16:53
To: "Richard Backman, Annabelle" , 
"vittorio.bertocci=40auth0@dmarc.ietf.org" 
, 'George Fletcher' 
, 'Brian Campbell' 

Cc: 'oauth' 
Subject: Re: [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 
Access Tokens"

Oh wow, I completely missed that thread. Thanks for the link. Reading…

From: OAuth  on behalf of "Richard Backman, Annabelle" 

Date: Wednesday, March 25, 2020 at 14:26
To: "vittorio.bertocci=40auth0@dmarc.ietf.org" 
, 'George Fletcher' 
, 'Brian Campbell' 

Cc: 'oauth' 
Subject: Re: [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 
Access Tokens"

This is another manifestation of the limits of jwks_uri that I’ve brought up on 
the list 
previously<https://mailarchive.ietf.org/arch/msg/oauth/eCZ-wUU2iwTyfx-zuqr2K3bM8-8/>.

Using different signing keys does not actually limit the blast radius of each 
key, since the validator doesn’t know that each key should only be considered 
valid for one type of token. This takes away one of the major drivers for using 
different keys. If the text says deployments can use different keys, it needs 
to clarify the limited value of that.
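As a toy illustration of the point above (all key IDs hypothetical): a typical RS resolves the `kid` from the token header against the whole JWK Set, with nothing to say which key signs which token type, so per-type keys buy little.

```python
# A JWK Set with two keys the AS intends for different token types.
# The intent is invisible to the RS: the set carries no per-type metadata.
jwks = {
    "keys": [
        {"kid": "id-token-key", "kty": "RSA", "n": "...", "e": "AQAB"},
        {"kid": "access-token-key", "kty": "RSA", "n": "...", "e": "AQAB"},
    ]
}

def select_key(jwks: dict, kid: str) -> dict:
    # Standard RS behavior: match kid anywhere in the set. The RS will
    # happily use the ID-token key to validate an access token.
    for key in jwks["keys"]:
        if key["kid"] == kid:
            return key
    raise KeyError(f"unknown kid: {kid}")

# A token forged with a compromised "id-token-key" still resolves to a
# key the RS trusts, regardless of the token's intended type.
assert select_key(jwks, "id-token-key")["kid"] == "id-token-key"
```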

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/


From: OAuth  on behalf of 
"vittorio.bertocci=40auth0@dmarc.ietf.org" 

Date: Wednesday, March 25, 2020 at 12:01 PM
To: 'George Fletcher' , 'Brian Campbell' 

Cc: 'oauth' 
Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for 
OAuth 2.0 Access Tokens"


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


That works for me!

From: George Fletcher 
Sent: Wednesday, March 25, 2020 11:56 AM
To: vittorio.berto...@auth0.com; 'Brian Campbell' 

Cc: 'Brian Campbell' ; 'oauth' 
Subject: Re: [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 
Access Tokens"

If we don't want to give guidance on how the RS determines the correct key to 
use to validate the token, then maybe we should state that explicitly. "The 
mechanism used by the RS to determine the correct key to use to validate the 
access token is out of scope for this specification".

That way at least we are being very clear that the spec is not trying to 
specify how that happens.

Thoughts?
On 3/25/20 2:44 PM, 
vittorio.berto...@auth0.com<mailto:vittorio.berto...@auth0.com> wrote:

Brian, there are plenty of ways in which an RS can surprise you with odd 
behavior - for example, developers might see that you used a key for signing an 
ID token and use it to initialize all their validation middleware for ATs as 
well, say because the library only supports one key at a time, and then end up 
failing at runtime when/if the assumption ceases to apply in the future.



Would that be legitimate of them to take such a dependency, even without 
warning text? No. However I am not looking at this from the “lawyering up” 
perspective, but from the useful guidance standpoint as well. I am well aware 
that being concise is a feature, but I am also not crazy about making every 
specification into an intelligence test for

Re: [OAUTH-WG] WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens"

2020-03-24 Thread Richard Backman, Annabelle
To borrow a term from ML, I think the "aud", "scope", and resource 
indicator-related text is overfitted to a specific set of deployment scenarios, 
and a specific way of using scopes and resource indicators.

Consider the following:

1. There may be no "scope" parameter
The "scope" parameter is OPTIONAL in authorization requests. So an AS/RS 
operator could decide they're going to omit "scope" entirely and use multiple 
resource parameters instead. Since there are no scopes, there is no opportunity 
for confusion. In this case, a JWT AT with a multi-valued "aud" claim and no 
"scope" claim would seem appropriate. While multiple resource indicators could 
be pushed into a single scope string, this introduces opportunities for serious 
security-impacting encoding/decoding/parsing bugs. The more I think about it, 
the more "I don't have to deal with parsing a scope string" seems like a 
compelling reason to go this route...

2. The scopes may apply to all audiences
An AS/RS operator may use "scope" to indicate a role or policy (or set of 
policies) that the client wants, and allow the client to narrow their 
permissions using "resource" parameters. This would allow the client to obtain 
narrowly scoped access tokens for specific use cases without needing to define 
separate roles/policies for each. In this case, a JWT AT with a multi-valued 
"aud" claim and a "scope" claim would seem appropriate, as the scope claim is 
intended to apply to all of the audience values.

3. The mapping between audience and scope may be unambiguous
There are a lot of deployments to which the blast radius risk you're trying to 
address by requiring "aud" simply does not apply. It may seem innocuous to 
require these deployments to explicitly include a broad audience like 
"api.example.com" anyway, but that can lead to implementers ignoring the 
requirement (leading to interop issues), not validating it (also leading to 
interop issues or security issues if the deployment wants to start actually 
using it for real), or doing something funky with it since there isn't anything 
"real" that the value needs to conform to. 

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/
 

On 3/24/20, 3:31 PM, "OAuth on behalf of Vittorio Bertocci" 
 wrote:




Thanks George for the super thorough review and feedback!
Inline

  > Section 1. Introduction
  > - second line: scenario should be plural --> scenarios
  > - second sentence: "are not ran by" --> "are not run by"
  > - cofidentiality --> confidentiality
Fixed. Thanks!

> Section 2.2.1 Authentication Information Claims
> - I'm not sure that this definition of `auth_time` allows for the
> case where a user is required to solve an additional challenge.
If the challenge entails going back to the AS, then I believe the language (in 
the initial paragraph of 2.2.1 and in auth_time itself) accommodates that and 
does require the auth_time to be updated.
If you hit the AS and present an authentication factor (such as your challenge) 
and obtain a new token in the process, the auth_time will reflect the time of 
your latest authentication just like an id_token would in the same 
circumstances (think protected route in a web app requiring step up auth) and 
(likely) associated session artifacts (think RTs or cookies with sliding 
expiration, the challenge would count as activity and move the expiration).

> - I think there is a difference between session_start_time and last
> auth_time. This feels more like it's defining the session_start_time
> concept.
> - These same issues can apply to the `acr` and `amr` values as well.
Per the above, the intent is more to express the last time the user performed 
any authentication action rather than the start time. The intent is to provide 
information as current as possible, as it might be relevant to the RS decisions 
whereas the history before current conditions might not be consequential.

  > - Even if for this secondary challenge a new refresh_token is issued,
it is unlikely many relying parties will want to treat that as issuing a
new session. The goal is to keep the user logged in to a single session.
Could you expand on the practical implications of the above? The intent isn't 
as much to reflect session identifying information per se, but to provide the 
RS with the most up to date information about the circumstances in which the 
current AT was obtained. The fact that a session was initially established 
using acr level 0 doesn’t really matter if the AT I am receiving now has been 
obtained after a step-up that brought acr to 1; if my RS cares about 
authentication levels, my authorization decision shouldn't be influenced by 
whether somewhere the session artifact didn't 

Re: [OAUTH-WG] OAuth 2.1: dropping password grant

2020-02-21 Thread Richard Backman, Annabelle
ROPC without a client ID or authentication is functionally equivalent to Client 
Credentials grant with client secret authentication in the request body. You've 
just renamed "client_id" to "username" and "client_secret" to "password".

The AS simply needs to be able to resolve the client ID to the service account. 
You could use any of the following strategies, depending on the environment:
* Use service account identifiers as client IDs
* Use encrypted blobs containing service account identifiers as client IDs, so 
someone can't choose a client ID by creating a service account with a specific 
identifier
* Use opaque values that the AS can resolve to service account identifiers, 
e.g., via a database lookup
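The strategies above can be sketched as a toy AS-side lookup; the table, identifiers, and function names below are hypothetical, and client authentication is elided to show only the resolution step.

```python
# Hypothetical service-account registry at the AS.
SERVICE_ACCOUNTS = {
    "svc-billing": {"sub": "svc-billing", "roles": ["invoices:read"]},
}

# Strategy: opaque client IDs that the AS resolves to service accounts
# (in practice a database lookup rather than an in-memory dict).
CLIENT_TO_ACCOUNT = {"client-7f3a": "svc-billing"}

def issue_token(client_id: str, client_secret: str) -> dict:
    # Client authentication (secret check, mTLS, etc.) elided.
    account = SERVICE_ACCOUNTS[CLIENT_TO_ACCOUNT[client_id]]
    # The issued token carries the service account as the subject,
    # exactly as a Client Credentials grant would.
    return {"sub": account["sub"], "scope": " ".join(account["roles"])}

assert issue_token("client-7f3a", "s3cret")["sub"] == "svc-billing"
```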

If the AS needs the service account's password because it's authenticating 
against a legacy system, then use the service account password as the client 
secret. Stack mTLS on top, if you want. If the AS just needs to resolve the 
account so it can put it in tokens that RSes will look at, then you can use 
whatever client authentication mechanism you want.

Is there a scenario I'm missing here?

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/
 

On 2/21/20, 1:53 PM, "Neil Madden"  wrote:

The AS doesn’t issue the service account IDs, that’s the whole point - 
integration with existing systems. Lots of people don't have the luxury of 
rebuilding systems from scratch to fit in with the preferences of the OAuth WG.

ROPC doesn’t require client authentication, or even a client identifier. If 
you’re using a service account you indeed don’t need to bother issuing client 
credentials. The same is true when using the JWT bearer grant. If you want to 
increase security you can use cert-bound access tokens.
    
> On 21 Feb 2020, at 20:28, Richard Backman, Annabelle 
 wrote:
> 
> The client IDs can still be opaque identifiers provided by the AS, they 
just happen to be associated with specific service accounts. Or they could be 
the opaque IDs that the AS already issued for the service account. Either way, 
the AS could issue a token with the appropriate subject and other claims for 
the service account.
> 
> If your client identity is bound to a specific service account identity 
(i.e., the resource owner), then ROPC reduces down to Client Credentials. 
What's the point in passing two identifiers and two credentials for the same 
identity?
> 
> –
> Annabelle Backman (she/her)
> AWS Identity
> https://aws.amazon.com/identity/
> 
> 
> On 2/21/20, 6:48 AM, "OAuth on behalf of Neil Madden" 
 wrote:
> 
>Sorry, I missed that message. 
> 
>While this may be a solution in specific circumstances, I don’t think 
it’s a general solution. e.g. an AS may not allow manually choosing the 
client_id to avoid things like 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14#section-4.13 or 
may return different introspection results for client credentials tokens (e.g. 
with no “sub”) and so on. In practice, this adds even more steps for somebody 
to migrate from existing ROPC usage.
> 
>This is asking people to make fundamental changes to their identity 
architecture rather than simply switching to a new grant type.
> 
>— Neil
> 
>> On 21 Feb 2020, at 14:34, Torsten Lodderstedt  
wrote:
>> 
>> I see - we have gone full cycle :-) 
>> 
>> Annabelle’s proposal would solve that. Relate a client id to a service 
account and obtain the token data from there. 
>> 
>>> On 21. Feb 2020, at 15:31, Neil Madden  
wrote:
>>> 
>>> Yes, that is great. But mTLS doesn’t support service accounts (!= 
clients). Maybe it should? Should there be a mTLS *grant type*?
>>> 
>>> — Neil
>>> 
>>>> On 21 Feb 2020, at 14:20, Torsten Lodderstedt 
 wrote:
>>>> 
>>>> Have you ever tried the client credentials grant with mTLS? After 
reading your description it seems to be simpler than JWT Bearer.
>>>> 
>>>> * work out if the AS even supports mTLS
>>>> * work out how to configure the AS to trust my cert(s)
>>>> * Create key pair and cert using openssl
>>>> * Register your (self-signed) cert along with your client_id
>>>> * Configure the HTTP client to use your key pair for TLS Client 
Authentication
>>>> 
>>>> Works very well for us. 
>>>> 
>>>>> On 21. Feb 2020, at 15:12, Neil Madden  
wrote:
>>>>> 
>>>>> No failures, but it is a much more complex grant type to set up, when 
you consider e

Re: [OAUTH-WG] OAuth 2.1: dropping password grant

2020-02-21 Thread Richard Backman, Annabelle
The client IDs can still be opaque identifiers provided by the AS, they just 
happen to be associated with specific service accounts. Or they could be the 
opaque IDs that the AS already issued for the service account. Either way, the 
AS could issue a token with the appropriate subject and other claims for the 
service account.

If your client identity is bound to a specific service account identity (i.e., 
the resource owner), then ROPC reduces down to Client Credentials. What's the 
point in passing two identifiers and two credentials for the same identity?

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/
 

On 2/21/20, 6:48 AM, "OAuth on behalf of Neil Madden"  wrote:

Sorry, I missed that message. 

While this may be a solution in specific circumstances, I don’t think it’s 
a general solution. e.g. an AS may not allow manually choosing the client_id to 
avoid things like 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14#section-4.13 or 
may return different introspection results for client credentials tokens (e.g. 
with no “sub”) and so on. In practice, this adds even more steps for somebody 
to migrate from existing ROPC usage.

This is asking people to make fundamental changes to their identity 
architecture rather than simply switching to a new grant type.

— Neil

> On 21 Feb 2020, at 14:34, Torsten Lodderstedt  
wrote:
> 
> I see - we have gone full cycle :-) 
> 
> Annabelle’s proposal would solve that. Relate a client id to a service 
account and obtain the token data from there. 
> 
>> On 21. Feb 2020, at 15:31, Neil Madden  wrote:
>> 
>> Yes, that is great. But mTLS doesn’t support service accounts (!= 
clients). Maybe it should? Should there be a mTLS *grant type*?
>> 
>> — Neil
>> 
>>> On 21 Feb 2020, at 14:20, Torsten Lodderstedt  
wrote:
>>> 
>>> Have you ever tried the client credentials grant with mTLS? After 
reading your description it seems to be simpler than JWT Bearer.
>>> 
>>> * work out if the AS even supports mTLS
>>> * work out how to configure the AS to trust my cert(s)
>>> * Create key pair and cert using openssl
>>> * Register your (self-signed) cert along with your client_id
>>> * Configure the HTTP client to use your key pair for TLS Client 
Authentication
>>> 
>>> Works very well for us. 
>>> 
 On 21. Feb 2020, at 15:12, Neil Madden  
wrote:
 
 No failures, but it is a much more complex grant type to set up, when 
you consider everything you have to do:
 
 * work out if the AS even supports JWT bearer and how to turn it on
 * work out how to configure the AS to trust my public key(s)
 - do I have to create a new HTTPS endpoint to publish a JWK Set?
 * determine the correct settings for issuer, audience, subject, etc. 
Does the AS impose non-standard requirements? e.g. RFC 7523 says that the JWT 
MUST contain a “sub” claim, but Google only allows this to be present if your 
client is doing impersonation of an end-user (which requires additional 
permissions).
 * do I need a unique “jti” claim? (OIDC servers do, plain OAuth ones 
might not) If I do, can I reuse the JWT or must it be freshly signed for every 
call?
 * locate and evaluate a JWT library for my language of choice. Monitor 
that new dependency for security advisories.
 * choose a suitable signature algorithm (‘ere be dragons)
 * figure out how to distribute the private key to my service
 
 Compared to “create a service account and POST the username and 
password to the token endpoint” it adds a little friction. (It also adds a lot 
of advantages, but it is undeniably more complex).
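The steps Neil lists can be condensed into a short sketch. This uses stdlib HMAC signing for brevity (RFC 7523 deployments typically use asymmetric keys), and the issuer, audience, and key values are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_assertion(issuer: str, audience: str, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({
        "iss": issuer,
        "sub": issuer,              # service account acting as itself
        "aud": audience,            # the AS token endpoint
        "exp": int(time.time()) + 300,
        "jti": str(uuid.uuid4()),   # unique per call, per Neil's point
    }).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

# The token request then POSTs the assertion with the RFC 7523 grant type.
form = {
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": make_assertion(
        "svc-account-1", "https://as.example.com/token", b"placeholder-key"
    ),
}
assert form["assertion"].count(".") == 2  # compact JWS serialization
```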
 
 — Neil
 
 
> On 21 Feb 2020, at 13:41, Matthew De Haast 
 wrote:
> 
> I have a feeling that if we had more concise JWT libraries and 
command line tools, where using the JWT Bearer grant became a one-liner again 
then we wouldn’t be having this conversation. So perhaps removing it is an 
incentive to make that happen.
> 
> Neil could you elaborate more on this please. What failures are you 
currently experiencing/seeing with the JWT Bearer grant? 
> 
> Matt
> 
> On Thu, Feb 20, 2020 at 12:42 AM Neil Madden 
 wrote:
> I have a feeling that if we had more concise JWT libraries and 
command line tools, where using the JWT Bearer grant became a one-liner again 
then we wouldn’t be having this conversation. So perhaps removing it is an 
incentive to make that happen.
> 
> 
>> On 19 Feb 2020, at 22:01, Dick Hardt  wrote:
>> 
>> Neil: are you advocating that password grant be preserved in 2.1? Or 
do you think that service account developers know enough about what they are 
doing to follow what is in 6749?

Re: [OAUTH-WG] OAuth 2.1: dropping password grant

2020-02-19 Thread Richard Backman, Annabelle
An AS can define client IDs that are specific to a service account. I.e., 
Service Account A has client ID 1234..., Service Account B has client ID 
5678 This appears to be what Google is doing. From there, the AS could 
implement whatever client authentication mechanism it wants to: client secret, 
mTLS, JWT bearer, etc.

–
Annabelle Backman (she/her)
AWS Identity
https://aws.amazon.com/identity/
 

On 2/19/20, 1:54 PM, "OAuth on behalf of Neil Madden"  wrote:

OAuth2 clients are often private to the AS - they live in a database that 
only the AS can access, have attributes specific to their use in OAuth2, and so 
on. Many existing systems have access controls based on users, roles, 
permissions and so on and expect all users accessing the system to exist in 
some user repository, e.g. LDAP, where they can be looked up and appropriate 
permissions determined. A service account can be created inside such a system 
as if it was a regular user, managed through the normal account provisioning 
tools, assigned permissions, roles, etc.

Another reason is that sometimes OAuth is just one authentication option 
out of many, and so permissions assigned to service accounts are preferred over 
scopes because they are consistently applied no matter how a request is 
authenticated. This is often the case when OAuth has been retrofitted to an 
existing system and they need to preserve compatibility with already deployed 
clients.

See e.g. Google cloud platform (GCP): 
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
They use the JWT bearer grant type for service account authentication and 
assign permissions to those service accounts and typically have very broad 
scopes. For service-to-service API calls you typically get an access token with 
a single scope that is effectively “all of GCP” and everything is managed at 
the level of permissions on the RO service account itself. They only break down 
fine-grained scopes when you are dealing with user data and will be getting an 
access token approved by a real user (through a normal auth code flow).

— Neil

> On 19 Feb 2020, at 21:35, Torsten Lodderstedt  
wrote:
> 
> Can you explain more in detail why the client credentials grant type 
isn’t applicable for the kind of use cases you mentioned?
> 
>> Am 19.02.2020 um 22:03 schrieb Neil Madden :
>> 
>> I very much agree with this with regards to real users. 
>> 
>> The one legitimate use-case for ROPC I’ve seen is for service accounts - 
where you essentially want something like client_credentials but for whatever 
reason you need the RO to be a service user rather than an OAuth2 client 
(typically so that some lower layer of the system can still perform its 
required permission checks).
>> 
>> There are better grant types for this - e.g. JWT bearer - but they are a 
bit harder to implement. Having recently converted some code from ROPC to JWT 
bearer for exactly this use-case, it went from a couple of lines of code to two 
screens of code. For service to service API calls within a datacenter I’m not 
convinced this resulted in a material increase in security for the added 
complexity.
>> 
>> — Neil
>> 
>>> On 18 Feb 2020, at 21:57, Hans Zandbelt  
wrote:
>>> 
>>> I would also seriously look at the original motivation behind ROPC: I 
know it has been deployed and is used in quite a lot of places but I have never 
actually come across a use case where it is used for migration purposes and the 
migration is actually executed (I know that is statistically not a very strong 
argument but I challenge others to come up with one...)
>>> In reality it turned out just to be a one off that people used as an 
easy way out to stick to an anti-pattern and still claim to do OAuth 2.0. It is 
plain wrong, it is not OAuth and we need to get rid of it.
>>> 
>>> Hans.
>>> 
>>> On Tue, Feb 18, 2020 at 10:44 PM Aaron Parecki  
wrote:
>>> Agreed. Plus, the Security BCP is already effectively acting as a grace 
period since it currently says the password grant MUST NOT be used, so in the 
OAuth 2.0 world that's already a pretty strong signal.
>>> 
>>> Aaron
>>> 
>>> 
>>> 
>>> On Tue, Feb 18, 2020 at 4:16 PM Justin Richer  wrote:
>>> There is no need for a grace period. People using OAuth 2.0 can still 
do OAuth 2.0. People using OAuth 2.1 will do OAuth 2.1. 
>>> 
>>> — Justin
>>> 
> On Feb 18, 2020, at 3:54 PM, Anthony Nadalin 
 wrote:
 
 I would suggest a SHOULD NOT instead of MUST, there are still sites 
using this and a grace period should be provided before a MUST is pushed out as 
there are valid use cases out there still.
 
 From: OAuth  On Behalf Of Dick Hardt
 Sent: Tuesday, February 18, 2020 12:37 PM
 To: oauth@ietf.org
 Subject: [EXTERNAL] [OAUTH-WG] OAuth 2.1: dropping 

Re: [OAUTH-WG] [UNVERIFIED SENDER] RE: Cryptographic hygiene and the limits of jwks_uri

2020-01-29 Thread Richard Backman, Annabelle
This could be nice, but it’s solving a different problem. The issue I’m drawing 
attention to is about how an AS indicates that a given key is valid. That’s 
what the jwks_uri AS metadata property is for, and it does a great job. The 
problem is that it does not allow enough granularity. Currently all an AS can 
do is say “here are the keys to use to verify stuff I signed.” It can’t say 
“here are the keys to use to verify ID Tokens, and here is a different set of 
keys to use to verify access tokens.”

—
Annabelle Backman
AWS Identity

> On Jan 28, 2020, at 10:51 PM, Manger, James  
> wrote:
> 
> 
>> 
>>> It would’ve been nice if JWK could’ve agreed on a URL-based 
>>> addressing format for individual keys within the set, but that ship’s 
>>> sailed.
> 
> Using the fragment on a JWKS URL to indicate the key id would be good.
> Then a single URL by itself can identify a specific key.
> 
> https://example.com/keys.jwks#2011-04-29
> 
> This would have worked particularly well if a JWKS was a JSON object with 
> key-ids as the member names, instead of an array. That is presumably too late 
> to fix. But defining the fragment format for application/jwk-set+json to be a 
> kid value should be possible.
> 
> If you put multiple keys with the same key-id in a JWKS you are asking for 
> trouble -- just call that a non-interoperable corner for people to avoid.
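James's fragment idea can be sketched as a small resolver: treat the fragment of a JWK Set URL as a `kid` and require exactly one match. The URL, keys, and the injected `fetch` stand-in are illustrative only.

```python
from urllib.parse import urldefrag

# A toy JWK Set standing in for the document at the jwks_uri.
JWKS = {"keys": [{"kid": "2011-04-29", "kty": "EC"},
                 {"kid": "2020-01-01", "kty": "RSA"}]}

def resolve_key(url: str, fetch=lambda u: JWKS) -> dict:
    base, kid = urldefrag(url)
    jwks = fetch(base)  # fetch() stands in for an HTTP GET of the JWK Set
    matches = [k for k in jwks["keys"] if k.get("kid") == kid]
    if len(matches) != 1:
        # Duplicate kids are the non-interoperable corner James mentions.
        raise ValueError(f"expected exactly one key with kid {kid!r}")
    return matches[0]

# A single URL now identifies a specific key.
assert resolve_key("https://example.com/keys.jwks#2011-04-29")["kty"] == "EC"
```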
> 
> --
> James Manger
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] [UNVERIFIED SENDER] OAuth Topics for Vancouver

2020-01-20 Thread Richard Backman, Annabelle
To be honest I’m somewhat taken aback by this reaction. The request was for 
time to discuss an alternative PoP mechanism face-to-face. This is a topic 
which has come up in the context of other work (e.g., DPoP) at several recent 
IETF meetings, including the last one in Singapore. While I recognize that the 
working group has a lot on its plate and needs to allocate time judiciously, it 
seems clear to me that this is both timely and relevant.

Unless the chairs indicate that they require further justification for the time 
slot, I’m going to stop cluttering this thread with defenses of a draft that 
doesn’t exist yet.

–
Annabelle Richard Backman
AWS Identity


From: Rob Cordes 
Date: Monday, January 20, 2020 at 1:38 PM
To: "Richard Backman, Annabelle" 
Cc: Rifaat Shekh-Yusef , oauth 
Subject: Re: [UNVERIFIED SENDER] [OAUTH-WG] OAuth Topics for Vancouver

Hi Annabelle,


Sure, TLS is not the one-size-fits-all, but if you swap out "Client Y signs / 
authenticates message A to recipient X" with "Client Y uses TLS for 
authentication of the source (itself), integrity of data / communications, and 
even confidentiality (not really needed in our HTTP signing use case)", where 
TLS is initiated and handled by client Y itself (native libs or a proxy at the 
same host(s)), then you have precisely what HTTP message signing should do 
(authenticity, integrity, and as a bonus confidentiality).


That said, one can opt for HTTP signing if one wants to, except it is not 
secure for now and, as it turns out, is at present a nuisance for many 
developers to use. If you do not want or cannot deal with TLS tunnels (and, 
yes, TLS connection re-use), by all means go ahead. I would advise my customers 
to try TLS first because it is proven, simple to implement, and so easy (cheap 
;-) ) to support. It is always worthwhile to at least try to get Infra on board 
to see if one can go the TLS route first, and if that fails… well, then HTTP 
signing, or accept the risk.

The issues we have at ING with 3rd parties cause us to back down from using it 
in general, though we keep it for those APIs wanting better assurance than 
otherwise. We do not want to provide our own libs to external parties for 
obvious (legal, mostly) reasons. We did not go the TLS route at first, and that 
turned out to be a mistake ;-).


Let me conclude that I am always quite happy to see alternatives popping up and 
existing protocols being continuously enhanced. For this I thank you and others 
for continuing to develop protocol implementations such as HTTP message signing.


Regards,

Rob



On 20 Jan 2020, at 21:50, Richard Backman, Annabelle 
mailto:richa...@amazon.com>> wrote:

introduction to the HTTP Message Signatures 
draft<https://tools.ietf.org/html/draft-richanna-http-message-signatures-00#section-1>

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] [EXTERNAL] Re: JWT Secured Authorization Request (JAR) vs OIDC request object

2020-01-17 Thread Richard Backman, Annabelle
We should not be prescriptive about how the AS recognizes request URIs from 
itself. Trusted authority or custom URI scheme are fine as examples, but 
ultimately this is an internal implementation of the AS. It could just as 
easily be using data URIs containing a symmetrically encrypted database record 
ID.

> On Jan 16, 2020, at 8:00 PM, Benjamin Kaduk  wrote:
> 
> On Thu, Jan 16, 2020 at 04:31:30PM +, Neil Madden wrote:
>> The mitigations of 10.4.1 are related, but the section heading is about 
>> (D)DoS attacks. I think this heading needs to be reworded to apply to SSRF 
>> attacks too or else add another section with similar mitigations. 
>> 
>> Mitigation (a) is a bit vague as to what an "unexpected location" is. 
>> Perhaps specific wording that it should be a URI that has been 
>> pre-registered for the client (and validated at that time) or is otherwise 
>> known to be safe (e.g., is a URI scheme controlled by the AS itself as with 
>> PAR).
> 
> pedantic nit: "URI scheme" is probably not what we want, as the authority
> component of the URI (per RFC 3986) seems more likely to match "controlled
> by the AS itself"
> 
> -Ben
> 
>> In addition for this to be effective the AS should not follow redirects when 
>> fetching the URI. It's not clear to me whether that is implied by "not 
>> perform recursive GET" so it may be worth explicitly spelling that out.
>> 
> 
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: [UNVERIFIED SENDER] Re: [UNVERIFIED SENDER] Re: PAR metadata

2020-01-08 Thread Richard Backman, Annabelle
I almost included text to that effect, but thought it was getting too wordy. 
However your suggestion is simple and concise. +1

Given all of this discussion, we should include a section on request validation 
in Security Considerations, to provide some context on what might be validated 
when and where, what kinds of problems deployments need to consider, etc. I 
think this is useful to have in the document, but would be too much clutter in 
the main body. We should keep that focused on the precise normative 
requirements.

– 
Annabelle Richard Backman
AWS Identity
 

On 1/8/20, 2:11 AM, "Torsten Lodderstedt" 
 wrote:

Hi Annabelle, 

thanks for your proposal, which reads reasonable to me. 

I suggest to extend “and that the request has not been modified in a way 
that would affect the outcome of the omitted steps.” a bit to also consider 
policy changes that may have occurred between push and authorization request 
processing. 

"and that the request or the authorization server’s policy has not been 
modified in a way that would affect the outcome of the omitted steps."

best regards,
Torsten. 

> On 8. Jan 2020, at 03:25, Richard Backman, Annabelle 
 wrote:
> 
> I think it’s clearer if we split out the requirements for the PAR 
endpoint and the requirements for the authorization endpoint, given that they 
are each covered in different sections of the doc (2 and 4, respectively). With 
that in mind, here are a couple suggestions:
>  
> For the text in Section 2.1:
>  
> 3.  The AS MUST validate the pushed request as it would an authorization
>     request sent to the authorization endpoint, however the AS MAY omit
>     validation steps that it is unable to perform when processing the
>     pushed request.
> 
> Additional text for Section 4 (note that this section pertains to the 
> authorization endpoint):
> 
> The AS MUST validate authorization requests arising from a pushed request as
> it would any other authorization request.  The AS MAY omit validation steps
> that it performed when the request was pushed, provided that it can validate
> that the request was a pushed request, and that the request has not been
> modified in a way that would affect the outcome of the omitted steps.
> 
>  
>  
> This is longer than the current text and the other proposals, but it adds 
a few important points:
>   • Turns the 2.1 SHOULD back into a MUST, with an explicit exception for 
validation that can’t be done yet.
>   • Reinforces the fact that the authorization endpoint still needs to do 
validation.
>   • Clearly states requirements for when an AS can trust that validation 
happened at the PAR endpoint.
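The proposed split can be sketched in code. The following is a minimal illustration only — the step names and helpers are hypothetical, not from the draft: the PAR endpoint performs every validation step it can and records the ones it had to omit; the authorization endpoint later completes only the omitted steps, falling back to full validation whenever it cannot confirm the request was pushed and unmodified.

```python
# Hypothetical validation steps; real deployments will have their own list.
ALL_STEPS = ("client_auth", "redirect_uri", "scope_policy", "request_object")

def validate_at_par(request, can_perform, run_step):
    """Section 2.1: validate as the authorization endpoint would,
    omitting only steps that cannot be performed yet."""
    omitted = [s for s in ALL_STEPS if not can_perform(s)]
    for step in ALL_STEPS:
        if step not in omitted:
            run_step(step, request)  # assumed to raise on validation failure
    return omitted

def validate_at_authorization(request, was_pushed, unmodified, omitted, run_step):
    """Section 4: validate like any other authorization request, skipping
    steps already performed at the PAR endpoint only when that can be trusted."""
    steps = omitted if (was_pushed and unmodified) else ALL_STEPS
    for step in steps:
        run_step(step, request)
```

The key property this captures: every step in `ALL_STEPS` runs exactly once on the happy path, and all of them run again at the authorization endpoint if the pushed request cannot be verified as unmodified.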
>  
> – 
> Annabelle Richard Backman
> AWS Identity
>  
    >  
> From: Brian Campbell 
> Date: Tuesday, January 7, 2020 at 2:54 PM
> To: Vladimir Dzhuvinov 
> Cc: Filip Skokan , "Richard Backman, Annabelle" 
, Dave Tonge , oauth 
, Nat Sakimura 
> Subject: [UNVERIFIED SENDER] Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: PAR 
metadata
>  
> A little more context about that proposed wording is in a github issue at
https://github.com/oauthstuff/draft-oauth-par/issues/38, which is a different 
driver than allowing a PAR endpoint to stash the encrypted request object 
rather than decrypting/validating it. But it's kind of the same concept at some 
level too - that there are some things that can't or won't be validated at the 
PAR endpoint and those then have to be validated at the authorization endpoint.
 
I rather like your suggested text, Vladimir, and have mentioned it in a 
comment on the aforementioned issue.
 
 
On Tue, Jan 7, 2020 at 6:58 AM Vladimir Dzhuvinov  
wrote:
On 07/01/2020 00:22, Filip Skokan wrote:
We've been discussing making the following change to the language
 
The AS SHOULD validate the request in the same way as at the authorization 
endpoint. The AS MUST ensure that all parameters to the authorization request 
are still valid at the time when the request URI is used.
Could you expand a bit on the second sentence?
Alternative suggestion:
The AS MUST validate the request in the same way as at the authorization 
endpoint, or complete the request validation at the authorization endpoint.
Vladimir
 
This would allow the PAR endpoint to simply stash the encrypted request 
object instead of decrypting and validating it. All within the bounds of 
"SHOU

Re: [OAUTH-WG] PAR metadata

2020-01-03 Thread Richard Backman, Annabelle
PAR introduces an added wrinkle for encrypted request objects: the PAR endpoint 
and authorization endpoint may not have access to the same cryptographic keys, 
even though they're both part of the "authorization server." Since they're 
different endpoints with different roles, it's reasonable to put them in 
separate trust boundaries. There is no way to support this isolation with just 
a single "jwks_uri" metadata property.

The two options that I see are:

1. Define a new par_jwks_uri metadata property.
2. Explicitly state that this separation is not supported.

I strongly prefer #1, as it has a very minor impact on deployments that don't 
care (i.e., they just set par_jwks_uri and jwks_uri to the same value), whereas 
failing to support this trust boundary creates an artificial limit on 
implementation architecture and could lead to compatibility-breaking 
workarounds.
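For concreteness, option #1 might look like this in AS metadata. Note that `par_jwks_uri` is only the proposed property from the email above — it is not registered in RFC 8414 — and the fallback behavior sketched here is an assumption about how clients would use it:

```python
# Hypothetical AS metadata advertising separate keys for the PAR endpoint.
metadata = {
    "issuer": "https://as.example.com",
    "jwks_uri": "https://as.example.com/jwks",
    "par_jwks_uri": "https://par.as.example.com/jwks",  # proposed, not registered
}

def par_encryption_jwks_uri(md):
    # A client encrypting a request object for the PAR endpoint would use
    # par_jwks_uri when advertised, falling back to the shared jwks_uri.
    return md.get("par_jwks_uri", md["jwks_uri"])
```

A deployment that keeps a single trust boundary simply sets both properties to the same URL, so clients need no special handling for either case.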

– 
Annabelle Richard Backman
AWS Identity
 

On 12/31/19, 8:07 AM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:

Hi Filip, 

> On 31. Dec 2019, at 16:22, Filip Skokan  wrote:
> 
> I don't think we need a *_auth_method_* metadata for every endpoint the 
client calls directly, none of the new specs defined these (e.g. device 
authorization endpoint or CIBA), meaning they also didn't follow the scheme 
from RFC 8414 where introspection and revocation got its own metadata. In most 
cases the unfortunately named `token_endpoint_auth_method` and its related 
metadata is what's used by clients for all direct calls anyway.
> 
> The same principle could be applied to signing (and encryption) 
algorithms as well.
> 
> This I do not follow, auth methods and their signing is dealt with by 
using `token_endpoint_auth_methods_supported` and 
`token_endpoint_auth_signing_alg_values_supported` - there's no encryption for 
the `_jwt` client auth methods. 
> Unless it was meant to address the Request Object signing and encryption 
metadata, which is defined and IANA registered by OIDC. PAR only references JAR 
section 6.1 and 6.2 for decryption/signature validation and these do not 
mention the metadata (e.g. request_object_signing_alg) anymore since draft 07.

Damned! You are so right. Sorry, I got confused somehow. 

> 
> PS: I also found this comment related to the same question about auth 
metadata but for CIBA.

Thanks for sharing. 

> 
> Best,
> Filip

thanks,
Torsten. 

> 
> 
> On Tue, 31 Dec 2019 at 15:38, Torsten Lodderstedt 
 wrote:
> Hi all,
> 
> Ronald just sent me an email asking whether we will define metadata for 
> 
> pushed_authorization_endpoint_auth_methods_supported and
> pushed_authorization_endpoint_auth_signing_alg_values_supported.
> 
> The draft right now utilises the existing token endpoint authentication 
methods so there is basically no need to define another parameter. The same 
principle could be applied to signing (and encryption) algorithms as well. 
> 
> What’s your opinion?
> 
> best regards,
> Torsten.



___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: New Version Notification for draft-fett-oauth-dpop-03.txt

2019-12-02 Thread Richard Backman, Annabelle
> Session cookies serve the same purpose in web apps as access tokens for APIs 
> but there are much more web apps than APIs. I use the analogy to illustrate 
> that either there are security issues with cloud deployments of web apps or 
> the techniques used to secure web apps are ok for APIs as well.

"Security issues" is a loaded term, but if you mean that there are practical 
risks that are not addressed by bearer tokens (whether they be session cookies 
or access tokens) then yes, I think we both agree that there are. Otherwise we 
wouldn't be discussing PoP, sender-constrained tokens, etc. TLS-based solutions 
mitigate some risks, while leaving others unmitigated. Depending on your use 
case and threat model, these risks may or may not present practical threats. 
For my use cases, they do.

Ultimately I'd like to mitigate these risks for both service APIs and web 
applications. My focus is on service APIs for a couple reasons:

1. Interoperability is more important when the sender and recipient aren't 
necessarily owned by a single entity. I can do proprietary things in JavaScript 
if I want to just as I can in client SDKs, but this breaks down if my API 
implements a standard protocol and is expected to work with off-the-shelf 
clients and/or implementations from other vendors.

2. Web applications are just a special subset of service APIs that happens to 
be accessed via a browser. A solution for service APIs ought to be reusable for 
web applications, or at least serve as a foundation for their solution.

>- Have you seen this kind of proxies intercepting the connection from 
> on-prem service deployments to service provider? I’m asking because I thought 
> the main use case was to intercept employees PC internet traffic.

I'm working from second-hand knowledge here, but like most things in the 
enterprise world, it depends. Separating employee device outbound traffic from 
internal service outbound traffic requires some level of sophistication, be it 
in network topology, routing rules, or configuration rules on the TLSI 
appliance. 

>- Are you saying this kind of proxy does not support mutual TLS at all?

From what I understand, at the very least mTLS is not universally supported. 
There may be some vendors that support it, but it's not guaranteed. The 
documentation for Symantec's SSL Visibility product [1] indicates that sessions 
using client certificates will be rejected unless they are exempted based on 
destination whitelisting (which is problematic when the destination may be a 
general-purpose cloud service provider).

> On the other hand, I would expect these kind of proxy to understand a lot 
> about the protocols running through it, otherwise they cannot fulfil their 
> task of inspecting this traffic.

Maybe, maybe not. In any case there's a difference between understanding HTTP 
or SMTP or P2P-protocol-du-jour and understanding the application-level 
protocol running on top of HTTP. There hasn't been any need for these proxies 
to understand OAuth 2.0 thus far.

[1]: 
https://origin-symwisedownload.symantec.com/resources/webguides/sslv/45/Content/Topics/troubleshooting/Support_for_Client_Cert.htm
– 
Annabelle Richard Backman
AWS Identity


On 12/1/19, 7:41 AM, "Torsten Lodderstedt" 
 wrote:


Annabelle,

    > Am 27.11.2019 um 02:46 schrieb Richard Backman, Annabelle 
:
> 
> Torsten,
> 
> I'm not tracking how cookies are relevant to the discussion.

I’m still trying to understand why you and others argue mTLS cannot be used 
in public cloud deployments (and thus focus on application level PoP).

Session cookies serve the same purpose in web apps as access tokens for 
APIs but there are much more web apps than APIs. I use the analogy to 
illustrate that either there are security issues with cloud deployments of web 
apps or the techniques used to secure web apps are ok for APIs as well.

Here are the two main arguments and my conclusions/questions:  

1) mTLS it’s not end 2 end: although that’s true from a connection 
perspective, there are solutions employed to secure the last hop(s) between TLS 
terminating proxy and service (private net, VPN, TLS). That works and is 
considered secure enough for (session) cookies, it should be the same for 
access tokens.

2) TLS terminating proxies do not forward cert data: if the service itself 
terminates TLS this is feasible, we do it for our public-cloud-hosted 
mTLS-protected APIs. If TLS termination is provided by a component run by the 
cloud provider, the question is: is this component able to forward the client 
certificate to the service? If not, web apps using certs for authentication 
cannot be supported straightway by the cloud provider. Any insights?

> I'm guessing that's because we're not on the same page regarding use 
cases, so allow me to clearly state mine:

I

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-26 Thread Richard Backman, Annabelle
Torsten,

I'm not tracking how cookies are relevant to the discussion. I'm guessing 
that's because we're not on the same page regarding use cases, so allow me to 
clearly state mine:

The use case I am concerned with is requests between services where end-to-end 
TLS cannot be guaranteed. For example, an enterprise service running 
on-premise, communicating with a service in the cloud, where the enterprise's 
outbound traffic is routed through a TLS Inspection (TLSI) appliance. The TLSI 
appliance sits in the middle of the communication, terminating the TLS session 
established by the on-premise service and establishing a separate TLS 
connection with the cloud service.

In this kind of environment, there is no end-to-end TLS connection between 
on-premise service and cloud service, and it is very unlikely that the TLSI 
appliance is configurable enough to support TLS-based sender-constraint 
mechanisms without significantly compromising on the scope of "sender" (e.g., 
"this service at this enterprise" becomes "this enterprise"). Even if it is 
possible, it is likely to require advanced configuration that is non-trivial 
for administrators to deploy. It's no longer as simple as the developer passing 
a self-signed certificate to the HTTP stack.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/23/19, 9:50 AM, "Torsten Lodderstedt"  wrote:

    

    > On 23. Nov 2019, at 00:34, Richard Backman, Annabelle 
 wrote:
> 
>> how are cookies protected from leakage, replay, injection in a setup 
like this?
> They aren’t.

Thats very interesting when compared to what we are discussing with respect 
to API security. 

It effectively means anyone able to capture a session cookie, e.g. between 
TLS termination point and application, by way of an HTML injection, or any 
other suitable attack is able to impersonate a legitimate user by injecting the 
cookie(s) in an arbitrary user agent. The impact of such an attack might be 
even worse than abusing an access token given the (typically) broad scope of a 
session.

TLS-based methods for sender constrained access tokens, in contrast, 
prevent this type of replay, even if the requests are protected between client 
and TLS terminating proxy, only. Ensuring the authenticity of the client 
certificate when forwarded from TLS terminating proxy to service, e.g. through 
another authenticated TLS connection, will even prevent injection within the 
data center/cloud environment. 

I come to the conclusion that we already have the mechanism at hand to 
implement APIs with a considerable higher security level than what is accepted 
today for web applications. So what problem do we want to solve?

> But my primary concern here isn't web browser traffic, it's calls from 
services/apps running inside a corporate network to services outside a 
corporate network (e.g., service-to-service API calls that pass through a 
corporate TLS gateway).

Can you please describe the challenges arising in these settings? I assume 
those proxies won’t support CONNECT-style pass-through; otherwise we wouldn’t 
talk about them.

> 
>> That’s a totally valid point. But again, such a solution makes the life 
of client developers harder. 
>> I personally think, we as a community need to understand the pros and 
cons of both approaches. I also think we have not even come close to this 
point, which, in my opinion, is the prerequisite for making informed decisions.
> 
> Agreed. It's clear that there are a number of parties coming at this from 
a number of different directions, and that's coloring our perceptions. That's 
why I think we need to nail down the scope of what we're trying to solve with 
DPoP before we can have a productive conversation how it should work.

We will do so.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 10:51 PM, "Torsten Lodderstedt"  
wrote:
> 
> 
> 
>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
 wrote:
>> 
>> The service provider doesn't own the entire connection. They have no 
control over corporate or government TLS gateways, or other terminators that 
might exist on the client's side. In larger organizations, or when cloud 
hosting is involved, the service team may not even own all the hops on their 
side.
> 
>how are cookies protected from leakage, replay, injection in a setup 
like this?
> 
>> While presumably they have some trust in them, protection against leaked 
bearer tokens is an attractive defense-in-depth measure.
> 
>That’s a totally valid point. But again, such a solution makes the 
life of client developers harder. 
> 
>I personally think, we as a community need to und

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
> how are cookies protected from leakage, replay, injection in a setup like 
> this?
They aren't. But my primary concern here isn't web browser traffic, it's calls 
from services/apps running inside a corporate network to services outside a 
corporate network (e.g., service-to-service API calls that pass through a 
corporate TLS gateway).

> That’s a totally valid point. But again, such a solution makes the life of 
> client developers harder. 
> I personally think, we as a community need to understand the pros and cons of 
> both approaches. I also think we have not even come close to this point, 
> which, in my opinion, is the prerequisite for making informed decisions.

Agreed. It's clear that there are a number of parties coming at this from a 
number of different directions, and that's coloring our perceptions. That's why 
I think we need to nail down the scope of what we're trying to solve with DPoP 
before we can have a productive conversation how it should work.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/22/19, 10:51 PM, "Torsten Lodderstedt"  wrote:



    > On 22. Nov 2019, at 22:12, Richard Backman, Annabelle 
 wrote:
> 
> The service provider doesn't own the entire connection. They have no 
control over corporate or government TLS gateways, or other terminators that 
might exist on the client's side. In larger organizations, or when cloud 
hosting is involved, the service team may not even own all the hops on their 
side.

how are cookies protected from leakage, replay, injection in a setup like 
this?

> While presumably they have some trust in them, protection against leaked 
bearer tokens is an attractive defense-in-depth measure.

That’s a totally valid point. But again, such a solution makes the life of 
client developers harder. 

I personally think, we as a community need to understand the pros and cons 
of both approaches. I also think we have not even come close to this point, 
which, in my opinion, is the prerequisite for making informed decisions.

> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
> 
> 
> 
>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
 wrote:
>> 
>> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
TLS connection. In non-end-to-end TLS environments, each TLS terminator between 
client and RS introduces additional token leakage/exfiltration risk, 
irrespective of the quality of the TLS connections themselves. Each terminator 
also introduces complexity for implementing mTLS, Token Binding, or any other 
TLS-based sender constraint solution, which means developers with 
non-end-to-end TLS use cases will be more likely to turn to DPoP.
> 
>The point is we are talking about different developers here. The 
client developer does not need to care about the connection between proxy and 
service. She relies on the service provider to get it right. So the developers 
(or DevOps or admins) of the service provider need to ensure end to end 
security. And if the path is secured once, it will work for all clients. 
> 
>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
Binding are available" [1], then it should address this risk of token leakage 
between client and RS. If on the other hand DPoP is only intended to support 
the SPA use case and assumes the use of end-to-end TLS, then the document 
should be updated to reflect that.
> 
>I agree. 
> 
>> 
>> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
>> 
>> – 
>> Annabelle Richard Backman
>> AWS Identity
>> 
>> 
>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
>> 
>>   Hi Neil,
>> 
>>> On 22. Nov 2019, at 18:08, Neil Madden  
wrote:
>>> 
>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
 wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
>>>>> 
>>>>> I’m going to +1 Dick and Annabelle’s question about the scope here. 
That was the one major thing that struck me during the DPoP discussions in 
Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
(including the authors, it seems) see it as a quick point-solution to a 
specific use case. Others see it as a general PoP mechanism. 
>>>>> 
>>>>> If it’s the former, then it should be explicitly tied to one specific 
set of things. If it’s the latt

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
The service provider doesn't own the entire connection. They have no control 
over corporate or government TLS gateways, or other terminators that might 
exist on the client's side. In larger organizations, or when cloud hosting is 
involved, the service team may not even own all the hops on their side. While 
presumably they have some trust in them, protection against leaked bearer 
tokens is an attractive defense-in-depth measure.

– 
Annabelle Richard Backman
AWS Identity
 

On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:



> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle 
 wrote:
> 
> The dichotomy of "TLS working" and "TLS failed" only applies to a single 
TLS connection. In non-end-to-end TLS environments, each TLS terminator between 
client and RS introduces additional token leakage/exfiltration risk, 
irrespective of the quality of the TLS connections themselves. Each terminator 
also introduces complexity for implementing mTLS, Token Binding, or any other 
TLS-based sender constraint solution, which means developers with 
non-end-to-end TLS use cases will be more likely to turn to DPoP.

The point is we are talking about different developers here. The client 
developer does not need to care about the connection between proxy and service. 
She relies on the service provider to get it right. So the developers (or 
DevOps or admins) of the service provider need to ensure end to end security. 
And if the path is secured once, it will work for all clients. 

> If DPoP is intended to address "cases where neither mTLS nor OAuth Token 
Binding are available" [1], then it should address this risk of token leakage 
between client and RS. If on the other hand DPoP is only intended to support 
the SPA use case and assumes the use of end-to-end TLS, then the document 
should be updated to reflect that.

I agree. 

> 
> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
> 
> – 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" 
 
wrote:
> 
>Hi Neil,
> 
>> On 22. Nov 2019, at 18:08, Neil Madden  wrote:
>> 
>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt 
 wrote:
>>> 
>>> 
>>> 
>>>> On 22. Nov 2019, at 15:24, Justin Richer  wrote:
>>>> 
>>>> I’m going to +1 Dick and Annabelle’s question about the scope here. 
That was the one major thing that struck me during the DPoP discussions in 
Singapore yesterday: we don’t seem to agree on what DPoP is for. Some 
(including the authors, it seems) see it as a quick point-solution to a 
specific use case. Others see it as a general PoP mechanism. 
>>>> 
>>>> If it’s the former, then it should be explicitly tied to one specific 
set of things. If it’s the latter, then it needs to be expanded. 
>>> 
>>> as a co-author of the DPoP draft I state again what I said yesterday: 
DPoP is a mechanism for sender-constraining access tokens sent from SPAs only. 
The threat to be prevented is token replay.
>> 
>> I think the phrase "token replay" is ambiguous. Traditionally it refers 
to an attacker being able to capture a token (or whole requests) in use and 
then replay it against the same RS. This is already protected against by the 
use of normal TLS on the connection between the client and the RS. I think 
instead you are referring to a malicious/compromised RS replaying the token to 
a different RS - which has more of the flavour of a man in the middle attack 
(of the phishing kind).
> 
>I would argue TLS basically prevents leakage and not replay. The 
threats we try to cope with can be found in the Security BCP. There are 
multiple ways access tokens can leak, including referrer headers, mix-up, open 
redirection, browser history, and all sorts of access token leakage at the 
resource server
> 
>Please have a look at 
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
> 
>
https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 
also has an extensive discussion of potential counter measures, including 
audience restricted access tokens and a conclusion to recommend sender 
constrained access tokens over other mechanisms.
> 
>> 
>> But if that's the case then there are much simpler defences than those 
proposed in the current draft:
>> 
>> 1. Get separate access tokens for each RS with correct audience and 
scopes. The consensus appears to be that this is hard to do in some cases, 
hence the draft.
> 
>How ma

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-22 Thread Richard Backman, Annabelle
> Yes of course. But this is the HMAC *tag* not the original key.
Sure. And if the client attenuates the macaroon, it is used as a key that the 
client proves possession of by presenting the chained HMAC. Clients doing DPoP 
aren’t proving possession of the “original key” (i.e., a key used to generate 
the access token) either.

> Well, you don’t have to return a key from the token endpoint for a start.
Yes, that’s what I meant by saying that it eliminates key negotiation. Though I 
suppose it’s more correct to say that it inlines it. The AS still provides a 
key, it just happens to be part of the access token.

Macaroons are an interesting pattern, but not because they’re not doing PoP. 
Proof of possession is pretty core to the whole idea of digital signatures and 
HMACs. What makes them interesting is the way they inline key distribution. 
Whether or not they’re applicable to DPoP depends, ultimately, on the use cases 
DPoP is targeting and the threats it is trying to mitigate.

–
Annabelle Richard Backman
AWS Identity


From: Neil Madden 
Date: Friday, November 22, 2019 at 3:09 PM
To: "Richard Backman, Annabelle" 
Cc: Brian Campbell , oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

On 22 Nov 2019, at 01:42, Richard Backman, Annabelle  
wrote:

Macaroons are built on proof of possession. In order to add a caveat to a 
macaroon, the sender has to have the HMAC of the macaroon without their caveat.

Yes of course. But this is the HMAC *tag* not the original key. They can’t 
change anything the AS originally signed.


The distinctive property of macaroons as I see it is that they eliminate the 
need for key negotiation with the bearer. How much value this has over the AS 
just returning a symmetric key alongside the access token in the token request, 
I’m not sure.

Well, you don’t have to return a key from the token endpoint for a start. The 
client doesn’t need to create and send any additional token. The whole thing 
works with existing standards and technologies and can be incrementally adopted 
as required. If RSes do token introspection already then they need zero changes 
to support this.


There are key distribution challenges with that if you are doing validation at 
the RS, but validation at the RS using either approach means you’ve lost 
protection against replay by the RS. This brings us back to a core question: 
what threats are in scope for DPoP, and in what contexts?

Agreed, but validation at the RS is premature optimisation in many cases. And 
if you do need protection against that the client can even append a 
confirmation key as a caveat and retrospectively upgrade a bearer token to a 
pop token. They can even do transfer of ownership by creating copies of the 
original token bound to other certificates/public keys.

Neil




–
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of Neil Madden 

Date: Friday, November 22, 2019 at 4:40 AM
To: Brian Campbell 
Cc: oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

At the end of my previous email I mentioned that you can achieve some of the 
same aims as DPoP without needing a PoP mechanism at all. This email is that 
follow-up.

OAuth is agnostic about the format of access tokens and many vendors support 
either random string database tokens or JWTs. But there are other choices for 
access token format, some of which have more interesting properties. In 
particular, Google proposed Macaroons a few years ago as a "better cookie" [1] 
and I think they systematically address many of these issues when used as an 
access token format.

For those who aren't familiar with them, Macaroons are a bit like a HS256 JWT. 
They have a location (a bit like the audience in a JWT) and an identifier (an 
arbitrary string) and then are signed with HMAC-SHA256 using a secret key. 
(There's no claims set or headers - they are very minimal). In this case the 
secret key would be owned by the AS and used to sign macaroon-based access 
tokens. Validating the token would be done via token introspection at the AS.

The clever bit is that anybody at all can append "caveats" to a macaroon at any 
time, but nobody can remove one once added. Caveats are restrictions on the use 
of a token - they only ever reduce the authority granted by the token, never 
expand it. The AS can validate the token and all the caveats with its secret 
key. So, for example, if an access token was a macaroon then the client could 
append a caveat to reduce the scope, or reduce the expiry time, or reduce the 
audience, and so on.

The really clever bit is that the client can keep a copy of the original token 
and create restricted versions to send to different resource servers. Because 
HMAC is very cheap, the client can even do this before each and every request. 
(This is what the original paper refers to as "contextual caveats"). This means 
that a c

Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-21 Thread Richard Backman, Annabelle
Macaroons are built on proof of possession. In order to add a caveat to a 
macaroon, the sender has to have the HMAC of the macaroon without their caveat. 
The distinctive property of macaroons as I see it is that they eliminate the 
need for key negotiation with the bearer. How much value this has over the AS 
just returning a symmetric key alongside the access token in the token request, 
I’m not sure. There are key distribution challenges with that if you are doing 
validation at the RS, but validation at the RS using either approach means 
you’ve lost protection against replay by the RS. This brings us back to a core 
question: what threats are in scope for DPoP, and in what contexts?

–
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of Neil Madden 

Date: Friday, November 22, 2019 at 4:40 AM
To: Brian Campbell 
Cc: oauth 
Subject: Re: [OAUTH-WG] New Version Notification for 
draft-fett-oauth-dpop-03.txt

At the end of my previous email I mentioned that you can achieve some of the 
same aims as DPoP without needing a PoP mechanism at all. This email is that 
follow-up.

OAuth is agnostic about the format of access tokens and many vendors support 
either random string database tokens or JWTs. But there are other choices for 
access token format, some of which have more interesting properties. In 
particular, Google proposed Macaroons a few years ago as a "better cookie" [1] 
and I think they systematically address many of these issues when used as an 
access token format.

For those who aren't familiar with them, Macaroons are a bit like a HS256 JWT. 
They have a location (a bit like the audience in a JWT) and an identifier (an 
arbitrary string) and then are signed with HMAC-SHA256 using a secret key. 
(There's no claims set or headers - they are very minimal). In this case the 
secret key would be owned by the AS and used to sign macaroon-based access 
tokens. Validating the token would be done via token introspection at the AS.

The clever bit is that anybody at all can append "caveats" to a macaroon at any 
time, but nobody can remove one once added. Caveats are restrictions on the use 
of a token - they only ever reduce the authority granted by the token, never 
expand it. The AS can validate the token and all the caveats with its secret 
key. So, for example, if an access token was a macaroon then the client could 
append a caveat to reduce the scope, or reduce the expiry time, or reduce the 
audience, and so on.

The really clever bit is that the client can keep a copy of the original token 
and create restricted versions to send to different resource servers. Because 
HMAC is very cheap, the client can even do this before each and every request. 
(This is what the original paper refers to as "contextual caveats"). This means 
that a client can be issued a single access token from the AS with broad scope 
and applicable to many different RS and can then locally create restricted 
copies for each individual RS.

The relevance to DPoP is that the client could even append caveats equivalent 
to "htm" and "htu" just before sending the access token to the RS, and maybe 
add an "exp" for 5 seconds in the future, reduce the scope, and so on:

  newAccessToken = accessToken.withCaveats({
    exp: now + 5seconds,
    scope: "a b",
    htm: "POST",
    htu: "https://rs.example.com/resource"
  });
  httpClient.post(data, Authorization: Bearer newAccessToken);

Note that the client doesn't need anything extra here - no keys, extra tokens 
etc. They just have the access token and a macaroon library.
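The append-only property comes from HMAC chaining: the macaroon's tag starts as HMAC(root_key, identifier), and each appended caveat folds into a new tag HMAC(previous_tag, caveat). A minimal stdlib sketch of that idea (illustrative names, not a real macaroon implementation - a deployment would use an established macaroon library):

```python
import hashlib
import hmac


def _chain(tag: bytes, data: bytes) -> bytes:
    return hmac.new(tag, data, hashlib.sha256).digest()


def mint(key: bytes, identifier: bytes):
    # AS mints the macaroon: initial tag = HMAC(root_key, identifier)
    return [identifier], _chain(key, identifier)


def add_caveat(caveats, tag, caveat: bytes):
    # Anyone holding the macaroon can append a caveat (the tag chains
    # forward), but nobody can strip one without the root key - so the
    # token's authority can only ever shrink.
    return caveats + [caveat], _chain(tag, caveat)


def verify(key: bytes, caveats, tag) -> bool:
    # Only the AS can recompute the chain from its root key.
    t = _chain(key, caveats[0])
    for c in caveats[1:]:
        t = _chain(t, c)
    return hmac.compare_digest(t, tag)


caveats, tag = mint(b"as-root-key", b"token-id-123")
caveats, tag = add_caveat(caveats, tag, b"htm = POST")
caveats, tag = add_caveat(caveats, tag, b"scope = a b")
assert verify(b"as-root-key", caveats, tag)           # AS accepts
assert not verify(b"as-root-key", caveats[:-1], tag)  # a caveat can't be dropped
```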

The RS will see an opaque access token and send it to the AS for introspection. 
The AS, however, will see and validate the new caveats on the token and return 
an introspection response with the restricted scope and expiry time, along with 
the htm/htu restrictions that the RS can then enforce.

For clients this is transparent until they want to take advantage of it and 
then they can just use an off-the-shelf macaroon library. For the RS it is also 
completely transparent. All the (relatively small) complexity lives in the AS, 
which just has to be able to produce and verify macaroons and take caveats into 
account when performing token introspection - e.g. the returned scope should be 
the intersection of the original token scope and any scope caveats. But I don't 
think this would be too much effort.
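At introspection time the AS, after validating the HMAC chain, narrows the response by the caveats - e.g. the returned scope is the intersection of the token's original scope and every scope caveat. A sketch (the caveat string format and response fields are illustrative, not from any spec):

```python
def introspect(original_scope: set, caveats: list) -> dict:
    # Narrow the token's authority by every scope caveat the client
    # appended; caveats only ever restrict, never expand.
    scope = set(original_scope)
    for c in caveats:
        if c.startswith("scope "):
            scope &= set(c.split(" ", 1)[1].split())
    return {"active": bool(scope), "scope": " ".join(sorted(scope))}


# Token minted with broad scope; client appended "scope a b" pre-request.
print(introspect({"a", "b", "c"}, ["scope a b"]))
# → {'active': True, 'scope': 'a b'}
```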

[1]: https://ai.google/research/pubs/pub41892

-- Neil


On 21 Nov 2019, at 06:23, Brian Campbell 
mailto:bcampb...@pingidentity.com>> wrote:

Yeah, suggestions and/or an MTI about algorithm support would probably be 
worthwhile. Perhaps also some defined means of signaling when an unsupported 
algorithm is used along with any other reason a DPoP is invalid or rejected.

There are a lot of tradeoffs in what claims are required and what protections 
are provided etc. The aim of what was chosen was to do just enough to provide 
some reasonable protections against reuse or use in a different context while 
being simple to implement.

Re: [OAUTH-WG] Issues with Android and UC Browser

2019-01-25 Thread Richard Backman, Annabelle
(Content warning for advice given without direct experience with the specific 
problem 🙂)

Are these deep links using custom schemes or "Android App Links"? If the 
former, try switching to the latter?

-- 
Annabelle Richard Backman
AWS Identity
 

On 1/25/19, 8:43 AM, "OAuth on behalf of George Fletcher" 
 wrote:

Hi,

We are seeing issues with the authorization code flow on an Android 
device where the system browser is the UC browser 
(https://www.ucweb.com/). Basically, the deep link back to the 
application to deliver the code value fails.

Just curious if others have experienced this behavior and if there are 
any known work-arounds.

Thanks,
George

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth




Re: [OAUTH-WG] Shepherd write-up for draft-ietf-oauth-resource-indicators-01

2019-01-18 Thread Richard Backman, Annabelle
Doesn’t the “scope” parameter already provide a means of specifying a logical 
identifier?

--
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of Vittorio Bertocci 

Date: Friday, January 18, 2019 at 5:47 AM
To: John Bradley 
Cc: IETF oauth WG 
Subject: Re: [OAUTH-WG] Shepherd write-up for 
draft-ietf-oauth-resource-indicators-01

Thanks John for the background.
I agree that from the client validation PoV, having an identifier corresponding 
to a location makes things more solid.
That said: the use of logical identifiers is widespread, as it has significant 
practical advantages (think of services that assign generated hosting URLs only 
at deployment time, or services that are somehow grouped under the same logical 
audience across regions/environment/deployments). People won't stop using 
logical identifiers, because they often have no alternative (generating new 
audiences on the fly at the AS every time you do a deployment and get assigned 
a new URL can be unfeasible). Leaving a widely used approach as an exercise for 
the reader seems a disservice to the community, given that this might lead to 
vendors (for example Microsoft and Auth0) keeping their own proprietary 
parameters, or developers misusing the ones in place; would make it hard for 
SDK developers to provide libraries that work out of the box with different 
ASes; and so on.
Would it be feasible to add such a parameter directly in this spec? That would 
eliminate the interop issues, and also give us a chance to fully warn people 
about the security shortcomings of choosing that approach.



On Thu, Jan 17, 2019 at 4:32 PM John Bradley 
mailto:ve7...@ve7jtb.com>> wrote:

We have discussed this.

Audiences can certainly be logical identifiers.

This however is a more specific location.  The AS is free to map the location 
into some abstract audience in the AT.

From a security point of view once the client starts asking for logical 
resources it can be tricked into asking for the wrong one as a bad resource can 
always lie about what logical resource it is.

If we were to change it, how a client would validate it becomes challenging, if 
not impossible.

The AS is free to do whatever mapping of locations to identifiers it needs for 
access tokens.

Some implementations may want to keep additional parameters like logical 
audience, but that should be separate from resource.

John B.
On 1/17/2019 9:56 AM, Rifaat Shekh-Yusef wrote:
Hi Vittorio,

The text you quoted is copied from the abstract of the draft itself.


Authors,

Should the draft be updated to cover the logical identifier case?

Regards,
 Rifaat


On Thu, Jan 17, 2019 at 8:19 AM Vittorio Bertocci 
mailto:vitto...@auth0.com>> wrote:
Hi Rifaat,
one detail. The tech summary says


An extension to the OAuth 2.0 Authorization Framework defining request

parameters that enable a client to explicitly signal to an authorization server

about the location of the protected resource(s) to which it is requesting

access.
But at least in the Microsoft implementation, the resource identifier doesn't 
have to be a network addressable URL (and if it is, it doesn't strictly need to 
match the actual resource location). It can be a logical identifier, though using 
the actual resource location there has benefits (domain ownership check, 
prevention of token forwarding, etc.).
Same for Auth0, the audience parameter is a logical identifier rather than a 
location.



On Wed, Jan 16, 2019 at 6:32 PM Rifaat Shekh-Yusef 
mailto:rifaat.i...@gmail.com>> wrote:
All,

The following is the first shepherd write-up for the 
draft-ietf-oauth-resource-indicators-01 document.
https://datatracker.ietf.org/doc/draft-ietf-oauth-resource-indicators/shepherdwriteup/

Please, take a look and let me know if I missed anything.

Regards,
 Rifaat



Re: [OAUTH-WG] Call for Adoption: OAuth 2.0 for Browser-Based Apps

2018-12-18 Thread Richard Backman, Annabelle
I am in favor of adopting this as a working group document. There is a clear 
need for updated guidance for these clients.

-- 
Annabelle Richard Backman
AWS Identity
 

On 12/17/18, 1:02 PM, "OAuth on behalf of Hannes Tschofenig" 
 wrote:

Hi all,

We would like to get a confirmation on the mailing list for the adoption of 
https://tools.ietf.org/html/draft-parecki-oauth-browser-based-apps-02 as a 
starting point for a BCP document about *OAuth 2.0 for Browser-Based Apps*.

Please, let us know if you support or object to the adoption of this 
document by Jan. 7th 2019.

Ciao
Hannes & Rifaat
IMPORTANT NOTICE: The contents of this email and any attachments are 
confidential and may also be privileged. If you are not the intended recipient, 
please notify the sender immediately and do not disclose the contents to any 
other person, use it for any purpose, or store or copy the information in any 
medium. Thank you.



Re: [OAUTH-WG] Binding Access Tokens is not enough!

2018-11-29 Thread Richard Backman, Annabelle
Sure. But given that TLS token binding is often put forth as a mitigation 
against token theft and replay, it's worth noting that on its own it does not 
address this risk when the replay is coming from the same browser, and the RS 
needs to support CORS.

-- 
Annabelle Richard Backman
AWS Identity
 

On 11/28/18, 10:51 PM, "Neil Madden"  wrote:

My intent wasn’t to suggest that tokens *must* be origin constrained, just 
to point out if you are using TLS-based sender constrained tokens then you may 
also want to consider that aspect. 

In some cases you may be comfortable that protections against in-browser 
token theft are adequate without any sender/origin constraint. Sometimes a 
plain old bearer token is fine. 

— Neil

> On 29 Nov 2018, at 01:22, Richard Backman, Annabelle 
 wrote:
> 
> In some cases, the resource server will need to support CORS in order to 
support SPA clients that are on different origins. In this case, the resource 
server must optimistically allow the CORS request to be made, then validate 
that the request origin is appropriate for the access token provided in the 
request. To my knowledge, I haven't seen "origin-constrained access tokens" 
raised as a requirement anywhere, but here we are.
> 
> -- 
> Annabelle Richard Backman
> AWS Identity
> 
> 
> On 11/26/18, 2:34 AM, "OAuth on behalf of Neil Madden" 
 wrote:
> 
>I would perhaps clarify this a little, as it’s not really CORS that is 
doing the work here, but rather the same-origin policy (SOP) — which is 
actually *relaxed* by CORS. 
> 
>It is the fact that there is a non-safe header (Authorization) on the 
request that triggers the SOP protections - and it would do so even in an old 
pre-CORS browser. Otherwise CORS wouldn’t even be involved as the request would 
be considered “safe”. For instance, if your (RS) API just requires an 
x-www-form-urlencoded POST body with the access token as one of the fields then 
I can always just create a form in a hidden iframe and submit it cross-origin 
with no problems, CORS or not. Adding the Authorization header prevents that - 
you can’t add a custom header to a form submission, and Ajax would not be 
allowed to make that request.
> 
>What CORS changes is that things that would previously be blocked 
outright now produce a CORS preflight to allow the destination origin to 
override the SOP and allow a request to go ahead anyway.
> 
>— Neil
> 
>> On 26 Nov 2018, at 08:46, Daniel Fett  wrote:
>> 
>> Yes. Token Binding enforces that only the right browser can send the 
token; in this browser, CORS enforces that only the correct origin can send the 
token.
>> 
>>> On 25.11.18 at 19:46, Torsten Lodderstedt wrote:
>>> Does this mean the RS effectively relies on the user agent to enforce 
the sender constraint (via CORS policy)?
>>> 
>>> 
>>>> On 23.11.2018 at 14:54, Neil Madden wrote:
>>>> 
>>>> Thanks for doing this Daniel, I think the proposed text is good.
>>>> 
>>>> — Neil
>>>> 
>>>> 
>>>>> On 22 Nov 2018, at 14:42, Daniel Fett 
>>>>> wrote:
>>>>> 
>>>>> Hi all,
>>>>> 
>>>>> I would like to discuss a text proposal for the security BCP.
>>>>> 
>>>>> Background:
>>>>> 
>>>>> Yesterday, Neil pointed out the following problem with binding access 
tokens using mTLS or token binding in SPAs:
>>>>> 
>>>>> "I am talking about scripts from places like ad servers that are 
usually included via an iframe to enforce the SOP and sandbox them from other 
scripts. If they get access to an access token - e.g. via document.referrer or 
a redirect or some other leak, then they still act within the same TLS context 
as the legitimate client."
>>>>> 
>>>>> The problem is that a browser's TLS stack will attach the proof of 
possession no matter which origin started a request.
>>>>> 
>>>>> (This seems like a real downside of token binding - why does it not 
have the "same site" option that cookies nowadays have?)
>>>>> 
>>>>> I prepared the following addition to the security BCP and would like 
to hear your opinions:
>>>>> 
>>>>> "It is important to note that constraining the sender of a token to a 
web browser (using a TLS-based method) does not constrain the origin of the 
script that uses the token (lack of origin binding).

Re: [OAUTH-WG] Dynamic Client Registration with Native Apps

2018-11-29 Thread Richard Backman, Annabelle
> Claimed "https" scheme URIs (RFC 8252, Sec 7.2) can be used to provide some 
> identity guarantees…

Yes, provided that the AS can verify that the claimed URI is a valid URI for 
the identity being asserted by the client. And this identity guarantee would 
apply to a public native app client just as well as to one that has established a 
client secret via dynamic client registration.

Section 2.3 of RFC6749<https://tools.ietf.org/html/rfc6749#section-2.3> is 
relevant here:

The authorization server MAY establish a client authentication method
with public clients.  However, the authorization server MUST NOT rely
on public client authentication for the purpose of identifying the
client.

--
Annabelle Richard Backman
AWS Identity


From: OAuth  on behalf of William Denniss 

Date: Thursday, November 29, 2018 at 10:49 AM
To: Christian Mainka 
Cc: oauth 
Subject: Re: [OAUTH-WG] Dynamic Client Registration with Native Apps


On Thu, Nov 29, 2018 at 6:03 AM Christian Mainka 
mailto:40rub...@dmarc.ietf.org>> 
wrote:
Hi,

thanks for pointing this out!
This was exactly what confused us during reading - the main threat we see and 
which is not addressed is related to the app impersonation attack.
Even PKCE does not help against the app impersonation attack.

Claimed "https" scheme URIs (RFC 8252, Sec 7.2) can be used to provide some 
identity guarantees (security considerations in Sec 8.6), as the OS will only 
open apps that can verify domain ownership to process the redirect. This is 
what I would recommend as a starting point if you want assurances over the 
app's identity.

A spoofing app can still use a web-view to intercept the response that way, but 
in that case they'd also have full access to the session cookie (due to the use 
of webview for the sign-in), which is potentially a more valuable token (i.e. 
you have bigger issues). It does effectively prevent tokens issued from 
the browser SSO session from being intercepted by the wrong app.



So a "Native App + Dynamic Client Registration" can be seen at a different 
"confidentiality level" than a "public client", because every native App can 
dynamically register itself on the IdP.
The IdP cannot distinguish, for example, an honest native client from a 
malicious client starting an app impersonation attack.

We agree that, e.g., a leaked code cannot be redeemed unless you have the 
respective client_id/client_secret.

But... we asked ourselfs, in which cases does a code leak?

1) In the front-channel. In this case, it is true that no client credentials 
leak and an attacker cannot redeem the code.

2) In the back-channel. But if this channel is insecure, you directly get 
client credentials (unless client_secret_jwt is used as pointed out by George).

So, Dynamic Client Registration only helps if the code leaks alone (as in 1.), 
or if it leaks on different levels (e.g. logfiles).

On the opposite site, if Dynamic Registration is available, an attacker can 
very easily do an app impersonation attack by registering on the IdP. To be 
clear, it is not "impersonation" as in the "one secret per software" scenario, 
because a different client_id and client_secret are used, but to the best of my 
knowledge, the IdP cannot distinguish between an honest app and an app 
impersonation client that has simply registered.

In addition, if the IdP supports the dynamic client registration:
How can the IdP distinguish between confidential and public/native clients?
With respect to the consent page, which must be shown every time for native 
apps, this is an important issue, which should be addressed properly.

Best Regards
Vladi/Christian

Am 29.11.18 um 00:38 schrieb Richard Backman, Annabelle:

It should be noted that “traditional” confidential clients with registered 
return URLs and server-side secrets may provide a higher degree of confidence 
in the true identity of the client that doesn’t carry over to confidential 
native app clients. A native app instance’s registration call is necessarily 
unauthenticated (for the same reasons that statically registered native app 
clients are public clients), so the Client Impersonation concerns described in 
section 8.6 of 
RFC8252<https://tools.ietf.org/html/rfc8252#section-8.6>
 still apply.

--
Annabelle Richard Backman
AWS Identity


From: OAuth <mailto:oauth-boun...@ietf.org> on behalf of Filip Skokan <mailto:panva...@gmail.com>
Date: Wednesday, November 28, 2018 at 9:11 AM
To: George Fletcher <mailto:gffle...@aol.com>
Cc: oauth <mailto:oauth@ietf.org>
Subject: Re: [OAUTH-WG] Dynamic Client Registration with Native Apps

Apologies, I missed the issued in "issued a shared secret", just reading 
"shared secret" alone is the exact opposite of a per-instance secret. The rest 
is clear and as you say it brings the benefit.

Re: [OAUTH-WG] Binding Access Tokens is not enough!

2018-11-28 Thread Richard Backman, Annabelle
In some cases, the resource server will need to support CORS in order to 
support SPA clients that are on different origins. In this case, the resource 
server must optimistically allow the CORS request to be made, then validate 
that the request origin is appropriate for the access token provided in the 
request. To my knowledge, I haven't seen "origin-constrained access tokens" 
raised as a requirement anywhere, but here we are.

-- 
Annabelle Richard Backman
AWS Identity
 

On 11/26/18, 2:34 AM, "OAuth on behalf of Neil Madden"  wrote:

I would perhaps clarify this a little, as it’s not really CORS that is 
doing the work here, but rather the same-origin policy (SOP) — which is 
actually *relaxed* by CORS. 

It is the fact that there is a non-safe header (Authorization) on the 
request that triggers the SOP protections - and it would do so even in an old 
pre-CORS browser. Otherwise CORS wouldn’t even be involved as the request would 
be considered “safe”. For instance, if your (RS) API just requires an 
x-www-form-urlencoded POST body with the access token as one of the fields then 
I can always just create a form in a hidden iframe and submit it cross-origin 
with no problems, CORS or not. Adding the Authorization header prevents that - 
you can’t add a custom header to a form submission, and Ajax would not be 
allowed to make that request.

What CORS changes is that things that would previously be blocked outright 
now produce a CORS preflight to allow the destination origin to override the 
SOP and allow a request to go ahead anyway.

— Neil

> On 26 Nov 2018, at 08:46, Daniel Fett  wrote:
> 
> Yes. Token Binding enforces that only the right browser can send the 
token; in this browser, CORS enforces that only the correct origin can send the 
token.
> 
> On 25.11.18 at 19:46, Torsten Lodderstedt wrote:
>> Does this mean the RS effectively relies on the user agent to enforce 
the sender constraint (via CORS policy)?
>> 
>> 
>>> On 23.11.2018 at 14:54, Neil Madden wrote:
>>> 
>>> Thanks for doing this Daniel, I think the proposed text is good.
>>> 
>>> — Neil
>>> 
>>> 
 On 22 Nov 2018, at 14:42, Daniel Fett 
  wrote:
 
 Hi all,
 
 I would like to discuss a text proposal for the security BCP.
 
 Background:
 
 Yesterday, Neil pointed out the following problem with binding access 
tokens using mTLS or token binding in SPAs:
 
 "I am talking about scripts from places like ad servers that are 
usually included via an iframe to enforce the SOP and sandbox them from other 
scripts. If they get access to an access token - e.g. via document.referrer or 
a redirect or some other leak, then they still act within the same TLS context 
as the legitimate client."
 
 The problem is that a browser's TLS stack will attach the proof of 
possession no matter which origin started a request.
 
 (This seems like a real downside of token binding - why does it not 
have the "same site" option that cookies nowadays have?)
 
 I prepared the following addition to the security BCP and would like 
to hear your opinions:
 
 "It is important to note that constraining the sender of a token to a 
web browser (using a TLS-based method) does not constrain the origin of the 
script that uses the token (lack of origin binding). In other words, if access 
tokens are used in a browser (as in a single-page application, SPA) and the 
access tokens are sender-constrained using a TLS-based method, it must be 
ensured that origins other than the origin of the SPA cannot misuse the 
TLS-based sender authentication.
 
 The problem can be illustrated using an SPA on origin A that uses an 
access token AT that is bound to the TLS connection between the browser and the 
resource server R. If AT leaks to an attacker E, and there is, for example, an 
iframe from E's origin loaded in the web browser, that iframe can send a 
request to origin R using the access token AT. This request will contain the 
proof-of-posession of the (mTLS or token binding) key. The resource server R 
therefore cannot distinguish if a request containing a valid access token 
originates from origin A or origin E.
 
 If the resource server only accepts the access token in an 
Authorization header, then Cross-Origin Resource Sharing (CORS) will protect 
against this attack by default. If the resource server accepts the access 
tokens in other ways (e.g., as a URL parameter), or if the CORS policy of the 
resource server permits requests by origin E, then the TLS-based token binding 
only provides protection if the browser is offline."
 
 
 The "summary" above this text reads as follows:
 
 "If access tokens are sender-constrained to a web browser, 

Re: [OAUTH-WG] I-D Action: draft-ietf-oauth-security-topics-10.txt

2018-11-28 Thread Richard Backman, Annabelle
I think we need to be very careful about prescribing behavior around refresh 
token lifetime, and setting expectations for what degree of consistency is 
attainable. Considering the terms "session", "authenticated session", 
"offline", "expiration", "termination", and "log out" can mean different things 
to different services (and those tiny nuances matter!) I am against text that 
makes binding refresh tokens to the authenticated session a "SHOULD." Rather, 
we should recommend that the AS provide the end user with a mechanism by which 
they may terminate refresh tokens. We can also describe session-bound refresh 
tokens as one such method that may or may not be appropriate depending on the 
use case.

To back up my claim that consistency is Hard, here are a few scenarios to 
consider:

1)
A mobile app loads the authorization request in an in-app browser tab that has 
an app-scoped cookie jar and is never presented by the app again after the flow 
is complete. How does the user sign out of that session? Should the AS kill the 
session due to inactivity? Won't that confuse the user when suddenly the 
integration between app and service stops working for no discernable reason? If 
this scenario sounds unlikely, it's not. This is the behavior of every app that 
integrated with the Safari in-app browser tab in iOS 9 and never updated to the 
authentication-oriented replacements introduced later, as well as that of every 
app that opens the authorization request in a web view (ugh).

2)
A mobile app loads the authorization request in the external browser, but the 
user always uses the AS's app on their device instead of visiting their website 
(e.g., using the Gmail app instead of going to gmail.com in the browser), so 
their browser session quickly times out due to inactivity. Again, won't that 
confuse the user when the client mobile app stops working?

3)
A set-top box uses the device flow, and the tokens it receives are bound to the 
user's session in the web browser on their laptop, where they completed the 
device flow. The user buys a new laptop, their session on their old laptop 
times out due to inactivity, and their set-top box can't stream videos anymore. 
¯\_(ツ)_/¯

-- 
Annabelle Richard Backman
AWS Identity
 

On 11/28/18, 9:20 AM, "OAuth on behalf of Torsten Lodderstedt" 
 wrote:



> On 28.11.2018 at 16:53, George Fletcher wrote:
> 
> On 11/28/18 5:11 AM, Torsten Lodderstedt wrote:
>> Hi George,
>> 
>>> On 20.11.2018 at 23:38, George Fletcher wrote:
>>> 
>>> Thanks for the additional section on refresh_tokens. Some additional 
recommendations...
>>> 
>>> 1. By default refresh_tokens are bound to the user's authenticated 
session. When the authenticated session expires or is terminated (whether by 
the user or for some other reason) the refresh_tokenis implicitly revoked.
>> SHOULD or MUST? I would suggest to go with a SHOULD.
> I would say that the AS SHOULD bind the refresh_token to the user's 
authentication. However, I'd lean more to MUST for session bound refresh_tokens 
being revoked when the session is terminated.

wfm 

Anyone on the list wanting to object? 

>> 
>>> 2. Clients that need to obtain a refresh_token that exists beyond the 
lifetime of the user's authentication session MUST indicate this need by 
requesting the "offline_access" scope 
(https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess). This 
provides for a user consent event making it clear to the user that the client 
is requesting access even when the user's authentication session expires. This 
then becomes the default for mobile apps as the refresh_token should not be 
tied to the web session used to authorize the app.
>> Sounds reasonable, just the scope "offline_access" is OIDC-specific. Is 
it time to move it down the stack to OAuth?
> It may be if we want more consistency in the implementation of 
refresh_token policy across authorization servers.

Who has an opinion on that topic?

>> 
>>> 3. The AS MAY consider putting an upper bound on the lifetime of a 
refresh_token (e.g. 1 year). There is no real need to issue a refresh_token 
that is good indefinitely.
>> I thought I had covered that in the last section. It’s now:
>> 
>> Refresh tokens SHOULD expire if the client has been inactive for some 
time,
>>  i.e. the refresh token has not been used to obtain fresh access 
tokens for
>>  some time. The expiration time is at the discretion of the 
authorization server.
>>  It might be a global value or determined based on the client 
policy or
>>  the grant associated with the refresh token (and its 
sensitivity).
> This is slightly different but sufficient so +1 for the text :)

Ok, thanks. 

>> 
>> Proposals are welcome!
>> 
>> kind regards,
>> Torsten.
>> 
>> 
>>> This is in 

Re: [OAUTH-WG] Call for Adoption: OAuth 2.0 Incremental Authorization

2018-04-18 Thread Richard Backman, Annabelle
I support adoption of OAuth 2.0 Incremental Authorization as a WG document.

--
Annabelle Richard Backman
Amazon – Identity Services

From: OAuth  on behalf of Brian Campbell 

Date: Wednesday, April 18, 2018 at 8:23 AM
To: Rifaat Shekh-Yusef 
Cc: oauth 
Subject: Re: [OAUTH-WG] Call for Adoption: OAuth 2.0 Incremental Authorization

I support adoption of OAuth 2.0 Incremental Authorization as a WG document.

On Mon, Apr 16, 2018 at 8:47 AM, Rifaat Shekh-Yusef 
> wrote:
All,

We would like to get a confirmation on the mailing list for the adoption of the 
OAuth 2.0 Incremental Authorization as a WG document
https://datatracker.ietf.org/doc/draft-wdenniss-oauth-incremental-auth/

Please, let us know if you support or object to the adoption of this document.

Regards,
 Rifaat & Hannes




CONFIDENTIALITY NOTICE: This email may contain confidential and privileged 
material for the sole use of the intended recipient(s). Any review, use, 
distribution or disclosure by others is strictly prohibited. If you have 
received this communication in error, please notify the sender immediately by 
e-mail and delete the message and any file attachments from your computer. 
Thank you.


Re: [OAUTH-WG] Call for Adoption: Reciprocal OAuth

2018-04-17 Thread Richard Backman, Annabelle
I support the adoption of *Reciprocal OAuth* as a WG document.

-- 
Annabelle Richard Backman
Amazon – Identity Services
 
On 4/16/18, 8:20 AM, "OAuth on behalf of Hannes Tschofenig" 
 wrote:

Hi all,

we had gotten positive feedback from the group on Reciprocal OAuth at the 
virtual interim meeting earlier this year and also at the London IETF meeting.

We would therefore like to get a final confirmation on the mailing list for 
the adoption of the *Reciprocal OAuth* as a WG document
https://tools.ietf.org/html/draft-hardt-oauth-mutual-02

Please, let us know if you support or object to the adoption of this 
document by April 25th.

Ciao
Hannes & Rifaat


Re: [OAUTH-WG] What Does Logout Mean?

2018-03-30 Thread Richard Backman, Annabelle
It sounds like you're asking the OP to provide client-side session management 
as a service. There may be value in standardizing that, but I think it goes 
beyond what Backchannel Logout is intended to do.

--
Annabelle Richard Backman
Amazon – Identity Services
 
On 3/30/18, 10:42 AM, "Bill Burke" <bbu...@redhat.com> wrote:

On Fri, Mar 30, 2018 at 12:57 PM, Richard Backman, Annabelle
<richa...@amazon.com> wrote:
>
> FWIW, our OP implementation allows RPs to register their node specific
> logout endpoints at boot.  This request is authenticated via client
> authentication.  We also extended code to token request to transmit the
> local session id.  The OP stores this information.  Backchannel logout 
POSTS
> to each and every registered node and transmits a JWS signed by the OP
> containing the local session ids to invalidate.  That's been enough to 
cover
> all the weirdness out there so far.
>
> [richanna]
>
> What does “at boot” mean in the context of OpenID Connect? Do you mean 
that
> for every logout, the OP makes a Backchannel Logout request to each of the
> client’s node-specific logout endpoints?

Just in case: this is all for backchannel logouts, which are out of
band from the browser.

Node boots up and registers with the Auth Server its logout endpoint.

POST /authserver/node_registration

client_id=myclient&
client_secret=geheim&
node=https://node.internal:8443/app/oidc/_logout

As I mentioned earlier, the node making the code-to-token request will
also pass its local session id so it can be associated with the Auth
Server's SID.  When an admin initiates a forced logout, a backchannel
logout request is sent to each client's logout endpoint.  If the
client has nodes that have registered, this request is duplicated to
each node.
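The flow Bill describes can be sketched roughly as follows. This is a hypothetical illustration, not code from any implementation; the names (`AuthServer`, `register_node`, `force_logout`) and the plain-dict payload standing in for the signed JWS are all assumptions made for the sketch.

```python
# Hypothetical sketch of the OP-side fan-out described above: nodes register
# their logout endpoints at boot, the code-to-token request carries the node's
# local session id, and an admin-initiated logout is duplicated to every
# registered node for that client.
from collections import defaultdict

class AuthServer:
    def __init__(self):
        # client_id -> set of registered node logout endpoints
        self.nodes = defaultdict(set)
        # OP session id -> {node endpoint -> local session id}
        self.sessions = defaultdict(dict)

    def register_node(self, client_id, node_endpoint):
        """Called at node boot; in the real system this request is
        authenticated via client credentials."""
        self.nodes[client_id].add(node_endpoint)

    def record_local_session(self, op_sid, node_endpoint, local_sid):
        """Called during the code-to-token request, which carries the
        node's local session id."""
        self.sessions[op_sid][node_endpoint] = local_sid

    def force_logout(self, client_id, op_sid, post=None):
        """Duplicate the backchannel logout to each registered node,
        telling each one which local session to invalidate."""
        sent = []
        for endpoint in self.nodes[client_id]:
            payload = {
                "sid": op_sid,
                "local_sid": self.sessions[op_sid].get(endpoint),
            }
            if post:
                post(endpoint, payload)  # real impl: JWS signed by the OP
            sent.append((endpoint, payload))
        return sent
```

A node that never registered simply never appears in the fan-out, matching the point that each extension is optional.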





> If that’s the case, why can’t the
> client make those calls themselves, from whichever host happens to receive
> the Backchannel Logout request?
>

You're assuming that each node has knowledge of the cluster topology,
which isn't necessarily true.  Each additional proprietary extension
we've made is optional.  Nodes can optionally register themselves.
Nodes can optionally send local session ids with the code-to-token request.


>
>
> Since the client only cares about node-local state, they should be able to
> maintain the mapping between OP session IDs and local session IDs on their
> side.
>

Consider a cluster of load-balanced web applications that don't
have session replication and don't have knowledge of the cluster topology.
The only way for the Auth Server to perform backchannel logout is to
send the same backchannel logout request to each and every node.

There's also the case where the nodes do support session replication,
but don't have a way to get at topology or a way to store the
association between the SID and application session id.  In this case
you don't need node registration, but you do need a way to associate
the SID with the local session id.
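The second case, where the client can keep state but needs a way to associate the OP's SID with its own session id, might look something like this on the RP side. This is a hypothetical sketch under the assumption that session state is reachable cluster-wide (e.g. via session replication or a shared store); the class and method names are illustrative.

```python
# Sketch of the RP-side association: store the mapping from the OP's sid
# claim to the local session id at code-to-token time, then resolve and
# invalidate the local session when a backchannel logout token arrives.
class RpSessionMap:
    def __init__(self):
        self._by_sid = {}     # OP sid -> local session id
        self._active = set()  # currently valid local sessions

    def on_token_response(self, id_token_claims, local_sid):
        # Associate the OP session with ours when the ID token is received.
        op_sid = id_token_claims["sid"]
        self._by_sid[op_sid] = local_sid
        self._active.add(local_sid)

    def on_backchannel_logout(self, logout_token_claims):
        # Look up and invalidate the local session named by the logout token.
        local_sid = self._by_sid.pop(logout_token_claims["sid"], None)
        self._active.discard(local_sid)
        return local_sid

    def is_active(self, local_sid):
        return local_sid in self._active
```

Note this relies on the `sid` claim being present, which (as noted elsewhere in the thread) is OPTIONAL in the Back-Channel Logout spec.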

As an IdP vendor, you have to support all these types of clients.
Telling developers that they are just going to have to manage this
themselves is not really an option if you want adoption.

Bill


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] What Does Logout Mean?

2018-03-28 Thread Richard Backman, Annabelle
That makes somewhat more sense to me if we’re talking about applications with 
sticky sessions. Adding a session-specific logout URI introduces security 
concerns (e.g. how does the OP validate the URI) and only works if the RP can 
provide URIs that target individual hosts in their fleet. The “is this SID 
valid?” endpoint solution that David described doesn’t scale and depends on SID 
(which is OPTIONAL). Both shift the burden of state management onto the OP, 
which may not be in any better position to handle it.

This seems like something that needs to be addressed in the client 
implementations rather than in the specification. Especially when we consider 
that there are implementation-specific questions lurking in the edge cases. 
(e.g. what happens when a user comes in with valid cookies, but no server-side 
session state?)

--
Annabelle Richard Backman
Amazon – Identity Services

From: Bill Burke <bbu...@redhat.com>
Date: Wednesday, March 28, 2018 at 12:10 PM
To: "Richard Backman, Annabelle" <richa...@amazon.com>
Cc: Mike Jones <michael.jo...@microsoft.com>, Roberto Carbone <carb...@fbk.eu>, 
"oauth@ietf.org" <oauth@ietf.org>, Nat Sakimura <n...@sakimura.org>
Subject: Re: [OAUTH-WG] What Does Logout Mean?



On Wed, Mar 28, 2018 at 1:40 PM, Richard Backman, Annabelle 
<richa...@amazon.com<mailto:richa...@amazon.com>> wrote:

I'm reminded of this session from IIW 21 
<http://iiw.idcommons.net/What_Does_%E2%80%9CLogOUT%E2%80%99_mean%3F>. ☺ I 
look forward to reading the document distilling the various competing use cases 
and requirements into some semblance of sanity.



> If the framework has no way of invalidating a session across the cluster…



Is this a common deficiency in application frameworks? It seems to me that much 
of the value of a server-side session record is lost if its state isn’t 
synchronized across the fleet.


"Modern" apps are REST-based with JavaScript frontends, but there's still a ton 
of "old school" developers out there.

I was involved with developing an application server (JBoss) for over a 
decade. There were many app developers that didn't want to store app session 
information in a database (as David says) or deal with the headaches of session 
replication, so they just set up their load balancer to do session affinity 
(sticky sessions).  That way the login session was always local.  If the OIDC 
logout spec allowed the client to register a logout callback tied to the token's 
session (maybe during the code-to-token request), that might be a simple way to 
solve many of these issues too.
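That suggestion could be sketched as two extra form parameters on the standard code-to-token request. The parameter names below (`logout_callback`, `local_session_id`) are hypothetical and not from any OIDC spec; the authorization code value is the example from RFC 6749.

```python
# Hypothetical extension of the code-to-token request: the client registers
# a per-session logout callback at the same time it redeems the code.
# Parameter names "logout_callback" and "local_session_id" are illustrative.
from urllib.parse import urlencode

token_request = {
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",  # example code from RFC 6749
    "client_id": "myclient",
    "client_secret": "geheim",
    # proposed extensions:
    "logout_callback": "https://node.internal:8443/app/oidc/_logout",
    "local_session_id": "local-session-42",
}
body = urlencode(token_request)  # form-encoded POST body for the token endpoint
```

Since the callback arrives over an already client-authenticated request, the OP gets the same assurance about the URI as in the boot-time registration approach, but scoped to a single session.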


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth