Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-05-04 Thread Brian Campbell
Wearing my editor's hat here (did I do that right?) and looking to bring
this to a close so the draft can proceed - I don't see a consensus for
additional confirmation methods in this draft.

On Tue, May 1, 2018 at 3:08 AM, Neil Madden 
wrote:

> JOSE and many other specs have allowed algorithms to be specified at
> multiple security levels: a baseline 128-bit level, and then usually 192-
> and 256-bit levels too. It seems odd that a draft that is ostensibly for
> high security assurance environments would choose to only specify the
> lowest acceptable security level, especially when the 256-bit level has
> essentially negligible overhead. (OK, ~60 bytes additional overhead in a
> JWT - I’d be surprised if that was a deal breaker though).
>
> Still, if the consensus of the WG is that this is not worth it, then I
> don’t want to delay the draft any further. I can always submit a 2 line RFC
> in future to add a SHA-512 confirmation method.
>
> — Neil
>
> > On 30 Apr 2018, at 23:58, John Bradley  wrote:
> >
> > We allow for new thumbprint algorithms to be defined and used with this
> spec.
> > I think that we all agree that is a good thing.
> >
> > The question is if we should define them here or as part of JWT/CWT
> based on broader demand.
> >
> > Including them in this document may be a distraction in my opinion.
>  There is no attack against SHA256 with a short duration token/key (days)
> that is better solved by using a long duration token/key (years) with a
> longer hash.
> >
> > That said, I wouldn't object.  I just think it will distract people
> in the wrong direction.
> >
> > John B.
> >
> >> On Apr 30, 2018, at 7:23 PM, Neil Madden 
> wrote:
> >>
> >> Responses inline again.
> >>
> >> On Mon, 30 Apr 2018 at 19:44, John Bradley  wrote:
> >> Inline.
> >>
> >>
> >>> On Apr 30, 2018, at 12:57 PM, Neil Madden 
> wrote:
> >>>
> >>> Hi John,
> >>>
>  On 30 Apr 2018, at 15:07, John Bradley  wrote:
> 
>  I lean towards letting new certificate thumbprints be defined
> someplace else.
> 
>  With SHA256, it is really second preimage resistance that we care
> about for a certificate thumbprint, rather than simple collision
> resistance.
> >>>
> >>> That’s not true if you consider a malicious client. If I can find any
> pair of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I
> can present c1 to the AS when I request an access token and later present
> c2 to the protected resource when I use it. I don’t know if there is an
> actual practical attack based on this, but a successful attack would
> violate the security goal implied by the draft: that requests made to
> the protected resource "MUST be made […] using the same certificate that
> was used for mutual TLS at the token endpoint.”
> >>>
> >>> NB: this is obviously easier if the client gets to choose its own
> client_id, as it can find the colliding certificates and then sign up with
> whatever subject ended up in c1.
> >>>
> >>
> >> Both C1 and C2 need to be valid certificates, so not just any collision
> will do.
> >>
> >> That doesn’t help much. There’s still enough you can vary in a
> certificate to generate collisions.
> >>
> >> If the client produces C1 and C2 and has the private keys for them, I
> have a hard time seeing what advantage it could get by having colliding
> certificate hashes.
> >>
> >> Me too. But if the security goal is proof of possession, then this
> attack (assuming practical collisions) would break that goal.
> >>
> >>
> >> If the AS is trusting a CA, the attacker producing a certificate that
> matches the hash of another certificate so that it seems like the fake
> certificate was issued by the CA, is an attack that worked on MD5 given
> some predictability.  That is why we now have entropy requirements for
> certificate serial numbers, that reduce known prefix attacks.
> >>
> >> The draft allows for self-signed certificates.
> >>
> >> Second-preimage resistance means it is computationally infeasible to find a
> second preimage that has the same output as the first preimage.   The
> second preimage strength for SHA256 is 201-256 bits and collision resistance
> strength is 128 bits.  See Appendix A of
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
> if you want to understand the relationship between message length and second
> preimage resistance.
> >>
> >> RFC 4270 is old but still has some relevant info.
> https://tools.ietf.org/html/rfc4270
> >>
> >> Think of the confirmation method as the out of band integrity check for
> the certificate that is presented in the TLS session.
> >>
> >> This is all largely irrelevant.
> >>
>  MD5 failed quite badly with chosen prefix collision attacks against
> certificates (Thanks to some X.509 extensions).
>  SHA1 has also been shown to be vulnerable to a PDF chosen prefix
> 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-05-01 Thread Neil Madden
JOSE and many other specs have allowed algorithms to be specified at multiple 
security levels: a baseline 128-bit level, and then usually 192- and 256-bit 
levels too. It seems odd that a draft that is ostensibly for high security 
assurance environments would choose to only specify the lowest acceptable 
security level, especially when the 256-bit level has essentially negligible 
overhead. (OK, ~60 bytes additional overhead in a JWT - I’d be surprised if 
that was a deal breaker though).
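
A minimal sketch of the sizes involved, assuming the x5t#S256 style of
base64url-encoding a hash of the DER certificate (the x5t#S512 name and the
placeholder bytes are hypothetical, not taken from the draft):

# Sketch: compare SHA-256 vs SHA-512 certificate thumbprint sizes.
# `cert_der` is a placeholder for the DER-encoded certificate bytes;
# the x5t#S512 name is hypothetical (the draft only defines x5t#S256).
import base64
import hashlib

cert_der = b"..."  # placeholder DER-encoded X.509 certificate

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

x5t_s256 = b64url(hashlib.sha256(cert_der).digest())  # 43 characters
x5t_s512 = b64url(hashlib.sha512(cert_der).digest())  # 86 characters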

Still, if the consensus of the WG is that this is not worth it, then I don’t 
want to delay the draft any further. I can always submit a 2 line RFC in future 
to add a SHA-512 confirmation method.

— Neil

> On 30 Apr 2018, at 23:58, John Bradley  wrote:
> 
> We allow for new thumbprint algorithms to be defined and used with this spec.
> I think that we all agree that is a good thing.
> 
> The question is if we should define them here or as part of JWT/CWT based on 
> broader demand.
> 
> Including them in this document may be a distraction in my opinion.   There 
> is no attack against SHA256 with a short duration token/key (days) that is 
> better solved by using a long duration token/key (years) with a longer hash.
> 
> That said, I wouldn't object.  I just think it will distract people in the 
> wrong direction.
> 
> John B.
> 
>> On Apr 30, 2018, at 7:23 PM, Neil Madden  wrote:
>> 
>> Responses inline again. 
>> 
>> On Mon, 30 Apr 2018 at 19:44, John Bradley  wrote:
>> Inline.
>> 
>> 
>>> On Apr 30, 2018, at 12:57 PM, Neil Madden  wrote:
>>> 
>>> Hi John,
>>> 
 On 30 Apr 2018, at 15:07, John Bradley  wrote:
 
 I lean towards letting new certificate thumbprints be defined someplace 
 else.
 
 With SHA256, it is really second preimage resistance that we care about 
 for a certificate thumbprint, rather than simple collision resistance.  
>>> 
>>> That’s not true if you consider a malicious client. If I can find any pair 
>>> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can 
>>> present c1 to the AS when I request an access token and later present c2 to 
>>> the protected resource when I use it. I don’t know if there is an actual 
>>> practical attack based on this, but a successful attack would violate the 
>>> security goal implied by the draft: that requests made to the 
>>> protected resource "MUST be made […] using the same certificate that was 
>>> used for mutual TLS at the token endpoint.”
>>> 
>>> NB: this is obviously easier if the client gets to choose its own 
>>> client_id, as it can find the colliding certificates and then sign up with 
>>> whatever subject ended up in c1.
>>> 
>> 
>> Both C1 and C2 need to be valid certificates, so not just any collision will 
>> do.  
>> 
>> That doesn’t help much. There’s still enough you can vary in a certificate 
>> to generate collisions. 
>> 
>> If the client produces C1 and C2 and has the private keys for them, I have a 
>> hard time seeing what advantage it could get by having colliding certificate 
>> hashes.
>> 
>> Me too. But if the security goal is proof of possession, then this attack 
>> (assuming practical collisions) would break that goal. 
>> 
>> 
>> If the AS is trusting a CA, the attacker producing a certificate that 
>> matches the hash of another certificate so that it seems like the fake 
>> certificate was issued by the CA, is an attack that worked on MD5 given some 
>> predictability.  That is why we now have entropy requirements for 
>> certificate serial numbers, that reduce known prefix attacks.
>> 
>> The draft allows for self-signed certificates. 
>> 
>> Second-preimage resistance means it is computationally infeasible to find a 
>> second preimage that has the same output as the first preimage.   The second 
>> preimage strength for SHA256 is 201-256 bits and collision resistance 
>> strength is 128 bits.  See Appendix A of 
>> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
>> if you want to understand the relationship between message length and 
>> second preimage resistance.
>> 
>> RFC 4270 is old but still has some relevant info. 
>> https://tools.ietf.org/html/rfc4270
>> 
>> Think of the confirmation method as the out of band integrity check for the 
>> certificate that is presented in the TLS session.
>> 
>> This is all largely irrelevant. 
>> 
 MD5 failed quite badly with chosen prefix collision attacks against 
 certificates (Thanks to some X.509 extensions).
 SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack 
 (http://shattered.io)
 
 The reason NIST pushed for development of SHA3 was concern that a preimage 
> attack might eventually be found against the SHA2 family of hash 
 algorithms. 
 
 While SHA512 may have double the number of bytes it may not help much 
 against a 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
We allow for new thumbprint algorithms to be defined and used with this spec.
I think that we all agree that is a good thing.

The question is if we should define them here or as part of JWT/CWT based on 
broader demand.

Including them in this document may be a distraction in my opinion.   There is 
no attack against SHA256 with a short duration token/key (days) that is better 
solved by using a long duration token/key (years) with a longer hash.

That said, I wouldn't object.  I just think it will distract people in the 
wrong direction.

John B.

> On Apr 30, 2018, at 7:23 PM, Neil Madden  wrote:
> 
> Responses inline again. 
> 
> On Mon, 30 Apr 2018 at 19:44, John Bradley wrote:
> Inline.
> 
> 
>> On Apr 30, 2018, at 12:57 PM, Neil Madden wrote:
>> 
>> Hi John,
>> 
>>> On 30 Apr 2018, at 15:07, John Bradley wrote:
>>> 
>>> I lean towards letting new certificate thumbprints be defined someplace 
>>> else.
>>> 
>>> With SHA256, it is really second preimage resistance that we care about for 
>>> a certificate thumbprint, rather than simple collision resistance.  
>> 
>> That’s not true if you consider a malicious client. If I can find any pair 
>> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can 
>> present c1 to the AS when I request an access token and later present c2 to 
>> the protected resource when I use it. I don’t know if there is an actual 
>> practical attack based on this, but a successful attack would violate the 
>> security goal implied by the draft: that requests made to the protected 
>> resource "MUST be made […] using the same certificate that was used for 
>> mutual TLS at the token endpoint.”
>> 
>> NB: this is obviously easier if the client gets to choose its own client_id, 
>> as it can find the colliding certificates and then sign up with whatever 
>> subject ended up in c1.
>> 
> 
> Both C1 and C2 need to be valid certificates, so not just any collision will 
> do.  
> 
> That doesn’t help much. There’s still enough you can vary in a certificate to 
> generate collisions. 
> 
> If the client produces C1 and C2 and has the private keys for them, I have a 
> hard time seeing what advantage it could get by having colliding certificate 
> hashes.
> 
> Me too. But if the security goal is proof of possession, then this attack 
> (assuming practical collisions) would break that goal. 
> 
> 
> If the AS is trusting a CA, the attacker producing a certificate that matches 
> the hash of another certificate so that it seems like the fake certificate 
> was issued by the CA, is an attack that worked on MD5 given some 
> predictability.  That is why we now have entropy requirements for certificate 
> serial numbers, that reduce known prefix attacks.
> 
> The draft allows for self-signed certificates. 
> 
> Second-preimage resistance means it is computationally infeasible to find a second 
> preimage that has the same output as the first preimage.   The second 
> preimage strength for SHA256 is 201-256 bits and collision resistance strength 
> is 128 bits.  See Appendix A of 
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
> if you want to understand the relationship between message length and second 
> preimage resistance.
> 
> RFC 4270 is old but still has some relevant info. 
> https://tools.ietf.org/html/rfc4270 
> 
> Think of the confirmation method as the out of band integrity check for the 
> certificate that is presented in the TLS session.
> 
> This is all largely irrelevant. 
> 
>>> MD5 failed quite badly with chosen prefix collision attacks against 
>>> certificates (Thanks to some X.509 extensions).
>>> SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack 
>>> (http://shattered.io)
>>> 
>>> The reason NIST pushed for development of SHA3 was concern that a preimage 
>>> attack might eventually be found against the SHA2 family of hash algorithms. 
>>> 
>>> While SHA512 may have double the number of bytes it may not help much 
>>> against a SHA2 preimage attack. (Some papers suggest that the double word 
>>> size of SHA512 may make it more vulnerable than SHA256 to some attacks.)
>> 
>> This is really something where the input of a cryptographer would be 
>> welcome. As far as I am aware, the collision resistance of SHA-256 is still 
>> considered at around the 128-bit level, while it is considered at around the 
>> 256-bit level for SHA-512. Absent a total break of SHA2, it is likely that 
>> SHA-512 will remain at a higher security level than SHA-256 even if both are 
>> weakened by cryptanalytic advances. They are based on the same algorithm, 
>> with different 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Neil Madden
Responses inline again.

On Mon, 30 Apr 2018 at 19:44, John Bradley  wrote:

> Inline.
>
>
> On Apr 30, 2018, at 12:57 PM, Neil Madden 
> wrote:
>
> Hi John,
>
> On 30 Apr 2018, at 15:07, John Bradley  wrote:
>
> I lean towards letting new certificate thumbprints be defined someplace
> else.
>
> With SHA256, it is really second preimage resistance that we care about
> for a certificate thumbprint, rather than simple collision resistance.
>
>
> That’s not true if you consider a malicious client. If I can find any pair
> of certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can
> present c1 to the AS when I request an access token and later present c2 to
> the protected resource when I use it. I don’t know if there is an actual
> practical attack based on this, but a successful attack would violate the
> security goal implied by the draft: that requests made to the
> protected resource "MUST be made […] using the same certificate that was
> used for mutual TLS at the token endpoint.”
>
> NB: this is obviously easier if the client gets to choose its own
> client_id, as it can find the colliding certificates and then sign up with
> whatever subject ended up in c1.
>
>
> Both C1 and C2 need to be valid certificates, so not just any collision
> will do.
>

That doesn’t help much. There’s still enough you can vary in a certificate
to generate collisions.

If the client produces C1 and C2 and has the private keys for them, I have
> a hard time seeing what advantage it could get by having colliding
> certificate hashes.
>

Me too. But if the security goal is proof of possession, then this attack
(assuming practical collisions) would break that goal.


> If the AS is trusting a CA, the attacker producing a certificate that
> matches the hash of another certificate so that it seems like the fake
> certificate was issued by the CA, is an attack that worked on MD5 given
> some predictability.  That is why we now have entropy requirements for
> certificate serial numbers, that reduce known prefix attacks.
>

The draft allows for self-signed certificates.

Second-preimage resistance means it is computationally infeasible to find a
> second preimage that has the same output as the first preimage.   The
> second preimage strength for SHA256 is 201-256 bits and collision resistance
> strength is 128 bits.  See Appendix A of
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
> if you want to understand the relationship between message length and second
> preimage resistance.
>
> RFC 4270 is old but still has some relevant info.
> https://tools.ietf.org/html/rfc4270
>
> Think of the confirmation method as the out of band integrity check for
> the certificate that is presented in the TLS session.
>

This is all largely irrelevant.

MD5 failed quite badly with chosen prefix collision attacks against
> certificates (Thanks to some X.509 extensions).
> SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack (
> http://shattered.io)
>
> The reason NIST pushed for development of SHA3 was concern that a preimage
> attack might eventually be found against the SHA2 family of hash algorithms.
>
> While SHA512 may have double the number of bytes it may not help much
> against a SHA2 preimage attack. (Some papers suggest that the double word
> size of SHA512 may make it more vulnerable than SHA256 to some attacks.)
>
>
> This is really something where the input of a cryptographer would be
> welcome. As far as I am aware, the collision resistance of SHA-256 is still
> considered at around the 128-bit level, while it is considered at around
> the 256-bit level for SHA-512. Absent a total break of SHA2, it is likely
> that SHA-512 will remain at a higher security level than SHA-256 even if
> both are weakened by cryptanalytic advances. They are based on the same
> algorithm, with different parameters and word/block sizes.
>
> SHA512 uses double words and more rounds, true.  It also has more rounds
> broken by known attacks than SHA256 (https://en.wikipedia.org/wiki/SHA-2).
> So it is slightly more complicated than "doubling the output size doubles the
> strength."
>

SHA-512 also has more rounds (80) than SHA-256 (64), so still has more
rounds left to go...


>
> It is currently believed that SHA256 has 256 bits of second preimage
> strength.   That could always turn out to be wrong as SHA2 has some
> similarities to SHA1, and yes post quantum that is reduced to 128bits.
>
> To have a safe future option we would probably want to go with SHA3-512.
>   However I don’t see that getting much traction in the near term.
>
>
> SHA3 is also slower than SHA2 in software.
>
> Yes roughly half the speed in software but generally faster in hardware.
>
> I am not necessarily arguing for SHA3, rather I think this issue is larger
> than this spec and selecting alternate hashing algorithms for security
> should be separate from this 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
Yes that is an issue.  

I think one of the things that kicked this off was the question of whether this makes 
it pointless for people to use algorithms such as AES-GCM-256 when it is perceived 
that our choice of hash somehow limits overall security to 128 bits.

Let me take another run at this.

Things like block ciphers need to have long-term secrecy.  An attacker may 
still get value from decrypting something years down the road.   

Things like signatures typically need to have some non-repudiation property 
that lasts the useful lifetime of the document. That can be years or minutes 
depending on the document. 

In our case we are providing out of band integrity protection for the cert.  We 
could include the cert directly but it is already being sent as part of TLS.  

In general the lifetime of the key pair used for access tokens will be less 
than the lifetime of the certificate, so it is hard to argue that we need 
stronger security than the cert itself.

We have a way to rotate keys/certs daily if desired with JWKS and it can 
support self signed certificates.  The security of this is still limited by the 
security of the TLS cert for the JWKS endpoint, but that is relatively easy to 
update if the need arises and alternate certificate chains with security better 
than SHA256 become available. 
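
For illustration only, a rough sketch of what a rotated JWKS entry carrying a
self-signed client certificate might look like; every value below is a
placeholder, not something defined by the draft:

# Sketch: a JWKS document whose entry carries the client certificate
# chain (x5c) so a rotated self-signed cert can be published daily.
# All values are placeholders for illustration only.
jwks = {
    "keys": [
        {
            "kty": "RSA",
            "kid": "client-cert-2018-04-30",          # hypothetical key id
            "n": "<base64url-encoded RSA modulus>",
            "e": "AQAB",
            # x5c carries base64 (not base64url) DER certificates:
            "x5c": ["<base64 DER of the current self-signed certificate>"],
        }
    ]
}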

However, currently most if not all CA/Browser Forum roots are using SHA256 hashes with 
RSA 2048-bit keys (some, like RSA, still have roots using 1024-bit RSA keys). 

I am normally the paranoid one in the crowd, but I would rather pick off some 
of the other issues that are more likely to go wrong first.  

We can point out extensibility for future use, but I am not buying us defining 
a new thumbprint when the one we have is as strong or stronger than the other 
parts of the trust chain.

I can see people choosing to use SHA512, with larger messages and more processing, 
as a way to avoid certificate rollover, and that would be a bad tradeoff.

John B.



> On Apr 30, 2018, at 6:19 PM, Brian Campbell wrote:
> 
> 
> 
> On Mon, Apr 30, 2018 at 9:57 AM, Neil Madden wrote:
> 
> > On 30 Apr 2018, at 15:07, John Bradley wrote:
> 
> > My concern is that people will see a bigger number and decide it is better 
> > if we define it in the spec.  
> > We may be getting people to do additional work and increasing token size 
> > without a good reason by putting it in the spec directly.
> 
> I’m not sure why this is a concern. As previously pointed out, SHA-512 is 
> often *faster* than SHA-256, and an extra 32 bytes doesn’t seem worth 
> worrying about.
> 
> Seems like maybe it's worth noting that with JWT, where size can be a 
> legitimate constraint, those extra bytes end up being base64 encoded twice.  
> 
> 
> 


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Brian Campbell
On Mon, Apr 30, 2018 at 9:57 AM, Neil Madden 
wrote:

>
> > On 30 Apr 2018, at 15:07, John Bradley  wrote:
>
> > My concern is that people will see a bigger number and decide it is
> better if we define it in the spec.
> > We may be getting people to do additional work and increasing token size
> without a good reason by putting it in the spec directly.
>
> I’m not sure why this is a concern. As previously pointed out, SHA-512 is
> often *faster* than SHA-256, and an extra 32 bytes doesn’t seem worth
> worrying about.
>

Seems like maybe it's worth noting that with JWT, where size can be a
legitimate constraint, those extra bytes end up being base64 encoded
twice.
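
A back-of-the-envelope sketch of that double encoding (sizes only; the exact
figure depends on padding alignment):

# Sketch: extra bytes a SHA-512 thumbprint adds to a JWT versus SHA-256,
# once the thumbprint is base64url-encoded in the cnf claim and the whole
# payload is base64url-encoded again as part of the JWT.
import math

def b64url_len(n: int) -> int:
    """Unpadded base64url length of n bytes."""
    return math.ceil(4 * n / 3)

thumb_256 = b64url_len(32)                   # 43 chars in the cnf claim
thumb_512 = b64url_len(64)                   # 86 chars in the cnf claim
extra_in_payload = thumb_512 - thumb_256     # 43 extra payload bytes
extra_in_jwt = b64url_len(extra_in_payload)  # roughly 58 extra bytes on the wire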



Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
Inline.


> On Apr 30, 2018, at 12:57 PM, Neil Madden wrote:
> 
> Hi John,
> 
>> On 30 Apr 2018, at 15:07, John Bradley wrote:
>> 
>> I lean towards letting new certificate thumbprints be defined someplace else.
>> 
>> With SHA256, it is really second preimage resistance that we care about for 
>> a certificate thumbprint, rather than simple collision resistance.  
> 
> That’s not true if you consider a malicious client. If I can find any pair of 
> certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can present 
> c1 to the AS when I request an access token and later present c2 to the 
> protected resource when I use it. I don’t know if there is an actual 
> practical attack based on this, but a successful attack would violate the 
> security goal implied by the draft: that requests made to the protected 
> resource "MUST be made […] using the same certificate that was used for 
> mutual TLS at the token endpoint.”
> 
> NB: this is obviously easier if the client gets to choose its own client_id, 
> as it can find the colliding certificates and then sign up with whatever 
> subject ended up in c1.
> 

Both C1 and C2 need to be valid certificates, so not just any collision will 
do.  
If the client produces C1 and C2 and has the private keys for them, I have a 
hard time seeing what advantage it could get by having colliding certificate 
hashes.

If the AS is trusting a CA, the attacker producing a certificate that matches 
the hash of another certificate so that it seems like the fake certificate was 
issued by the CA, is an attack that worked on MD5 given some predictability.  
That is why we now have entropy requirements for certificate serial numbers, 
that reduce known prefix attacks.

Second-preimage resistance means it is computationally infeasible to find a second 
preimage that has the same output as the first preimage.   The second preimage 
strength for SHA256 is 201-256 bits and collision resistance strength is 128 
bits.  See Appendix A of 
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
if you want to understand the relationship between message length and second 
preimage resistance.

RFC 4270 is old but still has some relevant info. 
https://tools.ietf.org/html/rfc4270 

Think of the confirmation method as the out of band integrity check for the 
certificate that is presented in the TLS session.
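
To make that concrete, a minimal sketch of the protected-resource-side
comparison, assuming an access token whose cnf claim carries the x5t#S256
value; the helper names are made up for illustration:

# Sketch: RS-side check that the TLS client certificate matches the
# x5t#S256 confirmation value bound into the access token.
# `token_claims` (decoded JWT claims) and `presented_cert_der` are
# assumed inputs; names are illustrative only.
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def cert_matches_token(token_claims: dict, presented_cert_der: bytes) -> bool:
    expected = token_claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == x5t_s256(presented_cert_der)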




>> 
>> MD5 failed quite badly with chosen prefix collision attacks against 
>> certificates (Thanks to some X.509 extensions).
>> SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack 
>> (http://shattered.io)
>> 
>> The reason NIST pushed for development of SHA3 was concern that a preimage 
>> attack might eventually be found against the SHA2 family of hash algorithms. 
>> 
>> While SHA512 may have double the number of bytes it may not help much 
>> against a SHA2 preimage attack. (Some papers suggest that the double word 
>> size of SHA512 may make it more vulnerable than SHA256 to some attacks.)
> 
> This is really something where the input of a cryptographer would be welcome. 
> As far as I am aware, the collision resistance of SHA-256 is still considered 
> at around the 128-bit level, while it is considered at around the 256-bit 
> level for SHA-512. Absent a total break of SHA2, it is likely that SHA-512 
> will remain at a higher security level than SHA-256 even if both are weakened 
> by cryptanalytic advances. They are based on the same algorithm, with 
> different parameters and word/block sizes.
> 
SHA512 uses double words and more rounds, true.  It also has more rounds broken 
by known attacks than SHA256 (https://en.wikipedia.org/wiki/SHA-2). So it is 
slightly more complicated than "doubling the output size doubles the strength."

>> 
>> It is currently believed that SHA256 has 256 bits of second preimage 
>> strength.   That could always turn out to be wrong as SHA2 has some 
>> similarities to SHA1, and yes post quantum that is reduced to 128bits. 
>> 
>> To have a safe future option we would probably want to go with SHA3-512.   
>> However I don’t see that getting much traction in the near term.  
> 
> SHA3 is also slower than SHA2 in software.
Yes roughly half the speed in software but generally faster in hardware.  

I am not necessarily arguing for SHA3, rather I think this issue is larger than 
this spec and selecting alternate hashing algorithms for security should be 
separate from this spec.

I am for agility, but I don’t want to accidentally have people doing something 
that is just theatre.

Rotating certificates, and limiting the lifetime of a certificate's validity, is as 
useful as doubling the hash size. 

I don’t think the 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Neil Madden
Hi John,

> On 30 Apr 2018, at 15:07, John Bradley  wrote:
> 
> I lean towards letting new certificate thumbprints be defined someplace else.
> 
> With SHA256, it is really second preimage resistance that we care about for a 
> certificate thumbprint, rather than simple collision resistance.  

That’s not true if you consider a malicious client. If I can find any pair of 
certificates c1 and c2 such that SHA256(c1) == SHA256(c2) then I can present c1 
to the AS when I request an access token and later present c2 to the protected 
resource when I use it. I don’t know if there is an actual practical attack 
based on this, but a successful attack would violate the security goal implied 
by the draft: that requests made to the protected resource "MUST be made 
[…] using the same certificate that was used for mutual TLS at the token 
endpoint.”

NB: this is obviously easier if the client gets to choose its own client_id, as 
it can find the colliding certificates and then sign up with whatever subject 
ended up in c1.

> 
> MD5 failed quite badly with chosen prefix collision attacks against 
> certificates (Thanks to some X.509 extensions).
> SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack 
> (http://shattered.io)
> 
> The reason NIST pushed for development of SHA3 was concern that a preimage 
> attack might eventually be found against the SHA2 family of hash algorithms. 
> 
> While SHA512 may have double the number of bytes it may not help much against 
> a SHA2 preimage attack. (Some papers suggest that the double word size of 
> SHA512 may make it more vulnerable than SHA256 to some attacks.)

This is really something where the input of a cryptographer would be welcome. 
As far as I am aware, the collision resistance of SHA-256 is still considered 
at around the 128-bit level, while it is considered at around the 256-bit level 
for SHA-512. Absent a total break of SHA2, it is likely that SHA-512 will 
remain at a higher security level than SHA-256 even if both are weakened by 
cryptanalytic advances. They are based on the same algorithm, with different 
parameters and word/block sizes.

> 
> It is currently believed that SHA256 has 256 bits of second preimage 
> strength.   That could always turn out to be wrong as SHA2 has some 
> similarities to SHA1, and yes post quantum that is reduced to 128bits. 
> 
> To have a safe future option we would probably want to go with SHA3-512.   
> However I don’t see that getting much traction in the near term.  

SHA3 is also slower than SHA2 in software.

> 
> Practical things people should do run more along the lines of:
> 1: Put at least 64 bits of entropy into the certificate serial number if 
> using self signed or a local CA.  Commercial CA need to do that now.
> 2: Rotate certificates on a regular basis,  using a registered JWKS URI
> 
> My concern is that people will see a bigger number and decide it is better if 
> we define it in the spec.  
> We may be getting people to do additional work and increasing token size 
> without a good reason by putting it in the spec directly.

I’m not sure why this is a concern. As previously pointed out, SHA-512 is often 
*faster* than SHA-256, and an extra 32 bytes doesn’t seem worth worrying about.
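
A quick, unscientific way to check that claim on a given machine (results vary
by CPU, hash implementation, and input size; this is only a sketch):

# Sketch: compare SHA-256 and SHA-512 throughput on this machine.
# On many 64-bit CPUs without SHA extensions, sha512 wins on large inputs.
import hashlib
import timeit

data = b"x" * (1 << 20)  # 1 MiB of input

for alg in ("sha256", "sha512"):
    t = timeit.timeit(lambda a=alg: hashlib.new(a, data).digest(), number=200)
    print(f"{alg}: {200 / t:.1f} MiB/s")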

[snip]

— Neil


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread Mike Jones
I agree that this specification should not define new certificate thumbprint 
methods.  They can always be registered by other specifications if needed in 
the future.

   -- Mike

From: OAuth <oauth-boun...@ietf.org> On Behalf Of John Bradley
Sent: Monday, April 30, 2018 7:07 AM
To: Brian Campbell <bcampb...@pingidentity.com>
Cc: oauth <oauth@ietf.org>
Subject: Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

I lean towards letting new certificate thumbprints be defined someplace else.

With SHA256, it is really second preimage resistance that we care about for a 
certificate thumbprint, rather than simple collision resistance.

MD5 failed quite badly with chosen prefix collision attacks against 
certificates (Thanks to some X.509 extensions).
SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack 
(http://shattered.io)

The reason NIST pushed for development of SHA3 was concern that a preimage 
attack might eventually be found against the SHA2 family of hash algorithms.

While SHA512 may have double the number of bytes it may not help much against a 
SHA2 preimage attack. (Some papers suggest that the double word size of 
SHA512 may make it more vulnerable than SHA256 to some attacks.)

It is currently believed that SHA256 has 256 bits of second preimage strength.  
 That could always turn out to be wrong as SHA2 has some similarities to SHA1, 
and yes post quantum that is reduced to 128bits.

To have a safe future option we would probably want to go with SHA3-512.   
However I don’t see that getting much traction in the near term.

Practical things people should do run more along the lines of:
1: Put at least 64 bits of entropy into the certificate serial number if using 
self signed or a local CA.  Commercial CA need to do that now.
2: Rotate certificates on a regular basis,  using a registered JWKS URI

My concern is that people will see a bigger number and decide it is better if 
we define it in the spec.
We may be getting people to do additional work and increasing token size 
without a good reason by putting it in the spec directly.

I have yet to see any real discussion on using bigger hashes for signing 
certificates, or creating thumbprints in other places.

John B.




On Thu, Apr 19, 2018 at 1:23 PM, Brian Campbell 
<bcampb...@pingidentity.com<mailto:bcampb...@pingidentity.com>> wrote:
Okay, so I retract the idea of metadata indicating the hash alg/cnf method 
(based on John pointing out that it doesn't really make sense).
That still leaves the question of whether or not to define additional 
confirmation methods in this document (and if so, what they would be though 
x5t#S384 and x5t#S512 seem the most likely).
There's some reasonable rationale for both adding one or two new hash alg 
confirmation methods in the doc now vs. sticking with just SHA256 for now. I'll 
note again that the doc doesn't preclude using or later defining other 
confirmation methods.
I'm kind of on the fence about it, to be honest. But that doesn't really matter 
because the draft should reflect rough WG consensus. So I'm looking to get a 
rough gauge of rough consensus. At this point there's one comment out of WGLC 
asking for additional confirmation method(s). I don't think that makes for 
consensus. But I'd ask others from the WG to chime in, if appropriate, to help 
me better gauge consensus.

On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden 
<neil.mad...@forgerock.com<mailto:neil.mad...@forgerock.com>> wrote:
I’m not particularly wedded to SHA-512, just that it should be possible to use 
something else. At the moment, the draft seems pretty wedded to SHA-256. 
SHA-512 may be overkill, but it is fast (faster than SHA-256 on many 64-bit 
machines) and provides a very wide security margin against any future crypto 
advances (quantum or otherwise). I’d also be happy with SHA-384, SHA3-512, 
Blake2 etc but SHA-512 seems pretty widely available.

I don’t think short-lived access tokens is a help if the likelihood is that 
certs will be reused for many access tokens.

Public Web PKI certs tend to only use SHA-256 as it has broad support, and I 
gather there were some compatibility issues with SHA-512 certs in TLS. There 
are a handful of SHA-384 certs - e.g., the Comodo CA certs for 
https://home.cern/ are signed with SHA-384 (although with RSA-2048, which NSA 
estimates at only ~112-bit security). SHA-512 is used on some internal networks 
where there is more control over components being used, which is also where 
people are most likely to care about security beyond the 128-bit level (e.g., 
government internal networks).

By the way, I just mentioned quantum attacks as an example of something that 
might weaken the hash in future. Obviously, quantum attacks completely destroy 
RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it provides a 
considerable margin to hedge against future quantum *or classi

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-30 Thread John Bradley
I lean towards letting new certificate thumbprints be defined someplace
else.

With SHA256, it is really second preimage resistance that we care about for
a certificate thumbprint, rather than simple collision resistance.

MD5 failed quite badly with chosen prefix collision attacks against
certificates (Thanks to some X.509 extensions).
SHA1 has also been shown to be vulnerable to a PDF chosen prefix attack (
http://shattered.io)

The reason NIST pushed for development of SHA3 was concern that a preimage
attack might eventually be found against the SHA2 family of hash algorithms.

While SHA512 may have double the number of bytes it may not help much
against a SHA2 preimage attack. (Some papers suggest that the double word
size of SHA512 may make it more vulnerable than SHA256 to some attacks.)

It is currently believed that SHA256 has 256 bits of second preimage
strength.   That could always turn out to be wrong as SHA2 has some
similarities to SHA1, and yes post quantum that is reduced to 128bits.
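
For reference, the generic (brute-force) attack costs being discussed, in the
usual shorthand for an ideal n-bit hash; the quantum collision figure is the
Brassard-Høyer-Tapp bound Neil cites from Aumasson further down the thread:

\begin{align*}
  \text{collision (classical)}       &\approx 2^{n/2} && (2^{128} \text{ for SHA-256})\\
  \text{second preimage (classical)} &\approx 2^{n}   && (\text{up to } 2^{256}; \text{ less for very long messages})\\
  \text{preimage (quantum, Grover)}  &\approx 2^{n/2} && (2^{128})\\
  \text{collision (quantum, BHT)}    &\approx 2^{n/3} && (\approx 2^{85}, \text{ but also } \approx 2^{85} \text{ space})
\end{align*}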

To have a safe future option we would probably want to go with SHA3-512.
However I don’t see that getting much traction in the near term.

Practical things people should do run more along the lines of:
1: Put at least 64 bits of entropy into the certificate serial number if
using self signed or a local CA.  Commercial CA need to do that now.
2: Rotate certificates on a regular basis,  using a registered JWKS URI
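
A minimal sketch of both points using the pyca/cryptography library, whose
random_serial_number() helper yields a large random serial (well over 64 bits
of entropy); the subject name and one-day validity are placeholders:

# Sketch: a self-signed client certificate with a high-entropy serial
# number (point 1) and a short validity window to encourage rotation
# (point 2). Subject name and lifetime are illustrative placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example-oauth-client")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                            # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())   # ~159 random bits
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    .sign(key, hashes.SHA256())
)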

My concern is that people will see a bigger number and decide it is better
if we define it in the spec.
We may be getting people to do additional work and increasing token size
without a good reason by putting it in the spec directly.

I have yet to see any real discussion on using bigger hashes for signing
certificates, or creating thumbprints in other places.

John B.




On Thu, Apr 19, 2018 at 1:23 PM, Brian Campbell 
wrote:

> Okay, so I retract the idea of metadata indicating the hash alg/cnf
> method (based on John pointing out that it doesn't really make sense).
>
> That still leaves the question of whether or not to define additional
> confirmation methods in this document (and if so, what they would be
> though x5t#S384 and x5t#S512 seem the most likely).
>
> There's some reasonable rationale for both adding one or two new hash alg
> confirmation methods in the doc now vs. sticking with just SHA256 for
> now. I'll note again that the doc doesn't preclude using or later defining
> other confirmation methods.
>
> I'm kind of on the fence about it, to be honest. But that doesn't really
> matter because the draft should reflect rough WG consensus. So I'm looking
> to get a rough gauge of rough consensus. At this point there's one
> comment out of WGLC asking for additional confirmation method(s). I don't
> think that makes for consensus. But I'd ask others from the WG to chime
> in, if appropriate, to help me better gauge consensus.
>
> On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden 
> wrote:
>
>> I’m not particularly wedded to SHA-512, just that it should be possible
>> to use something else. At the moment, the draft seems pretty wedded to
>> SHA-256. SHA-512 may be overkill, but it is fast (faster than SHA-256 on
>> many 64-bit machines) and provides a very wide security margin against any
>> future crypto advances (quantum or otherwise). I’d also be happy with
>> SHA-384, SHA3-512, Blake2 etc but SHA-512 seems pretty widely available.
>>
>> I don’t think short-lived access tokens is a help if the likelihood is
>> that certs will be reused for many access tokens.
>>
>> Public Web PKI certs tend to only use SHA-256 as it has broad support,
>> and I gather there were some compatibility issues with SHA-512 certs in
>> TLS. There are a handful of SHA-384 certs - e.g., the Comodo CA certs for
>> https://home.cern/ are signed with SHA-384 (although with RSA-2048,
>> which NSA estimates at only ~112-bit security). SHA-512 is used on some
>> internal networks where there is more control over components being used,
>> which is also where people are most likely to care about security beyond
>> 128-bit level (eg government internal networks).
>>
>> By the way, I just mentioned quantum attacks as an example of something
>> that might weaken the hash in future. Obviously, quantum attacks completely
>> destroy RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it
>> provides a considerable margin to hedge against future quantum *or
>> classical* advances while allowing the paranoid to pick a stronger security
>> level now. We have customers that ask for 256-bit AES already.
>>
>> (I also misremembered the quantum attack - “Serious Cryptography” by
>> Aumasson tells me the best known quantum attack against collision
>> resistance is O(2^n/3) - so ~2^85 for SHA-256 but also needs O(2^85) space
>> so is impractical. I don’t know if that is the last word though).
>>
>> As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now
>> with 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-23 Thread Brian Campbell
That's pretty much in line with my on-the-fence position on it.

On Fri, Apr 20, 2018 at 4:43 PM, Justin Richer  wrote:

> Additional confirmation methods can be easily defined outside of this
> draft. That said, I think those two in particular are pretty
> straightforward to add (well-known algorithms that are widely available) so
> it might make sense to just toss them in now? I think it’s fine either way.
>
>  — Justin
>
> On Apr 19, 2018, at 12:23 PM, Brian Campbell 
> wrote:
>
> Okay, so I retract the idea of metadata indicating the hash alg/cnf
> method (based on John pointing out that it doesn't really make sense).
>
> That still leaves the question of whether or not to define additional
> confirmation methods in this document (and if so, what they would be
> though x5t#S384 and x5t#S512 seem the most likely).
>
> There's some reasonable rationale for both adding one or two new hash alg
> confirmation methods in the doc now vs. sticking with just SHA256 for
> now. I'll note again that the doc doesn't preclude using or later defining
> other confirmation methods.
>
> I'm kind of on the fence about it, to be honest. But that doesn't really
> matter because the draft should reflect rough WG consensus. So I'm looking
> to get a rough gauge of rough consensus. At this point there's one
> comment out of WGLC asking for additional confirmation method(s). I don't
> think that makes for consensus. But I'd ask others from the WG to chime
> in, if appropriate, to help me better gauge consensus.
>
> On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden 
> wrote:
>
>> I’m not particularly wedded to SHA-512, just that it should be possible
>> to use something else. At the moment, the draft seems pretty wedded to
>> SHA-256. SHA-512 may be overkill, but it is fast (faster than SHA-256 on
>> many 64-bit machines) and provides a very wide security margin against any
>> future crypto advances (quantum or otherwise). I’d also be happy with
>> SHA-384, SHA3-512, Blake2 etc but SHA-512 seems pretty widely available.
>>
>> I don’t think short-lived access tokens is a help if the likelihood is
>> that certs will be reused for many access tokens.
>>
>> Public Web PKI certs tend to only use SHA-256 as it has broad support,
>> and I gather there were some compatibility issues with SHA-512 certs in
>> TLS. There are a handful of SHA-384 certs - e.g., the Comodo CA certs for
>>  https://home.cern/ are signed with SHA-384 (although with RSA-2048,
>> which NSA estimates at only ~112-bit security). SHA-512 is used on some
>> internal networks where there is more control over components being used,
>> which is also where people are most likely to care about security beyond
>> 128-bit level (eg government internal networks).
>>
>> By the way, I just mentioned quantum attacks as an example of something
>> that might weaken the hash in future. Obviously, quantum attacks completely
>> destroy RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it
>> provides a considerable margin to hedge against future quantum *or
>> classical* advances while allowing the paranoid to pick a stronger security
>> level now. We have customers that ask for 256-bit AES already.
>>
>>
>> (I also misremembered the quantum attack - “Serious Cryptography” by
>> Aumasson tells me the best known quantum attack against collision
>> resistance is O(2^n/3) - so ~2^85 for SHA-256 but also needs O(2^85) space
>> so is impractical. I don’t know if that is the last word though).
>>
>> As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now
>> with practical collisions having been demonstrated. The kind of attacks on
>> CA certs demonstrated for MD5 [1] are probably not too far off for SHA-1
>> now. As far as I am aware, we’re nowhere near those kinds of attacks on
>> SHA-256, but I’d prefer that there was a backup plan in place already
>> rather than waiting to see (and waiting for everyone to have hard-coded
>> SHA-256 everywhere).
>>
>> Now I have to get back to my daughter’s birthday party… :-)
>>
>> [1] http://www.win.tue.nl/hashclash/rogue-ca/
>>
>> Neil
>>
>>
>> On Thursday, Apr 12, 2018 at 10:07 pm, John Bradley 
>> wrote:
>> The WG discusses RS meta-data as part of one of Dick’s proposals.   I am
>> unclear on who is going to work on it in what draft.
>>
>> If the client is doing mtls to both the Token endpoint and RS the
>> information in the confirmation method is not relevant to the client as
>> long as the AS and RS are in agreement like with most tokens.
>>
>> The hash on the certificate and length of the signing key are equally or
>> more vulnerable to any sort of attack.
>> At least with AT the tokens are not long lived.
>>
>> Doing anything better than SHA256 only makes sense if the cert is signed
>> by something stronger like SHA512 with a 2048bit RSA key.   That seems sort
>> of unlikely in the near term.
>>
>> I prefer to wait to see 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-20 Thread Justin Richer
Additional confirmation methods can be easily defined outside of this draft. 
That said, I think those two in particular are pretty straightforward to add 
(well-known algorithms that are widely available) so it might make sense to 
just toss them in now? I think it’s fine either way.

 — Justin

> On Apr 19, 2018, at 12:23 PM, Brian Campbell wrote:
> 
> Okay, so I retract the idea of metadata indicating the hash alg/cnf method 
> (based on John pointing out that it doesn't really make sense). 
> 
> That still leaves the question of whether or not to define additional 
> confirmation methods in this document (and if so, what they would be though 
> x5t#S384 and x5t#S512 seem the most likely). 
> 
> There's some reasonable rationale for both adding one or two new hash alg 
> confirmation methods in the doc now vs. sticking with just SHA256 for now. 
> I'll note again that the doc doesn't preclude using or later defining other 
> confirmation methods.
> 
> I'm kind of on the fence about it, to be honest. But that doesn't really 
> matter because the draft should reflect rough WG consensus. So I'm looking to 
> get a rough gauge of rough consensus. At this point there's one comment out 
> of WGLC asking for additional confirmation method(s). I don't think that 
> makes for consensus. But I'd ask others from the WG to chime in, if 
> appropriate, to help me better gauge consensus. 
> 
> On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden wrote:
> I’m not particularly wedded to SHA-512, just that it should be possible to 
> use something else. At the moment, the draft seems pretty wedded to SHA-256. 
> SHA-512 may be overkill, but it is fast (faster than SHA-256 on many 64-bit 
> machines) and provides a very wide security margin against any future crypto 
> advances (quantum or otherwise). I’d also be happy with SHA-384, SHA3-512, 
> Blake2 etc but SHA-512 seems pretty widely available. 
> 
> I don’t think short-lived access tokens is a help if the likelihood is that 
> certs will be reused for many access tokens. 
> 
> Public Web PKI certs tend to only use SHA-256 as it has broad support, and I 
> gather there were some compatibility issues with SHA-512 certs in TLS. There 
> are a handful of SHA-384 certs - e.g., the Comodo CA certs for 
> https://home.cern/  are signed with SHA-384 (although 
> with RSA-2048, which NSA estimates at only ~112-bit security). SHA-512 is 
> used on some internal networks where there is more control over components 
> being used, which is also where people are most likely to care about 
> security beyond 128-bit level (eg government internal networks). 
> 
> By the way, I just mentioned quantum attacks as an example of something that 
> might weaken the hash in future. Obviously, quantum attacks completely 
> destroy RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it 
> provides a considerable margin to hedge against future quantum *or classical* 
> advances while allowing the paranoid to pick a stronger security level now. 
> We have customers that ask for 256-bit AES already.
> 
> (I also misremembered the quantum attack - “Serious Cryptography” by Aumasson 
> tells me the best known quantum attack against collision resistance is 
> O(2^n/3) - so ~2^85 for SHA-256 but also needs O(2^85) space so is 
> impractical. I don’t know if that is the last word though).  
> 
> As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now with 
> practical collisions having been demonstrated. The kind of attacks on CA 
> certs demonstrated for MD5 [1] are probably not too far off for SHA-1 now. As 
> far as I am aware, we’re nowhere near those kinds of attacks on SHA-256, but 
> I’d prefer that there was a backup plan in place already rather than waiting 
> to see (and waiting for everyone to have hard-coded SHA-256 everywhere).
> 
> Now I have to get back to my daughter’s birthday party… :-)
> 
> [1] http://www.win.tue.nl/hashclash/rogue-ca/ 
> 
> 
> Neil
> 
> 
> On Thursday, Apr 12, 2018 at 10:07 pm, John Bradley wrote:
> The WG discusses RS meta-data as part of one of Dick’s proposals.   I am 
> unclear on who is going to work on it in what draft.
> 
> If the client is doing mtls to both the Token endpoint and RS the information 
> in the confirmation method is not relevant to the client as long as the AS 
> and RS are in agreement like with most tokens.
> 
> The hash on the certificate and length of the signing key are equally or more 
> vulnerable to any sort of attack.
> At least with AT the tokens are not long lived.
> 
> Doing anything better than SHA256 only makes sense if the cert is signed by 
> something stronger like SHA512 with a 2048bit RSA key.   That seems sort of 
> unlikely in the near term.  
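
As a quick way to see what hash and key size actually protect a given
certificate (a sketch using the pyca/cryptography library; the file name is a
placeholder):

# Sketch: inspect which hash algorithm signs a certificate and how large
# its key is, to judge whether a stronger thumbprint would add anything.
from cryptography import x509

with open("client.pem", "rb") as f:           # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.signature_hash_algorithm.name)     # e.g. "sha256"
print(cert.public_key().key_size)             # e.g. 2048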
> 
> I prefer to wait to see what general fix is 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-19 Thread Brian Campbell
Okay, so I retract the idea of metadata indicating the hash alg/cnf method
(based on John pointing out that it doesn't really make sense).

That still leaves the question of whether or not to define additional
confirmation methods in this document (and if so, what they would be though
x5t#S384 and x5t#S512 seem the most likely).

There's some reasonable rationale for both adding one or two new hash alg
confirmation methods in the doc now vs. sticking with just SHA256 for now.
I'll note again that the doc doesn't preclude using or later defining other
confirmation methods.

I'm kind of on the fence about it, to be honest. But that doesn't really
matter because the draft should reflect rough WG consensus. So I'm looking
to get a rough gauge of rough consensus. At this point there's one comment
out of WGLC asking for additional confirmation method(s). I don't think
that makes for consensus. But I'd ask others from the WG to chime in, if
appropriate, to help me better gauge consensus.

On Fri, Apr 13, 2018 at 4:49 AM, Neil Madden 
wrote:

> I’m not particularly wedded to SHA-512, just that it should be possible to
> use something else. At the moment, the draft seems pretty wedded to
> SHA-256. SHA-512 may be overkill, but it is fast (faster than SHA-256 on
> many 64-bit machines) and provides a very wide security margin against any
> future crypto advances (quantum or otherwise). I’d also be happy with
> SHA-384, SHA3-512, Blake2 etc but SHA-512 seems pretty widely available.
>
> I don’t think short-lived access tokens is a help if the likelihood is
> that certs will be reused for many access tokens.
>
> Public Web PKI certs tend to only use SHA-256 as it has broad support, and
> I gather there were some compatibility issues with SHA-512 certs in TLS.
> There are a handful of SHA-384 certs - e.g., the Comodo CA certs for
> https://home.cern/ are signed with SHA-384 (although with RSA-2048, which
> NSA estimates at only ~112-bit security). SHA-512 is used on some internal
> networks where there is more control over components being used, which is
> also where people are most likely to care about security beyond 128-bit
> level (eg government internal networks).
>
> By the way, I just mentioned quantum attacks as an example of something
> that might weaken the hash in future. Obviously, quantum attacks completely
> destroy RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it
> provides a considerable margin to hedge against future quantum *or
> classical* advances while allowing the paranoid to pick a stronger security
> level now. We have customers that ask for 256-bit AES already.
>
> (I also misremembered the quantum attack - “Serious Cryptography” by
> Aumasson tells me the best known quantum attack against collision
> resistance is O(2^n/3) - so ~2^85 for SHA-256 but also needs O(2^85) space
> so is impractical. I don’t know if that is the last word though).
>
> As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now
> with practical collisions having been demonstrated. The kind of attacks on
> CA certs demonstrated for MD5 [1] are probably not too far off for SHA-1
> now. As far as I am aware, we’re nowhere near those kinds of attacks on
> SHA-256, but I’d prefer that there was a backup plan in place already
> rather than waiting to see (and waiting for everyone to have hard-coded
> SHA-256 everywhere).
>
> Now I have to get back to my daughter’s birthday party… :-)
>
> [1] http://www.win.tue.nl/hashclash/rogue-ca/
>
> Neil
>
>
> On Thursday, Apr 12, 2018 at 10:07 pm, John Bradley 
> wrote:
> The WG discusses RS meta-data as part of one of Dick’s proposals.   I am
> unclear on who is going to work on it in what draft.
>
> If the client is doing mtls to both the Token endpoint and RS the
> information in the confirmation method is not relevant to the client as
> long as the AS and RS are in agreement like with most tokens.
>
> The hash on the certificate and length of the signing key are equally or
> more vulnerable to any sort of attack.
> At least with AT the tokens are not long lived.
>
> Doing anything better than SHA256 only makes sense if the cert is signed
> by something stronger like SHA512 with a 2048bit RSA key.   That seems sort
> of unlikely in the near term.
>
> I prefer to wait to see what general fix is proposed before we jump the
> gun with a band-aid for a small part of the overall problem.
>
> People are typically using SHA1 fingerprints.  We added SHA256 for JWT and
> got push back on that as overkill.
> I don’t think this is the correct place to be deciding this.
>
> We could say that if new thumbprints are defined the AS and RS can decide
> to use them as necessary.
> That is sort of the situation we have now.
>
> The correct solution may be a thousand rounds of PBKDF2 or something like
> that to make finding collisions more difficult rather than longer hashes.
>
> John B.
>
> > On Apr 12, 2018, at 5:20 PM, 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-19 Thread Brian Campbell
Thanks. Will do.

On Thu, Apr 19, 2018 at 8:57 AM, Benjamin Kaduk  wrote:

> I would go ahead and put them in.  The blog post might get some
> pushback, but I think there's plenty of precedent for academic
> papers.
>
> -Ben
>
> On Wed, Apr 18, 2018 at 09:34:23AM -0600, Brian Campbell wrote:
> > Thanks for the text, Neil. And the nit on the text, Ben. I'll include it
> in
> > the next draft.
> >
> > Ben, bit of a procedural question for you: can or should I include those
> > references (https://www.cryptologie.net/article/374/common-x509-
> certificate-
> > validationcreation-pitfalls/ & http://www.cs.utexas.edu/~
> > shmat/shmat_ccs12.pdf) that Neil had with the text in the draft as
> > informational? Or? I'm honestly not sure if it's okay to cite a blog post
> > or university paper.
> >
> >
> >
> >
> >
> >
> >
> > On Tue, Apr 17, 2018 at 8:13 AM, Benjamin Kaduk  wrote:
> >
> > > Picking nits, but maybe "established and well-tested X.509 library
> > > (such as one used by an established TLS library)", noting that TLS
> > > 1.3 has added a new protocol feature that allows for TLS and X.509
> > > library capabilities to be separately indicated (as would be needed
> > > if they were organizationally separate).
> > >
> > > -Ben
> > >
> > > On Tue, Apr 17, 2018 at 10:48:04AM +0100, Neil Madden wrote:
> > > > OK, here’s a stab at a new security considerations section on X.509
> > > parsing and validation:
> > > >
> > > > ---
> > > > 5.3 X.509 Certificate Parsing and Validation Complexity
> > > >
> > > > Parsing and validation of X.509 certificates and certificate chains
> is
> > > complex and implementation mistakes have previously exposed security
> > > vulnerabilities. Complexities of validation include (but are not
> limited
> > > to) [1][2][3]:
> > > > - checking of Basic Constraints, basic and extended Key Usage
> > > constraints, validity periods, and critical extensions;
> > > > - handling of null-terminator bytes and non-canonical string
> > > representations in subject names;
> > > > - handling of wildcard patterns in subject names;
> > > > - recursive verification of certificate chains and checking
> certificate
> > > revocation.
> > > > For these reasons, implementors SHOULD use an established and
> > > well-tested TLS library for validation of X.509 certificate chains and
> > > SHOULD NOT attempt to write their own X.509 certificate validation
> > > procedures.
> > > >
> > > > [1] https://www.cryptologie.net/article/374/common-x509-certificate-validationcreation-pitfalls/
> > > > [2] http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
> > > > [3] https://tools.ietf.org/html/rfc5280
> > > >
> > > > ---
> > > >
> > > > NB - this blog post [1] is the best concise summary of attacks I
> could
> > > find. Most of these attacks have been published as Black Hat talks and
> I
> > > can’t seem to find definitive references or good survey papers (beyond
> [2],
> > > which is a little older).
> > > >
> > > > Let me know what you think,
> > > >
> > > > Neil
> > > >
> > > >
> > > > > On 12 Apr 2018, at 20:42, Brian Campbell <
> bcampb...@pingidentity.com>
> > > wrote:
> > > > >
> > > > > Thanks Neil.
> > > > >
> > > > > Other than the potential metadata changes, which I'd like more WG
> > > input on and may raise in a new thread, I think I've got enough to make
> > > updates addressing your comments.  But please do send text for that
> > > Security Considerations bit, if you come up with something.
> > > > >
> > > > > On Thu, Apr 12, 2018 at 3:03 AM, Neil Madden <
> > > neil.mad...@forgerock.com > wrote:
> > > > > Hi Brian,
> > > > >
> > > > > Thanks for the detailed responses. Comments in line below (marked
> with
> > > ***).
> > > > >
> > > > > Neil
> > > > >
> > > > > On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell <
> > > bcampb...@pingidentity.com > wrote:
> > > > > Thanks for the review and feedback, Neil. I apologize for my being
> > > slow to respond. As I said to Justin recently <
> > > https://mailarchive.ietf.org/arch/msg/oauth/cNmk8fSuxp37L-
> z8Rvr6_EnyCug>,
> > > I've been away from things for a while. Also there's a lot here to get
> > > through so took me some time.
> > > > >
> > > > > It looks like John touched on some of your comments but not all.
> I'll
> > > try and reply to them as best I can inline below.
> > > > >
> > > > >
> > > > > On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden <
> > > neil.mad...@forgerock.com > wrote:
> > > > > Hi,
> > > > >
> > > > > I have reviewed this draft and have a number of comments, below.
> > > ForgeRock have not yet 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-18 Thread Brian Campbell
Thanks for the text, Neil. And the nit on the text, Ben. I'll include it in
the next draft.

Ben, bit of a procedural question for you: can or should I include those
references (https://www.cryptologie.net/article/374/common-x509-certificate-
validationcreation-pitfalls/ & http://www.cs.utexas.edu/~
shmat/shmat_ccs12.pdf) that Neil had with the text in the draft as
informational? Or? I'm honestly not sure if it's okay to cite a blog post
or university paper.







On Tue, Apr 17, 2018 at 8:13 AM, Benjamin Kaduk  wrote:

> Picking nits, but maybe "established and well-tested X.509 library
> (such as one used by an established TLS library)", noting that TLS
> 1.3 has added a new protocol feature that allows for TLS and X.509
> library capabilities to be separately indicated (as would be needed
> if they were organizationally separate).
>
> -Ben
>
> On Tue, Apr 17, 2018 at 10:48:04AM +0100, Neil Madden wrote:
> > OK, here’s a stab at a new security considerations section on X.509
> parsing and validation:
> >
> > ---
> > 5.3 X.509 Certificate Parsing and Validation Complexity
> >
> > Parsing and validation of X.509 certificates and certificate chains is
> complex and implementation mistakes have previously exposed security
> vulnerabilities. Complexities of validation include (but are not limited
> to) [1][2][3]:
> > - checking of Basic Constraints, basic and extended Key Usage
> constraints, validity periods, and critical extensions;
> > - handling of null-terminator bytes and non-canonical string
> representations in subject names;
> > - handling of wildcard patterns in subject names;
> > - recursive verification of certificate chains and checking certificate
> revocation.
> > For these reasons, implementors SHOULD use an established and
> well-tested TLS library for validation of X.509 certificate chains and
> SHOULD NOT attempt to write their own X.509 certificate validation
> procedures.
> >
> > [1] https://www.cryptologie.net/article/374/common-x509-certificate-validationcreation-pitfalls/
> > [2] http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
> > [3] https://tools.ietf.org/html/rfc5280
> >
> > ---
> >
> > NB - this blog post [1] is the best concise summary of attacks I could
> find. Most of these attacks have been published as Black Hat talks and I
> can’t seem to find definitive references or good survey papers (beyond [2],
> which is a little older).
> >
> > Let me know what you think,
> >
> > Neil
> >
> >
> > > On 12 Apr 2018, at 20:42, Brian Campbell 
> wrote:
> > >
> > > Thanks Neil.
> > >
> > > Other than the potential metadata changes, which I'd like more WG
> input on and may raise in a new thread, I think I've got enough to make
> updates addressing your comments.  But please do send text for that
> Security Considerations bit, if you come up with something.
> > >
> > > On Thu, Apr 12, 2018 at 3:03 AM, Neil Madden <
> neil.mad...@forgerock.com > wrote:
> > > Hi Brian,
> > >
> > > Thanks for the detailed responses. Comments in line below (marked with
> ***).
> > >
> > > Neil
> > >
> > > On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell <
> bcampb...@pingidentity.com > wrote:
> > > Thanks for the review and feedback, Neil. I apologize for my being
> slow to respond. As I said to Justin recently <
> https://mailarchive.ietf.org/arch/msg/oauth/cNmk8fSuxp37L-z8Rvr6_EnyCug>,
> I've been away from things for a while. Also there's a lot here to get
> through so took me some time.
> > >
> > > It looks like John touched on some of your comments but not all. I'll
> try and reply to them as best I can inline below.
> > >
> > >
> > > On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden <
> neil.mad...@forgerock.com > wrote:
> > > Hi,
> > >
> > > I have reviewed this draft and have a number of comments, below.
> ForgeRock have not yet implemented this draft, but there is interest in
> implementing it at some point. (Disclaimer: We have no firm commitments on
> this at the moment, I do not speak for ForgeRock, etc).
> > >
> > > 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 <
> https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1> defines
> a new confirmation method “x5t#S256”. However, there is already a
> confirmation method “jwk” that can contain a JSON Web Key, which itself can
> contain a “x5t#S256” claim with exactly the same syntax and semantics. The
> draft proposes:
> > >
> > > { “cnf”: { “x5t#S256”: “…” } }
> > >
> > > but you can already do:
> > >
> > > { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> > >
> > > If the 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-17 Thread Benjamin Kaduk
Picking nits, but maybe "established and well-tested X.509 library
(such as one used by an established TLS library)", noting that TLS
1.3 has added a new protocol feature that allows for TLS and X.509
library capabilities to be separately indicated (as would be needed
if they were organizationally separate).

-Ben

On Tue, Apr 17, 2018 at 10:48:04AM +0100, Neil Madden wrote:
> OK, here’s a stab at a new security considerations section on X.509 parsing 
> and validation:
> 
> ---
> 5.3 X.509 Certificate Parsing and Validation Complexity
> 
> Parsing and validation of X.509 certificates and certificate chains is 
> complex and implementation mistakes have previously exposed security 
> vulnerabilities. Complexities of validation include (but are not limited to) 
> [1][2][3]:
> - checking of Basic Constraints, basic and extended Key Usage constraints, 
> validity periods, and critical extensions;
> - handling of null-terminator bytes and non-canonical string representations 
> in subject names;
> - handling of wildcard patterns in subject names;
> - recursive verification of certificate chains and checking certificate 
> revocation.
> For these reasons, implementors SHOULD use an established and well-tested TLS 
> library for validation of X.509 certificate chains and SHOULD NOT attempt to 
> write their own X.509 certificate validation procedures.
> 
> [1] https://www.cryptologie.net/article/374/common-x509-certificate-validationcreation-pitfalls/
> [2] http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
> [3] https://tools.ietf.org/html/rfc5280
> 
> ---
> 
> NB - this blog post [1] is the best concise summary of attacks I could find. 
> Most of these attacks have been published as Black Hat talks and I can’t seem 
> to find definitive references or good survey papers (beyond [2], which is a 
> little older).
> 
> Let me know what you think,
> 
> Neil
> 
> 
> > On 12 Apr 2018, at 20:42, Brian Campbell  wrote:
> > 
> > Thanks Neil. 
> > 
> > Other than the potential metadata changes, which I'd like more WG input on 
> > and may raise in a new thread, I think I've got enough to make updates 
> > addressing your comments.  But please do send text for that Security 
> > Considerations bit, if you come up with something.  
> > 
> > On Thu, Apr 12, 2018 at 3:03 AM, Neil Madden  > > wrote:
> > Hi Brian,
> > 
> > Thanks for the detailed responses. Comments in line below (marked with ***).
> > 
> > Neil
> > 
> > On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell 
> > > wrote:
> > Thanks for the review and feedback, Neil. I apologize for my being slow to 
> > respond. As I said to Justin recently 
> > , 
> > I've been away from things for a while. Also there's a lot here to get 
> > through so took me some time. 
> > 
> > It looks like John touched on some of your comments but not all. I'll try 
> > and reply to them as best I can inline below. 
> > 
> > 
> > On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden  > > wrote:
> > Hi,
> > 
> > I have reviewed this draft and have a number of comments, below. ForgeRock 
> > have not yet implemented this draft, but there is interest in implementing 
> > it at some point. (Disclaimer: We have no firm commitments on this at the 
> > moment, I do not speak for ForgeRock, etc).
> > 
> > 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 
> >  defines 
> > a new confirmation method “x5t#S256”. However, there is already a 
> > confirmation method “jwk” that can contain a JSON Web Key, which itself can 
> > contain a “x5t#S256” claim with exactly the same syntax and semantics. The
> > draft proposes:
> > 
> > { “cnf”: { “x5t#S256”: “…” } }
> > 
> > but you can already do:
> > 
> > { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> > 
> > If the intent is just to save some space and avoid the mandatory fields of 
> > the existing JWK types, maybe this would be better addressed by defining a 
> > new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
> > “…” }.
> > 
> > The intent of the x5t#S256 confirmation method was to be space efficient 
> > and straightforward while utilizing the framework and registry that RFC 
> > 7800 gives.  Even a new JWK type like that would still use more space. And 
> > I'd argue that the new confirmation method is considerably more 
> > straightforward than registering a new JWK type (and the implications that 
> > would have on JWK implementations in general) in 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-17 Thread Neil Madden
OK, here’s a stab at a new security considerations section on X.509 parsing and 
validation:

---
5.3 X.509 Certificate Parsing and Validation Complexity

Parsing and validation of X.509 certificates and certificate chains is complex 
and implementation mistakes have previously exposed security vulnerabilities. 
Complexities of validation include (but are not limited to) [1][2][3]:
- checking of Basic Constraints, basic and extended Key Usage constraints, 
validity periods, and critical extensions;
- handling of null-terminator bytes and non-canonical string representations in 
subject names;
- handling of wildcard patterns in subject names;
- recursive verification of certificate chains and checking certificate 
revocation.
For these reasons, implementors SHOULD use an established and well-tested TLS 
library for validation of X.509 certificate chains and SHOULD NOT attempt to 
write their own X.509 certificate validation procedures.

[1] https://www.cryptologie.net/article/374/common-x509-certificate-validationcreation-pitfalls/
[2] http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
[3] https://tools.ietf.org/html/rfc5280

---

NB - this blog post [1] is the best concise summary of attacks I could find. 
Most of these attacks have been published as Black Hat talks and I can’t seem 
to find definitive references or good survey papers (beyond [2], which is a 
little older).
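
To illustrate the intent of that SHOULD (this is not proposed draft text, just a sketch in Python with placeholder file names): a protected resource or AS can delegate all of the checks listed above to its TLS stack rather than parsing certificates itself, e.g.:

    import ssl

    # Server-side TLS context that requires and validates a client
    # certificate chain, leaving X.509 parsing, chain building, validity
    # period and extension checks to the underlying TLS library (OpenSSL
    # via the ssl module), rather than hand-rolled code.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server-key.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations(cafile="trusted-cas.pem")

    # tls_sock = ctx.wrap_socket(tcp_sock, server_side=True)
    # der = tls_sock.getpeercert(binary_form=True)  # the validated client cert (DER)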

Let me know what you think,

Neil


> On 12 Apr 2018, at 20:42, Brian Campbell  wrote:
> 
> Thanks Neil. 
> 
> Other than the potential metadata changes, which I'd like more WG input on 
> and may raise in a new thread, I think I've got enough to make updates 
> addressing your comments.  But please do send text for that Security 
> Considerations bit, if you come up with something.  
> 
> On Thu, Apr 12, 2018 at 3:03 AM, Neil Madden  > wrote:
> Hi Brian,
> 
> Thanks for the detailed responses. Comments in line below (marked with ***).
> 
> Neil
> 
> On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell 
> > wrote:
> Thanks for the review and feedback, Neil. I apologize for my being slow to 
> respond. As I said to Justin recently 
> , 
> I've been away from things for a while. Also there's a lot here to get 
> through so took me some time. 
> 
> It looks like John touched on some of your comments but not all. I'll try and 
> reply to them as best I can inline below. 
> 
> 
> On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden  > wrote:
> Hi,
> 
> I have reviewed this draft and have a number of comments, below. ForgeRock 
> have not yet implemented this draft, but there is interest in implementing it 
> at some point. (Disclaimer: We have no firm commitments on this at the 
> moment, I do not speak for ForgeRock, etc).
> 
> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 
>  defines a 
> new confirmation method “x5t#S256”. However, there is already a confirmation 
> method “jwk” that can contain a JSON Web Key, which itself can contain a 
> > “x5t#S256” claim with exactly the same syntax and semantics. The draft
> proposes:
> 
> { “cnf”: { “x5t#S256”: “…” } }
> 
> but you can already do:
> 
> { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> 
> If the intent is just to save some space and avoid the mandatory fields of 
> the existing JWK types, maybe this would be better addressed by defining a 
> new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
> “…” }.
> 
> The intent of the x5t#S256 confirmation method was to be space efficient and 
> straightforward while utilizing the framework and registry that RFC 7800 
> gives.  Even a new JWK type like that would still use more space. And I'd 
> argue that the new confirmation method is considerably more straightforward 
> than registering a new JWK type (and the implications that would have on JWK 
> implementations in general) in order to use the existing "jwk" confirmation 
> method.  
> 
> ***
> OK, that is reasonable. Given that the draft says SHOULD rather than MUST for 
> using this confirmation key method, I think it is currently allowed to use 
> either representation. 
> 
>  
> 
> 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s really 
> only the client authentication that we are interested here, and the fact that 
> the server also authenticates with a certificate is not hugely relevant to 
> this particular spec (although it is to the overall security of OAuth). Also, 
> TLS 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-13 Thread Neil Madden
I’m not particularly wedded to SHA-512, just that it should be possible to use 
something else. At the moment, the draft seems pretty wedded to SHA-256. 
SHA-512 may be overkill, but it is fast (faster than SHA-256 on many 64-bit 
machines) and provides a very wide security margin against any future crypto 
advances (quantum or otherwise). I’d also be happy with SHA-384, SHA3-512, 
Blake2 etc but SHA-512 seems pretty widely available. 
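
Just to illustrate the cost: computing a wider thumbprint is trivial, and the only visible difference is the encoded length (43 vs 86 base64url characters). A rough Python sketch - note that an “x5t#S512” confirmation method is hypothetical here, nothing defines it today:

    import base64, hashlib

    def b64url(b: bytes) -> str:
        return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

    def thumbprints(der_cert: bytes) -> dict:
        return {
            "x5t#S256": b64url(hashlib.sha256(der_cert).digest()),  # 43 chars
            "x5t#S512": b64url(hashlib.sha512(der_cert).digest()),  # 86 chars (hypothetical name)
        }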

I don’t think short-lived access tokens is a help if the likelihood is that 
certs will be reused for many access tokens. 

Public Web PKI certs tend to only use SHA-256 as it has broad support, and I 
gather there were some compatibility issues with SHA-512 certs in TLS. There 
are a handful of SHA-384 certs - e.g., the Comodo CA certs for 
https://home.cern/ are signed with SHA-384 (although with RSA-2048, which NSA 
estimates at only ~112-bit security). SHA-512 is used on some internal networks 
where there is more control over components being used, which is also where 
people are mostly likely to care about security beyond 128-bit level (eg 
government internal networks). 

By the way, I just mentioned quantum attacks as an example of something that 
might weaken the hash in future. Obviously, quantum attacks completely destroy 
RSA, ECDSA etc, so SHA-512 would not solve this on its own, but it provides a 
considerable margin to hedge against future quantum *or classical* advances 
while allowing the paranoid to pick a stronger security level now. We have 
customers that ask for 256-bit AES already.

(I also misremembered the quantum attack - “Serious Cryptography” by Aumasson 
tells me the best known quantum attack against collision resistance is O(2^(n/3)) 
- so ~2^85 for SHA-256 but also needs O(2^85) space so is impractical. I don’t 
know if that is the last word though).  
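
To put rough numbers on those generic bounds (this is just the arithmetic, not a claim about concrete attacks):

    # Generic collision-search costs for an n-bit hash: classical birthday
    # bound ~2^(n/2); best known quantum bound ~2^(n/3), but needing
    # comparable memory, hence impractical as noted above.
    for n in (256, 512):
        print(f"SHA-{n}: classical ~2^{n // 2}, quantum ~2^{n // 3}")
    # SHA-256: classical ~2^128, quantum ~2^85
    # SHA-512: classical ~2^256, quantum ~2^170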

As for SHA-1, doesn’t that prove the point? SHA-1 is pretty broken now with 
practical collisions having been demonstrated. The kind of attacks on CA certs 
demonstrated for MD5 [1] are probably not too far off for SHA-1 now. As far as 
I am aware, we’re nowhere near those kinds of attacks on SHA-256, but I’d 
prefer that there was a backup plan in place already rather than waiting to see 
(and waiting for everyone to have hard-coded SHA-256 everywhere).

Now I have to get back to my daughter’s birthday party… :-)

[1] http://www.win.tue.nl/hashclash/rogue-ca/

Neil


On Thursday, Apr 12, 2018 at 10:07 pm, John Bradley  wrote:
The WG discusses RS meta-data as part of one of Dick’s proposals.   I am 
unclear on who is going to work on it in what draft.

If the client is doing mtls to both the Token endpoint and RS the information 
in the confirmation method is not relevant to the client as long as the AS and 
RS are in agreement like with most tokens.

The hash on the certificate and length of the signing key are equally or more 
vulnerable to any sort of attack.
At least with AT the tokens are not long lived.

Doing anything better than SHA256 only makes sense if the cert is signed by 
something stronger like SHA512 with a 2048bit RSA key.   That seems sort of 
unlikely in the near term.  

I prefer to wait to see what general fix is proposed before we jump the gun 
with a band-aid for a small part of the overall problem.

People are typically using SHA1 fingerprints.  We added SHA256 for JWT and got 
push back on that as overkill. 
I don’t think this is the correct place to be deciding this.   

We could say that if new thumbprints are defined the AS and RS can decide to 
use them as necessary.
That is sort of the situation we have now.

The correct solution may be a thousand rounds of PBKDF2 or something like that 
to make finding collisions more difficult rather than longer hashes.

John B.

> On Apr 12, 2018, at 5:20 PM, Brian Campbell  
> wrote:
> 
> That's true about it being opaque to the client. I was thinking of client 
> metadata (config or registration) as a way for the AS to decide if to bind 
> the AT to a cert. And moving from a boolean to a conf method as an extension 
> of that. It would be more meaningful in RS discovery, if there was such a 
> thing.
> 
> I don’t know that SHA512 would be the best choice either but it is something 
> that is stronger and could be included now. If there's consensus to do more 
> than SHA256 in this doc.  
> 
> 
> 
> On Thu, Apr 12, 2018 at 1:52 PM, John Bradley  wrote:
> Inline
> 
> Snip
> 
>> 
>> 
>> 12. The use of only SHA-256 fingerprints means that the security strength of 
>> the sender-constrained access tokens is limited by the collision resistance 
>> of SHA-256 - roughly “128-bit security" - without a new specification for a 
>> new thumbprint algorithm. An implication of this is that it is fairly 
>> pointless for the protected resource TLS stack to ever negotiate cipher 
>> suites/keys with a higher level of security. In more crystal ball territory, 
>> if a practical 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread Brian Campbell
That's true about it being opaque to the client. I was thinking of client
metadata (config or registration) as a way for the AS to decide whether to bind
the AT to a cert. And moving from a boolean to a conf method as an
extension of that. It would be more meaningful in RS discovery, if there
was such a thing.

I don’t know that SHA512 would be the best choice either but it is
something that is stronger and could be included now. If there's consensus
to do more than SHA256 in this doc.



On Thu, Apr 12, 2018 at 1:52 PM, John Bradley  wrote:

> Inline
>
> Snip
>
>
>
>> 12. The use of only SHA-256 fingerprints means that the security strength
>> of the sender-constrained access tokens is limited by the collision
>> resistance of SHA-256 - roughly “128-bit security" - without a new
>> specification for a new thumbprint algorithm. An implication of this is
>> that it is fairly pointless for the protected resource TLS stack to ever
>> negotiate cipher suites/keys with a higher level of security. In more
>> crystal ball territory, if a practical quantum computer becomes a
>> possibility within the lifetime of this spec, then the expected collision
>> resistance of SHA-256 would drop quadratically, allowing an attacker to
>> find a colliding certificate in ~2^64 effort. If we are going to pick just
>> one thumbprint hash algorithm, I would prefer we pick SHA-512.
>>
>
> The idea behind having just one thumbprint hash algorithm was to keep
> things simple. And SHA-256 seems good enough for the reasonably foreseeable
> future (and space aware). Also a new little spec to register a different
> hash algorithm, should the need arise, didn't seem particularly onerous.
>
> That was the thinking anyway. Maybe it is too short sighted though?
>
> I do think SHA-256 should stay regardless.
>
> But the draft could also define SHA-512 (and maybe others). What do you
> and WG folks think about that?
>
> *** Yes please.
>
> It would probably then be useful for the metadata in §3.3 and §3.4 to
> change from just boolean values to something to convey what hash alg/cnf
> method the client expects and the list of what the server supports. That's
> maybe something that should be done anyway. That'd be a breaking change to
> the metadata. But there's already another potential breaking change
> identified earlier in this message. So maybe it's worth doing...
>
> How do folks feel about making this kind of change?
>
>
> The confirmation method is opaque to the client.  I don’t think adding
> hash algs to discovery will really help.
> The AS selection needs to be based on what the RS can support.
>
> If anyplace it should be in RS discovery.
>
> As a practical matter you aren't going to find a client certificate with more
> than a SHA256 hash anytime in the near future.
> So for a short lived access token 128bits of collision resistance is quite
> good.   We are going to have issues with certificates long before this
> becomes a problem.
>
> SHA256 is appropriate for AES128, 256-bit elliptic curves, and 3072-bit RSA
> keys, but again that is over the long term.
> We are using short lived access tokens.  People should rotate the
> certificate more often than once a year if this is a real issue.
>
> I am not against new hash for the fingerprint, but I also don’t know that
> SHA512 would be the best choice if we are concerned about quantum crypto
> resistance.   That is an issue beyond mtls and should be addressed by CFRG
> etc.
>
> Regards
> John B.
>
>
>

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread John Bradley
Inline

Snip

> 
> 
> 12. The use of only SHA-256 fingerprints means that the security strength of 
> the sender-constrained access tokens is limited by the collision resistance 
> of SHA-256 - roughly “128-bit security" - without a new specification for a 
> new thumbprint algorithm. An implication of this is that it is fairly 
> pointless for the protected resource TLS stack to ever negotiate cipher 
> suites/keys with a higher level of security. In more crystal ball territory, 
> if a practical quantum computer becomes a possibility within the lifetime of 
> this spec, then the expected collision resistance of SHA-256 would drop 
> quadratically, allowing an attacker to find a colliding certificate in ~2^64 
> effort. If we are going to pick just one thumbprint hash algorithm, I would 
> prefer we pick SHA-512.
> 
> The idea behind having just one thumbprint hash algorithm was to keep things 
> simple. And SHA-256 seems good enough for the reasonably foreseeable future 
> (and space aware). Also a new little spec to register a different hash 
> algorithm, should the need arise, didn't seem particularly onerous. 
> 
> That was the thinking anyway. Maybe it is too short sighted though?
> 
> I do think SHA-256 should stay regardless. 
> 
> But the draft could also define SHA-512 (and maybe others). What do you and 
> WG folks think about that?
> 
> *** Yes please. 
> 
> It would probably then be useful for the metadata in §3.3 and §3.4 to change 
> from just boolean values to something to convey what hash alg/cnf method the 
> client expects and the list of what the server supports. That's maybe 
> something that should be done anyway. That'd be a breaking change to the 
> metadata. But there's already another potential breaking change identified 
> earlier in this message. So maybe it's worth doing...
> 
> How do folks feel about making this kind of change? 
> 
> 
The confirmation method is opaque to the client.  I don’t think adding hash 
algs to discovery will really help.
The AS selection needs to be based on what the RS can support.

If anyplace it should be in RS discovery. 

As a practical matter you aren't going to find a client certificate with more than 
a SHA256 hash anytime in the near future. 
So for a short lived access token 128bits of collision resistance is quite 
good.   We are going to have issues with certificates long before this becomes 
a problem.

SHA256 is appropriate for AES128, 256-bit elliptic curves, and 3072-bit RSA keys, 
but again that is over the long term.  
We are using short lived access tokens.  People should rotate the certificate 
more often than once a year if this is a real issue.

I am not against new hash for the fingerprint, but I also don’t know that 
SHA512 would be the best choice if we are concerned about quantum crypto 
resistance.   That is an issue beyond mtls and should be addressed by CFRG etc.

Regards
John B.


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread Brian Campbell
Thanks for the schooling, Ben.

On Thu, Apr 12, 2018 at 7:26 AM, Benjamin Kaduk  wrote:

> Just replying on one thing...
>
> On Thu, Apr 12, 2018 at 10:03:11AM +0100, Neil Madden wrote:
> > Hi Brian,
> >
> > Thanks for the detailed responses. Comments in line below (marked with
> ***).
> >
> > Neil
> >
> > > On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell <
> bcampb...@pingidentity.com (mailto:bcampb...@pingidentity.com)> wrote:
> > > On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden <
> neil.mad...@forgerock.com (mailto:neil.mad...@forgerock.com)> wrote:
> > > > 10. The PKI client authentication method (
> https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1) makes
> no mention at all of certificate revocation and how to handle checking for
> that (CRLs, OCSP - with stapling?). Neither does the Security
> Considerations. If this is a detail to be agreed between then AS and the CA
> (or just left up to the AS TLS stack) then that should perhaps be made
> explicit. Again, there are privacy considerations with some of these
> mechanisms, as OCSP requests are typically sent in the clear (plain HTTP)
> and so allow an observer to see which clients are connecting to which AS.
> > >
> > > I didn't think that a TLS client could do OCSP stapling?
> > >
> > > *** I think you are right about this. I always assumed it was
> symmetric (and I think it technically could work), but the spec only talks
> about stapling in the server-side of the handshake.
>
> This changed between TLS 1.2 and TLS 1.3 -- in 1.3, the server can
> include "status_request" in its CertificateRequest, and the
> extensions block in the client's Certificate message can include the
> OCSP staple.
>
> -Ben
>

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread Benjamin Kaduk
Just replying on one thing...

On Thu, Apr 12, 2018 at 10:03:11AM +0100, Neil Madden wrote:
> Hi Brian,
> 
> Thanks for the detailed responses. Comments in line below (marked with ***).
> 
> Neil
> 
> > On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell 
> >  wrote:
> > On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden  > (mailto:neil.mad...@forgerock.com)> wrote:
> > > 10. The PKI client authentication method 
> > > (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1) makes 
> > > no mention at all of certificate revocation and how to handle checking 
> > > for that (CRLs, OCSP - with stapling?). Neither does the Security 
> > > Considerations. If this is a detail to be agreed between then AS and the 
> > > CA (or just left up to the AS TLS stack) then that should perhaps be made 
> > > explicit. Again, there are privacy considerations with some of these 
> > > mechanisms, as OCSP requests are typically sent in the clear (plain HTTP) 
> > > and so allow an observer to see which clients are connecting to which AS.
> >
> > I didn't think that a TLS client could do OCSP stapling?
> >
> > *** I think you are right about this. I always assumed it was symmetric 
> > (and I think it technically could work), but the spec only talks about 
> > stapling in the server-side of the handshake.

This changed between TLS 1.2 and TLS 1.3 -- in 1.3, the server can
include "status_request" in its CertificateRequest, and the
extensions block in the client's Certificate message can include the
OCSP staple.

-Ben


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread Neil Madden
I agree that this is beyond the scope of the spec. To be clear, our desire is 
that the mtls spec includes some wording along the following lines:

“The AS MAY choose to terminate TLS connections at a load balancer, reverse 
proxy, or other network intermediary. How the client certificate metadata is 
securely communicated between the intermediary and the AS in this case is out 
of scope of this specification.”

That makes it clear that it is a supported pattern without committing to how it 
should be achieved.
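
Purely as an illustration of the pattern (the header name and encoding below are made up, since as noted no standard exists for this hop), a backend behind such an intermediary might recompute the thumbprint from the forwarded certificate like this:

    import base64, hashlib

    # Hypothetical header set by the intermediary, carrying base64(DER) of
    # the client certificate it already validated during the TLS handshake.
    # This only makes sense if the intermediary-to-backend hop is itself
    # protected and the header cannot be injected by clients.
    CLIENT_CERT_HEADER = "X-Forwarded-Client-Cert"

    def confirm_thumbprint(headers: dict, expected_x5t_s256: str) -> bool:
        der = base64.b64decode(headers[CLIENT_CERT_HEADER])
        x5t = base64.urlsafe_b64encode(
            hashlib.sha256(der).digest()).rstrip(b"=").decode()
        return x5t == expected_x5t_s256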

Regards,

Neil

--

> On Thursday, Apr 05, 2018 at 5:07 pm, John Bradley <ve7...@ve7jtb.com 
> (mailto:ve7...@ve7jtb.com)> wrote:
> +1
> On Wed, Apr 4, 2018, 5:42 PM Brian Campbell <bcampb...@pingidentity.com 
> (mailto:bcampb...@pingidentity.com)> wrote:
> > Strongly agree with Justin that any kind of TLS header forwarding standards 
> > like that are well beyond the scope of this spec.
> >
> >
> > On Fri, Mar 30, 2018 at 10:02 PM, Justin Richer <jric...@mit.edu 
> > (mailto:jric...@mit.edu)> wrote:
> > > I don’t believe this is the spec to define TLS header forwarding 
> > > standards in.
> > >
> > > — Justin
> > >
> > >
> > > > On Mar 30, 2018, at 2:03 PM, Vivek Biswas <vivek.bis...@oracle.com 
> > > > (mailto:vivek.bis...@oracle.com)> wrote:
> > > > There are additional challenges which we have faced.
> > > >
> > > > A. Most of the Mutual SSL communication as mentioned below terminates 
> > > > at the LBR and the LBR needs to have client certificates to trust the 
> > > > client. But a lot of times the connection from LBR to Authorization 
> > > > server may be non-SSL.
> > > >
> > > > The CN, SHA-256 thumbprint and serial number of the Client Cert are sent 
> > > > as header to the AuthzServer/Backend Server. However, if the connection 
> > > > from LBR to AuthzServer/Backend Server is unencrypted it is prone to 
> > > > MITM attacks. Hence, it’s a MUST requirement to have one-way SSL from 
> > > > LBR to AuthzServer/Backend Server, so that the headers passed are not 
> > > > compromised.
> > > >
> > > > This is a MOST common scenario in the real world. And we don’t want 
> > > > everyone to come up with their own names for the header. There should be 
> > > > some kind of standardization around the header names.
> > > >
> > > > Regards
> > > > Vivek Biswas, CISSP
> > > >
> > > > From: John Bradley [mailto:ve7...@ve7jtb.com 
> > > > (mailto:ve7...@ve7jtb..com)]
> > > > Sent: Thursday, March 29, 2018 11:57 AM
> > > > To: Neil Madden
> > > > Cc: oauth
> > > > Subject: Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07
> > > >
> > > > Yes that is quite a common deployment scenario. I think that is the way 
> > > > most of the Open Banking implementations have deployed it currently.
> > > >
> > > >
> > > > The intent is to support that. One problem is that how the certificate 
> > > > is transmitted to the application tends to be load balancer/reverse 
> > > > proxy specific as no real standard exists.
> > > >
> > > >
> > > >
> > > > If you think that needs to be clarified text is welcome.
> > > >
> > > >
> > > >
> > > > John B.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > > On Mar 29, 2018, at 2:54 PM, Neil Madden <neil.mad...@forgerock.com 
> > > > > (mailto:neil.mad...@forgerock.com)> wrote:
> > > > >
> > > > > Thanks, and understood.
> > > > >
> > > > >
> > > > >
> > > > > The privacy concerns are mostly around correlating activity of 
> > > > > *clients*, which may or may not reveal activity patterns of users 
> > > > > using those clients. I don’t know how much of a concern that is in 
> > > > > reality, but thought it should be mentioned.
> > > > >
> > > > >
> > > > >
> > > > > A colleague also made the following comment about the draft:
> > > > >
> > > > >
> > > > >
> > > > > “It is still quite common to terminate TLS in a load balancer or 
> > > > > proxy, and to deploy authorization servers in a secure network zone 
> > > > > behind an

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-12 Thread Neil Madden
Hi Brian,

Thanks for the detailed responses. Comments in line below (marked with ***).

Neil

> On Wednesday, Apr 11, 2018 at 9:47 pm, Brian Campbell 
>  wrote:
> Thanks for the review and feedback, Neil. I apologize for my being slow to 
> respond. As I said to Justin recently 
> (https://mailarchive.ietf.org/arch/msg/oauth/cNmk8fSuxp37L-z8Rvr6_EnyCug), 
> I've been away from things for a while. Also there's a lot here to get 
> through so took me some time.
>
> It looks like John touched on some of your comments but not all. I'll try and 
> reply to them as best I can inline below.
>
>
> On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden  (mailto:neil.mad...@forgerock.com)> wrote:
> > Hi,
> >
> > I have reviewed this draft and have a number of comments, below. ForgeRock 
> > have not yet implemented this draft, but there is interest in implementing 
> > it at some point. (Disclaimer: We have no firm commitments on this at the 
> > moment, I do not speak for ForgeRock, etc).
> >
> > 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines 
> > a new confirmation method “x5t#S256”. However, there is already a 
> > confirmation method “jwk” that can contain a JSON Web Key, which itself can 
> > contain a “x5t#S256” claim with exactly the same syntax and semantics. The 
> > draft proposes:
> >
> > { “cnf”: { “x5t#S256”: “…” } }
> >
> > but you can already do:
> >
> > { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> >
> > If the intent is just to save some space and avoid the mandatory fields of 
> > the existing JWK types, maybe this would be better addressed by defining a 
> > new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
> > “…” }.
>
> The intent of the x5t#S256 confirmation method was to be space efficient and 
> straightforward while utilizing the framework and registry that RFC 7800 
> gives. Even a new JWK type like that would still use more space. And I'd 
> argue that the new confirmation method is considerably more straightforward 
> than registering a new JWK type (and the implications that would have on JWK 
> implementations in general) in order to use the existing "jwk" confirmation 
> method.
>
> *** OK, that is reasonable. Given that the draft says SHOULD rather than MUST 
> for using this confirmation key method, I think it is currently allowed to 
> use either representation.
>
> >
> > 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s 
> > really only the client authentication that we are interested here, and the 
> > fact that the server also authenticates with a certificate is not hugely 
> > relevant to this particular spec (although it is to the overall security of 
> > OAuth). Also, TLS defines non-certificate based authentication mechanisms 
> > (e.g. TLS-SRP extension for password authenticated key exchange, PSK for 
> > pre-shared key authentication) and even non-X.509 certificate types 
> > (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
> >  I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
> > Authentication” rather than mutual TLS, and changed identifiers like 
> > ‘tls_client_auth’ 
> > (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1.1) to 
> > something more explicit like ‘tls_x509_pki_client_auth’.
> >
> > This is especially confusing in section 3 on sender constrained access 
> > tokens, as there are two different servers involved: the AS and the 
> > protected resource server, but there is no “mutual” authentication between 
> > them, only between each of them and the client.
>
> Choosing names and terminology is difficult and the "right" wording is often 
> subjective. I believe that the current wording sufficiently conveys what is 
> going on in the draft to most readers. Most readers thus far seem to agree. 
> There is some text now that does say that the mutual auth in the draft is in 
> fact X.509 client cert authn but, in the next revision, I'll look for other 
> opportunities where it could be stated more clearly.
>
> *** Thanks.
>
> >
> > 3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC 
> > only specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The 
> > wording in Section 5.1 doesn’t seem clear if this could also be used with 
> > TLS 1.0 or 1.1, or whether it is only referring to future TLS versions.
>
> The reference to BCP 195 (which unfortunately the original OAuth 2.0 RFC 
> doesn't have because it didn't exist then) is meant to account for changing 
> versions and recommendations around TLS. Currently that BCP says TLS 1.2 is a 
> must and suggests against 1.1 & 1.0 but doesn't outright prohibit them.
>
> *** OK, that seems good to me.
>
> >
> > 4. It might be useful to have a discussion for implementors of whether TLS 
> > session resumption (and PSK in 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-11 Thread Brian Campbell
Thanks for the review and feedback, Neil. I apologize for my being slow to
respond. As I said to Justin recently,
I've been away from things for a while. Also there's a lot here to get
through so took me some time.

It looks like John touched on some of your comments but not all. I'll try
and reply to them as best I can inline below.


On Thu, Mar 29, 2018 at 9:18 AM, Neil Madden 
wrote:

> Hi,
>
> I have reviewed this draft and have a number of comments, below. ForgeRock
> have not yet implemented this draft, but there is interest in implementing
> it at some point. (Disclaimer: We have no firm commitments on this at the
> moment, I do not speak for ForgeRock, etc).
>
> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1
> defines a new confirmation method “x5t#S256”. However, there is already a
> confirmation method “jwk” that can contain a JSON Web Key, which itself can
> contain a “x5t#S256” claim with exactly the same syntax and semantics. The
> draft proposes:
>
> { “cnf”: { “x5t#S256”: “…” } }
>
> but you can already do:
>
> { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
>
> If the intent is just to save some space and avoid the mandatory fields of
> the existing JWK types, maybe this would be better addressed by defining a
> new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”:
> “…” }.
>

The intent of the x5t#S256 confirmation method was to be space efficient
and straightforward while utilizing the framework and registry that RFC
7800 gives.  Even a new JWK type like that would still use more space. And
I'd argue that the new confirmation method is considerably more
straightforward than registering a new JWK type (and the implications that
would have on JWK implementations in general) in order to use the existing
"jwk" confirmation method.



>
> 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s
> really only the client authentication that we are interested here, and the
> fact that the server also authenticates with a certificate is not hugely
> relevant to this particular spec (although it is to the overall security of
> OAuth). Also, TLS defines non-certificate based authentication mechanisms
> (e.g. TLS-SRP extension for password authenticated key exchange, PSK for
> pre-shared key authentication) and even non-X.509 certificate types (
> https://www.iana.org/assignments/tls-extensiontype-values/t
> ls-extensiontype-values.xhtml#tls-extensiontype-values-3). I’d prefer
> that the draft explicitly referred to “X.509 Client Certificate
> Authentication” rather than mutual TLS, and changed identifiers like
> ‘tls_client_auth’ (https://tools.ietf.org/html/d
> raft-ietf-oauth-mtls-07#section-2.1.1) to something more explicit like
> ‘tls_x509_pki_client_auth’.
>
> This is especially confusing in section 3 on sender constrained access
> tokens, as there are two different servers involved: the AS and the
> protected resource server, but there is no “mutual” authentication between
> them, only between each of them and the client.
>

Choosing names and terminology is difficult and the "right" wording is
often subjective. I believe that the current wording sufficiently conveys
what is going on in the draft to most readers. Most readers thus far seem
to agree. There is some text now that does say that the mutual auth in the
draft is in fact X.509 client cert authn but, in the next revision, I'll
look for other opportunities where it could be stated more clearly.



>
> 3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC
> only specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The
> wording in Section 5.1 doesn’t seem clear if this could also be used with
> TLS 1.0 or 1.1, or whether it is only referring to future TLS versions.
>

The reference to BCP 195 (which unfortunately the original OAuth 2.0 RFC
doesn't have because it didn't exist then) is meant to account for changing
versions and recommendations around TLS. Currently that BCP says TLS 1.2 is
a must and suggests against 1.1 & 1.0 but doesn't outright prohibit them.



>
> 4. It might be useful to have a discussion for implementors of whether TLS
> session resumption (and PSK in TLS 1.3) and/or renegotiation impact the use
> of client certificates, if at all?
>

That might well be useful but I don't myself know what it would say. I've
(maybe naively) figured those are deployment details that will just work
out. Perhaps you could propose some text around such a discussion that the
WG could consider?



>
> 5. Section 3 defines sender-constrained access tokens in terms of the
> confirmation key claims (e.g., RFC 7800 for JWT). However, the OAuth 2.0
> Pop Architecture draft defines sender constraint and key confirmation as
> different things (https://tools.ietf.org/html/d
> raft-ietf-oauth-pop-architecture-08#section-6.2). The draft should decide

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-05 Thread John Bradley
+1

On Wed, Apr 4, 2018, 5:42 PM Brian Campbell <bcampb...@pingidentity.com>
wrote:

> Strongly agree with Justin that any kind of TLS header forwarding
> standards like that are well beyond the scope of this spec.
>
>
> On Fri, Mar 30, 2018 at 10:02 PM, Justin Richer <jric...@mit.edu> wrote:
>
>> I don’t believe this is the spec to define TLS header forwarding
>> standards in.
>>
>>  — Justin
>>
>>
>> On Mar 30, 2018, at 2:03 PM, Vivek Biswas <vivek.bis...@oracle.com>
>> wrote:
>>
>> There are additional challenges which we have faced.
>>
>> A.  Most of the Mutual SSL communication as mentioned below
>> terminates at the LBR and the LBR needs to have client certificates to
>> trust the client. But a lot of times the connection from LBR to Authorization
>> server may be non-SSL.
>>
>> The CN, SHA-256 thumbprint and serial number of the Client Cert are sent
>> as header to the AuthzServer/Backend Server. However, if the connection
>> from LBR to AuthzServer/Backend Server is unencrypted it is prone to MITM
>> attacks. Hence, it’s a MUST requirement to have one-way SSL from LBR to
>> AuthzServer/Backend Server, so that the headers passed are not compromised.
>>
>> This is a MOST common scenario in the real world. And we don’t want
>> everyone to come up with their own names for the header. There should be some
>> kind of standardization around the header names.
>>
>> Regards
>> Vivek Biswas, CISSP
>>
>> *From:* John Bradley [mailto:ve7...@ve7jtb.com <ve7...@ve7jtb.com>]
>> *Sent:* Thursday, March 29, 2018 11:57 AM
>> *To:* Neil Madden
>> *Cc:* oauth
>> *Subject:* Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07
>>
>> Yes that is quite a common deployment scenario.   I think that is the way
>> most of the Open Banking implementations have deployed it currently.
>>
>> The intent is to support that.   One problem is that how the certificate
>> is transmitted to the application tends to be load balancer/reverse proxy
>> specific as no real standard exists.
>>
>> If you think that needs to be clarified text is welcome.
>>
>> John B.
>>
>>
>>
>>
>> On Mar 29, 2018, at 2:54 PM, Neil Madden <neil.mad...@forgerock.com>
>> wrote:
>>
>> Thanks, and understood.
>>
>> The privacy concerns are mostly around correlating activity of *clients*,
>> which may or may not reveal activity patterns of users using those clients.
>> I don’t know how much of a concern that is in reality, but thought it
>> should be mentioned.
>>
>> A colleague also made the following comment about the draft:
>>
>> “It is still quite common to terminate TLS in a load balancer or proxy,
>> and to deploy authorization servers in a secure network zone behind an
>> intermediate in a DMZ. In these cases, TLS would not be established between
>> the client and authorization server as per §2, but information about the
>> TLS handshake may be made available by other means (typically adding to a
>> downstream header) allowing lookup and verification of the client
>> certificate as otherwise described. Given the prevalence of this approach
>> it would be good to know whether such a deployment would be compliant or
>> not.”
>>
>> Kind regards,
>> Neil
>> --
>>
>>
>> On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley <ve7...@ve7jtb.com>
>> wrote:
>> Thanks for the feedback. We will review your comments and reply.
>>
>> One data point is that this will not be the only POP spec. The spec using
>> token binding vs mtls has better privacy properties. It is UK Open banking
>> that has pressed us to come up with a standard to help with
>> interoperability.
>>
>> This spec has been simplified in some ways to facilitate the majority of
>> likely deployments.
>>
>> I understand that in future certificates may have better than SHA256
>> hashes.
>>
>> Regards
>> John B.
>>
>>
>>
>> On Mar 29, 2018, at 12:18 PM, Neil Madden <neil.mad...@forgerock.com>
>> wrote:
>>
>> Hi,
>>
>> I have reviewed this draft and have a number of comments, below.
>> ForgeRock have not yet implemented this draft, but there is interest in
>> implementing it at some point. (Disclaimer: We have no firm commitments on
>> this at the moment, I do not speak for ForgeRock, etc).
>>
>> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines
>> a new confirmation method “x5t#S256”. However,

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-04-04 Thread Brian Campbell
Strongly agree with Justin that any kind of TLS header forwarding standards
like that are well beyond the scope of this spec.


On Fri, Mar 30, 2018 at 10:02 PM, Justin Richer <jric...@mit.edu> wrote:

> I don’t believe this is the spec to define TLS header forwarding standards
> in.
>
>  — Justin
>
>
> On Mar 30, 2018, at 2:03 PM, Vivek Biswas <vivek.bis...@oracle.com> wrote:
>
> There are additional challenges which we have faced.
>
> A.  Most of the Mutual SSL communication as mentioned below
> terminates at the LBR and the LBR needs to have client certificates to
> trust the client. But a lot of times the connection from LBR to Authorization
> server may be non-SSL.
>
> The CN, SHA-256 thumbprint and serial number of the Client Cert are sent as
> header to the AuthzServer/Backend Server. However, if the connection from
> LBR to AuthzServer/Backend Server is unencrypted it is prone to MITM
> attacks. Hence, it’s a MUST requirement to have one-way SSL from LBR to
> AuthzServer/Backend Server, so that the headers passed are not compromised.
>
> This is a MOST common scenario in the real world. And we don’t want everyone
> to come up with their own names for the header. There should be some kind of
> standardization around the header names.
>
> Regards
> Vivek Biswas, CISSP
>
> *From:* John Bradley [mailto:ve7...@ve7jtb.com <ve7...@ve7jtb.com>]
> *Sent:* Thursday, March 29, 2018 11:57 AM
> *To:* Neil Madden
> *Cc:* oauth
> *Subject:* Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07
>
> Yes that is quite a common deployment scenario.   I think that is the way
> most of the Open Banking implementations have deployed it currently.
>
> The intent is to support that.   One problem is that how the certificate
> is transmitted to the application tends to be load balancer/reverse proxy
> specific as no real standard exists.
>
> If you think that needs to be clarified text is welcome.
>
> John B.
>
>
>
>
> On Mar 29, 2018, at 2:54 PM, Neil Madden <neil.mad...@forgerock.com>
> wrote:
>
> Thanks, and understood.
>
> The privacy concerns are mostly around correlating activity of *clients*,
> which may or may not reveal activity patterns of users using those clients.
> I don’t know how much of a concern that is in reality, but thought it
> should be mentioned.
>
> A colleague also made the following comment about the draft:
>
> “It is still quite common to terminate TLS in a load balancer or proxy,
> and to deploy authorization servers in a secure network zone behind an
> intermediate in a DMZ. In these cases, TLS would not be established between
> the client and authorization server as per §2, but information about the
> TLS handshake may be made available by other means (typically adding to a
> downstream header) allowing lookup and verification of the client
> certificate as otherwise described. Given the prevalence of this approach
> it would be good to know whether such a deployment would be compliant or
> not.”
>
> Kind regards,
> Neil
> --
>
>
> On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley <ve7...@ve7jtb.com>
> wrote:
> Thanks for the feedback. We will review your comments and reply.
>
> One data point is that this will not be the only POP spec. The spec using
> token binding vs mtls has better privacy properties. It is UK Open banking
> that has pressed us to come up with a standard to help with
> interoperability.
>
> This spec has been simplified in some ways to facilitate the majority of
> likely deployments.
>
> I understand that in future certificates may have better than SHA256
> hashes.
>
> Regards
> John B.
>
>
>
> On Mar 29, 2018, at 12:18 PM, Neil Madden <neil.mad...@forgerock.com>
> wrote:
>
> Hi,
>
> I have reviewed this draft and have a number of comments, below. ForgeRock
> have not yet implemented this draft, but there is interest in implementing
> it at some point. (Disclaimer: We have no firm commitments on this at the
> moment, I do not speak for ForgeRock, etc).
>
> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines
> a new confirmation method “x5t#S256”. However, there is already a
> confirmation method “jwk” that can contain a JSON Web Key, which itself can
> contain a “x5t#S256” claim with exactly the same syntax and semantics. The
> draft proposes:
>
> { “cnf”: { “x5t#S256”: “…” } }
>
> but you can already do:
>
> { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
>
> If the intent is just to save some space and avoid the mandatory fields of
> the existing JWK types, maybe this would be better addressed by defining a
> new JWK type which only has a thumbprint? e.g., { “kty”: “

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-30 Thread Justin Richer
I don’t believe this is the spec to define TLS header forwarding standards in.

 — Justin

> On Mar 30, 2018, at 2:03 PM, Vivek Biswas <vivek.bis...@oracle.com> wrote:
> 
> There are additional challenges which we have faced.
>  
> A.  Most of the Mutual SSL communication as mentioned below terminates at 
> the LBR, and the LBR needs to have client certificates to trust the client. 
> But a lot of the time the connection from the LBR to the Authorization Server 
> may be non-SSL.
>  
> The CN, SHA-256 thumbprint, and serial number of the Client Cert are sent as 
> headers to the AuthzServer/Backend Server. However, if the connection from the 
> LBR to the AuthzServer/Backend Server is unencrypted, it is prone to 
> man-in-the-middle (MITM) attacks. Hence, it’s a MUST requirement to have 
> one-way SSL from the LBR to the AuthzServer/Backend Server, so that the 
> headers passed are not compromised.
>  
> This is a very common scenario in the real world, and we don’t want everyone 
> to come up with their own names for the headers. There should be some kind of 
> standardization around the header names.
>  
> Regards
> Vivek Biswas, CISSP
>  
> From: John Bradley [mailto:ve7...@ve7jtb.com] 
> Sent: Thursday, March 29, 2018 11:57 AM
> To: Neil Madden
> Cc: oauth
> Subject: Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07
>  
> Yes that is quite a common deployment scenario.   I think that is the way 
> most of the Open Banking implementations have deployed it currently.   
>  
> The intent is to support that.   One problem is that how the certificate is 
> transmitted to the application tends to be load balancer/reverse proxy 
> specific as no real standard exists.
>  
> If you think that needs to be clarified, text is welcome.
>  
> John B.
>  
>  
> 
> 
> On Mar 29, 2018, at 2:54 PM, Neil Madden <neil.mad...@forgerock.com> wrote:
>  
> Thanks, and understood. 
>  
> The privacy concerns are mostly around correlating activity of *clients*, 
> which may or may not reveal activity patterns of users using those clients. I 
> don’t know how much of a concern that is in reality, but thought it should be 
> mentioned. 
>  
> A colleague also made the following comment about the draft:
>  
> “It is still quite common to terminate TLS in a load balancer or proxy, and 
> to deploy authorization servers in a secure network zone behind an 
> intermediate in a DMZ. In these cases, TLS would not be established between 
> the client and authorization server as per §2, but information about the TLS 
> handshake may be made available by other means (typically adding to a 
> downstream header) allowing lookup and verification of the client certificate 
> as otherwise described. Given the prevalence of this approach it would be 
> good to know whether such a deployment would be compliant or not.”
>  
> Kind regards,
> Neil
> --
>  
> On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley <ve7...@ve7jtb.com> wrote:
> Thanks for the feedback. We will review your comments and reply. 
> 
> One data point is that this will not be the only POP spec. The spec using 
> token binding vs mtls has better privacy properties. It is UK Open banking 
> that has pressed us to come up with a standard to help with interoperability. 
> 
> This spec has been simplified in some ways to facilitate the majority of 
> likely deployments. 
> 
> I understand that in future certificates may have better than SHA256 hashes. 
> 
> Regards 
> John B. 
> 
> 
> 
> On Mar 29, 2018, at 12:18 PM, Neil Madden <neil.mad...@forgerock.com> wrote: 
> 
> Hi, 
> 
> I have reviewed this draft and have a number of comments, below. ForgeRock 
> have not yet implemented this draft, but there is interest in implementing it 
> at some point. (Disclaimer: We have no firm commitments on this at the 
> moment, I do not speak for ForgeRock, etc). 
> 
> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 
> <https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1> defines a 
> new confirmation method “x5t#S256”. However, there is already a confirmation 
> method “jwk” that can contain a JSON Web Key, which itself can contain a 
> “x5t#S256” claim with exactly the same syntax and semantics. The draft 
> proposes: 
> 
> { “cnf”: { “x5t#S256”: “…” } } 
> 
> but you can already do: 
> 
> { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } } 
> 
> If the intent is just to save some space and avoid the mandatory fields of 
> the existing JWK types, maybe this would be better addressed by defining a 
> new JWK type which only has a thumbprint? e.g.,

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-30 Thread Vivek Biswas
There are additional challenges which we have faced.

 

A.  Most of the Mutual SSL communication as mentioned below terminates at 
the LBR, and the LBR needs to have client certificates to trust the client. But 
a lot of the time the connection from the LBR to the Authorization Server may 
be non-SSL.

 

The CN, SHA-256 thumbprint, and serial number of the Client Cert are sent as 
headers to the AuthzServer/Backend Server. However, if the connection from the 
LBR to the AuthzServer/Backend Server is unencrypted, it is prone to 
man-in-the-middle (MITM) attacks. Hence, it’s a MUST requirement to have 
one-way SSL from the LBR to the AuthzServer/Backend Server, so that the headers 
passed are not compromised.

 

This is a very common scenario in the real world, and we don’t want everyone to 
come up with their own names for the headers. There should be some kind of 
standardization around the header names.

 

Regards

Vivek Biswas, CISSP

 

From: John Bradley [mailto:ve7...@ve7jtb.com] 
Sent: Thursday, March 29, 2018 11:57 AM
To: Neil Madden
Cc: oauth
Subject: Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

 

Yes that is quite a common deployment scenario.   I think that is the way most 
of the Open Banking implementations have deployed it currently.   

 

The intent is to support that.   One problem is that how the certificate is 
transmitted to the application tends to be load balancer/reverse proxy specific 
as no real standard exists.

 

If you think that needs to be clarified, text is welcome.

 

John B.

 

 





On Mar 29, 2018, at 2:54 PM, Neil Madden <neil.mad...@forgerock.com> wrote:

 

Thanks, and understood. 

 

The privacy concerns are mostly around correlating activity of *clients*, which 
may or may not reveal activity patterns of users using those clients. I don’t 
know how much of a concern that is in reality, but thought it should be 
mentioned. 

 

A colleague also made the following comment about the draft:

 

“It is still quite common to terminate TLS in a load balancer or proxy, and to 
deploy authorization servers in a secure network zone behind an intermediate in 
a DMZ. In these cases, TLS would not be established between the client and 
authorization server as per §2, but information about the TLS handshake may be 
made available by other means (typically adding to a downstream header) 
allowing lookup and verification of the client certificate as otherwise 
described. Given the prevalence of this approach it would be good to know 
whether such a deployment would be compliant or not.”

 

Kind regards,

Neil

--

 

On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley <ve7...@ve7jtb.com> wrote:

Thanks for the feedback. We will review your comments and reply. 

One data point is that this will not be the only POP spec. The spec using token 
binding vs mtls has better privacy properties. It is UK Open banking that has 
pressed us to come up with a standard to help with interoperability. 

This spec has been simplified in some ways to facilitate the majority of likely 
deployments. 

I understand that in future certificates may have better than SHA256 hashes. 

Regards 
John B. 





On Mar 29, 2018, at 12:18 PM, Neil Madden <neil.mad...@forgerock.com> wrote: 

Hi, 

I have reviewed this draft and have a number of comments, below. ForgeRock have 
not yet implemented this draft, but there is interest in implementing it at 
some point. (Disclaimer: We have no firm commitments on this at the moment, I 
do not speak for ForgeRock, etc). 

1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines a 
new confirmation method “x5t#S256”. However, there is already a confirmation 
method “jwk” that can contain a JSON Web Key, which itself can contain a 
“x5t#S256” claim with exactly the same syntax and semantics. The draft 
proposes: 

{ “cnf”: { “x5t#S256”: “…” } } 

but you can already do: 

{ “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } } 

If the intent is just to save some space and avoid the mandatory fields of the 
existing JWK types, maybe this would be better addressed by defining a new JWK 
type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: “…” }. 

2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s really 
only the client authentication that we are interested in here, and the fact that 
the server also authenticates with a certificate is not hugely relevant to this 
particular spec (although it is to the overall security of OAuth). Also, TLS 
defines non-certificate based authentication mechanisms (e.g. TLS-SRP extension 
for password authenticated key exchange, PSK for pre-shared key authentication) 
and even non-X.509 certificate types 
(https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
 I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
Authentication” rather than

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-29 Thread John Bradley
Yes that is quite a common deployment scenario.   I think that is the way most 
of the Open Banking implementations have deployed it currently.   

The intent is to support that.   One problem is that how the certificate is 
transmitted to the application tends to be load balancer/reverse proxy specific 
as no real standard exists.

If you think that needs to be clarified, text is welcome.

John B.



> On Mar 29, 2018, at 2:54 PM, Neil Madden  wrote:
> 
> Thanks, and understood. 
> 
> The privacy concerns are mostly around correlating activity of *clients*, 
> which may or may not reveal activity patterns of users using those clients. I 
> don’t know how much of a concern that is in reality, but thought it should be 
> mentioned. 
> 
> A colleague also made the following comment about the draft:
> 
> “It is still quite common to terminate TLS in a load balancer or proxy, and 
> to deploy authorization servers in a secure network zone behind an 
> intermediate in a DMZ. In these cases, TLS would not be established between 
> the client and authorization server as per §2, but information about the TLS 
> handshake may be made available by other means (typically adding to a 
> downstream header) allowing lookup and verification of the client certificate 
> as otherwise described. Given the prevalence of this approach it would be 
> good to know whether such a deployment would be compliant or not.”
> 
> Kind regards,
> Neil
> --
> 
> On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley wrote:
> Thanks for the feedback. We will review your comments and reply. 
> 
> One data point is that this will not be the only POP spec. The spec using 
> token binding vs mtls has better privacy properties. It is UK Open banking 
> that has pressed us to come up with a standard to help with interoperability. 
> 
> This spec has been simplified in some ways to facilitate the majority of 
> likely deployments. 
> 
> I understand that in future certificates may have better than SHA256 hashes. 
> 
> Regards 
> John B. 
> 
> 
>> On Mar 29, 2018, at 12:18 PM, Neil Madden  wrote: 
>> 
>> Hi, 
>> 
>> I have reviewed this draft and have a number of comments, below. ForgeRock 
>> have not yet implemented this draft, but there is interest in implementing 
>> it at some point. (Disclaimer: We have no firm commitments on this at the 
>> moment, I do not speak for ForgeRock, etc). 
>> 
>> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines 
>> a new confirmation method “x5t#S256”. However, there is already a 
>> confirmation method “jwk” that can contain a JSON Web Key, which itself can 
>> contain a “x5t#S256” claim with exactly the same syntax and semantics. The 
>> draft proposes: 
>> 
>> { “cnf”: { “x5t#S256”: “…” } } 
>> 
>> but you can already do: 
>> 
>> { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } } 
>> 
>> If the intent is just to save some space and avoid the mandatory fields of 
>> the existing JWK types, maybe this would be better addressed by defining a 
>> new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
>> “…” }. 
>> 
>> 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s 
>> really only the client authentication that we are interested in here, and the 
>> fact that the server also authenticates with a certificate is not hugely 
>> relevant to this particular spec (although it is to the overall security of 
>> OAuth). Also, TLS defines non-certificate based authentication mechanisms 
>> (e.g. TLS-SRP extension for password authenticated key exchange, PSK for 
>> pre-shared key authentication) and even non-X.509 certificate types 
>> (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
>>  I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
>> Authentication” rather than mutual TLS, and changed identifiers like 
>> ‘tls_client_auth’ 
>> (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1.1) to 
>> something more explicit like ‘tls_x509_pki_client_auth’. 
>> 
>> This is especially confusing in section 3 on sender constrained access 
>> tokens, as there are two different servers involved: the AS and the 
>> protected resource server, but there is no “mutual” authentication between 
>> them, only between each of them and the client. 
>> 
>> 3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC only 
>> specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The wording 
>> in Section 5.1 doesn’t seem clear if this could also be used with TLS 1.0 or 
>> 1.1, or whether it is only referring to future TLS versions. 
>> 
>> 4. It might be useful to have a discussion for implementors of whether TLS 
>> session resumption (and PSK in TLS 1.3) and/or renegotiation impact the use 
>> of client certificates, if at all? 
>> 
>> 5. Section 3 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-29 Thread Neil Madden
Thanks, and understood.

The privacy concerns are mostly around correlating activity of *clients*, which 
may or may not reveal activity patterns of users using those clients. I don’t 
know how much of a concern that is in reality, but thought it should be 
mentioned.

A colleague also made the following comment about the draft:

“It is still quite common to terminate TLS in a load balancer or proxy, and to 
deploy authorization servers in a secure network zone behind an intermediate in 
a DMZ. In these cases, TLS would not be established between the client and 
authorization server as per §2, but information about the TLS handshake may be 
made available by other means (typically adding to a downstream header) 
allowing lookup and verification of the client certificate as otherwise 
described. Given the prevalence of this approach it would be good to know 
whether such a deployment would be compliant or not.”

Kind regards,
Neil

--

> On Thursday, Mar 29, 2018 at 4:47 pm, John Bradley <ve7...@ve7jtb.com> wrote:
> Thanks for the feedback. We will review your comments and reply.
>
> One data point is that this will not be the only POP spec. The spec using 
> token binding vs mtls has better privacy properties. It is UK Open banking 
> that has pressed us to come up with a standard to help with interoperability.
>
> This spec has been simplified in some ways to facilitate the majority of 
> likely deployments.
>
> I understand that in future certificates may have better than SHA256 hashes.
>
> Regards
> John B.
>
>
> > On Mar 29, 2018, at 12:18 PM, Neil Madden  wrote:
> >
> > Hi,
> >
> > I have reviewed this draft and have a number of comments, below. ForgeRock 
> > have not yet implemented this draft, but there is interest in implementing 
> > it at some point. (Disclaimer: We have no firm commitments on this at the 
> > moment, I do not speak for ForgeRock, etc).
> >
> > 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines 
> > a new confirmation method “x5t#S256”. However, there is already a 
> > confirmation method “jwk” that can contain a JSON Web Key, which itself can 
> > contain a “x5t#S256” claim with exactly the same syntax and semantics. The 
> > draft proposes:
> >
> > { “cnf”: { “x5t#S256”: “…” } }
> >
> > but you can already do:
> >
> > { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> >
> > If the intent is just to save some space and avoid the mandatory fields of 
> > the existing JWK types, maybe this would be better addressed by defining a 
> > new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
> > “…” }.
> >
> > 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s 
> > really only the client authentication that we are interested in here, and the 
> > fact that the server also authenticates with a certificate is not hugely 
> > relevant to this particular spec (although it is to the overall security of 
> > OAuth). Also, TLS defines non-certificate based authentication mechanisms 
> > (e.g. TLS-SRP extension for password authenticated key exchange, PSK for 
> > pre-shared key authentication) and even non-X.509 certificate types 
> > (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
> >  I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
> > Authentication” rather than mutual TLS, and changed identifiers like 
> > ‘tls_client_auth’ 
> > (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1.1) to 
> > something more explicit like ‘tls_x509_pki_client_auth’.
> >
> > This is especially confusing in section 3 on sender constrained access 
> > tokens, as there are two different servers involved: the AS and the 
> > protected resource server, but there is no “mutual” authentication between 
> > them, only between each of them and the client.
> >
> > 3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC 
> > only specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The 
> > wording in Section 5.1 doesn’t seem clear if this could also be used with 
> > TLS 1.0 or 1.1, or whether it is only referring to future TLS versions.
> >
> > 4. It might be useful to have a discussion for implementors of whether TLS 
> > session resumption (and PSK in TLS 1.3) and/or renegotiation impact the use 
> > of client certificates, if at all?
> >
> > 5. Section 3 defines sender-constrained access tokens in terms of the 
> > confirmation key claims (e.g., RFC 7800 for JWT). However, the OAuth 2.0 
> > PoP Architecture draft defines sender constraint and key confirmation as 
> > different things 
> > (https://tools.ietf.org/html/draft-ietf-oauth-pop-architecture-08#section-6.2).
> >  The draft should decide which of those it is implementing and if sender 
> > constraint is intended, then reusing the confirmation key claims seems 
> > misleading. (I think this mTLS draft 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-29 Thread John Bradley
Thanks for the feedback.   We will review your comments and reply.

One data point is that this will not be the only POP spec.   The spec using 
token binding vs mtls has better privacy properties.  It is UK Open banking 
that has pressed us to come up with a standard to help with interoperability. 

This spec has been simplified in some ways to facilitate the majority of likely 
deployments.

I understand that in future certificates may have better than SHA256 hashes.

Regards
John B.


> On Mar 29, 2018, at 12:18 PM, Neil Madden  wrote:
> 
> Hi,
> 
> I have reviewed this draft and have a number of comments, below. ForgeRock 
> have not yet implemented this draft, but there is interest in implementing it 
> at some point. (Disclaimer: We have no firm commitments on this at the 
> moment, I do not speak for ForgeRock, etc).
> 
> 1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines a 
> new confirmation method “x5t#S256”. However, there is already a confirmation 
> method “jwk” that can contain a JSON Web Key, which itself can contain a 
> “x5t#S256” claim with exactly the same syntax and semantics. The draft 
> proposes:
> 
>   { “cnf”: { “x5t#S256”: “…” } }
> 
> but you can already do:
> 
>   { “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }
> 
> If the intent is just to save some space and avoid the mandatory fields of 
> the existing JWK types, maybe this would be better addressed by defining a 
> new JWK type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: 
> “…” }.
> 
> 2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s really 
> only the client authentication that we are interested in here, and the fact that 
> the server also authenticates with a certificate is not hugely relevant to 
> this particular spec (although it is to the overall security of OAuth). Also, 
> TLS defines non-certificate based authentication mechanisms (e.g. TLS-SRP 
> extension for password authenticated key exchange, PSK for pre-shared key 
> authentication) and even non-X.509 certificate types 
> (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
>  I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
> Authentication” rather than mutual TLS, and changed identifiers like 
> ‘tls_client_auth’ 
> (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1.1) to 
> something more explicit like ‘tls_x509_pki_client_auth’.
> 
> This is especially confusing in section 3 on sender constrained access 
> tokens, as there are two different servers involved: the AS and the protected 
> resource server, but there is no “mutual” authentication between them, only 
> between each of them and the client.
> 
> 3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC only 
> specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The wording in 
> Section 5.1 doesn’t seem clear if this could also be used with TLS 1.0 or 
> 1.1, or whether it is only referring to future TLS versions.
> 
> 4. It might be useful to have a discussion for implementors of whether TLS 
> session resumption (and PSK in TLS 1.3) and/or renegotiation impact the use 
> of client certificates, if at all?
> 
> 5. Section 3 defines sender-constrained access tokens in terms of the 
> confirmation key claims (e.g., RFC 7800 for JWT). However, the OAuth 2.0 PoP 
> Architecture draft defines sender constraint and key confirmation as 
> different things 
> (https://tools.ietf.org/html/draft-ietf-oauth-pop-architecture-08#section-6.2).
>  The draft should decide which of those it is implementing and if sender 
> constraint is intended, then reusing the confirmation key claims seems 
> misleading. (I think this mTLS draft is doing key confirmation so should drop 
> the language about sender constrained tokens).
> 
> 6. The OAuth 2.0 PoP Architecture draft says 
> (https://tools.ietf.org/html/draft-ietf-oauth-pop-architecture-08#section-5):
> 
>    Strong, fresh session keys:
> 
>       Session keys MUST be strong and fresh.  Each session deserves an
>       independent session key, i.e., one that is generated specifically
>       for the intended use.  In context of OAuth this means that keying
>       material is created in such a way that can only be used by the
>       combination of a client instance, protected resource, and
>       authorization scope.
> 
> 
> However, the mTLS draft section 3 
> (https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3) says:
> 
>   The client makes protected resource requests as described in
>   [RFC6750], however, those requests MUST be made over a mutually
>   authenticated TLS connection using the same certificate that was used
>   for mutual TLS at the token endpoint.
> 
> These two statements are contradictory: the OAuth 2.0 PoP architecture 
> 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-29 Thread Neil Madden
Hi,

I have reviewed this draft and have a number of comments, below. ForgeRock have 
not yet implemented this draft, but there is interest in implementing it at 
some point. (Disclaimer: We have no firm commitments on this at the moment, I 
do not speak for ForgeRock, etc).

1. https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3.1 defines a 
new confirmation method “x5t#S256”. However, there is already a confirmation 
method “jwk” that can contain a JSON Web Key, which itself can contain a 
“x5t#S256” claim with exactly the same syntax and semantics. The draft proposes:

{ “cnf”: { “x5t#S256”: “…” } }

but you can already do:

{ “cnf”: { “jwk”: { … , “x5t#S256”: “…” } } }

If the intent is just to save some space and avoid the mandatory fields of the 
existing JWK types, maybe this would be better addressed by defining a new JWK 
type which only has a thumbprint? e.g., { “kty”: “x5t”, “x5t#S256”: “…” }.

2. I find the naming “mutual TLS” and “mTLS” a bit of a misnomer: it’s really 
only the client authentication that we are interested in here, and the fact that 
the server also authenticates with a certificate is not hugely relevant to this 
particular spec (although it is to the overall security of OAuth). Also, TLS 
defines non-certificate based authentication mechanisms (e.g. TLS-SRP extension 
for password authenticated key exchange, PSK for pre-shared key authentication) 
and even non-X.509 certificate types 
(https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3).
 I’d prefer that the draft explicitly referred to “X.509 Client Certificate 
Authentication” rather than mutual TLS, and changed identifiers like 
‘tls_client_auth’ 
(https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-2.1.1) to 
something more explicit like ‘tls_x509_pki_client_auth’.

This is especially confusing in section 3 on sender constrained access tokens, 
as there are two different servers involved: the AS and the protected resource 
server, but there is no “mutual” authentication between them, only between each 
of them and the client.

3. The draft links to the TLS 1.2 RFC, while the original OAuth 2.0 RFC only 
specifies TLS 1.0. Is the intention that TLS 1.2+ is required? The wording in 
Section 5.1 doesn’t seem clear if this could also be used with TLS 1.0 or 1.1, 
or whether it is only referring to future TLS versions.

4. It might be useful to have a discussion for implementors of whether TLS 
session resumption (and PSK in TLS 1.3) and/or renegotiation impact the use of 
client certificates, if at all?

5. Section 3 defines sender-constrained access tokens in terms of the 
confirmation key claims (e.g., RFC 7800 for JWT). However, the OAuth 2.0 PoP 
Architecture draft defines sender constraint and key confirmation as different 
things 
(https://tools.ietf.org/html/draft-ietf-oauth-pop-architecture-08#section-6.2). 
The draft should decide which of those it is implementing and if sender 
constraint is intended, then reusing the confirmation key claims seems 
misleading. (I think this mTLS draft is doing key confirmation so should drop 
the language about sender constrained tokens).

6. The OAuth 2.0 PoP Architecture draft says 
(https://tools.ietf.org/html/draft-ietf-oauth-pop-architecture-08#section-5):

   Strong, fresh session keys:

      Session keys MUST be strong and fresh.  Each session deserves an
      independent session key, i.e., one that is generated specifically
      for the intended use.  In context of OAuth this means that keying
      material is created in such a way that can only be used by the
      combination of a client instance, protected resource, and
      authorization scope.


However, the mTLS draft section 3 
(https://tools.ietf.org/html/draft-ietf-oauth-mtls-07#section-3) says:

The client makes protected resource requests as described in
[RFC6750], however, those requests MUST be made over a mutually
authenticated TLS connection using the same certificate that was used
for mutual TLS at the token endpoint.

These two statements are contradictory: the OAuth 2.0 PoP architecture 
effectively requires a fresh key-pair to be used for every access token 
request, whereas this draft proposes reusing the same long-lived client 
certificate for every single access token and every resource server.

In the self-signed case (and even in the CA case, with a bit of work - e.g., 
https://www.vaultproject.io/docs/secrets/pki/index.html) it is perfectly 
possible for the client to generate a fresh key-pair for each access token and 
include the certificate on the token request (e.g., as per 
https://tools.ietf.org/html/draft-ietf-oauth-pop-key-distribution-03#section-5.1
 - in which case an appropriate “alg” value should probably be described). This 
should probably at least be an option.

7. The use of a single 

Re: [OAUTH-WG] WGLC on draft-ietf-oauth-mtls-07

2018-03-20 Thread Brian Campbell
I talked with Justin briefly yesterday after the meeting and he pointed out
that the document is currently rather ambiguous about whether or not the
base64 pad "=" character is to be used on the encoding of "x5t#S256"
member. The intent was that padding be omitted and I'll take it as a WGLC
comment to be explicit about that in the next draft revision.
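
For the avoidance of doubt, the intended encoding is the base64url value with
the trailing "=" stripped, e.g. (illustrative Python, with a placeholder
certificate):

    import base64, hashlib

    cert_der = b"..."  # placeholder for the DER-encoded client certificate
    digest = hashlib.sha256(cert_der).digest()
    padded = base64.urlsafe_b64encode(digest).decode("ascii")  # 44 chars, ends with "="
    x5t_s256 = padded.rstrip("=")                              # 43 chars, padding omitted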

On Mon, Mar 19, 2018 at 10:34 PM, Rifaat Shekh-Yusef wrote:

> All,
>
> As discussed during the meeting today, we are starting a WGLC on the MTLS
> document:
> https://tools.ietf.org/html/draft-ietf-oauth-mtls-07
>
> Please, review the document and provide feedback on any issues you see
> with the document.
>
> The WGLC will end in two weeks, on April 2, 2018.
>
> Regards,
>  Rifaat and Hannes
>
>
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
>
>

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth