Re: RFR[10] 8186057: TLS interoperability testing between different Java versions

2017-11-27 Thread sha . jiang

Hi Artem,

Could you please take a look at this update: 
http://cr.openjdk.java.net/~jjiang/8186057/webrev.08/


By the way, I just noticed that the JdkUtils.supportECKey() method and 
others return the strings "true" and "false" instead of boolean values. 
This looks a bit unusual and unnecessary.
The return values of supportECKey(), supportsSNI() and supportsALPN() 
are consumed by method attr(), which only accepts String values, and 
they are simply written to stdout.
Since these supportXXX() methods are not used by any other class, I 
thought they could just return String instead of boolean.
But I have now changed them to return boolean values and modified 
method attr() accordingly so that it accepts any type.
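
For illustration only (this is not the actual webrev code, and the EC 
check below is just one possible way to probe support), the change 
described above might look roughly like this:

import java.security.KeyFactory;
import java.security.NoSuchAlgorithmException;

public class JdkUtilsSketch {

    // Now returns a boolean instead of the string "true"/"false".
    static boolean supportECKey() {
        try {
            KeyFactory.getInstance("EC");   // hypothetical way to probe EC support
            return true;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }

    // attr() accepts any value and converts it to a String for stdout.
    static void attr(String name, Object value) {
        System.out.println(name + "=" + String.valueOf(value));
    }

    public static void main(String[] args) {
        attr("supportECKey", supportECKey());
    }
}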



Utils.java, line 146: it would be better to close the stream.

Fixed.

Best regards,
John Jiang



Re: KDF API review, round 2

2017-11-27 Thread Jamil Nimeh
Hi Mike, I know I said you made arguments in favor of specifying the 
keys up front in init, but I'm still really uncomfortable with this.  
It's been bothering me all day.  Comments below:


On 11/27/2017 10:09 AM, Michael StJohns wrote:

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:






One additional topic for discussion: Late in the week we talked 
about the current state of the API internally and one item to 
revisit is where the DerivationParameterSpec objects are passed. It 
was brought up by a couple people that it would be better to 
provide the DPS objects pertaining to keys at the time they are 
called for through deriveKey() and deriveKeys() (and possibly 
deriveData).


Originally we had them all grouped in a List in the init method. 
One reason for needing it up there was to know the total length of 
material to generate.  If we can provide the total length through 
the AlgorithmParameterSpec passed in via init() then things like:


Key deriveKey(DerivationParameterSpec param);
List<Key> deriveKeys(List<DerivationParameterSpec> params);

become possible.  To my eyes at least it does make it more clear 
what DPS you're processing since they're provided at derive time, 
rather than the caller having to keep track in their heads where in 
the DPS list they might be with each successive deriveKey or 
deriveKeys call.  And I think we could do away with 
deriveKeys(int), too.
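
A rough sketch of the API shape being discussed, purely as an 
illustration; the interface name, the init() signature and the generics 
are assumptions, not the actual draft:

import java.security.Key;
import java.security.spec.AlgorithmParameterSpec;
import java.util.List;

// Hypothetical shape only.  The total output length (and any other KDF-wide
// input) travels in the AlgorithmParameterSpec given to init(); each
// DerivationParameterSpec is supplied at derive time.
interface KeyDerivation {

    void init(Key keyingMaterial, AlgorithmParameterSpec kdfParams);

    Key deriveKey(DerivationParameterSpec param);

    List<Key> deriveKeys(List<DerivationParameterSpec> params);

    byte[] deriveData(DerivationParameterSpec param);
}

// Placeholder for the per-object spec discussed in this thread.
interface DerivationParameterSpec extends AlgorithmParameterSpec { }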


See above - the key stream is logically produced in its entirety 
before any assignment of that stream is made to any cryptographic 
objects because the mixins (except for the round differentiator) are 
the same for each key stream production round.   Simply passing in 
the total length may not give you the right result if the KDF 
requires a per component length (and it should to defeat (5) or it 
should only produce a single key).
From looking at 800-108, I don't see any place where the KDF needs a 
per-component length.  It looks like it takes L (total length) as an 
input and that is applied to each round of the PRF.  HKDF takes L 
up-front as an input too, though it doesn't use it as an input to the 
HMAC function itself.  For TLS 1.3 that component length becomes part 
of the context info (HkdfLabel) through the HKDF-Expand-Label 
function...and it's only doing one key for a given label which is 
also part of that context specific info, necessitating an init() 
call.  Seems like the length can go into the APS provided via init 
(for those KDFs that need it at least) and you shouldn't need a DPS 
list up-front.
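
For reference, here is a minimal HKDF-Expand sketch per RFC 5869 
(independent of the proposed API); the total length L only determines 
how many HMAC rounds run and where the output is truncated:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Minimal HKDF-Expand per RFC 5869, shown only to illustrate how L is used.
final class HkdfExpandSketch {

    static byte[] expand(byte[] prk, byte[] info, int length) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(prk, "HmacSHA256"));

        byte[] okm = new byte[length];
        byte[] t = new byte[0];
        int offset = 0;
        for (int i = 1; offset < length; i++) {
            // T(i) = HMAC(PRK, T(i-1) | info | i)
            hmac.update(t);
            hmac.update(info);
            hmac.update((byte) i);
            t = hmac.doFinal();
            int n = Math.min(t.length, length - offset);
            System.arraycopy(t, 0, okm, offset, n);
            offset += n;
        }
        return okm;
    }
}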




HKDF and SP800-108 only deal with the creation of the key stream and 
ignore the issues with assigning the key stream to cryptographic 
objects.  In the TLS version of HKDF, the L value is mandatory and 
only a single object is assigned per init/call to the KDF.   An HSM 
can look at the HKDF label information and set the appropriate 
policies for the assigned cryptographic object (because if any of the 
label data changes, the entire key stream changes).  That's not the 
case for the raw HKDF nor for any KDF that allows for multiple objects 
to be extracted out of a single key stream.  Hence the per-component 
length values.
So enforce a no-zero-length key policy in your provider code.  You 
probably can't affect the internals of the HSM, but you should be able 
to prevent it in the provider code.  I can't get away from the feeling 
that this could be dealt with in other ways besides specifying all this 
up-front.



Ideally, there should be a complete object spec for each object to be 
generated that is part of the mixins (label and context) for any 
KDF.   That allows an HSM to rely upon the object spec when setting 
policy controls for each generated object - and incidentally allows 
for a KDF to generate both public and non-public data in a secure way.
Between different generations of keystreams do you expect to have 
different sets of policy controls?  The KDF API has no way for you to 
set those things so I would assume those would be pretty static, or at 
least controlled outside the KDF API.  If so, why is the KDF API 
concerning itself with how some HSM sets its policy on objects it makes?


So as long as you allow for the specification of all of the production 
objects as part of the .init() I'm good.   A given KDF might not 
require this - but I can't see any way of fixing the current KDFs to 
work in HSMs without something like this.
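
For illustration of the kind of per-object spec being argued for above, 
a hypothetical DerivationParameterSpec might carry at least an 
algorithm, a per-component length, and the label/context that become 
part of the mixins (every name below is an assumption):

import java.security.spec.AlgorithmParameterSpec;

// Hypothetical per-object spec, sketching the "complete object spec" idea:
// the label/context and per-component length are bound into the derivation,
// so an HSM could key its policy decisions off them.
final class DerivationParameterSpecSketch implements AlgorithmParameterSpec {

    private final String algorithm;   // e.g. "AES", or "DATA" for an IV
    private final int lengthBytes;    // per-component length
    private final byte[] label;       // becomes part of the KDF mixins
    private final byte[] context;     // becomes part of the KDF mixins

    DerivationParameterSpecSketch(String algorithm, int lengthBytes,
                                  byte[] label, byte[] context) {
        this.algorithm = algorithm;
        this.lengthBytes = lengthBytes;
        this.label = label.clone();
        this.context = context.clone();
    }

    String getAlgorithm()  { return algorithm; }
    int getLengthBytes()   { return lengthBytes; }
    byte[] getLabel()      { return label.clone(); }
    byte[] getContext()    { return context.clone(); }
}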


As far as your (5) scenario goes, I can see how you can twiddle the 
lengths to get the keystream output with zero-length keys and large 
IV buffers.  But that scenario really glosses over what should be a 
big hurdle and a major access control issue that stands outside the 
KDF API: That the attacker shouldn't have access to the input keying 
material in the first place.  Protect the input keying material 
properly and their attack cannot be done.


Let me give you an example.   I'm running an embedded HSM - to protect 
TLS keys and to do all of the crypto.  An attacker compromises the TLS 

Re: RFR[10] 8186057: TLS interoperability testing between different Java versions

2017-11-27 Thread Artem

Hi John,

Please see inline.


On 11/28/2017 05:35 AM, sha.ji...@oracle.com wrote:

Hi Artem,
Please review the new webrev at: 
http://cr.openjdk.java.net/~jjiang/8186057/webrev.07/

Please see my comments below.



One question about Compatibility.caseStatus(). What's the case 
when you expect a timeout and no client output? Should it always 
result in a test case failure?
I'm not sure I understand your question. Did you mean a server or 
client timeout?

I think the client should output something.
My question is: should a timeout always be considered a failure? 
Why don't you just check whether the server or client result is FAIL?


if (clientStatus == FAIL || serverStatus == FAIL) {
    return FAIL;
}

return PASS;

Some cases are negative cases; their status should be EXPECTED_FAIL.

That's fine, I got that.
A server or client timeout would cause the case to fail if the 
failure is not expected.
If a server starts normally but the client doesn't start because it 
is a negative case, the server should time out. That failure is 
expected.
If a server starts normally and the client gets an unexpected timeout, 
that failure would make the case fail.
This logic looks unnecessarily complex to me. The more complex it is, 
the more likely we are to get a bug there. I think the TIMEOUT status 
is not necessary here. If a failure is expected on one side, the other 
side should fail as well (no matter whether it's a timeout or an exception).


If I were you, I would simplify this method like this

if (client == EXPECTED_FAIL && server == EXPECTED_FAIL) {
    return EXPECTED_FAIL;
}

if (client == PASS && server == PASS) {
    return PASS;
}

return FAIL;

If I am not missing something, the pseudo-code above looks better and 
simpler.

1. This test defines 5 statuses:
SUCCESS, UNEXPECTED_SUCCESS, FAIL, EXPECTED_FAIL and TIMEOUT
The main purpose is to make the test report clearer.
A case may fail due to:
an unexpected failure;
no failure, although some failure was expected;
a timeout, which is also unexpected but may have nothing to do with 
SSL/TLS exceptions, like SSLHandshakeException.
The test just wants to distinguish these different situations. I 
assume users generally just look through the report table, not the 
detailed logs.


I am not sure it's useful to distinguish these situations, and it makes 
the logic more complex.
2. This test does treat TIMEOUT as a failure. If the server and/or 
client time out, the case gets FAIL or EXPECTED_FAIL.


3. Your code snippet above doesn't cover all possible result statuses.
For example, client == PASS && server == PASS, but the case may have 
been expected to get some failure. The status should be UNEXPECTED_SUCCESS 
in my design.
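
As an illustration only (this is not the code in the webrev; the method 
signature and the negative-case flag are assumptions), the full decision 
table with the five statuses could look like:

final class CaseStatusSketch {

    enum Status { SUCCESS, UNEXPECTED_SUCCESS, FAIL, EXPECTED_FAIL, TIMEOUT }

    // Hypothetical combination of client/server results into a case status.
    static Status caseStatus(Status client, Status server, boolean negativeCase) {
        if (client == Status.TIMEOUT || server == Status.TIMEOUT
                || client == Status.FAIL || server == Status.FAIL) {
            // Any failure or timeout counts against the case; it is only
            // "expected" when the case is a negative one.
            return negativeCase ? Status.EXPECTED_FAIL : Status.FAIL;
        }
        // Both sides passed, but a negative case was supposed to fail.
        return negativeCase ? Status.UNEXPECTED_SUCCESS : Status.SUCCESS;
    }
}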
My code might not cover all cases because it's just an example. If there 
are more statuses, it's going to be more complex. I am suggesting 
getting rid of unnecessary statuses to make it simpler.


Again, the extra statuses are there so that the report table can give 
more information before one investigates the details.
I don't think it's going to be useful, at least to me. Instead, it might 
be better to print the exception message in separate columns for the 
client and server.


4. The updated webrev has merged two if-clauses to simplify the 
method Compatibility.caseStatus() a bit.

That's fine. I'll let you decide if you want to simplify it or not.

By the way, I just noticed that the JdkUtils.supportECKey() method and 
others return the strings "true" and "false" instead of boolean values. 
This looks a bit unusual and unnecessary.


Utils.java, line 146: it would be better to close the stream.

Artem


Best regards,
John Jiang




Re: KDF API review, round 2

2017-11-27 Thread Weijun Wang
Very very new to this discussion. If what I say below does not make sense, 
please ignore it.

I'm still finding it uncomfortable to provide _n_ DPSes in initialize() and let 
the user call deriveKey() _n_ times later. (Or, is this still the preferred 
way?) If providing all DPSes up-front helps an HSM emit all keys in a single 
operation (this is how I understand the reason), do you expect the user to call 
deriveKeys() to get all of them, or to call deriveKey() again and again? If the 
former, we can still put the DPS into deriveKey (i.e. deriveKey(DPS) and 
deriveKeys(DPS[])) and restrict deriveKey(s) so that they can only be called 
once per initialize(), so they are not "abused".

If the user uses this as a PRNG, they can always call deriveData() indefinitely.

Thanks
Max
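
A sketch of the usage pattern described above, assuming the hypothetical 
KeyDerivation and DerivationParameterSpec interfaces sketched earlier in 
this discussion (none of these names are from the draft API):

import java.security.Key;
import java.security.spec.AlgorithmParameterSpec;
import java.util.List;

// Hypothetical "once per initialize()" pattern: all per-key specs are handed
// over in a single deriveKeys() call, and a provider could reject a second
// deriveKeys() before the next init().
final class OneShotDeriveSketch {

    static List<Key> deriveAll(KeyDerivation kdf, Key master,
                               AlgorithmParameterSpec kdfParams,
                               List<DerivationParameterSpec> keySpecs) {
        kdf.init(master, kdfParams);
        return kdf.deriveKeys(keySpecs);   // callable only once per init() in this scheme
    }
}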

> On Nov 28, 2017, at 6:23 AM, Jamil Nimeh  wrote:
> 
> 
> 
> On 11/27/2017 11:46 AM, Michael StJohns wrote:
>> On 11/27/2017 2:16 PM, Jamil Nimeh wrote: 
>>> See above with respect to set/getParameter.  But hopefully you'll be happy 
>>> with the API after this next round.  I have one other change I will be 
>>> making.  I'm removing deriveObject.  I'm uncomfortable right now with the 
>>> idea of the API executing an arbitrary class' constructor.  This is 
>>> something I'm definitely willing to examine in the future once the most 
>>> pressing tasks both with this API, and projects that are immediately 
>>> depending on it are taken care of. It is easier to add calls to the API than 
>>> it is to remove/modify/deprecate them if there's a problem.  I will file an 
>>> RFE so that we can track this enhancement. 
>>> 
>>> Modifications to the KeyAgreement API are beyond the scope of this JEP.  We 
>>> can certainly discuss ideas you have, but this KDF JEP isn't going to be 
>>> dependent on those discussions. 
>> 
>> 
>> Fair enough. 
>> 
>> The deriveObject stuff is a problem because it doesn't fit well in the JCA.  
>> Mostly we've got KeyGenerator/KeyPairGenerator/KeyFactory that produce 
>> objects of a particular provider.  KeyDerivation is weird in that one 
>> provider will be producing the derived key stream and potentially others 
>> might need to provide key or cryptographic objects from that stream.   I can 
>> see the point in delaying this to a later rev though it might make something 
>> like [KDF is Bouncycastle, keys are PKCS11] a bit difficult to work around. 
>> 
>> Last one - 
>> 
>> Can I get you to buy into a MasterKey/MasterKeySpec  that is not a sub class 
>> of SecretKey but has the same characteristics (and probably the same 
>> definitions) as those classes (and is what gets used in the .init() 
>> argument)?  This goes back to trying to prevent a SecretKey from being used 
>> both with a KDF and the underlying PRF of the KDF.  I know this is a don't 
>> care for software based providers but would be useful for security module 
>> based ones. 
>> 
>> I'm really hoping to improve cryptographic type and use safety along the 
>> way. 
> 
> I'm not quite getting what you mean here.  From looking at KDFs described in 
> 800-108, it looks like the key input to the KDF is KI, and KI ends up being 
> the seed for each round of the PRF.  If that isn't what you're referring to 
> can you explain what you're looking for in more detail?
> 
> --Jamil  



Re: KDF API review, round 2

2017-11-27 Thread Jamil Nimeh



On 11/27/2017 11:46 AM, Michael StJohns wrote:

On 11/27/2017 2:16 PM, Jamil Nimeh wrote:
See above with respect to set/getParameter.  But hopefully you'll be 
happy with the API after this next round.  I have one other change I 
will be making.  I'm removing deriveObject.  I'm uncomfortable right 
now with the idea of the API executing an arbitrary class' 
constructor.  This is something I'm definitely willing to examine in 
the future once the most pressing tasks both with this API, and 
projects that are immediately depending on it are taken care of. It is 
easier to add calls to the API than it is to remove/modify/deprecate 
them if there's a problem.  I will file an RFE so that we can track 
this enhancement.


Modifications to the KeyAgreement API are beyond the scope of this 
JEP.  We can certainly discuss ideas you have, but this KDF JEP isn't 
going to be dependent on those discussions.



Fair enough.

The deriveObject stuff is a problem because it doesn't fit well in the 
JCA.  Mostly we've got KeyGenerator/KeyPairGenerator/KeyFactory that 
produce objects of a particular provider.  KeyDerivation is weird in 
that one provider will be producing the derived key stream and 
potentially others might need to provide key or cryptographic objects 
from that stream.   I can see the point in delaying this to a later 
rev though it might make something like [KDF is Bouncycastle, keys are 
PKCS11] a bit difficult to work around.


Last one -

Can I get you to buy into a MasterKey/MasterKeySpec  that is not a sub 
class of SecretKey but has the same characteristics (and probably the 
same definitions) as those classes (and is what gets used in the 
.init() argument)?  This goes back to trying to prevent a SecretKey 
from being used both with a KDF and the underlying PRF of the KDF.  I 
know this is a don't care for software based providers but would be 
useful for security module based ones.


I'm really hoping to improve cryptographic type and use safety along 
the way.


I'm not quite getting what you mean here.  From looking at KDFs 
described in 800-108, it looks like the key input to the KDF is K_I , 
and K_I ends up being the seed for each round of the PRF. If that isn't 
what you're referring to can you explain what you're looking for in more 
detail?


--Jamil


Re: KDF API review, round 2

2017-11-27 Thread Michael StJohns

On 11/27/2017 2:16 PM, Jamil Nimeh wrote:
See above with respect to set/getParameter.  But hopefully you'll be 
happy with the API after this next round.  I have one other change I 
will be making.  I'm removing deriveObject.  I'm uncomfortable right 
now with the idea of the API executing an arbitrary class' 
constructor.  This is something I'm definitely willing to examine in 
the future once the most pressing tasks both with this API, and 
projects that are immediately depending on it are taken care of. It is 
easier to add calls to the API than it is to remove/modify/deprecate 
them if there's a problem.  I will file an RFE so that we can track 
this enhancement.


Modifications to the KeyAgreement API are beyond the scope of this 
JEP.  We can certainly discuss ideas you have, but this KDF JEP isn't 
going to be dependent on those discussions.



Fair enough.

The deriveObject stuff is a problem because it doesn't fit well in the 
JCA.  Mostly we've got KeyGenerator/KeyPairGenerator/KeyFactory that 
produce objects of a particular provider.  KeyDerivation is weird in 
that one provider will be producing the derived key stream and 
potentially others might need to provide key or cryptographic objects 
from that stream.   I can see the point in delaying this to a later rev 
though it might make something like [KDF is Bouncycastle, keys are 
PKCS11] a bit difficult to work around.


Last one -

Can I get you to buy into a MasterKey/MasterKeySpec  that is not a sub 
class of SecretKey but has the same characteristics (and probably the 
same definitions) as those classes (and is what gets used in the .init() 
argument)?  This goes back to trying to prevent a SecretKey from being 
used both with a KDF and the underlying PRF of the KDF.  I know this is 
a don't care for software based providers but would be useful for 
security module based ones.


I'm really hoping to improve cryptographic type and use safety along the 
way.


Thanks - Mike





Re: KDF API review, round 2

2017-11-27 Thread Anthony Scarpino

On 11/27/2017 11:16 AM, Jamil Nimeh wrote:

I thought that we had ditched setParameter in favor of putting these 
parameters in getInstance.  IIRC we were headed toward an algorithm 
naming convention of KDF/PRF, plus an APS in the getInstance (which may 
be null, and might be for most of the KDFs that we start with: HKDF and 
possibly TLS-PRF).


For those I could see naming conventions:
HKDF would need a PRF specifier, so HKDF/HmacSHA256, HKDF/HmacSHA384.  
Basically for that PRF field I want to see values that line up with Mac 
algorithms in the standard names document.
TLS-PRF would probably allow a default: "TLS-PRF" would be the TLS PRF 
used in 1.1 and earlier.  "TLS-PRF/SHA256" would be P_SHA256 from RFC 5246.  
Or we could make it also follow the Mac standard name, so 
"TLS-PRF/HmacSHA256".  I'm fine with that too.  Basically each 
implementation



When the naming convention first came up, I never got around to 
replying.  I think it would be better to specify the KDF and PRF as 
separate parameters.  I don't think it's worth creating a naming 
convention given what we have/are experiencing with Cipher 
transformations; it's simpler to spell out each one separately.
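
For comparison, the two styles under discussion would look roughly like 
this at the call site; everything below (class names, getInstance 
strings, the parameter spec) is a hypothetical stub, not the proposed API:

import java.security.spec.AlgorithmParameterSpec;

final class KdfNamingSketch {

    // Hypothetical spec carrying the PRF name for the "separate parameters" style.
    static final class PrfParameterSpec implements AlgorithmParameterSpec {
        final String prf;
        PrfParameterSpec(String prf) { this.prf = prf; }
    }

    // Stub engine class, only to show the two getInstance() shapes.
    static final class KeyDerivation {
        final String algorithm;
        final String prf;

        private KeyDerivation(String algorithm, String prf) {
            this.algorithm = algorithm;
            this.prf = prf;
        }

        // Style 1: PRF folded into the algorithm name, Cipher-transformation style.
        static KeyDerivation getInstance(String name) {
            int slash = name.indexOf('/');
            return slash < 0
                    ? new KeyDerivation(name, null)
                    : new KeyDerivation(name.substring(0, slash), name.substring(slash + 1));
        }

        // Style 2: plain algorithm name, PRF spelled out as a separate parameter.
        static KeyDerivation getInstance(String name, PrfParameterSpec prfParam) {
            return new KeyDerivation(name, prfParam.prf);
        }
    }

    public static void main(String[] args) {
        KeyDerivation styleOne = KeyDerivation.getInstance("HKDF/HmacSHA256");
        KeyDerivation styleTwo = KeyDerivation.getInstance("HKDF", new PrfParameterSpec("HmacSHA256"));
        System.out.println(styleOne.algorithm + " with " + styleOne.prf);
        System.out.println(styleTwo.algorithm + " with " + styleTwo.prf);
    }
}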


Tony



Re: KDF API review, round 2

2017-11-27 Thread Jamil Nimeh



On 11/27/2017 10:09 AM, Michael StJohns wrote:

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:






One additional topic for discussion: Late in the week we talked 
about the current state of the API internally and one item to 
revisit is where the DerivationParameterSpec objects are passed. It 
was brought up by a couple people that it would be better to 
provide the DPS objects pertaining to keys at the time they are 
called for through deriveKey() and deriveKeys() (and possibly 
deriveData).


Originally we had them all grouped in a List in the init method. 
One reason for needing it up there was to know the total length of 
material to generate.  If we can provide the total length through 
the AlgorithmParameterSpec passed in via init() then things like:


Key deriveKey(DerivationParameterSpec param);
List<Key> deriveKeys(List<DerivationParameterSpec> params);

become possible.  To my eyes at least it does make it more clear 
what DPS you're processing since they're provided at derive time, 
rather than the caller having to keep track in their heads where in 
the DPS list they might be with each successive deriveKey or 
deriveKeys call.  And I think we could do away with 
deriveKeys(int), too.


See above - the key stream is logically produced in its entirety 
before any assignment of that stream is made to any cryptographic 
objects because the mixins (except for the round differentiator) are 
the same for each key stream production round.   Simply passing in 
the total length may not give you the right result if the KDF 
requires a per component length (and it should to defeat (5) or it 
should only produce a single key).
From looking at 800-108, I don't see any place where the KDF needs a 
per-component length.  It looks like it takes L (total length) as an 
input and that is applied to each round of the PRF.  HKDF takes L 
up-front as an input too, though it doesn't use it as an input to the 
HMAC function itself.  For TLS 1.3 that component length becomes part 
of the context info (HkdfLabel) through the HKDF-Expand-Label 
function...and it's only doing one key for a given label which is 
also part of that context specific info, necessitating an init() 
call.  Seems like the length can go into the APS provided via init 
(for those KDFs that need it at least) and you shouldn't need a DPS 
list up-front.




HKDF and SP800-108 only deal with the creation of the key stream and 
ignore the issues with assigning the key stream to cryptographic 
objects.  In the TLS version of HKDF, the L value is mandatory and 
only a single object is assigned per init/call to the KDF.   An HSM 
can look at the HKDF label information and set the appropriate 
policies for the assigned cryptographic object (because if any of the 
label data changes, the entire key stream changes).  That's not the 
case for the raw HKDF nor for any KDF that allows for multiple objects 
to be extracted out of a single key stream.  Hence the per-component 
length values.


Ideally, there should be a complete object spec for each object to be 
generated that is part of the mixins (label and context) for any 
KDF.   That allows an HSM to rely upon the object spec when setting 
policy controls for each generated object - and incidentally allows 
for a KDF to generate both public and non-public data in a secure way.


So as long as you allow for the specification of all of the production 
objects as part of the .init() I'm good.   A given KDF might not 
require this - but I can't see any way of fixing the current KDFs to 
work in HSMs without something like this.


As far as your (5) scenario goes, I can see how you can twiddle the 
lengths to get the keystream output with zero-length keys and large 
IV buffers.  But that scenario really glosses over what should be a 
big hurdle and a major access control issue that stands outside the 
KDF API: That the attacker shouldn't have access to the input keying 
material in the first place.  Protect the input keying material 
properly and their attack cannot be done.


Let me give you an example.   I'm running an embedded HSM - to protect 
TLS keys and to do all of the crypto.  An attacker compromises the TLS 
server and now has access to the HSM.  No problem - I'm going to 
notice if the attacker starts exfiltrating large amounts of data from 
the server (e.g. copies of the TLS in the clear but possibly 
reencrypted data stream) so this isn't a threat or is it?  Smart 
attacker does an extraction attack on the TLS 1.2 and before KDF and 
turns all of the key stream material into IV material and exports it 
from the HSM.  The attacker now has the much smaller key material so 
he can send a few messages with those keys and allow for the passive 
external interception of the traffic and decryption thereof without 
the risk of detection of all that traffic being sent.  Alternately, I 
can place the key material in a picture via steganography and publish 
it as part of the server data.


The idea is to protect extraction of the key 

Re: KDF API review, round 2

2017-11-27 Thread Michael StJohns

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:






One additional topic for discussion: Late in the week we talked 
about the current state of the API internally and one item to 
revisit is where the DerivationParameterSpec objects are passed. It 
was brought up by a couple people that it would be better to provide 
the DPS objects pertaining to keys at the time they are called for 
through deriveKey() and deriveKeys() (and possibly deriveData).


Originally we had them all grouped in a List in the init method. One 
reason for needing it up there was to know the total length of 
material to generate.  If we can provide the total length through 
the AlgorithmParameterSpec passed in via init() then things like:


Key deriveKey(DerivationParameterSpec param);
List<Key> deriveKeys(List<DerivationParameterSpec> params);

become possible.  To my eyes at least it does make it more clear 
what DPS you're processing since they're provided at derive time, 
rather than the caller having to keep track in their heads where in 
the DPS list they might be with each successive deriveKey or 
deriveKeys call.  And I think we could do away with 
deriveKeys(int), too.


See above - the key stream is logically produced in its entirety 
before any assignment of that stream is made to any cryptographic 
objects because the mixins (except for the round differentiator) are 
the same for each key stream production round.   Simply passing in 
the total length may not give you the right result if the KDF 
requires a per component length (and it should to defeat (5) or it 
should only produce a single key).
From looking at 800-108, I don't see any place where the KDF needs a 
per-component length.  It looks like it takes L (total length) as an 
input and that is applied to each round of the PRF.  HKDF takes L 
up-front as an input too, though it doesn't use it as an input to the 
HMAC function itself.  For TLS 1.3 that component length becomes part 
of the context info (HkdfLabel) through the HKDF-Expand-Label 
function...and it's only doing one key for a given label which is also 
part of that context specific info, necessitating an init() call.  
Seems like the length can go into the APS provided via init (for those 
KDFs that need it at least) and you shouldn't need a DPS list up-front.




HKDF and SP800-108 only deal with the creation of the key stream and 
ignore the issues with assigning the key stream to cryptographic 
objects.  In the TLS version of HKDF, the L value is mandatory and only 
a single object is assigned per init/call to the KDF.   An HSM can look 
at the HKDF label information and set the appropriate policies for the 
assigned cryptographic object (because if any of the label data changes, 
the entire key stream changes).  That's not the case for the raw HKDF 
nor for any KDF that allows for multiple objects to be extracted out of 
a single key stream.  Hence the per-component length values.


Ideally, there should be a complete object spec for each object to be 
generated that is part of the mixins (label and context) for any KDF.   
That allows an HSM to rely upon the object spec when setting policy 
controls for each generated object - and incidentally allows for a KDF 
to generate both public and non-public data in a secure way.


So as long as you allow for the specification of all of the production 
objects as part of the .init() I'm good.   A given KDF might not require 
this - but I can't see any way of fixing the current KDFs to work in 
HSMs without something like this.


As far as your (5) scenario goes, I can see how you can twiddle the 
lengths to get the keystream output with zero-length keys and large IV 
buffers.  But that scenario really glosses over what should be a big 
hurdle and a major access control issue that stands outside the KDF 
API: That the attacker shouldn't have access to the input keying 
material in the first place.  Protect the input keying material 
properly and their attack cannot be done.


Let me give you an example.   I'm running an embedded HSM - to protect 
TLS keys and to do all of the crypto.  An attacker compromises the TLS 
server and now has access to the HSM.  No problem - I'm going to notice 
if the attacker starts exfiltrating large amounts of data from the server 
(e.g. copies of the TLS in the clear but possibly reencrypted data 
stream) so this isn't a threat or is it?  Smart attacker does an 
extraction attack on the TLS 1.2 and before KDF and turns all of the key 
stream material into IV material and exports it from the HSM.  The 
attacker now has the much smaller key material so he can send a few 
messages with those keys and allow for the passive external interception 
of the traffic and decryption thereof without the risk of detection of 
all that traffic being sent.  Alternately, I can place the key material 
in a picture via steganography and publish it as part of the server data.


The idea is to protect extraction of the key material from an HSM _*even 
from authorized users of