Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/18/2017 3:26 PM, Adam Petcher wrote:

On 8/17/2017 1:44 PM, Michael StJohns wrote:

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:



Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 
1 to p-1 for the field (with some additional pruning).  32-byte 
strings (or 256-bit integers) do not map 1-1 onto that space; e.g. 
there are some actual canonical keys to which multiple (at least 2) 
32-byte strings map.  (See the pruning and clamping 
algorithms.)  The NIST private key generation for EC private keys 
mitigates this bias by either (a) repeatedly generating random keys 
until you get one in the range or (b) generating a key stream with 
extra (64) bits and reducing that mod p of the curve.
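(For reference, the pruning/clamping referred to above is the RFC 7748 
decodeScalar25519 step; a minimal Java sketch, with the helper name ours:)

    // RFC 7748 "clamping" of an X25519 private key (section 5).
    // Because the low three bits are cleared and the high bits are forced,
    // several distinct 32-byte strings decode to the same scalar.
    static byte[] clampX25519(byte[] k) {
        if (k.length != 32) {
            throw new IllegalArgumentException("X25519 keys are 32 bytes");
        }
        byte[] s = k.clone();
        s[0]  &= (byte) 0xF8;   // clear the 3 low bits (cofactor clearing)
        s[31] &= (byte) 0x7F;   // clear the top bit
        s[31] |= (byte) 0x40;   // set bit 254
        return s;               // later interpreted as a little-endian integer
    }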


If you are concerned about the distribution of private keys in these 
standards, then you may want to raise your concerns on the CFRG 
mailing list (c...@irtf.org). I don't have this concern, so I think we 
should implement the RFC as written.
 Basically, the RFC describes the form of the private key.  It also 
gives you an implementation for how to generate one.  BUT - any method 
that gives you an equivalent key and that has the same or better 
security guarantees is equally valid.  WRT the RFC, I missed this 
at the time it went through, but noticed it after having to deal with 
generation of normal EC keys and a FIPS evaluation.  The bias is small, 
but it exists.  The method of the RFC does not reduce the bias.  The 
NIST method does.  I know which one I would choose.
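(To make the comparison concrete, here is a rough sketch of NIST method (b) 
above, in the style of FIPS 186-4 B.4.1; n stands for the group order, and 
the class/method names are ours:)

    import java.math.BigInteger;
    import java.security.SecureRandom;

    // "Extra random bits" generation: draw 64 bits more than the order
    // requires, then reduce, so the scalar is almost uniform in [1, n-1].
    final class ScalarGen {
        static BigInteger generate(BigInteger n, SecureRandom rng) {
            int bits = n.bitLength() + 64;
            BigInteger c = new BigInteger(bits, rng);   // uniform in [0, 2^bits)
            return c.mod(n.subtract(BigInteger.ONE))    // nearly uniform in [0, n-2]
                    .add(BigInteger.ONE);               // shift into [1, n-1]
        }
    }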


Regardless - this has no bearing on the fact that the output of the key 
generation process is an integer.








RFC 8032 private keys: These are definitely bit strings, and 
modeling them as integers doesn't make much sense. The only thing 
that is ever done with these private keys is that they are used as 
input to a hash function.


Again - no.  The actual private key is what you get after step 3 of 
section 5.1.5.  I.e.: generate a random string of 32 bytes, hash it to 
help with bad random generators (*sheesh*), and interpret the hash, 
after pruning, as a little-endian integer. 


I'm not sure I understand what you are suggesting. For Ed25519, the 
initial 32-byte secret is hashed to produce 64 bytes. The first 32 
bytes are pruned, interpreted as an integer, and used to produce the 
public key. The second 32 bytes are also used in the signing operation 
and contribute to a deterministic nonce. See RFC 8032, section 5.1.6, 
step 1. Are you suggesting that we should represent all 64 bytes as an 
integer?
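(For reference, a minimal sketch of the expansion in RFC 8032 section 5.1.5, 
steps 1-3, which produces both halves being discussed; class and field names 
are ours:)

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Arrays;

    // Ed25519 key expansion per RFC 8032, 5.1.5: SHA-512 the 32-byte seed,
    // prune the first half into the scalar, keep the second half as the
    // prefix that feeds the deterministic nonce (5.1.6 step 1).
    final class Ed25519Expanded {
        final BigInteger s;       // pruned scalar, read as little-endian
        final byte[] prefix;      // second 32 bytes of the hash

        private Ed25519Expanded(BigInteger s, byte[] prefix) {
            this.s = s;
            this.prefix = prefix;
        }

        static Ed25519Expanded expand(byte[] seed) throws NoSuchAlgorithmException {
            byte[] h = MessageDigest.getInstance("SHA-512").digest(seed); // 64 bytes
            byte[] sBytes = Arrays.copyOfRange(h, 0, 32);
            byte[] prefix = Arrays.copyOfRange(h, 32, 64);

            // Prune (clamp) the first half ...
            sBytes[0]  &= (byte) 0xF8;
            sBytes[31] &= (byte) 0x7F;
            sBytes[31] |= (byte) 0x40;

            // ... and interpret it as a little-endian integer.
            byte[] bigEndian = new byte[32];
            for (int i = 0; i < 32; i++) {
                bigEndian[i] = sBytes[31 - i];
            }
            return new Ed25519Expanded(new BigInteger(1, bigEndian), prefix);
        }
    }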


Sorry - I did miss this.  Unfortunately, it screws up making this work 
cleanly.  There are two ways of doing this - I'd prefer (1) but could 
live with (2):


1) The private key is a BigInteger (and that's how it gets in and out of 
the ECPrivateKeySpec as 's'), derived as in steps 1 through 3.  The nonce is 
the leftmost 32 bytes of the SHA-512 of the private key expressed as a 
little-endian integer.  The key thing about the deterministic nonce is 
that it's the same for the key for its signature lifetime (which I'm 
really not sure is a good thing - but it's what we have).  The way the 
nonce is derived in the RFC shows one way; hashing the private key 
shows another.  I'd prefer this because you can use the same 
input/output spec for the ed25519 key as for the curve25519 
key.  (Do you really want two different spec classes for edxx and 
curvexxx keys???)
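(One possible reading of (1), as a sketch only - this is not the RFC 8032 
derivation, which hashes the original 32-byte seed, and the names are ours:)

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Sketch of option (1)'s nonce material: re-derive the 32-byte prefix
    // from the scalar 's' itself, encoded as a 32-byte little-endian string.
    final class Option1 {
        static byte[] prefixFromScalar(BigInteger s) throws NoSuchAlgorithmException {
            byte[] be = s.toByteArray();            // big-endian, variable length
            byte[] le = new byte[32];
            for (int i = 0; i < be.length && i < 32; i++) {
                le[i] = be[be.length - 1 - i];      // reverse into little-endian
            }
            byte[] h = MessageDigest.getInstance("SHA-512").digest(le);
            byte[] prefix = new byte[32];
            System.arraycopy(h, 0, prefix, 0, 32);  // "leftmost 32 bytes"
            return prefix;
        }
    }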


2) The private key is the 32-byte secret array of step 1, stored in the 
concrete class.  The input and output ECPrivateKeySpec defines a 
transform between that array and a BigInteger.  The 
private key concrete class includes storage for both the internal 's' 
little-endian integer value and the internal 'prefix' value.  This isn't 
as clean, because the scalar secret 's' isn't the same as 
ECPrivateKeySpec.getS(), but it allows for impedance matching between the 
existing classes and the strangeness of an ed25519 private key.  Doing it 
this way would definitely divorce the keys of the two curve uses.
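(A hypothetical concrete class along the lines of (2) - the names and the 
exact getS() mapping are illustrative only:)

    import java.math.BigInteger;

    // Hypothetical shape of option (2): the 32-byte secret is the stored
    // form, and the expanded scalar plus prefix are kept alongside it.
    final class Ed25519PrivateKeyImpl {
        private final byte[] seed;        // the 32-byte secret of 5.1.5 step 1
        private final BigInteger s;       // pruned scalar ('s'), little-endian value
        private final byte[] prefix;      // second half of SHA-512(seed)

        Ed25519PrivateKeyImpl(byte[] seed, BigInteger s, byte[] prefix) {
            this.seed = seed.clone();
            this.s = s;
            this.prefix = prefix.clone();
        }

        // What an ECPrivateKeySpec-style getS() would carry: the seed
        // re-encoded as an integer, which is *not* the scalar 's'.
        BigInteger getSpecS() {
            return new BigInteger(1, seed);
        }

        BigInteger getScalar() { return s; }
        byte[] getPrefix()     { return prefix.clone(); }
    }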


*sigh* The private key is an integer and that's really what needs to be 
represented.


Mike






Re: JCA design for RFC 7748

2017-08-18 Thread Adam Petcher

On 8/17/2017 1:44 PM, Michael StJohns wrote:

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:



Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 
1 to p-1 for the field (with some additional pruning).  32-byte 
strings (or 256-bit integers) do not map 1-1 onto that space; e.g. 
there are some actual canonical keys to which multiple (at least 2) 
32-byte strings map.  (See the pruning and clamping algorithms.)  
The NIST private key generation for EC private keys mitigates this 
bias by either (a) repeatedly generating random keys until you get one 
in the range or (b) generating a key stream with extra (64) bits and 
reducing that mod p of the curve.


If you are concerned about the distribution of private keys in these 
standards, then you may want to raise your concerns on the CFRG mailing 
list (c...@irtf.org). I don't have this concern, so I think we should 
implement the RFC as written.






RFC 8032 private keys: These are definitely bit strings, and modeling 
them as integers doesn't make much sense. The only thing that is ever 
done with these private keys is that they are used as input to a hash 
function.


Again - no.  The actual private key is what you get after step 3 of 
section 5.1.5.  I.e.: generate a random string of 32 bytes, hash it to 
help with bad random generators (*sheesh*), and interpret the hash, 
after pruning, as a little-endian integer. 


I'm not sure I understand what you are suggesting. For Ed25519, the 
initial 32-byte secret is hashed to produce 64 bytes. The first 32 bytes 
are pruned, interpreted as an integer, and used to produce the public 
key. The second 32 bytes are also used in the signing operation and 
contribute to a deterministic nonce. See RFC 8032, section 5.1.6, step 
1. Are you suggesting that we should represent all 64 bytes as an integer?




Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/17/2017 7:01 PM, Xuelei Fan wrote:

On 8/17/2017 11:35 AM, Michael StJohns wrote:

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending 
this model for the curve25519 stuff isn't going to be any different, 
old-provider and new-provider wise, than is currently the case.  If 
you want the new curves, you have to specify the new providers.  If 
the new and old providers don't implement the same curves, you may 
need to deal with two different providers simultaneously - and 
that's not something that just happens.


I see your points.  Not binding to a provider causes problems; 
binding to a provider causes other problems.  There are a few 
complaints about these problems, and they impact real-world 
applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but with more 
work) to find appropriate providers for appropriate signature/key 
agreement algorithms and curves.
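(A short sketch of that lookup - the filter entries are illustrative 
algorithm names, not a recommendation:)

    import java.security.Provider;
    import java.security.Security;
    import java.util.HashMap;
    import java.util.Map;

    // Find providers that implement every service/algorithm in the filter.
    public class FindProviders {
        public static void main(String[] args) {
            Map<String, String> filter = new HashMap<>();
            filter.put("Signature.SHA256withECDSA", "");   // illustrative entries
            filter.put("KeyAgreement.ECDH", "");

            Provider[] candidates = Security.getProviders(filter);
            if (candidates == null) {
                System.out.println("no provider supports that combination");
            } else {
                for (Provider p : candidates) {
                    System.out.println(p.getName());
                }
            }
        }
    }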


I'm not sure how this applies to the compatibility impact concern.  
See more in the example below.







I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that 
actually shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or 
not it's possible to integrate new EC curves.





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 
if SunMSCAPI is present.  Later, someone found the code unusual: 
SHA224 and the related signature algorithms are supported by the 
underlying providers, so there looked to be no reason to limit the use 
of SHA224.  So SHA224 was added back, and the compatibility issues 
came back again.  Then we removed SHA224 again if SunMSCAPI is 
present.  However, at the same time, another request is asking us to 
support SHA224 on Windows.  The API design itself puts me in an 
either-or situation.  I would try to avoid it, if possible, for a new 
design.


This appears to be an MSCAPI issue vice a JSSE issue.

MSCAPI is fine, as it does not support SHA224.  JSSE is then in a bad 
position because it cannot support SHA224 in a general way when one of 
the underlying providers supports SHA224 but another one does not.


No - I don't think both are fine.  MSCAPI could outsource algorithms it 
doesn't have internally to another provider - for a very limited set of 
classes (e.g. MessageDigest and SecureRandom) - but it would be smarter 
if JSSE just tried to see whether the provider it was already using for 
other stuff had the needed algorithm (e.g. using the long form of 
getInstance).


In the above case - MSCAPI should realize that it can't do a 
SHA224with... signature and just throw the appropriate error.  JSSE 
should know which TLS suites MSCAPI can support.  Or at least be able to 
figure out which ones it supports by checking which JCA algorithms are 
supported.
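(A sketch of that check, using the two-argument getInstance; the class and 
method names are ours:)

    import java.security.NoSuchAlgorithmException;
    import java.security.Provider;
    import java.security.Signature;

    // Probe the provider already in use before advertising a suite that
    // depends on the algorithm (e.g. SunMSCAPI and a SHA-224 signature).
    final class ProviderProbe {
        static boolean supportsSignature(Provider p, String algorithm) {
            try {
                Signature.getInstance(algorithm, p);
                return true;
            } catch (NoSuchAlgorithmException e) {
                return false;
            }
        }
    }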




And the JCA specifically disclaims the guaranteed ability to use 
cryptographic objects from one provider in another provider.

It's not the real problem.  The real problem is that there is a 
provider that supports the requested algorithms, so why not use it?  We 
can say the spec does not guarantee the behavior, but the application 
still has a problem.  We can also say we have a design flaw, but the 
question is still there.


There is no design flaw, there is only incorrect programming.  Basically, 
the only non-key-related crypto algorithms in the providers are hashing 
and random number generation.  While it's possible to use these across 
providers (and you might want to), special-casing things to do this 
means (for example) that your FIPS-approved signature mechanism might 
end up using a non-FIPS digest (hash) mechanism.


Every other crypto mechanism has or involves keys.  And there's no 
guarantee that keys are valid across providers.




Secondary users like the JSSE probably need to stick to a single 
provider for a given connection.


Sticking to a single provider will open other windows for different 
problems.  The JCE spec does not guarantee that one provider implements 
all services.


An application that needs to use two different cryptographic algorithm 
providers should be doing that explicitly and not having it happen under 
the covers.  For example, I was using the BC provider to deal with 
elliptic curve stuff for a long while.  That meant I was also using BC 
to deal with certificates (because the Sun provider didn't understand 
how to deal with BC's ECKey implementation).  It was kind of a 
Re: RFR: JDK-8159544: Remove deprecated classes in com.sun.security.auth.**

2017-08-18 Thread Sean Mullan

On 8/17/17 8:16 PM, Weijun Wang wrote:

Hi Sean

Change looks fine.

And I found another 4 references in comments in 
jdk/src/java.base/share/classes/javax/security/auth/Policy.java.


Good catch. I changed SolarisPrincipal and SolarisNumericUserPrincipal 
to UnixPrincipal and UnixNumericUserPrincipal, respectively.


Updated webrev: 
http://cr.openjdk.java.net/~mullan/webrevs/8159544/jdk.webrev.01/



BTW, do we have a test to show which lines in the various Resources.java 
files are used where, and whether any are unused?


I don't know. I have copied Naoto to see if he knows.

--Sean



Thanks
Max


On Aug 18, 2017, at 3:08 AM, Sean Mullan  wrote:

Please review this JDK 10 change to remove the deprecated classes in 
com.sun.security.auth.** that have been previously marked with forRemoval=true 
in JDK 9.

webrev: http://cr.openjdk.java.net/~mullan/webrevs/8159544/

I have also copied Jan for reviewing a change in langtools, and also build-dev 
for a change to one of the JDK Makefiles.

Thanks,
Sean




RFR: 8170157, 8170245: Enable unlimited cryptographic policy by default in OracleJDK

2017-08-18 Thread Seán Coffey
Looking to backport 8170157 to jdk8u-dev. The 8170245 test bug also gets 
pulled in for this port since some tests need cleaning up to deal with the 
unlimited crypto environment.


webrev : 
http://cr.openjdk.java.net/~coffeys/webrev.8170157.8u.01/webrev/index.html


--
Regards,
Sean.



Re: RFR: JDK-8159544: Remove deprecated classes in com.sun.security.auth.**

2017-08-18 Thread Jan Lahoda
For the langtools change, I think we shouldn't remove the entries from 
the include.list. This list defines which packages/classes should and 
should not be available when compiling for previous versions of the 
platform using --release. The removed entries are exclude entries, so if 
removed, it would mean the classes should be available when compiling 
for previous platform versions, which is not the intent, I think.


Maybe add a comment (or a JBS entry) that these should be cleaned up 
when --release no longer supports 9?


Thanks,
Jan

On 17.8.2017 21:08, Sean Mullan wrote:

Please review this JDK 10 change to remove the deprecated classes in
com.sun.security.auth.** that have been previously marked with
forRemoval=true in JDK 9.

webrev: http://cr.openjdk.java.net/~mullan/webrevs/8159544/

I have also copied Jan for reviewing a change in langtools, and also
build-dev for a change to one of the JDK Makefiles.

Thanks,
Sean


Re: RFR: JDK-8159544: Remove deprecated classes in com.sun.security.auth.**

2017-08-18 Thread Vincent Ryan
Your changes look fine to me. Thanks.


> On 17 Aug 2017, at 20:08, Sean Mullan  wrote:
> 
> Please review this JDK 10 change to remove the deprecated classes in 
> com.sun.security.auth.** that have been previously marked with 
> forRemoval=true in JDK 9.
> 
> webrev: http://cr.openjdk.java.net/~mullan/webrevs/8159544/
> 
> I have also copied Jan for reviewing a change in langtools, and also 
> build-dev for a change to one of the JDK Makefiles.
> 
> Thanks,
> Sean