Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/18/2017 3:26 PM, Adam Petcher wrote:

On 8/17/2017 1:44 PM, Michael StJohns wrote:

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:



Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 
1 to p-1 for the field (with some additional pruning).  32-byte 
strings (or 256-bit integers) do not map 1-1 into that space.  E.g. 
there are some actual canonical keys to which multiple (at least 2) 
32-byte strings map.  (See the pruning and clamping algorithms.)  The 
NIST private key generation for EC private keys mitigates this bias by 
either (a) repeatedly generating random keys until you get one in the 
range, or (b) generating a key stream with 64 extra bits and reducing 
it mod the order of the curve.


If you are concerned about the distribution of private keys in these 
standards, then you may want to raise your concerns on the CFRG 
mailing list (c...@irtf.org). I don't have this concern, so I think we 
should implement the RFC as written.
Basically, the RFC describes the form of the private key.  It also 
gives you an implementation for how to generate one.  BUT - any method 
that gives you an equivalent key and that has the same or better 
security guarantees is equally valid.  WRT the RFC, I missed this 
at the time it went through, but noticed it after having to deal with 
generation of normal EC keys and a FIPS evaluation.  The bias is small, 
but it exists.  The method of the RFC does not reduce the bias.  The 
NIST method does.  I know which one I would choose.


Regardless - this has no bearing on the fact that the output of the key 
generation process is an integer.
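For reference, the "extra random bits" method referred to above (FIPS 186-4, Appendix B.4.1) can be sketched as follows; the helper name is mine, and the order n and RNG are parameters:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class ExtraBitsKeyGen {
    // FIPS 186-4 B.4.1 sketch: draw 64 bits more than the order's length,
    // then reduce mod (n - 1) and add 1, giving a scalar in [1, n-1].
    // The leftover bias is on the order of 2^-64, versus a noticeable
    // bias from reducing a same-length random value directly.
    static BigInteger generatePrivateScalar(BigInteger n, SecureRandom rng) {
        BigInteger c = new BigInteger(n.bitLength() + 64, rng);
        return c.mod(n.subtract(BigInteger.ONE)).add(BigInteger.ONE);
    }
}
```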








RFC 8032 private keys: These are definitely bit strings, and 
modeling them as integers doesn't make much sense. The only thing 
that is ever done with these private keys is that they are used as 
input to a hash function.


Again - no.  The actual private key is what you get after stage 3 of 
section 5.1.5.  E.g.: generate a random string of 32 bytes; hash it to 
help with the bad random generators (*sheesh*); interpret the hash, 
after pruning, as a little-endian integer. 


I'm not sure I understand what you are suggesting. For Ed25519, the 
initial 32-byte secret is hashed to produce 64 bytes. The first 32 
bytes are pruned, interpreted as an integer, and used to produce the 
public key. The second 32 bytes are also used in the signing operation 
and contribute to a deterministic nonce. See RFC 8032, section 5.1.6, 
step 1. Are you suggesting that we should represent all 64 bytes as an 
integer?


Sorry - I did miss this.  Unfortunately, it screws up making this work 
cleanly.  There are two ways of doing this - I'd prefer (1) but could 
live with (2):


1) The private key is a BigInteger (and that's how it gets in and out of 
the ECPrivateKeySpec as 's'), derived as in steps 1-3.  The nonce is the 
leftmost 32 bytes of the SHA-512 of the private key expressed as a 
little-endian integer.  The key thing about the deterministic nonce is 
that it's the same for the key for its signature lifetime (which I'm 
really not sure is a good thing - but it's what we have).  The way the 
nonce is derived in the RFC shows one way.  Hashing the private key 
shows another way.  I'd prefer this because you can use the same 
input/output from the spec method for the ed25519 key as the curve25519 
key.  (Do you really want two different spec classes for edxxx and 
curvexxx keys???)
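A rough sketch of what option (1)'s nonce-prefix derivation could look like (helper names are mine; note this deliberately differs from the RFC 8032 derivation, which keeps the second half of the original hash output):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.util.Arrays;

public class Option1Nonce {
    // Re-derive the 32-byte signing prefix from the scalar itself, so
    // the private key can round-trip through ECPrivateKeySpec as a
    // single BigInteger 's'.  Deterministic: the same key always yields
    // the same prefix for its signature lifetime.
    static byte[] noncePrefix(BigInteger s) throws Exception {
        byte[] le = toLittleEndian(s, 32);
        byte[] h = MessageDigest.getInstance("SHA-512").digest(le);
        return Arrays.copyOf(h, 32);         // leftmost 32 bytes
    }

    // Fixed-length little-endian encoding of a non-negative BigInteger.
    static byte[] toLittleEndian(BigInteger v, int len) {
        byte[] be = v.toByteArray();
        byte[] out = new byte[len];
        for (int i = 0; i < len && i < be.length; i++) {
            out[i] = be[be.length - 1 - i];
        }
        return out;
    }
}
```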


2) The private key is the 32-byte secret array of step 1, stored in the 
concrete class.  The input and output ECPrivateKeySpec defines a 
transform between that array and a BigInteger.  The private key 
concrete class includes storage for both the internal 's' little-endian 
integer value and the internal 'prefix' value.  This isn't as clean, 
because the scalar secret 's' isn't the same as 
ECPrivateKeySpec.getS(), but it allows for impedance matching between 
the existing classes and the strangeness of an ed25519 private key.  
Doing it this way would definitely divorce the keys from the two curve uses.


*sigh* The private key is an integer and that's really what needs to be 
represented.


Mike






Re: JCA design for RFC 7748

2017-08-18 Thread Adam Petcher

On 8/17/2017 1:44 PM, Michael StJohns wrote:

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:



Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 
1 to p-1 for the field (with some additional pruning).  32-byte 
strings (or 256-bit integers) do not map 1-1 into that space.  E.g. 
there are some actual canonical keys to which multiple (at least 2) 
32-byte strings map.  (See the pruning and clamping algorithms.)  
The NIST private key generation for EC private keys mitigates this 
bias by either (a) repeatedly generating random keys until you get one 
in the range, or (b) generating a key stream with 64 extra bits and 
reducing it mod the order of the curve.


If you are concerned about the distribution of private keys in these 
standards, then you may want to raise your concerns on the CFRG mailing 
list (c...@irtf.org). I don't have this concern, so I think we should 
implement the RFC as written.






RFC 8032 private keys: These are definitely bit strings, and modeling 
them as integers doesn't make much sense. The only thing that is ever 
done with these private keys is that they are used as input to a hash 
function.


Again - no.  The actual private key is what you get after stage 3 of 
section 5.1.5.  E.g.: generate a random string of 32 bytes; hash it to 
help with the bad random generators (*sheesh*); interpret the hash, 
after pruning, as a little-endian integer. 


I'm not sure I understand what you are suggesting. For Ed25519, the 
initial 32-byte secret is hashed to produce 64 bytes. The first 32 bytes 
are pruned, interpreted as an integer, and used to produce the public 
key. The second 32 bytes are also used in the signing operation and 
contribute to a deterministic nonce. See RFC 8032, section 5.1.6, step 
1. Are you suggesting that we should represent all 64 bytes as an integer?




Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/17/2017 7:01 PM, Xuelei Fan wrote:

On 8/17/2017 11:35 AM, Michael StJohns wrote:

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending 
this model for the curve25519 stuff isn't going to be any different 
old provider and new provider wise than is currently the case. If 
you want the new curves, you have to specify the new providers. If 
the new and old providers don't implement the same curves, you may 
need to deal with two different providers simultaneously - and 
that's not something that just happens.


I see your points.  Not binding to a provider causes problems; 
binding to a provider causes other problems.  There are a few 
complaints about these problems, and they impact real-world 
applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but more 
work) to find appropriate providers for appropriate signature/key 
agreement algorithms and curves.


I'm not sure how this applies to the compatibility impact concern.  
See more in the example below.







I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that 
actually shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or 
not it's possible to integrate new EC curves?





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 
if SunMSCAPI is present.  Later, someone found the code unusual: as 
SHA224 and the related signature algorithms are supported by the 
underlying providers, there looked to be no reason to limit the use of 
SHA224.  So SHA224 was added back, and then the compatibility issues 
came back again.  Then we removed SHA224 again if SunMSCAPI is 
present.  However, at the same time, another request asked to support 
SHA224 on Windows.  The API design itself put me in an either-or 
situation.  I would try to avoid it if possible for a new design.


This appears to be an MSCAPI issue vice a JSSE issue.
MSCAPI is fine as it does not support SHA224.  JSSE is then in a bad 
position because it cannot support SHA224 in a general way when one of 
the underlying providers supports SHA224 but another one does not.


No - I don't think both are fine.  MSCAPI could outsource algorithms it 
doesn't have internally to another provider - for a very limited set of 
classes (e.g. MessageDigest and SecureRandom) - but it would be smarter 
if JSSE just tried to see if the provider it was already using for other 
stuff had the needed algorithm (e.g. using the long form of getInstance).


In the above case - MSCAPI should realize that it can't do a 
SHA224with... signature and just throw the appropriate error.  JSSE 
should know which TLS suites MSCAPI can support.  Or at least be able to 
figure out which ones it supports by checking which JCA algorithms are 
supported.
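The "check which JCA algorithms are supported" probe suggested above might look like this (a sketch; the algorithm names used are standard JCA names, and the helper is mine):

```java
import java.security.NoSuchAlgorithmException;
import java.security.Provider;
import java.security.Signature;

public class ProviderProbe {
    // Ask the long form of getInstance whether this provider actually
    // implements the given signature algorithm, instead of letting
    // provider selection silently fall through to a different provider.
    static boolean supports(Provider p, String sigAlg) {
        try {
            Signature.getInstance(sigAlg, p);
            return true;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }
}
```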




And the JCA specifically disclaims the guaranteed ability to use 
cryptographic objects from one provider in another provider.
It's not the real problem.  The real problem is that there is a 
provider supporting the requested algorithms, so why not use it?  We 
can say the spec does not guarantee the behavior, but the application 
still has a problem.  We can also say we have a design flaw, but the 
question is still there.


There is no design flaw, there is only incorrect programming.  Basically, 
the only non-key-related crypto algorithms in the providers are hashing 
and random number generation.  While it's possible to use these across 
providers (and you might want to), special-casing things to do this 
means (for example) that your FIPS-approved signature mechanism might 
end up using a non-FIPS digest (hash) mechanism.


Every other crypto mechanism has or involves keys.  And there's no 
guarantee that keys are valid across providers.




Secondary users like the JSSE probably need to stick to a single 
provider for a given connection.


Sticking to a single provider will open other windows for different 
problems.  The JCE spec does not guarantee that one provider could 
implement all services.


An application that needs to use two different cryptographic algorithm 
providers should be doing that explicitly, not having it happen under 
the covers.  For example, I was using the BC provider to deal with 
elliptic curve stuff for a long while.  That meant I was also using BC 
to deal with certificates (because the Sun provider didn't understand 
how to deal with BC's ECKey implementation).  It was kind of a 

Re: JCA design for RFC 7748

2017-08-17 Thread Xuelei Fan

On 8/17/2017 11:35 AM, Michael StJohns wrote:

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different old 
provider and new provider wise than is currently the case.   If you 
want the new curves, you have to specify the new providers. If the 
new and old providers don't implement the same curves, you may need 
to deal with two different providers simultaneously - and that's not 
something that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but more work) 
to find appropriate providers for appropriate signature/key agreement 
algorithms and curves.


I'm not sure how this applies to the compatibility impact concern.  See 
more in the example below.







I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that actually 
shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or not 
it's possible to integrate new EC curves?





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 if 
SunMSCAPI is present.  Later, someone found the code unusual: as 
SHA224 and the related signature algorithms are supported by the 
underlying providers, there looked to be no reason to limit the use of SHA224.  
So SHA224 was added back, and then the compatibility issues came back 
again.  Then we removed SHA224 again if SunMSCAPI is present.  
However, at the same time, another request asked to support SHA224 
on Windows.  The API design itself put me in an either-or situation.  I 
would try to avoid it if possible for a new design.


This appears to be an MSCAPI issue vice a JSSE issue.
MSCAPI is fine as it does not support SHA224.  JSSE is then in a bad 
position because it cannot support SHA224 in a general way when one of 
the underlying providers supports SHA224 but another one does not.


And the JCA 
specifically disclaims the guaranteed ability to use cryptographic 
objects from one provider in another provider.
It's not the real problem.  The real problem is that there is a provider 
supporting the requested algorithms, so why not use it?  We can say the 
spec does not guarantee the behavior, but the application still has a 
problem.  We can also say we have a design flaw, but the question is 
still there.


Secondary users like 
the JSSE probably need to stick to a single provider for a given connection.


Sticking to a single provider will open other windows for different 
problems.  The JCE spec does not guarantee that one provider could 
implement all services.





Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.


I would like to treat it as two things.  One is to support new curves 
for new forms.  The other is to support named curves [1].  For the 
support of new forms, there are still significant problems to solve. 
For the support of named curves (including the current EC form), it 
looks like we are in a not-that-bad situation right now.  Will the 
named curves solution impact the support of new-curve APIs in the 
future?  I don't see the impact yet.  I may be missing something, but 
I see no reason to opt out of named curves support.




I'm not sure why you think this (the example in [1]) can't be done?

I gave the example elsewhere, but let me expand it with my comment above 
(possibly faking the hash key names - sorry):



HashMap<String, String> neededAlgs = new HashMap<>();
neededAlgs.put("Signature.EdDSA", "");
neededAlgs.put("AlgorithmParameters.EC SupportedCurves", "ed25519");

Provider[] p = Security.getProviders(neededAlgs);
if (p == null) throw new Exception("Oops");

AlgorithmParameters parameters = AlgorithmParameters.getInstance("EC", p[0]);
parameters.init(new ECGenParameterSpec("ed25519"));
ECParameterSpec ecParameters =
    parameters.getParameterSpec(ECParameterSpec.class);

return KeyFactory.getInstance("EC", p[0]).generatePublic(
    new ECPublicKeySpec(new ECPoint(x, y), ecParameters));



Hm, good example!

But it is really too heavyweight to use for general application development.

In the example, two crypto operations ("EC" AlgorithmParameters and "EC" 

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different old 
provider and new provider wise than is currently the case.   If you 
want the new curves, you have to specify the new providers. If the 
new and old providers don't implement the same curves, you may need 
to deal with two different providers simultaneously - and that's not 
something that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but more work) 
to find appropriate providers for appropriate signature/key agreement 
algorithms and curves.






I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that actually 
shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or not 
it's possible to integrate new EC curves?





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 if 
SunMSCAPI is present.  Later, someone found the code unusual: as 
SHA224 and the related signature algorithms are supported by the 
underlying providers, there looked to be no reason to limit the use of SHA224.  
So SHA224 was added back, and then the compatibility issues came back 
again.  Then we removed SHA224 again if SunMSCAPI is present.  
However, at the same time, another request asked to support SHA224 
on Windows.  The API design itself put me in an either-or situation.  I 
would try to avoid it if possible for a new design.


This appears to be an MSCAPI issue vice a JSSE issue.  And the JCA 
specifically disclaims the guaranteed ability to use cryptographic 
objects from one provider in another provider.   Secondary users like 
the JSSE probably need to stick to a single provider for a given connection.





Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.


I would like to treat it as two things.  One is to support new curves 
for new forms.  The other is to support named curves [1].  For the 
support of new forms, there are still significant problems to solve. 
For the support of named curves (including the current EC form), it 
looks like we are in a not-that-bad situation right now.  Will the 
named curves solution impact the support of new-curve APIs in the 
future?  I don't see the impact yet.  I may be missing something, but 
I see no reason to opt out of named curves support.




I'm not sure why you think this (the example in [1]) can't be done?

I gave the example elsewhere, but let me expand it with my comment above 
(possibly faking the hash key names - sorry):



HashMap<String, String> neededAlgs = new HashMap<>();
neededAlgs.put("Signature.EdDSA", "");
neededAlgs.put("AlgorithmParameters.EC SupportedCurves", "ed25519");

Provider[] p = Security.getProviders(neededAlgs);
if (p == null) throw new Exception("Oops");

AlgorithmParameters parameters = AlgorithmParameters.getInstance("EC", p[0]);
parameters.init(new ECGenParameterSpec("ed25519"));
ECParameterSpec ecParameters =
    parameters.getParameterSpec(ECParameterSpec.class);

return KeyFactory.getInstance("EC", p[0]).generatePublic(
    new ECPublicKeySpec(new ECPoint(x, y), ecParameters));


If you're talking more generally, NamedCurves should be a form of 
ECParameterSpec so you can read the name from the key, but there's no 
support for adding names to that spec.  Maybe extend it?  E.g.:


package java.security.spec;

public class NamedECParameterSpec extends ECParameterSpec {

    private List<String> names;
    private List<String> oids;

    public NamedECParameterSpec(EllipticCurve curve, ECPoint g,
            BigInteger n, int h,
            Collection<String> names, Collection<String> oids) {
        super(curve, g, n, h);
        if (names != null) {
            this.names = new ArrayList<>(names);
        }
        if (oids != null) {
            this.oids = new ArrayList<>(oids);
        }
    }

    public Collection<String> getNames() {
        if (names == null)
            return Collections.emptyList();
        else
            return Collections.unmodifiableList(names);
    }

    // ... getOids() etc.

   This makes it easier to get exactly what you want from a key. 
Assuming the provider 

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:

On 8/16/2017 3:17 PM, Michael StJohns wrote:


On 8/16/2017 11:18 AM, Adam Petcher wrote:


My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate 
representation for the algorithms involved (so there isn't a lot of 
unnecessary conversion). This is the case for public/private keys in 
RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to 
little-endian bit strings.)


Oops, minor correction. Here A is a point, so ENC is a function from 
points to little-endian bit strings.


2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 
functions, as described in the RFC, take bit strings as input and 
produce bit strings as output.


Thanks for making my point for me.  The internal representation of 
the public point is an integer.  It's only when encoding or decoding 
that it gets externally represented as an array of bytes.  (And yes, 
I understand that the RFC defines an algorithm using little-endian 
byte array representations of the integers - but that's the 
implementation's call, not the API's.)


With respect to the output of the KeyAgreement algorithm - your (2) 
above - the transmission representation (e.g. the encoded public key) 
is a little-endian byte array representation of an integer.  The 
internal representation is - wait for it - an integer.


I have no problems at all with any given implementation using little-endian 
math internally.  For the purposes of using the JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally, if necessary.  Implement the conversion methods 
for the factory and for dealing with the existing EC classes.  Maybe 
get BigInteger extended to handle (natively) little-endian 
representation (as well as the fixed-length outputs necessary for things 
like ECDH).
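The encode/decode boundary described above might be sketched like this (helper names are mine; the fixed-length form matters for things like ECDH shared-secret output):

```java
import java.math.BigInteger;

public class LittleEndian {
    // Interpret a little-endian byte string as a non-negative integer.
    static BigInteger fromLE(byte[] le) {
        byte[] be = new byte[le.length];
        for (int i = 0; i < le.length; i++) {
            be[i] = le[le.length - 1 - i];
        }
        return new BigInteger(1, be);    // sign forced non-negative
    }

    // Fixed-length little-endian encoding; high bytes are zero-padded.
    static byte[] toLE(BigInteger v, int len) {
        byte[] be = v.toByteArray();     // big-endian, maybe a leading 0
        byte[] out = new byte[len];
        for (int i = 0; i < len && i < be.length; i++) {
            out[i] = be[be.length - 1 - i];
        }
        return out;
    }
}
```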




All good points, and I think BigInteger may be a reasonable 
representation to use for public/private key values. I'm just not sure 
that it is better than byte arrays. I'll share some relevant 
information that affects this decision.


First off, one of the goals of RFC 7748 and 8032 is to address some of 
the implementation challenges related to ECC. These algorithms are 
designed to eliminate the need for checks at various stages, and to 
generally make implementation bugs less likely. These improvements are 
motivated by all the ECC implementation bugs that have emerged in the 
last ~20 years. I mention this because I think it is important that we 
choose an API and implementation that allows us to benefit from these 
improvements in the standards. That means we shouldn't necessarily 
follow all the existing ECC patterns in the API and implementation.


No - it means that the authors of the RFCs have a bias toward having 
their code be the only code.  As I note below, I don't actually think 
they got everything right.  The underlying math is really what matters, 
and the API should be able to handle any implementation that gets the 
math correct.




Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 1 
to p-1 for the field (with some additional pruning).  32-byte strings 
(or 256-bit integers) do not map 1-1 into that space.  E.g. there are 
some actual canonical keys to which multiple (at least 2) 32-byte 
strings map.  (See the pruning and clamping algorithms.)  The NIST 
private key generation for EC private keys mitigates this bias by either 
(a) repeatedly generating random keys until you get one in the range, or 
(b) generating a key stream with 64 extra bits and reducing it mod the 
order of the curve.
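The clamping in question (RFC 7748's scalar decoding for X25519) can be sketched as below; because the three low bits and the top bit are forced, distinct 32-byte strings collapse onto the same effective scalar:

```java
import java.util.Arrays;

public class Clamp25519 {
    // RFC 7748 decodeScalar25519: clear the low 3 bits (make the scalar
    // a multiple of the cofactor 8), clear bit 255, set bit 254.
    // Inputs differing only in the forced bits yield the same scalar.
    static byte[] clamp(byte[] k) {
        byte[] s = k.clone();
        s[0] &= (byte) 0xF8;
        s[31] &= (byte) 0x7F;
        s[31] |= (byte) 0x40;
        return s;
    }
}
```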



If we use representations other than byte strings in the API, then we 
should ensure that our representations have the same properties (e.g. 
every BigInteger is a valid public key).


It's best to talk about each type on its own. Of course, one of the 
benefits of using bit strings is that we may have the option of using 
the same class/interface in the API to hold all of these.


RFC 7748 public keys: I think we can reasonably use BigInteger to hold 
public key values. One minor issue is that we need to specify how 
implementations should handle non-canonical values (numbers that are 
less than 0 or greater than p-1). This does not seem like a huge 
issue, though, and the existing ECC API has the same issue. Another 
minor issue is that modeling this as a BigInteger may encourage 
implementations to use BigInteger in the RFC 7748 Montgomery 

Re: JCA design for RFC 7748

2017-08-17 Thread Xuelei Fan

On 8/17/2017 8:25 AM, Michael StJohns wrote:

On 8/16/2017 12:31 PM, Xuelei Fan wrote:

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Javadoc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", and could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve 
in the algorithm name, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default 
contract for public keys is X.509 and the default for private keys 
is PKCS#8.  Almost all uses of the encoded formats are related to 
PKIX functions.  (See the javadoc for PublicKey.)


I'm concerned about this, too.  Ideally, we want PKI code to handle 
these new keys without modification.  The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.
I have not reached a conclusion, but I was wondering: if almost all 
uses of the encoded formats are related to PKIX functions, is it 
still a priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how 
to *simplify* the translation between formats (Raw, PKIX, XML and 
JSON) in applications.  I don't worry about the translation too much, 
as it is doable with our current proposal, just not as 
straightforward as exposing the RAW public key.


If we want to simplify the translation, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define 
a new key spec.  The encoded form for X.509 is:


 SubjectPublicKeyInfo ::= SEQUENCE {
 algorithm AlgorithmIdentifier,
 subjectPublicKey BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.
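For the fixed-format new-curve keys, extracting that subjectPublicKey field does not need a general ASN.1 parser, assuming the CURDLE-style encoding (no algorithm parameters, zero unused bits in the BIT STRING) - a sketch, with the helper name mine:

```java
import java.util.Arrays;

public class RawKey {
    // The SPKI prefix for these keys is a fixed algorithm identifier,
    // so the raw key is simply the trailing keyLen bytes (32 for
    // X25519/Ed25519; 56/57 for the 448 variants).
    static byte[] rawFromSpki(byte[] spki, int keyLen) {
        return Arrays.copyOfRange(spki, spki.length - keyLen, spki.length);
    }
}
```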


However, I'm not very sure how commonly the translation is required, 
and whether it is beyond the threshold to be a public API in the JRE.


There's a proposed defined format for PKIX keys on the new curves - the 
IETF CURDLE working group is working through that.  JCA should use that 
and not invent something new.






>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if it's handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new 
provider together at the same time.  The JDK is a multiple-providers-coexisting 
environment.  We cannot guarantee that old code only uses old curves 
(old providers) and new code only uses new curves (new providers).


I did see this email and commented on it.  I think you're still missing 
the point that not every provider implements (or is required to 
implement) every curve.  And in fact I argued that it was somewhat 
stupid that the underlying C code in the SunEC provider was at odds with 
the code on the SunEC Java side with respect to known curves and how 
they were handled.  (Subject was RFR 8182999: SunEC throws 
ProviderException on invalid curves.)



Consider - currently, if I don't specify a provider, and there are two 
providers, and the lower-priority provider implements "curveFoobar" but 
the higher-priority provider does not, and I use an ECGenParameterSpec 
of "curveFoobar" to try to generate a key, I get a failure because I 
got the higher-priority provider (which gets selected because JCA 
doesn't yet know that I want curveFoobar), which doesn't do that curve.  
I need to specify the same provider throughout the entire process to 
make sure things work as expected, AND I need to specify the provider 
that implements the curve I want.



This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, 
old-provider-and-new-provider-wise, than is currently the case.  If you 
want the new curves, you have to specify the new providers.  If the new 
and old providers don't implement the same curves, you may need to deal 
with two different providers simultaneously - and that's not something 
that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.



I don't think your concerns are valid.  I may still be missing something 
here - but would ask for a real-world example that actually shows breakage.



I happened to have a real-world example.  See
   https://bugs.openjdk.java.net/browse/JDK-8064330

This is an 

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/16/2017 12:31 PM, Xuelei Fan wrote:

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Javadoc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", and could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve 
in the algorithm name, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec​() could do it.
Um... I think that doesn't make a lot of sense.  The default 
contract for public keys is X.509 and the default for private keys 
is PKCS#8. Almost all uses of the encoded formats are related to 
PKIX related functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle 
these new keys without modification. The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.
I have not reached a conclusion, but I was wondering: if almost all uses 
of the encoded formats are related to PKIX functions, is it still a 
priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how 
to *simplify* the conversion between formats (Raw, PKIX, XML and 
JSON) in applications.  I don't worry about the conversion too much, 
as it is doable with our current proposal, just not as 
straightforward as exposing the RAW public key.


If we want to simplify the conversion, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define a 
new key spec.  The encoded form for X.509 is:


 SubjectPublicKeyInfo ::= SEQUENCE {
 algorithm AlgorithmIdentifier,
 subjectPublicKey BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.
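That extraction - from the full X.509 encoding down to the bare 
subjectPublicKey - can be sketched with a minimal DER walk over the 
SubjectPublicKeyInfo structure above (helper names here are hypothetical, 
and the parsing is deliberately minimal):

```java
import java.security.KeyPairGenerator;
import java.security.spec.ECGenParameterSpec;
import java.util.Arrays;

public class SpkiRaw {

    // Minimal DER walk over a SubjectPublicKeyInfo encoding: enter the
    // outer SEQUENCE, skip the AlgorithmIdentifier, and return the
    // contents of the subjectPublicKey BIT STRING (dropping the
    // unused-bits octet). Illustrative only; a real implementation must
    // validate tags, lengths, and the unused-bits count strictly.
    static byte[] rawKey(byte[] spki) {
        int[] pos = {0};
        readHeader(spki, pos, 0x30);              // SubjectPublicKeyInfo
        int algLen = readHeader(spki, pos, 0x30); // AlgorithmIdentifier
        pos[0] += algLen;                         // skip algorithm + params
        int bitLen = readHeader(spki, pos, 0x03); // subjectPublicKey
        pos[0]++;                                 // skip unused-bits octet
        return Arrays.copyOfRange(spki, pos[0], pos[0] + bitLen - 1);
    }

    static int readHeader(byte[] b, int[] pos, int tag) {
        if ((b[pos[0]++] & 0xFF) != tag)
            throw new IllegalArgumentException("unexpected tag");
        int len = b[pos[0]++] & 0xFF;
        if (len > 0x7F) {                         // long-form length
            int n = len & 0x7F, v = 0;
            while (n-- > 0) v = (v << 8) | (b[pos[0]++] & 0xFF);
            len = v;
        }
        return len;
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        byte[] raw = rawKey(kpg.generateKeyPair().getPublic().getEncoded());
        // For P-256 the BIT STRING holds an uncompressed point:
        // 0x04 || X || Y, 65 bytes in total.
        System.out.println(raw.length + " " + (raw[0] & 0xFF));
    }
}
```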


However, I'm not sure how commonly the conversion is required, or 
whether it meets the threshold for a public API in the JRE.


There's a proposed defined format for PKIX keys on the new curves - the 
IETF CURDLE working group is working through that.  JCA should use that 
and not invent something new.






>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if it's handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new provider 
together at the same time.  The JDK is an environment where multiple 
providers coexist.  We cannot guarantee that old code only uses old 
curves (old providers) and new code only uses new curves (new providers).


I did see this email and commented on it.  I think you're still missing 
the point that not every provider implements (or is required to 
implement) every curve.  And in fact I argued that it was somewhat 
stupid that the underlying C code in the SunEC provider was at odds with 
the code in the SunEC java side with respect to known curves and how 
they were handled. (Subject was RFR 8182999: SunEC throws 
ProviderException on invalid curves).



Consider - currently if I don't specify a provider, and there are two 
providers, and the lower priority provider implements "curveFoobar" but 
the higher priority provider does not, and I use an ECGenParameterSpec 
of "curveFoobar" to try and generate a key, I get a failure because I 
got the higher priority provider (which gets selected because JCA 
doesn't yet know that I want curveFoobar ) that doesn't do that curve.  
I need to specify the same provider throughout the entire process to 
make sure things work as expected AND I need to specify the provider 
that implements the curve I want.



This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, old 
provider and new provider wise, than is currently the case.   If you want
the new curves, you have to specify the new providers.  If the new and 
old providers don't implement the same curves, you may need to deal with 
two different providers simultaneously - and that's not something that 
just happens.


I don't think your concerns are valid.  I may still be missing something 
here - but would ask for a real-world example that actually shows breakage.


Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.



Mike







There is an example in my August 10 reply:

http://mail.openjdk.java.net/pipermail/security-dev/2017-August/016199.html 



Xuelei





Re: JCA design for RFC 7748

2017-08-17 Thread Adam Petcher

On 8/16/2017 3:17 PM, Michael StJohns wrote:


On 8/16/2017 11:18 AM, Adam Petcher wrote:


My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate 
representation for the algorithms involved (so there isn't a lot of 
unnecessary conversion). This is the case for public/private keys in 
RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to 
little-endian bit strings.)


Oops, minor correction. Here A is a point, so ENC is a function from 
points to little-endian bit strings.


2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 functions, 
as described in the RFC, take bit strings as input and produce bit 
strings as output.


Thanks for making my point for me.  The internal representation of the 
public point is an integer.  It's only when encoding or decoding that 
it gets externally represented as an array of bytes.  (And yes, I 
understand that the RFC defines an algorithm using little endian byte 
array representations of the integers - but that's the 
implementation's call, not the API).


With respect to the output of the KeyAgreement algorithm - your (2) 
above, the transmission representation (e.g. the encoded public key) 
is little endian byte array representation of an integer.  The 
internal representation is - wait for it - integer.


I have no problems at all with any given implementation using little 
endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary. Implement the conversion methods for 
the factory and for dealing with the existing EC classes.   Maybe get 
BigInteger to be extended to handle (natively) littleEndian 
representation (as well as fixed length outputs necessary for things 
like ECDH).
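The translation described above - little-endian bytes at the encoding 
edges, BigInteger inside - can be sketched as follows (helper names are 
hypothetical; the fixed-length output matters for ECDH-style uses):

```java
import java.math.BigInteger;
import java.util.Arrays;

public class LittleEndian {

    // Decode an RFC 7748-style little-endian byte array into a
    // non-negative BigInteger (signum 1 avoids sign-bit surprises).
    static BigInteger fromLittleEndian(byte[] le) {
        byte[] be = new byte[le.length];
        for (int i = 0; i < le.length; i++) {
            be[i] = le[le.length - 1 - i];
        }
        return new BigInteger(1, be);
    }

    // Encode a BigInteger as a fixed-length little-endian array; the
    // fixed length gives stable wire output regardless of magnitude.
    static byte[] toLittleEndian(BigInteger v, int length) {
        byte[] be = v.toByteArray();     // big-endian, may carry a sign octet
        byte[] le = new byte[length];    // zero-padded to the fixed length
        for (int i = 0; i < be.length && i < length; i++) {
            le[i] = be[be.length - 1 - i];
        }
        return le;
    }

    public static void main(String[] args) {
        byte[] u = new byte[32];
        u[0] = 9;                        // the X25519 base point, u = 9
        BigInteger v = fromLittleEndian(u);
        System.out.println(v);                                        // 9
        System.out.println(Arrays.equals(toLittleEndian(v, 32), u));  // true
    }
}
```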




All good points, and I think BigInteger may be a reasonable 
representation to use for public/private key values. I'm just not sure 
that it is better than byte arrays. I'll share some relevant information 
that affects this decision.


First off, one of the goals of RFC 7748 and 8032 is to address some of 
the implementation challenges related to ECC. These algorithms are 
designed to eliminate the need for checks at various stages, and to 
generally make implementation bugs less likely. These improvements are 
motivated by all the ECC implementation bugs that have emerged in the 
last ~20 years. I mention this because I think it is important that we 
choose an API and implementation that allows us to benefit from these 
improvements in the standards. That means we shouldn't necessarily 
follow all the existing ECC patterns in the API and implementation.


Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1] If we use 
representations other than byte strings in the API, then we should 
ensure that our representations have the same properties (e.g. every 
BigInteger is a valid public key).


It's best to talk about each type on its own. Of course, one of the 
benefits of using bit strings is that we may have the option of using 
the same class/interface in the API to hold all of these.


RFC 7748 public keys: I think we can reasonably use BigInteger to hold 
public key values. One minor issue is that we need to specify how 
implementations should handle non-canonical values (numbers that are 
less than 0 or greater than p-1). This does not seem like a huge issue, 
though, and the existing ECC API has the same issue. Another minor issue 
is that modeling this as a BigInteger may encourage implementations to 
use BigInteger in the RFC 7748 Montgomery ladder. This would be 
unfortunate because it would leak sensitive information through timing 
channels.
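One way such a specification could pin down non-canonical values - an 
assumed convention for illustration, not something mandated by RFC 7748 
or the existing API - is reduction into [0, p-1]:

```java
import java.math.BigInteger;

public class CanonicalU {

    static final BigInteger P25519 =
            BigInteger.valueOf(2).pow(255).subtract(BigInteger.valueOf(19));

    // Hypothetical convention: map any BigInteger into the canonical
    // range [0, p-1]. BigInteger.mod() always returns a non-negative
    // result, so negative inputs land in range as well.
    static BigInteger canonicalize(BigInteger u) {
        return u.mod(P25519);
    }

    public static void main(String[] args) {
        System.out.println(canonicalize(P25519.add(BigInteger.ONE)));  // 1
        System.out.println(canonicalize(BigInteger.valueOf(-1))
                .equals(P25519.subtract(BigInteger.ONE)));             // true
    }
}
```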


RFC 7748 private keys: This one is a bit more difficult. RFC 7748 
defines a "clamping" operation that ensures that the integers 
corresponding to bit strings have certain properties (e.g. they are a 
multiple of the cofactor). So if we use BigInteger for private keys in 
the API, we need to specify whether the value is clamped or unclamped. 
If an unclamped value is treated as clamped, then this can result in 
security and correctness issues. Also, the RFC treats private keys as 
bit strings---they are not used in any integer operations. So modeling 
them with byte arrays seems just as valid as modeling them with BigInteger.
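For reference, the clamping in question is the X25519 scalar decoding 
from RFC 7748, transcribed directly:

```java
import java.util.Arrays;

public class Clamp {

    // decodeScalar25519 clamping from RFC 7748, section 5: clear the low
    // three bits (forcing a multiple of the cofactor 8), clear bit 255,
    // and set bit 254. Treating an unclamped value as clamped skips
    // exactly these guarantees.
    static byte[] clamp25519(byte[] k) {
        byte[] c = Arrays.copyOf(k, 32);
        c[0] &= (byte) 248;
        c[31] &= (byte) 127;
        c[31] |= (byte) 64;
        return c;
    }

    public static void main(String[] args) {
        byte[] k = new byte[32];
        Arrays.fill(k, (byte) 0xFF);
        byte[] c = clamp25519(k);
        System.out.println((c[0] & 0xFF) + " " + (c[31] & 0xFF));  // 248 127
    }
}
```

Note that clamping is idempotent, so a clamped value passed through it 
again is unchanged - which is why the clamped/unclamped ambiguity only 
bites when an unclamped value is *assumed* to be clamped.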


RFC 8032 public keys: The analysis here is similar to RFC 7748 public 
keys, except we also need to store the (probably compressed) x 
coordinate. So if we don't use byte arrays, we would need to use 
something 

Re: JCA design for RFC 7748

2017-08-16 Thread Michael StJohns

On 8/16/2017 11:18 AM, Adam Petcher wrote:

On 8/15/2017 7:05 PM, Michael StJohns wrote:

On 8/15/2017 1:43 PM, Xuelei Fan wrote:

On 8/11/2017 7:57 AM, Adam Petcher wrote:


I'm also coming to the conclusion that using X.509 encoding for 
this sort of interoperability is too onerous, and we should come up 
with something better. Maybe we should add a new general-purpose 
interface that exposes some structure in an algorithm-independent 
way. Something like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.


This is sort of the moral equivalent of using the TXT RR record in 
DNS and the arguments are similar.


This is a bad idea.


I'm not a DNS expert, so I apologize in advance if I misunderstood 
your argument. What I think you are saying is that it is bad to store 
something in a string or byte array which has no semantics, and it is 
better to store information in a more structured way that has a clear 
semantics. I agree with this.





My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate representation 
for the algorithms involved (so there isn't a lot of unnecessary 
conversion). This is the case for public/private keys in RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to 
little-endian bit strings.)
2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 functions, 
as described in the RFC, take bit strings as input and produce bit 
strings as output.


Thanks for making my point for me.  The internal representation of the 
public point is an integer.  It's only when encoding or decoding that it 
gets externally represented as an array of bytes.  (And yes, I 
understand that the RFC defines an algorithm using little endian byte 
array representations of the integers - but that's the implementation's 
call, not the API).


With respect to the output of the KeyAgreement algorithm - your (2) 
above, the transmission representation (e.g. the encoded public key) is 
little endian byte array representation of an integer.  The internal 
representation is - wait for it - integer.


I have no problems at all with any given implementation using little 
endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary.  Implement the conversion methods for 
the factory and for dealing with the existing EC classes.   Maybe get 
BigInteger to be extended to handle (natively) littleEndian 
representation (as well as fixed length outputs necessary for things 
like ECDH).









So I think that a byte array is the correct representation for 
public/private keys in these two RFCs.


Nope.







I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm characteristic, and using "RAW" as the encoding format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8.  
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle 
these new keys without modification. The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.


There are two different things going on here - many encodings (e.g. for 
EC public keys there's the fixed-length X and fixed-length Y byte-array 
big-endian big-integer form, as well as the X9.62 representation 
indicating both compressed and uncompressed points, and then the 
wrapping of those in SubjectPublicKeyInfo; then there's the raw 
signature vs the ASN.1 SEQUENCE OF INTEGER version).  JCA uses the 
X.509 stuff quite a bit to deal with all of the 
CertificatePathValidation and Certificate validation things.  But the 
"rawer" stuff such as DH sometimes uses bare points with the curve 
being understood by context (see for example the Javacard KeyAgreement 
classes).


In the current case, we still need a way to "sign" and "carry" the new 
public points and to be as interoperable as possible with all of the 
"old" stuff.  I guess you could come

Re: JCA design for RFC 7748

2017-08-16 Thread Xuelei Fan

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm characteristic, and using "RAW" as the encoding format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8. 
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle these 
new keys without modification. The javadoc wording makes it a little 
unclear whether public keys *must* use X.509 encoding, but using other 
encodings for public keys would probably be surprising.
I have not reached a conclusion, but I was wondering: if almost all uses 
of the encoded formats are related to PKIX functions, is it still a 
priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how to 
*simplify* the conversion between formats (Raw, PKIX, XML and JSON) 
in applications.  I don't worry about the conversion too much, as it is 
doable with our current proposal, just not as straightforward as 
exposing the RAW public key.


If we want to simplify the conversion, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define a 
new key spec.  The encoded form for X.509 is:


 SubjectPublicKeyInfo ::= SEQUENCE {
 algorithm AlgorithmIdentifier,
 subjectPublicKey BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.


However, I'm not sure how commonly the conversion is required, or 
whether it meets the threshold for a public API in the JRE.


On 8/15/2017 4:05 PM, Michael StJohns wrote:
> Here's what I think should happen:
>
> 1) ECPoint gets a document modification to handle compressed points. The
> X is the X value, the Y is -1, 0 or 1 depending on positive negative or
> don't care for the sign of the Y value.   (This means that the byte
> array public key gets handled as a BigInteger). Any value other than 0,
> -1 or 1 for Y indicates a normal X Y point.
>
> 2) EllipticCurve gets a document modification to describe the mappings
> for Edwards and Montgomery curves as discussed previously.
>
> 3) Two classes are added to java.security.spec:
> java.security.spec.ECFieldEdwards and
> java.security.spec.ECFieldMontgomery - both of which implement ECField.
>
> 4) The ECFieldEdwards/Montgomery classes contain an indication of
> whether the curve is signature only, key agreement only or both. They
> also contain any parameters that can't be mapped in EllipticCurve
>
> 5) Using the above, someone specifies the curve sets for the four new
> curves as ECParameterSpec's and we iterate until we're satisfied we've
> got a standard public representation that can be used for other than
> the 4 curves.
>
> 6) Until the JCA is updated, a provider for the new curves can use its
> own concrete ECField classes and later make them be subclasses of the
> java.security.spec.ECFieldMontgomery etc.  It's not ideal, but it does
> let the guys who are chomping at the bit do an implementation while
> waiting for the JCA to be updated.
>
> 7) No other changes to the JCA are made.  The providers implement
> SubjectPublicKeyInfo and PKCS8 as the standard encodings  using the
> definitions in
> https://tools.ietf.org/html/draft-ietf-curdle-pkix-newcurves-00 and
> RFC5480.
>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if it's handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new provider 
together at the same time.  The JDK is an environment where multiple 
providers coexist.  We cannot guarantee that old code only uses old 
curves (old providers) and new code only uses new curves (new providers).


There is an example in my August 10 reply:

http://mail.openjdk.java.net/pipermail/security-dev/2017-August/016199.html

Xuelei


Re: JCA design for RFC 7748

2017-08-16 Thread Adam Petcher

On 8/15/2017 7:05 PM, Michael StJohns wrote:

On 8/15/2017 1:43 PM, Xuelei Fan wrote:

On 8/11/2017 7:57 AM, Adam Petcher wrote:


I'm also coming to the conclusion that using X.509 encoding for this 
sort of interoperability is too onerous, and we should come up with 
something better. Maybe we should add a new general-purpose 
interface that exposes some structure in an algorithm-independent 
way. Something like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.


This is sort of the moral equivalent of using the TXT RR record in DNS 
and the arguments are similar.


This is a bad idea.


I'm not a DNS expert, so I apologize in advance if I misunderstood your 
argument. What I think you are saying is that it is bad to store 
something in a string or byte array which has no semantics, and it is 
better to store information in a more structured way that has a clear 
semantics. I agree with this.


My intention with this ByteArrayValue is to only use it for information 
that has a clear semantics when represented as a byte array, and a byte 
array is a convenient and appropriate representation for the algorithms 
involved (so there isn't a lot of unnecessary conversion). This is the 
case for public/private keys in RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to little-endian 
bit strings.)
2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 functions, as 
described in the RFC, take bit strings as input and produce bit strings 
as output.


So I think that a byte array is the correct representation for 
public/private keys in these two RFCs.






I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm characteristic, and using "RAW" as the encoding format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8.  
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle these 
new keys without modification. The javadoc wording makes it a little 
unclear whether public keys *must* use X.509 encoding, but using other 
encodings for public keys would probably be surprising.




To be JCA compliant you need all of: 

Are you describing hard compliance requirements or more informal 
expectations? If it's the former, where are these requirements documented?




Re: JCA design for RFC 7748

2017-08-16 Thread Adam Petcher

On 8/15/2017 5:29 PM, Anders Rundgren wrote:


Does this mean that the following is correct?

EC public key:

public interface ECPublicKey
extends PublicKey, ECKey


CFRG/RFC 7748 public key:

public class TheActualImplementationClass
implements PublicKey, ByteArrayKey


Basically, yes. My latest prototype has something like:
class XDHPublicKeyImpl extends X509Key implements ByteArrayKey, PublicKey

So it supports X.509 encoding through getEncoded(), and raw encoding 
through ByteArrayKey. But perhaps both ByteArrayKey and X.509 encoding 
are not necessary---see Xuelei's latest suggestion on this thread.





If so, it should work. Documentation seems a bit less obvious though
which is why I advocated a more direct approach.

Cheers,
Anders





Re: JCA design for RFC 7748

2017-08-15 Thread Michael StJohns

On 8/15/2017 1:43 PM, Xuelei Fan wrote:

On 8/11/2017 7:57 AM, Adam Petcher wrote:

On 8/10/2017 9:46 PM, Michael StJohns wrote:


On 8/10/2017 7:36 PM, Xuelei Fan wrote:
Right now there are 3 major APIs (JCA, PKCS11 and Microsoft CSP) 
and at least 4 major representational domains (Raw, PKIX, XML and 
JSON).  In the current situation, I can take a JCA EC Public key 
and convert it to pretty much any of the other APIs or 
representations. For much of the hardware based stuff (ie, smart 
cards), I go straight from JCA into raw and vice versa. Assuming 
you left the "getEncoded()" stuff in the API and the encoding was 
PKIX, I'd have to encode to PKIX, decode the PKIX to extract the 
actual raw key or encode a PKIX blob and hope that the KeyFactory 
stuff actually worked.


It's not just support of arbitrary keys, but the ability to 
convert things without having to do multiple steps or stages.


Good point!  It would be nice if conversion between two formats 
could be done simply.  Using X.509 encoding is doable as you said 
above, but maybe there is room for improvement.


I need more time to think about it.  Please let me know if anyone 
has a solution to simplify the conversion while keeping the 
proposed named-curves solution.




I'm also coming to the conclusion that using X.509 encoding for this 
sort of interoperability is too onerous, and we should come up with 
something better. Maybe we should add a new general-purpose interface 
that exposes some structure in an algorithm-independent way. 
Something like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.


This is sort of the moral equivalent of using the TXT RR record in DNS 
and the arguments are similar.


This is a bad idea.



I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm characteristic, and using "RAW" as the encoding format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8.  
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


Raw formats are used pretty much exclusively for symmetric keys.

The KeyFactory.getKeySpec() requires an actual KeySpec definition which 
is different than the ByteArrayValue stuff being proposed above.


To be JCA compliant you need all of:

1) A master interface (e.g. ECKey) that's a marker interface for key 
material

2) A public key interface that extends PublicKey and the master interface
3) A private key interface that extends PrivateKey and the master interface
4) A public key specification that implements KeySpec
5) A private key specification that implements KeySpec
6) A generation parameter specification that implements 
AlgorithmParameterSpec

7) A key parameter specification (if required by the master interface)
8) A factory class that implements KeyFactoryImpl that implements 
AlgorithmParameterSpec

9) A common encoding for each of the public and private keys
10) A transform to/from the public and private key specs


Here's what I think should happen:

1) ECPoint gets a document modification to handle compressed points.  
The X is the X value, the Y is -1, 0 or 1 depending on positive negative 
or don't care for the sign of the Y value.   (This means that the byte 
array public key gets handled as a BigInteger). Any value other than 0, 
-1 or 1 for Y indicates a normal X Y point.


2) EllipticCurve gets a document modification to describe the mappings 
for Edwards and Montgomery curves as discussed previously.


3) Two classes are added to java.security.spec: 
java.security.spec.ECFieldEdwards and 
java.security.spec.ECFieldMontgomery - both of which implement ECField.


4) The ECFieldEdwards/Montgomery classes contain an indication of 
whether the curve is signature only, key agreement only or both. They 
also contain any parameters that can't be mapped in EllipticCurve


5) Using the above, someone specifies the curve sets for the four new 
curves as ECParameterSpec's and we iterate until we're satisfied we've 
got a standard public representation that can be used for other than the 
4 curves.


6) Until the JCA is updated, a provider for the new curves can use its 
own concrete ECField classes and later make them be subclasses of the 
java.security.spec.ECFieldMontgomery etc.  It's not ideal, but it does 
let the guys who are chomping at the bit do an implementation while 
waiting for the JCA to be updated.


7) No other 
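The ECPoint sentinel convention from item (1) above could be expressed 
as a sketch (a rendering of the proposal, not current JCA behavior; the 
helper name is hypothetical):

```java
import java.math.BigInteger;
import java.security.spec.ECPoint;

public class CompressedPointProposal {

    // Proposed convention from item (1): a Y of -1, 0, or 1 marks a
    // compressed point (negative, "don't care", or positive sign of Y);
    // any other Y is an ordinary affine point. Note the ambiguity this
    // leaves for curves where a genuine point has Y in {-1, 0, 1}.
    static boolean isCompressed(ECPoint p) {
        BigInteger y = p.getAffineY();
        return y.abs().compareTo(BigInteger.ONE) <= 0;
    }

    public static void main(String[] args) {
        // A Montgomery u-coordinate carried in X, with Y = 0 ("don't care").
        ECPoint u = new ECPoint(BigInteger.valueOf(9), BigInteger.ZERO);
        System.out.println(isCompressed(u));  // true
    }
}
```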

Re: JCA design for RFC 7748

2017-08-15 Thread Xuelei Fan

On 8/11/2017 7:57 AM, Adam Petcher wrote:

On 8/10/2017 9:46 PM, Michael StJohns wrote:


On 8/10/2017 7:36 PM, Xuelei Fan wrote:
Right now there are 3 major APIs  (JCA, PKCS11 and Microsoft CSP) 
and at least 4 major representational domains (Raw, PKIX, XML and 
JSON).  In the current situation, I can take a JCA EC Public key and 
convert it to pretty much any of the other APIs or representations. 
For much of the hardware based stuff (ie, smart cards), I go 
straight from JCA into raw and vice versa. Assuming you left the 
"getEncoded()" stuff in the API and the encoding was PKIX, I'd have 
to encode to PKIX, decode the PKIX to extract the actual raw key or 
encode a PKIX blob and hope that the KeyFactory stuff actually worked.


It's not just support of arbitrary keys, but the ability to convert 
things without having to do multiple steps or stages.


Good point!  It would be nice if conversion between two formats 
could be done simply.  Using X.509 encoding is doable as you said 
above, but maybe there is room for improvement.


I need more time to think about it.  Please let me know if anyone 
has a solution to simplify the conversion while keeping the 
proposed named-curves solution.




I'm also coming to the conclusion that using X.509 encoding for this 
sort of interoperability is too onerous, and we should come up with 
something better. Maybe we should add a new general-purpose interface 
that exposes some structure in an algorithm-independent way. Something 
like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.

I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm characteristic, and using "RAW" as the encoding format.  
If X.509 encoding is required, KeyFactory.getKeySpec() could do it.


Xuelei

The actual value is encoded, but the parameters are exposed, so this 
interface would work well for any value that is generally represented 
using a single encoded value (like public/private keys in RFC 7748, and 
8032). This could be used with the new NamedParameterSpec class to 
identify the parameters by name. It could also be used with other 
parameter specs to specify curve coefficients.


Of course, you may still need to look up curve name/OID/coefficients 
based on the parameters, but at least this solution provides direct 
access to the parameters and raw value, and you wouldn't need to go 
through X.509. Though perhaps this is less appropriate for SEC1 types 
and XML/JSON, because you would need to parse the value to extract the x 
and y coordinates. So using the existing ECKey for those types may make 
more sense.







Re: JCA design for RFC 7748

2017-08-15 Thread Adam Petcher
Okay, let me go back to the beginning with some slightly weaker 
assumptions, and a slightly modified design. I've attempted to 
incorporate all the feedback I've received so far, but if you feel like 
I missed something, please let me know.


Assumptions:

A) We don't need to expose curve domain parameters over the API in the 
initial design/implementation. Curves can be specified using names (e.g. 
"X25519") or OIDs. The underlying implementation will likely support 
arbitrary Montgomery curves, but the JCA application will only be able 
to use the supported named curves. Support for arbitrary curve 
parameters in the API is something that can be considered as a separate 
feature in the future.
B) We don't need specs and interfaces that are specific to RFC 7748 
public/private keys. To allow interoperability between providers, and to 
make it easy to specify key values, we can use more general purpose spec 
classes and interfaces that can be reused by multiple algorithms. In 
particular, we should be able to reuse these interfaces/classes between 
RFC 7748 and RFC 8032.


Here is a high-level description of the proposed JCA API. A more 
detailed description will be provided during the JEP review.


1) The string "XDH" will be used in getInstance() to refer to all 
services related to RFC 7748 (KeyAgreement, KeyFactory, 
KeyPairGenerator, etc). This is a departure from the ECDH API that used 
"EC" for key generation (shared with ECDSA) and "ECDH" for KeyAgreement, 
and makes the RFC 7748 API more like "DiffieHellman" and other 
algorithms that use the same name for all services.
2) The new class java.security.spec.NamedParameterSpec (which implements 
AlgorithmParameterSpec) will be used to specify curves for RFC 7748. 
This class has a single String member which holds the name of the curve 
("X25519" or "X448"). This parameter spec class can be reused by other 
crypto algorithms that similarly identify parameter sets using names 
(e.g. FFDHE3072 in DiffieHellman). This new class can be inserted into 
the hierarchy above ECGenParameterSpec.
3) There will be no classes in java.security.spec specific to RFC 7748 
public keys and private keys. A new class called ByteArrayKeySpec will 
be added to java.security.spec. This class will hold an algorithm name 
String, an AlgorithmParameterSpec, and a byte array containing the raw 
key bytes. This class is suitable for keys in RFC 7748, RFC 8032, and 
possibly other algorithms. ByteArrayKeySpec can be viewed as a 
generalization of SecretKeySpec, and it will be used in a similar 
manner. The existing classes X509EncodedKeySpec and PKCS8EncodedKeySpec 
can also be used for RFC 7748 public and private key specs, respectively.
4) There will be no interfaces in java.security.interfaces specific to 
RFC 7748 public/private keys. A new interface called ByteArrayKey will 
be added to java.security.interfaces. This interface will expose an 
AlgorithmParameterSpec and a byte array containing the raw key value. 
Implementations of keys in RFC 7748, 8032 (and possibly other 
algorithms) can implement this interface to allow other providers to 
access the required information about the keys. Implementations can also 
achieve interoperability by using encoded representations of keys.
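
A minimal sketch of what the proposed interface might look like (the 
interface and method names here are assumptions based on the 
description above, not a final design):

```java
import java.security.Key;
import java.security.spec.AlgorithmParameterSpec;

// Hypothetical sketch of the proposed ByteArrayKey interface.
// Method names (getParams, getValue) are assumptions.
interface ByteArrayKey extends Key {
    AlgorithmParameterSpec getParams();
    byte[] getValue(); // the raw key bytes
}

public class ByteArrayKeySketch {
    public static void main(String[] args) {
        // A trivial stand-in implementation, just to show the shape
        ByteArrayKey k = new ByteArrayKey() {
            public String getAlgorithm() { return "XDH"; }
            public String getFormat() { return "RAW"; }
            public byte[] getEncoded() { return getValue(); }
            public AlgorithmParameterSpec getParams() { return null; }
            public byte[] getValue() { return new byte[32]; }
        };
        System.out.println(k.getAlgorithm() + " " + k.getValue().length);
    }
}
```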


Here is how the API will be implemented in the SunEC provider:

1) The public key and private key implementation classes will extend 
sun.security.ec.X509Key and sun.security.ec.PKCS8Key, respectively. This 
is similar to ECPublicKeyImpl and ECPrivateKeyImpl. They will also both 
implement ByteArrayKey.
2) The KeyFactory for RFC 7748 will support translation from opaque keys 
and ByteArrayKey to X509EncodedKeySpec/PKCS8EncodedKeySpec and 
ByteArrayKeySpec (and vice-versa).


Example code:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("XDH");
NamedParameterSpec paramSpec = new NamedParameterSpec("X25519");
kpg.initialize(paramSpec); // equivalent to kpg.initialize(255)
KeyPair kp = kpg.generateKeyPair();

KeyFactory kf = KeyFactory.getInstance("XDH");
byte[] rawKeyBytes = ...
ByteArrayKeySpec pubSpec = new ByteArrayKeySpec(
    "XDH",
    new NamedParameterSpec("X25519"),
    rawKeyBytes);
PublicKey pubKey = kf.generatePublic(pubSpec);

KeyAgreement ka = KeyAgreement.getInstance("XDH");
ka.init(kp.getPrivate());
ka.doPhase(pubKey, true);
byte[] secret = ka.generateSecret();
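
For completeness, a full two-party exchange under this API would look 
as follows. This sketch runs as-is on a JDK that supports XDH (the 
"XDH"/NamedParameterSpec API as proposed here eventually shipped in 
JDK 11):

```java
import java.security.*;
import java.security.spec.NamedParameterSpec;
import javax.crypto.KeyAgreement;

public class XdhTwoParty {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("XDH");
        kpg.initialize(new NamedParameterSpec("X25519"));
        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();

        // Each side combines its own private key with the peer's public key
        byte[] s1 = agree(alice.getPrivate(), bob.getPublic());
        byte[] s2 = agree(bob.getPrivate(), alice.getPublic());

        System.out.println(java.util.Arrays.equals(s1, s2)); // true
        System.out.println(s1.length); // 32 bytes for X25519
    }

    static byte[] agree(PrivateKey priv, PublicKey pub) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("XDH");
        ka.init(priv);
        ka.doPhase(pub, true);
        return ka.generateSecret();
    }
}
```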

Additional notes:
1) The ability to specify curve domain parameters over the API is a 
useful feature, and this design accommodates the addition of this 
feature in the future. Both ByteArrayKeySpec and ByteArrayKey hold an 
AlgorithmParameterSpec rather than a more concrete type. In addition to 
using NamedParameterSpec, the parameters could be specified using some 
other type that holds domain parameters.


2) Applications that need to choose their behavior based on the type of 
(for example) public key can do so for RFC 7748 keys by calling 
Key::getAlgorithm() to determine the algorithm and then using 
ByteArrayKey to 

Re: JCA design for RFC 7748

2017-08-11 Thread Anders Rundgren

Hi Guys,

whatever you come up with, I anticipate it will come with descriptions
of how to use it as well. I would recommend putting the JEP on GitHub
(although this may not be the usual process), since this topic has
bearings on other crypto APIs in the works.

I really hope that the end-result will turn out better than the quite quirky
EC support provided by JDK:
http://mail.openjdk.java.net/pipermail/security-dev/2013-October/009105.html

Regards,
Anders



Re: JCA design for RFC 7748

2017-08-11 Thread Adam Petcher

On 8/10/2017 12:25 PM, Michael StJohns wrote:


On 8/10/2017 9:44 AM, Adam Petcher wrote:
Does anyone know of a particular use case (that we haven't discussed 
already) that would require a provider to support arbitrary curves? 
Any other arguments for or against this feature? 


There are uses for changing out the base point.  PAKE and SPAKE use 
similar math (e.g. G^s*sharedSecret is the equivalent of a new base 
point).


There are uses for private curves - e.g. when you want to actually be 
sure that the curve was randomly generated (sort of the same argument 
that got us to Curve25519 in the first place).


There is a whole set of Edwards curves that are mostly not included 
in any provider (except possibly Microsoft's) as of yet.


Basically, you're trying to argue that there are no better curves (for 
the 'new' math) than have already been specified and there never will 
be.  I think that's a very shortsighted argument.


I expect that new curves will be developed in the future, and the API 
should absolutely be designed in such a way that support for new 
curves can be easily added to applications and providers. What I 
attempted to argue (I apologize if this was not clear) was that we don't 
necessarily need support for arbitrary curve parameters in the initial 
API design because:


1) When new, better curves are developed, they will eventually get 
names, and then they can be used in exactly the same way as the curves 
that exist today.
2) If there is a desire to allow arbitrary curve parameters for 
Montgomery/Edwards curves over the API, then we can consider this in the 
future as a separate feature. We should ensure that the initial API 
design accommodates the addition of this API in the future.





Later, Mike






Re: JCA design for RFC 7748

2017-08-10 Thread Xuelei Fan

On 8/10/2017 6:46 PM, Michael StJohns wrote:

On 8/10/2017 7:36 PM, Xuelei Fan wrote:

Hi Michael,

Good points!  See comments inlines.

On 8/10/2017 3:20 PM, Michael StJohns wrote:


Instead of converting, I was thinking about mapping.  E.g. Montgomery 
A and B matches the A and B of the curve.  But the "x" of the 
Montgomery point is just the "x" of the ECPoint with the "y" left as 
null.  For Edwards, it looks like you would map "d" to A. For [3] I'd 
map "r" to A.  I'd leave B as null for both- no reason to throw an 
unsupported exception as the code generally has a clue about what 
types of keys they're dealing with (or we provide a marker so they 
can figure it out).


The conversion in and out for points is a conversion from little 
endian to big endian and vice versa, but that only has to be done if 
you're importing or exporting a parameter set and that's an 
implementation issue not an API issue.


Basically, all the math is BigIntegers under the hood.  The 
curve25519 RFC specifies an implementation that's little endian, but 
the actual math is just math and things like the public key is really 
just a BigInteger.


Old code would just continue to work - since it would not be using 
the new curves.  New code would have to look for the curve type 
marker (e.g. the ECField) if there was the possibility of confusion.


I understand your points.  The mapping may be confusing to application 
developers, but it is not a problem for new code that follows the new 
coding guideline.  I'm not so sure about old code, for reasons similar 
to those for the converting solution.


For example, an Edwards curve form of the SubjectPublicKeyInfo field 
in an X.509 cert is parsed as an X509EncodedKeySpec, and the "EC" 
KeyFactory is used to generate the ECPublicKey.  The algorithm name of 
the ECPublicKey instance is "EC", and the parameter is an instance of 
ECParameterSpec.  Somehow, the ECPublicKey leaves the key generation 
environment, and the curve OID is unknown in the new environment.  Then 
the public key could be used improperly.  In the past this was fine, as 
the only supported form was the Weierstrass form, and there was no need 
to tell curve forms apart in a crypto implementation.  However, when a 
new form is introduced, identifying the EC form of a key is an 
essential part of the subsequent crypto operations.  Old providers or 
code may not be able to tell the form, which may result in 
compatibility issues.


I don't think any of this is an issue.   An X509EncodedKeySpec for 
either type of key has an id-ecPublicKey OID identifying it (embedded in 
the SubjectPublicKeyInfo encoding).  In the key body, there's the 
EcpkParameters structure, which is a 'namedCurve' consisting of an 
OID.  The curve OIDs for 25519 and 448 are different from any of the 
Weierstrass keys.   When the KeyFactory implementation reads 
the byte stream, it's going to build a JCA ECPublicKey that matches the 
OID, AND that's a concrete ECPublicKey class of the key factory provider.


If the factory implementation doesn't understand the oid, then the 
provider throws an error.  I forget which one.

The concrete class for the ECPublic key is specific to the provider.
Some providers may support the new key forms, some may not.  There's no 
guarantee (and there never has been a guarantee) that an ECPublic key 
from one provider can be used with another provider (e.g. PKCS11 
provider vs a software provider) - you have to convert the key into a 
keyspec and then run the factory method on it.


I'm not sure about that.  The JDK is a multiple-provider framework.  A 
key generated in one provider may work in another provider, provided 
the conversion of the key works.  It's not rare that a public key is 
parsed by one provider and used in another provider.  For some cases, 
we haven't had a conversion problem in the past because the key spec is 
clear.  But for some other cases, we do have problems.  If a case 
worked in the past, we don't want to break it; otherwise, it would be a 
compatibility issue.


As we discussed, there are multiple forms of EC curves.  The EC curve 
form is an essential part of an EC key for subsequent operations.  If 
the EC curve form is unknown, there are potential problems.  When an 
old provider tries to use keys generated by a new provider for a new 
form, problems happen.  This can actually be avoided if the old 
provider does not support the algorithm.  The "EC" name is too general 
to accept new forms.


So I don't think there's anything we have to worry about here - no 
violation of the API contract as far as I can tell.


(As a more complete example - consider what happens when you have an F2M 
EC provider and an Fp EC provider both generating public keys and 
encoding them.  Neither provider can decode the other's encoded key 
because they don't have the OIDs and the parameter sets).


If we define all of the forms at the same time, a provider would follow 
the specs, and there would be no compatibility issues.  However, if we 
add something new later, and 

Re: JCA design for RFC 7748

2017-08-10 Thread Michael StJohns

On 8/10/2017 7:36 PM, Xuelei Fan wrote:

Hi Michael,

Good points!  See comments inlines.

On 8/10/2017 3:20 PM, Michael StJohns wrote:


Instead of converting, I was thinking about mapping.  E.g. Montgomery 
A and B matches the A and B of the curve.  But the "x" of the 
Montgomery point is just the "x" of the ECPoint with the "y" left as 
null.  For Edwards, it looks like you would map "d" to A. For [3] I'd 
map "r" to A.  I'd leave B as null for both- no reason to throw an 
unsupported exception as the code generally has a clue about what 
types of keys they're dealing with (or we provide a marker so they 
can figure it out).


The conversion in and out for points is a conversion from little 
endian to big endian and vice versa, but that only has to be done if 
you're importing or exporting a parameter set and that's an 
implementation issue not an API issue.


Basically, all the math is BigIntegers under the hood.  The 
curve25519 RFC specifies an implementation that's little endian, but 
the actual math is just math and things like the public key is really 
just a BigInteger.


Old code would just continue to work - since it would not be using 
the new curves.  New code would have to look for the curve type 
marker (e.g. the ECField) if there was the possibility of confusion.


I understand your points.  The mapping may be confusing to application 
developers, but it is not a problem for new code that follows the new 
coding guideline.  I'm not so sure about old code, for reasons similar 
to those for the converting solution.


For example, an Edwards curve form of the SubjectPublicKeyInfo field 
in an X.509 cert is parsed as an X509EncodedKeySpec, and the "EC" 
KeyFactory is used to generate the ECPublicKey.  The algorithm name of 
the ECPublicKey instance is "EC", and the parameter is an instance of 
ECParameterSpec.  Somehow, the ECPublicKey leaves the key generation 
environment, and the curve OID is unknown in the new environment.  
Then the public key could be used improperly.  In the past this was 
fine, as the only supported form was the Weierstrass form, and there 
was no need to tell curve forms apart in a crypto implementation.  
However, when a new form is introduced, identifying the EC form of a 
key is an essential part of the subsequent crypto operations.  Old 
providers or code may not be able to tell the form, which may result 
in compatibility issues.


I don't think any of this is an issue.   An X509EncodedKeySpec for 
either type of key has an id-ecPublicKey OID identifying it (embedded in 
the SubjectPublicKeyInfo encoding).  In the key body, there's the 
EcpkParameters structure, which is a 'namedCurve' consisting of an 
OID.  The curve OIDs for 25519 and 448 are different from any of the 
Weierstrass keys.   When the KeyFactory implementation reads 
the byte stream, it's going to build a JCA ECPublicKey that matches the 
OID, AND that's a concrete ECPublicKey class of the key factory provider.


If the factory implementation doesn't understand the oid, then the 
provider throws an error.  I forget which one.


The concrete class for the ECPublic key is specific to the provider.  
Some providers may support the new key forms, some may not.  There's no 
guarantee (and there never has been a guarantee) that an ECPublic key 
from one provider can be used with another provider (e.g. PKCS11 
provider vs a software provider) - you have to convert the key into a 
keyspec and then run the factory method on it.


So I don't think there's anything we have to worry about here - no 
violation of the API contract as far as I can tell.


(As a more complete example - consider what happens when you have an F2M 
EC provider and an Fp EC provider both generating public keys and 
encoding them.  Neither provider can decode the other's encoded key 
because they don't have the OIDs and the parameter sets).









However, I'm not very sure of the compatibility impact (see above).

3. Where we are not now?
Using named curves is popular.  There is a ECGenParameterSpec class 
using named curves:

 ECGenParameterSpec ecgp =
 new ECGenParameterSpec("secp256r1");
 KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
 kpg.initialize(ecgp);
 KeyPair kp = kpg.generateKeyPair();

 ECPublicKey pubKey = (ECPublicKey)kp.getPublic();
 String keyAlgorithm = pubKey.getAlgorithm();  // "EC"

However, it is used for key generation only.  Once the keys are 
generated, there is no public API to get the name of the curve from an 
ECKey.  ECKey.getAlgorithm() will return "EC" only.  If it is 
required to know whether a key is of a named curve, the solution is 
not straightforward.


This ties back to "getEncoded()" representations.  Under the hood, if 
you do a getEncoded() there's a "which name does this parameter set 
match up to" search which checks various tables for an OID and uses 
that in an X.509 SPKI output object.  On input, the table lookup has 
to see whether or not it understands the curve OID (or the key type 

Re: JCA design for RFC 7748

2017-08-10 Thread Xuelei Fan

Hi Michael,

Good points!  See comments inlines.

On 8/10/2017 3:20 PM, Michael StJohns wrote:

Hi Xuelei -

Great analysis.

Some comments in line.

On 8/10/2017 3:10 PM, Xuelei Fan wrote:

Hi,

I want to extend the comment a little bit.

1. The background
An elliptic curve is determined by either an equation of the form
 y^2 = x^3 + ax + b            (1) [Weierstrass form]
or
 y^2 + xy = x^3 + ax^2 + b     (2)

However, some other forms may also be used.  For example:
 y^2 = x(x-1)(x-r)             (3)
or
 By^2 = x^3 + Ax^2 + x         (4) [RFC 7748, Montgomery curve]
 x^2 + y^2 = 1 + dx^2y^2       (5) [RFC 7748, Edwards curve]

In general, any elliptic curve can be written in Weierstrass form (1) 
or (2).  That is, Montgomery curves and Edwards curves can be expressed 
in Weierstrass form.


2. Where we are now?
In JDK, an elliptic curve is defined in the Weierstrass form 
((1)/(2)). See java.security.spec.EllipticCurve:


EllipticCurve(ECField field, BigInteger a, BigInteger b)

In theory, the existing APIs can be used for RFC 7748, by converting 
the Montgomery curve and Edwards curve to the Weierstrass form. 
However, the conversion can be misleading and complicate the 
implementation significantly.  For example, before using a point in 
Weierstrass form (x, y), the implementation needs to convert it to the 
Montgomery form (x', -) so as to use the full potential of RFC 
7748.   The values returned in public APIs need to use (x, y), while 
the implementation needs to use (x', y'). It's very confusing and the 
compatibility impact could be significant.  For example:


public something(ECPublicKey ecPublicKey)  {
   // Problem: without other information, it is unclear
   // whether the ecPublicKey can be used for a particular
   // signature verification or not once RFC 7748/8032
   // are supported.

   // Problem: an old application may use ecPublicKey for
   // the old style operation, even if the ecPublicKey is
   // supposed to be X25519.  It's not easy to control the
   // behavior in legacy application code, and it may
   // introduce unexpected security issues.
}

public KeyAgreement getKeyAgreement(AlgorithmParameterSpec aps) {
   // Problem: the code below is common in current
   // code.  However, the ECParameterSpec may not be usable
   // for the old style "EC" key agreement.
   //
   // The JDK crypto provider can take special action to avoid
   // this issue in the JCA/JCE implementation.  But it cannot
   // be guaranteed that other providers do this as well, and
   // old providers may run into problems.
   if (aps instanceof ECParameterSpec) {
       return KeyAgreement.getInstance("EC");
   }
}

What's the problem with ECPublicKey/ECPrivateKey/ECKey? It's mainly 
about the ECParameterSpec:


 ECParameterSpec ECKey.getParams()

and ECParameterSpec is using java.security.spec.EllipticCurve. This 
design makes it pretty confusing to use ECPublicKey/ECPrivateKey/ECKey 
for RFC 7748 (Edwards curve form and Montgomery curve form).


Can EllipticCurve be extended to support more forms? The 
java.security.spec.EllipticCurve class defines two methods to get the 
coefficients of the Weierstrass form:

 public BigInteger getA()
 public BigInteger getB()

The 'A' and 'B' may not exist in other forms, for example the 
(3)(4)(5) forms above.  Still, the spec might be updated to throw 
UnsupportedOperationException from getA() and getB() for the (3)(4)(5) 
forms, and to define new extended classes for the new forms, like:

 public MCEllipticCurve extends EllipticCurve   // Montgomery curve
 public EDEllipticCurve extends EllipticCurve   // Edwards curve


Instead of converting, I was thinking about mapping.  E.g. Montgomery A 
and B matches the A and B of the curve.  But the "x" of the Montgomery 
point is just the "x" of the ECPoint with the "y" left as null.  For 
Edwards, it looks like you would map "d" to A. For [3] I'd map "r" to 
A.  I'd leave B as null for both- no reason to throw an unsupported 
exception as the code generally has a clue about what types of keys 
they're dealing with (or we provide a marker so they can figure it out).


The conversion in and out for points is a conversion from little endian 
to big endian and vice versa, but that only has to be done if you're 
importing or exporting a parameter set and that's an implementation 
issue not an API issue.


Basically, all the math is BigIntegers under the hood.  The curve25519 
RFC specifies an implementation that's little endian, but the actual 
math is just math and things like the public key is really just a 
BigInteger.


Old code would just continue to work - since it would not be using the 
new curves.  New code would have to look for the curve type marker (e.g. 
the ECField) if there was the possibility of confusion.


I understand your points.  The mapping may be confusing to 

Re: JCA design for RFC 7748

2017-08-10 Thread Xuelei Fan

Hi,

I want to extend the comment a little bit.

1. The background
An elliptic curve is determined by either an equation of the form
 y^2 = x^3 + ax + b            (1) [Weierstrass form]
or
 y^2 + xy = x^3 + ax^2 + b     (2)

However, some other forms may also be used.  For example:
 y^2 = x(x-1)(x-r)             (3)
or
 By^2 = x^3 + Ax^2 + x         (4) [RFC 7748, Montgomery curve]
 x^2 + y^2 = 1 + dx^2y^2       (5) [RFC 7748, Edwards curve]

In general, any elliptic curve can be written in Weierstrass form (1) 
or (2).  That is, Montgomery curves and Edwards curves can be expressed 
in Weierstrass form.
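
For reference, the standard birational change of variables that takes 
a Montgomery curve (4) into short Weierstrass form (1) is (a sketch 
for context; this is textbook material, not part of the proposal):

 Given By^2 = x^3 + Ax^2 + x, substitute
     u = x/B + A/(3B),  v = y/B
 to obtain
     v^2 = u^3 + au + b
 where
     a = (3 - A^2)/(3B^2)
     b = (2A^3 - 9A)/(27B^3)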


2. Where we are now?
In JDK, an elliptic curve is defined in the Weierstrass form ((1)/(2)). 
See java.security.spec.EllipticCurve:


EllipticCurve(ECField field, BigInteger a, BigInteger b)

In theory, the existing APIs can be used for RFC 7748, by converting the 
Montgomery curve and Edwards curve to the Weierstrass form.  However, 
the conversion can be misleading and complicate the implementation 
significantly.  For example, before using a point in Weierstrass form 
(x, y), the implementation needs to convert it to the Montgomery form 
(x', -) so as to use the full potential of RFC 7748.   The values 
returned in public APIs need to use (x, y), while the implementation 
needs to use (x', y').  It's very confusing and the compatibility 
impact could be significant.  For example:


public something(ECPublicKey ecPublicKey)  {
   // Problem: without other information, it is unclear
   // whether the ecPublicKey can be used for a particular
   // signature verification or not once RFC 7748/8032
   // are supported.

   // Problem: an old application may use ecPublicKey for
   // the old style operation, even if the ecPublicKey is
   // supposed to be X25519.  It's not easy to control the
   // behavior in legacy application code, and it may
   // introduce unexpected security issues.
}

public KeyAgreement getKeyAgreement(AlgorithmParameterSpec aps) {
   // Problem: the code below is common in current
   // code.  However, the ECParameterSpec may not be usable
   // for the old style "EC" key agreement.
   //
   // The JDK crypto provider can take special action to avoid
   // this issue in the JCA/JCE implementation.  But it cannot
   // be guaranteed that other providers do this as well, and
   // old providers may run into problems.
   if (aps instanceof ECParameterSpec) {
       return KeyAgreement.getInstance("EC");
   }
}

What's the problem with ECPublicKey/ECPrivateKey/ECKey? It's mainly 
about the ECParameterSpec:


 ECParameterSpec ECKey.getParams()

and ECParameterSpec is using java.security.spec.EllipticCurve.  This 
design makes it pretty confusing to use ECPublicKey/ECPrivateKey/ECKey 
for RFC 7748 (Edwards curve form and Montgomery curve form).


Can EllipticCurve be extended to support more forms? The 
java.security.spec.EllipticCurve class defines two methods to get the 
coefficients of the Weierstrass form:

 public BigInteger getA()
 public BigInteger getB()

The 'A' and 'B' may not exist in other forms, for example the (3)(4)(5) 
forms above.  Still, the spec might be updated to throw 
UnsupportedOperationException from getA() and getB() for the (3)(4)(5) 
forms, and to define new extended classes for the new forms, like:

 public MCEllipticCurve extends EllipticCurve   // Montgomery curve
 public EDEllipticCurve extends EllipticCurve   // Edwards curve

However, I'm not very sure of the compatibility impact (see above).

3. Where we are not now?
Using named curves is popular.  There is a ECGenParameterSpec class 
using named curves:

 ECGenParameterSpec ecgp =
 new ECGenParameterSpec("secp256r1");
 KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
 kpg.initialize(ecgp);
 KeyPair kp = kpg.generateKeyPair();

 ECPublicKey pubKey = (ECPublicKey)kp.getPublic();
 String keyAlgorithm = pubKey.getAlgorithm();  // "EC"

However, it is used for key generation only.  Once the keys are 
generated, there is no public API to get the name of the curve from an 
ECKey.  ECKey.getAlgorithm() will return "EC" only.  If it is required 
to know whether a key is of a named curve, the solution is not 
straightforward.
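
One workaround that exists today (a sketch; the exact name returned is 
provider-dependent, and some JDK versions return the curve OID string 
instead of the name) is to round-trip the key's ECParameterSpec through 
AlgorithmParameters:

```java
import java.security.AlgorithmParameters;
import java.security.KeyPairGenerator;
import java.security.interfaces.ECPublicKey;
import java.security.spec.ECGenParameterSpec;

public class CurveNameLookup {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        ECPublicKey pub = (ECPublicKey) kpg.generateKeyPair().getPublic();

        // Wrap the key's domain parameters and ask for the named-curve spec;
        // the provider matches them against its table of known curves.
        AlgorithmParameters ap = AlgorithmParameters.getInstance("EC");
        ap.init(pub.getParams());
        String name = ap.getParameterSpec(ECGenParameterSpec.class).getName();
        System.out.println(name); // e.g. "secp256r1", or its OID on older JDKs
    }
}
```

This is awkward and indirect, which is exactly the gap the text above 
describes.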


4. A general proposal
Supporting named curves could be a solution for the #2 and #3 concerns 
above.  For named curves, the parameters are defined explicitly, so it 
is NOT required to have public APIs for the named curves' parameters 
any more.  They can be hidden in the implementation layer.  The key 
pair generation may look like:


KeyPairGenerator kpg =
KeyPairGenerator.getInstance("ECWithSecp256k1");
KeyPair kp = kpg.generateKeyPair();

PublicKey pubKey = kp.getPublic();
String keyAlgorithm = pubKey.getAlgorithm();  // "ECWithSecp256k1"

As no explicit parameters are required, the 

Re: JCA design for RFC 7748

2017-08-10 Thread Michael StJohns

On 8/10/2017 9:44 AM, Adam Petcher wrote:
Does anyone know of a particular use case (that we haven't discussed 
already) that would require a provider to support arbitrary curves? 
Any other arguments for or against this feature? 


There are uses for changing out the base point.  PAKE and SPAKE use 
similar math (e.g. G^s*sharedSecret is the equivalent of a new base point).


There are uses for private curves - e.g. when you want to actually be 
sure that the curve was randomly generated (sort of the same argument 
that got us to Curve25519 in the first place).


There is a whole set of Edwards curves that are mostly not included 
in any provider (except possibly Microsoft's) as of yet.


Basically, you're trying to argue that there are no better curves (for 
the 'new' math) than have already been specified and there never will 
be.  I think that's a very shortsighted argument.


Later, Mike




Re: JCA design for RFC 7748

2017-08-10 Thread Adam Petcher

Anyone have any additional thoughts on this?

I think the most significant item we need to discuss is the extent to 
which JCA should allow curve parameters for RFC 7748/8032 to be 
specified over the API. Does anyone know of a particular use case (that 
we haven't discussed already) that would require a provider to support 
arbitrary curves? Any other arguments for or against this feature?



On 8/7/2017 4:37 PM, Adam Petcher wrote:
I'm working on the Java implementation of RFC 7748 (Diffie-Hellman 
with X25519 and X448). I know some of you have been anxious to talk 
about how this would fit into JCA, and I appreciate your patience 
while I learned enough about JCA and existing crypto implementations 
to develop this API proposal. This API/design proposal is for RFC 7748 
only, and it does not include the API for RFC 8032 (EdDSA). Of course, 
I expect many of the decisions that we make for RFC 7748 will also 
impact RFC 8032.


First off, I think it is important to separate RFC 7748 from the 
existing ECDH API and implementation. RFC 7748 is a different 
standard, it uses different encodings and algorithms, and it has 
different properties. Further, this separation will reduce the 
probability of programming errors (e.g. accidentally interpreting a 
Weierstrass point as an RFC 7748 point). So I propose that we use 
distinct algorithm names for RFC 7748, and that we don't use any of 
the existing EC classes like EllipticCurve and ECPoint with RFC 7748.


We can achieve this separation without duplicating a lot of code if we 
start with some simplifying assumptions. My goal is to remove 
functionality that nobody needs in order to simplify the design and 
API. If I am simplifying away something that you think you will need, 
please let me know.


A) We don't need to expose actual curve parameters over the API. 
Curves can be specified using names (e.g. "X25519") or OIDs. The 
underlying implementation will likely support arbitrary Montgomery 
curves, but the JCA application will only be able to use the supported 
named curves.
B) We don't need direct interoperability between different providers 
using opaque key representations. We can communicate with other 
providers using X509/PKCS8 encoding, or by using KeyFactory and key 
specs.


These two assumptions greatly simplify the API. We won't need classes 
that mirror ECParameterSpec, EllipticCurve, ECPoint, ECField, 
ECPublicKey, etc. for X25519/X448.


Now that the motivation and assumptions are out of the way, here is a 
description of the proposed JCA API:


1) The string "XDH" will be used in getInstance() to refer to all 
services related to RFC 7748 (KeyAgreement, KeyFactory, 
KeyPairGenerator, etc). This is a departure from the ECDH API that 
used "EC" for key generation (shared with ECDSA) and "ECDH" for 
KeyAgreement, and makes the RFC 7748 API more like "DiffieHellman" and 
other algorithms that use the same name for all services.
2) The new class java.security.spec.NamedParameterSpec (which 
implements AlgorithmParameterSpec) will be used to specify curves for 
RFC 7748. This class has a single String member which holds the name 
of the curve ("X25519" or "X448"). This parameter spec class can be 
reused by other crypto algorithms that similarly identify parameter 
sets using names (e.g. FFDHE3072 in DiffieHellman). This new class can 
be inserted into the hierarchy above ECGenParameterSpec.
3) There will be no classes in java.security.spec for EC public keys 
and private keys. An RFC 7748 implementation can use the existing 
classes X509EncodedKeySpec and PKCS8EncodedKeySpec for public and 
private key specs, respectively.
4) There will be no interfaces in java.security.interfaces for RFC 
7748 public/private keys. Public/private key implementation classes 
will implement java.security.PublicKey and java.security.PrivateKey, 
which allows access to their encoded representations.


Here is how the API will be implemented in the SunEC provider:

1) The public key and private key implementation classes will extend 
sun.security.ec.X509Key and sun.security.ec.PKCS8Key, respectively. 
This is similar to ECPublicKeyImpl and ECPrivateKeyImpl.
2) The KeyFactory for RFC 7748 will support translation to/from opaque 
keys and X509EncodedKeySpec/PKCS8EncodedKeySpec.


Example code:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("XDH");
NamedParameterSpec paramSpec = new NamedParameterSpec("X25519");
kpg.initialize(paramSpec); // equivalent to kpg.initialize(255)
KeyPair kp = kpg.generateKeyPair();

KeyFactory kf = KeyFactory.getInstance("XDH");
X509EncodedKeySpec pubSpec = ...
PublicKey pubKey = kf.generatePublic(pubSpec);

KeyAgreement ka = KeyAgreement.getInstance("XDH");
ka.init(kp.getPrivate());
ka.doPhase(pubKey, true);
byte[] secret = ka.generateSecret();


One thing that is missing from the "core" API proposal is a way to 
easily produce a public/private key from an encoded numeric value. Of 
course, it's possible to put this value into a 

Re: JCA design for RFC 7748

2017-08-09 Thread Adam Petcher

On 8/8/2017 9:55 PM, Michael StJohns wrote:


Another option to consider is that we don't have subinterfaces for 
RFC 7748 public/private keys, but rather we use some common 
subinterface that provides enough information (e.g. the encoded 
number and the curve parameters).




You mean like "ECKey"?  This is implemented by both public and private 
EC keys and mostly contains the ECParameterSpec set.


Sort of. I'm trying to figure out how appropriate it is to have the 
equivalent of ECKey without the equivalent of ECPrivateKey and 
ECPublicKey. In this scenario, the equivalent of ECKey contains all the 
information about the public/private key (in RFC 7748, it's an integer 
in both cases).




Mike





Re: JCA design for RFC 7748

2017-08-08 Thread Michael StJohns

On 8/8/2017 5:00 PM, Adam Petcher wrote:

On 8/8/2017 12:50 PM, Michael StJohns wrote:



Further, this separation will reduce the probability of 
programming errors (e.g. accidentally interpreting a Weierstrass 
point as an RFC 7748 point).


Um.  What?   It actually won't.


This is the sort of problem I want to avoid:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("ECDH");
KeyPair kp = kpg.generateKeyPair();
KeyFactory eckf = KeyFactory.getInstance("ECDH");
ECPrivateKeySpec priSpec = eckf.getKeySpec(kp.getPrivate(), 
ECPrivateKeySpec.class);


KeyFactory xdhkf = KeyFactory.getInstance("XDH");
PrivateKey xdhPrivate = xdhkf.generatePrivate(priSpec);

// Now use xdhPrivate for key agreement, which uses the wrong 
algorithm and curve, and may leak information about the private key


This is setting up a strawman and knocking it down.  It's already 
possible to do the above with any software based key - either 
directly or by pulling out the data.  Creating the API as you suggest 
will still not prevent this as long as I can retrieve the private 
value from the key.


If you want absolute protection from this - go to hardware based keys.


The goal is the prevention of common programming errors that lead to 
security issues, not absolute protection like you would get from 
hardware crypto. More like the kind of assurance you get from a type 
system.


The engine implementation (provider plugin) is responsible for 
preventing the "common programming errors", not the API.  The JCA really 
doesn't have support for type-system-style assurance.









A) We don't need to expose actual curve parameters over the API. 
Curves can be specified using names (e.g. "X25519") or OIDs. The 
underlying implementation will likely support arbitrary Montgomery 
curves, but the JCA application will only be able to use the 
supported named curves.


Strangely, this hasn't turned out all that well.  There needs to be 
a name, OID in the public space (primarily for the encodings and 
PKIX stuff) and to be honest - you really want the parameters in 
public space as well (the ECParameterSpec and its ilk) so that a 
given key can be used with different providers or even to play 
around internally with new curves before giving them a name.


I don't understand why we need public curve parameters to allow keys 
to be used with different providers. It seems like this should work 
as long as the providers all understand the OIDs or curve names. Can 
you explain this part a bit more?
Because names and OIDs get assigned later than the curve parameters.  
There are two parts to the JCA - the general crypto part and then 
there's the PKIX part.  For the EC stuff, they sort of overlap 
because of a desire not to have to have everyone remember each of the 
parameter sets (curves) and those sets are tagged by name(s) and 
OID.  But it's still perfectly possible to do EC math on curves that 
were generated elsewhere (or even with a curve where everything but 
the basepoint is the same as a public curve).


What you need to be able to do is to pass to an "older" provider a 
"newer" curve - assuming the curve fits within the math already 
implemented.  There's really no good reason to implement a whole new 
set of API changes just to permit a single new curve.


Okay, thanks. If I am reading this right, this feature supports 
interoperability with providers that don't know about a specific curve 
name/OID, but support all curves within some family. Without a way to 
express the curve parameters in JCA, you would resort to using some 
provider-specific parameter spec, which would be unfortunate.
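
To make that concrete, here is a sketch of the existing EC pattern being 
discussed: with explicit domain parameters, a key spec can be rebuilt on any 
provider that does the Fp math, whether or not that provider knows the curve 
by name or OID. The P-256 constants are used purely as an example; the same 
pattern applies to an unnamed curve.

```java
import java.math.BigInteger;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.ECFieldFp;
import java.security.spec.ECParameterSpec;
import java.security.spec.ECPoint;
import java.security.spec.ECPublicKeySpec;
import java.security.spec.EllipticCurve;

public class ExplicitParams {
    public static void main(String[] args) throws Exception {
        // Explicit domain parameters (well-known P-256 values, for illustration).
        BigInteger p  = new BigInteger("ffffffff00000001000000000000000000000000ffffffffffffffffffffffff", 16);
        BigInteger a  = new BigInteger("ffffffff00000001000000000000000000000000fffffffffffffffffffffffc", 16);
        BigInteger b  = new BigInteger("5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b", 16);
        BigInteger gx = new BigInteger("6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296", 16);
        BigInteger gy = new BigInteger("4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5", 16);
        BigInteger n  = new BigInteger("ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551", 16);

        EllipticCurve curve = new EllipticCurve(new ECFieldFp(p), a, b);
        ECParameterSpec params = new ECParameterSpec(curve, new ECPoint(gx, gy), n, 1);

        // No curve name or OID needed: the spec carries everything. (The base
        // point doubles as the public point here just to have a valid point.)
        KeyFactory kf = KeyFactory.getInstance("EC");
        PublicKey pub = kf.generatePublic(new ECPublicKeySpec(new ECPoint(gx, gy), params));
        System.out.println(pub.getAlgorithm());
    }
}
```

This is exactly the capability that goes away if a new key type only accepts 
names/OIDs.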


Pretty much.   This is why I've been pushing back so hard on "just one 
more key type" style arguments.







Related to tinkering with new curves that don't have a name: I don't 
think that this is a feature that JCA needs to have. In the common 
use case, the programmer wants to only use standard algorithms and 
curves, and I think we should focus on that use case.


The common use case is much wider than you think it is.  I find 
myself using the curve parameters much more than I would like - 
specifically because I use JCA in conjunction with PKCS11, HSMs and 
smart cards.   So no - focusing on a software only subset of things 
is really not the right approach.


I actually would have expected hardware crypto to have *less* support 
for arbitrary curves, and so this issue would come up more with 
software implementations. Why does this come up so frequently in 
hardware?


Because the hardware tends to work either very generally or very 
specifically.  A PKCS11 big iron HSM for example mostly doesn't do 
multiple curves and requires that you just give them the curve OID, but 
the JCOP smart cards work over a broad set of curves - as long as you 
give them the entire curve data set.  But the JCOP cards don't do ASN1 
so I keep having to convert 
raw EC points into formatted EC 
public keys.   One set of programs I have has to 
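
The raw-point conversion described above can be sketched as follows: parsing 
an uncompressed SEC1 point (0x04 || X || Y), as many smart cards return it, 
into a JCA ECPoint (class and method names here are illustrative, not from 
any particular library):

```java
import java.math.BigInteger;
import java.security.spec.ECPoint;
import java.util.Arrays;

public class RawPoint {
    // Parse an uncompressed SEC1 point (0x04 || X || Y) into an ECPoint.
    static ECPoint parse(byte[] sec1) {
        if (sec1.length < 3 || sec1.length % 2 == 0 || sec1[0] != 0x04) {
            throw new IllegalArgumentException("not an uncompressed point");
        }
        int len = (sec1.length - 1) / 2;
        // Both coordinates are unsigned big-endian integers.
        BigInteger x = new BigInteger(1, Arrays.copyOfRange(sec1, 1, 1 + len));
        BigInteger y = new BigInteger(1, Arrays.copyOfRange(sec1, 1 + len, sec1.length));
        return new ECPoint(x, y);
    }

    public static void main(String[] args) {
        ECPoint pt = parse(new byte[] {0x04, 0, 2, 0, 3});
        System.out.println(pt.getAffineX() + "," + pt.getAffineY());
    }
}
```

The resulting ECPoint can then be wrapped in an ECPublicKeySpec (together 
with the card's ECParameterSpec) and fed to a KeyFactory to obtain a 
PublicKey.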

Re: JCA design for RFC 7748

2017-08-08 Thread Adam Petcher

On 8/8/2017 12:50 PM, Michael StJohns wrote:



Further, this separation will reduce the probability of programming 
errors (e.g. accidentally interpreting a Weierstrass point as an 
RFC 7748 point).


Um.  What?   It actually won't.


This is the sort of problem I want to avoid:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("ECDH");
KeyPair kp = kpg.generateKeyPair();
KeyFactory eckf = KeyFactory.getInstance("ECDH");
ECPrivateKeySpec priSpec = eckf.getKeySpec(kp.getPrivate(), 
ECPrivateKeySpec.class);


KeyFactory xdhkf = KeyFactory.getInstance("XDH");
PrivateKey xdhPrivate = xdhkf.generatePrivate(priSpec);

// Now use xdhPrivate for key agreement, which uses the wrong 
algorithm and curve, and may leak information about the private key


This is setting up a strawman and knocking it down.  It's already 
possible to do the above with any software based key - either directly 
or by pulling out the data.  Creating the API as you suggest will 
still not prevent this as long as I can retrieve the private value 
from the key.


If you want absolute protection from this - go to hardware based keys.


The goal is the prevention of common programming errors that lead to 
security issues, not absolute protection like you would get from 
hardware crypto. More like the kind of assurance you get from a type 
system.






A) We don't need to expose actual curve parameters over the API. 
Curves can be specified using names (e.g. "X25519") or OIDs. The 
underlying implementation will likely support arbitrary Montgomery 
curves, but the JCA application will only be able to use the 
supported named curves.


Strangely, this hasn't turned out all that well.  There needs to be 
a name, OID in the public space (primarily for the encodings and 
PKIX stuff) and to be honest - you really want the parameters in 
public space as well (the ECParameterSpec and its ilk) so that a 
given key can be used with different providers or even to play 
around internally with new curves before giving them a name.


I don't understand why we need public curve parameters to allow keys 
to be used with different providers. It seems like this should work 
as long as the providers all understand the OIDs or curve names. Can 
you explain this part a bit more?
Because names and OIDs get assigned later than the curve parameters.  
There are two parts to the JCA - the general crypto part and then 
there's the PKIX part.  For the EC stuff, they sort of overlap because 
of a desire not to have to have everyone remember each of the 
parameter sets (curves) and those sets are tagged by name(s) and OID.  
But it's still perfectly possible to do EC math on curves that were 
generated elsewhere (or even with a curve where everything but the 
basepoint is the same as a public curve).


What you need to be able to do is to pass to an "older" provider a 
"newer" curve - assuming the curve fits within the math already 
implemented.  There's really no good reason to implement a whole new 
set of API changes just to permit a single new curve.


Okay, thanks. If I am reading this right, this feature supports 
interoperability with providers that don't know about a specific curve 
name/OID, but support all curves within some family. Without a way to 
express the curve parameters in JCA, you would resort to using some 
provider-specific parameter spec, which would be unfortunate.




Related to tinkering with new curves that don't have a name: I don't 
think that this is a feature that JCA needs to have. In the common 
use case, the programmer wants to only use standard algorithms and 
curves, and I think we should focus on that use case.


The common use case is much wider than you think it is.  I find myself 
using the curve parameters much more than I would like - specifically 
because I use JCA in conjunction with PKCS11, HSMs and smart cards.   
So no - focusing on a software only subset of things is really not the 
right approach.


I actually would have expected hardware crypto to have *less* support 
for arbitrary curves, and so this issue would come up more with software 
implementations. Why does this come up so frequently in hardware?










These two assumptions greatly simplify the API. We won't need 
classes that mirror ECParameterSpec, EllipticCurve, ECPoint, 
ECField, ECPublicKey, etc. for X25519/X448.


That assumption holds only if your various other assumptions hold. 
My opinion is that they probably don't.  (BTW - I'm pretty sure, 
given that every single asymmetric JCA crypto API takes a PublicKey 
or PrivateKey you're going to need to mirror those classes at least; 
you'll also need a ParameterSpec and a GenParameterSpec class with 
whatever underlying supporting classes are required to deal with 
KeyFactories)


I agree with the second part of your parenthetical statement, but I 
need more information about the first. It sounds like what you are 
saying is that I will need something like XDHPublicKey and 
XDHPrivateKey in 

Re: JCA design for RFC 7748

2017-08-08 Thread Anders Rundgren

On 2017-08-08 21:42, Xuelei Fan wrote:

On 8/8/2017 8:45 AM, Anders Rundgren wrote:

Object myOwnEncrypt(PublicKey publicKey) throws SecurityException {
    if (publicKey instanceof RSAKey) {
        // RSA
    } else {
        // It should be EC
    }
}


The code above is not reliable unless one understands the underlying
JCA/JCE provider behavior exactly this way.  For a certain provider, an
RSA key may not be an instance of RSAKey.  I would use
key.getAlgorithm() instead.


You mean that some providers do not always adhere even to RSAPublicKey (which 
extends RSAKey)?

Well, then there's a lot of broken stuff out there.

Anders




Xuelei


CC:ing the creator of OKP keys.

https://tools.ietf.org/html/rfc8037#section-2

Anders




Re: JCA design for RFC 7748

2017-08-08 Thread Adam Petcher

On 8/8/2017 12:50 PM, Michael StJohns wrote:



We'll leave this for later.  But generally, the JCA is a general 
interface to a set of crypto primitives modeled on just a few key 
types.  To go in the direction you want to go, you need to explain 
why it's impossible to model an elliptic curve as an elliptic curve.   
As I noted, I think that the inclusion of extension of ECField is 
probably all that's necessary for representing both public and private 
key pairs here.


The problem with the existing EC classes (EllipticCurve, ECPoint, etc.) 
is that they are intended to represent curves in Weierstrass form: y^2 = 
x^3 + ax + b. EllipticCurve has two parameters "a" and "b" corresponding 
to the coefficients in the equation above. RFC 7748 uses elliptic curves 
in Montgomery form: y^2 = x^3 + ax^2 + x. So the parameters are 
different. Further complicating things: every curve in Montgomery form 
has an isomorphic curve in Weierstrass form (but not vice-versa).


So if we reuse EllipticCurve (and related classes), we could map the 
parameters onto Montgomery curve coefficients. For example interpret "a" 
as the second-degree coefficient instead of the first-degree 
coefficient, and ignore "b". But we have the problem that the programmer 
may not know when the parameters will be interpreted as Weierstrass 
coefficients instead of Montgomery coefficients. I am particularly 
concerned about this because these parameters were always interpreted as 
Weierstrass coefficients in the past.
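
To make the concern concrete, here is what such a reuse might look like. 
This is a hypothetical mapping, not an actual API: "a" is pressed into 
service as the Montgomery second-degree coefficient and "b" is set to zero.

```java
import java.math.BigInteger;
import java.security.spec.ECFieldFp;
import java.security.spec.EllipticCurve;

public class MontgomeryReuse {
    public static void main(String[] args) {
        // Curve25519: y^2 = x^3 + 486662*x^2 + x over GF(2^255 - 19).
        BigInteger p = BigInteger.valueOf(2).pow(255).subtract(BigInteger.valueOf(19));
        EllipticCurve curve25519 = new EllipticCurve(
                new ECFieldFp(p),
                BigInteger.valueOf(486662), // Montgomery "A", NOT Weierstrass "a"
                BigInteger.ZERO);           // "b" has no meaning in Montgomery form
        // Nothing in the type distinguishes this from the Weierstrass curve
        // y^2 = x^3 + 486662*x; a provider could silently misread it.
        System.out.println(curve25519.getA());
    }
}
```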


So we would want a way to tag the objects and check the tags to ensure 
that they are not misused. You suggested making new ECField subclasses 
for Montgomery/Edwards curves. The field used in RFC 7748/8032 is GF(p), 
which corresponds to the existing class ECFieldFp. So it seems strange 
and surprising to use this member to identify how coefficients should be 
interpreted, because this has nothing to do with the field. Though I can 
see why this approach is appealing, because the field is the only part 
of EllipticCurve that was designed to be extensible. If the coefficients 
(and their interpretation) were similarly extensible, then we wouldn't 
have these problems.
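
For reference, the ECField-subclass tagging suggestion would look something 
like the following hypothetical class, shown only to illustrate the 
objection: the field is still just GF(p), but the subclass is overloaded to 
signal how the coefficients should be read.

```java
import java.math.BigInteger;
import java.security.spec.ECFieldFp;

// Hypothetical tag class (not in JCA): same field arithmetic as ECFieldFp,
// but its presence is meant to mean "interpret the EllipticCurve
// coefficients as Montgomery form".
class MontgomeryFieldFp extends ECFieldFp {
    MontgomeryFieldFp(BigInteger p) {
        super(p);
    }
}

public class TagDemo {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(2).pow(255).subtract(BigInteger.valueOf(19));
        ECFieldFp field = new MontgomeryFieldFp(p);
        // Providers would have to instanceof-check the field to learn how to
        // read EllipticCurve.getA()/getB() - nothing enforces it.
        System.out.println(field instanceof MontgomeryFieldFp);
    }
}
```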


In short: I'm not sure that reusing the existing EC classes is a good 
idea, because they were intended for something else, they are not 
general enough, and the potential for misuse/confusion is high.





Re: JCA design for RFC 7748

2017-08-08 Thread Xuelei Fan

On 8/8/2017 8:45 AM, Anders Rundgren wrote:

On 2017-08-08 17:25, Adam Petcher wrote:


It sounds like what you are saying is
that I will need something like XDHPublicKey and XDHPrivateKey in
java.security.interfaces. Can you tell me why? What is it that we can't
do without these interfaces?


Every JOSE Java library I have seen constructs and deconstructs RSA and 
EC keys
based on JWK definitions.  Maybe we don't need XDH keys but it would be 
nice to hear what the solution would be without such.

Then there's a lot of stuff out there like this which also needs some
explanations on how to enhance with RFC7748 on board:

Object myOwnEncrypt(PublicKey publicKey) throws SecurityException {
    if (publicKey instanceof RSAKey) {
        // RSA
    } else {
        // It should be EC
    }
}

The code above is not reliable unless one understands the underlying 
JCA/JCE provider behavior exactly this way.  For a certain provider, an 
RSA key may not be an instance of RSAKey.  I would use 
key.getAlgorithm() instead.


Xuelei


CC:ing the creator of OKP keys.

https://tools.ietf.org/html/rfc8037#section-2

Anders


Re: JCA design for RFC 7748

2017-08-08 Thread Adam Petcher

On 8/8/2017 11:45 AM, Anders Rundgren wrote:


On 2017-08-08 17:25, Adam Petcher wrote:


It sounds like what you are saying is
that I will need something like XDHPublicKey and XDHPrivateKey in
java.security.interfaces. Can you tell me why? What is it that we can't
do without these interfaces?


Every JOSE Java library I have seen constructs and deconstructs RSA 
and EC keys
based on JWK definitions.  Maybe we don't need XDH keys but it would 
be nice to hear what the solution would be without such.


Of course, you could get the X.509 or PKCS#8 encoding of the key, and 
then convert that to JWK (I think all the information you need is in the 
algorithm ID), but that is not a very good solution. So perhaps it would 
be helpful to have an interface that exposes the key array along with 
its associated algorithm name and parameters. This would be similar to 
your OKP interface, but we can make it a bit more general so that it can 
be reused by other algorithms. We could also add interfaces for RFC 7748 
public keys and private keys, but they wouldn't have any additional 
information, and I still don't know if they are needed.




Then there's a lot of stuff out there like this which also needs some
explanations on how to enhance with RFC7748 on board:

Object myOwnEncrypt(PublicKey publicKey) throws SecurityException {
    if (publicKey instanceof RSAKey) {
        // RSA
    } else {
        // It should be EC
    }
}


Like before, there is a (not very good) solution using X.509 encoding, 
and a better solution involving a new interface that provides the 
required information. Something like:


Object myOwnEncrypt(PublicKey publicKey) throws SecurityException {
    if (publicKey instanceof RSAKey) {
        // RSA
    } else if (publicKey instanceof ECKey) {
        // EC
    } else if (publicKey instanceof ByteArrayKey) {
        ByteArrayKey baKey = (ByteArrayKey) publicKey;
        if (baKey.getAlgorithm().equals("XDH")) {
            // RFC 7748
        }
    }
}

The ByteArrayKey would also hold a parameter spec that can be used to 
determine which curve is associated with the key. Would something like 
this work?
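
A minimal version of the hypothetical ByteArrayKey sketched above might look 
like this (names are placeholders, not an existing JCA type):

```java
import java.security.Key;
import java.security.spec.AlgorithmParameterSpec;

// Hypothetical interface for keys whose content is an opaque byte string
// plus a parameter spec identifying the curve (e.g. "X25519").
interface ByteArrayKey extends Key {
    byte[] getKeyBytes();               // defensive copy of the encoded number
    AlgorithmParameterSpec getParams(); // e.g. a named-curve parameter spec
}

public class ByteArrayKeyDemo {
    public static void main(String[] args) {
        // A toy implementation, just to show the shape of the interface.
        ByteArrayKey k = new ByteArrayKey() {
            public String getAlgorithm() { return "XDH"; }
            public String getFormat() { return "RAW"; }
            public byte[] getEncoded() { return new byte[32]; }
            public byte[] getKeyBytes() { return getEncoded().clone(); }
            public AlgorithmParameterSpec getParams() { return null; } // placeholder
        };
        System.out.println(k.getAlgorithm());
    }
}
```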





CC:ing the creator of OKP keys.

https://tools.ietf.org/html/rfc8037#section-2

Anders




Re: JCA design for RFC 7748

2017-08-08 Thread Anders Rundgren

On 2017-08-08 17:25, Adam Petcher wrote:


It sounds like what you are saying is
that I will need something like XDHPublicKey and XDHPrivateKey in
java.security.interfaces. Can you tell me why? What is it that we can't
do without these interfaces?


Every JOSE Java library I have seen constructs and deconstructs RSA and EC keys
based on JWK definitions.  Maybe we don't need XDH keys but it would be nice to
hear what the solution would be without such.

Then there's a lot of stuff out there like this which also needs some
explanations on how to enhance with RFC7748 on board:

Object myOwnEncrypt(PublicKey publicKey) throws SecurityException {
    if (publicKey instanceof RSAKey) {
        // RSA
    } else {
        // It should be EC
    }
}

CC:ing the creator of OKP keys.

https://tools.ietf.org/html/rfc8037#section-2

Anders


Re: JCA design for RFC 7748

2017-08-08 Thread Adam Petcher

Thanks for the feedback. See below for my responses.

On 8/7/2017 5:52 PM, Michael StJohns wrote:

On 8/7/2017 4:37 PM, Adam Petcher wrote:
I'm working on the Java implementation of RFC 7748 (Diffie-Hellman 
with X25519 and X448). I know some of you have been anxious to talk 
about how this would fit into JCA, and I appreciate your patience 
while I learned enough about JCA and existing crypto implementations 
to develop this API proposal. This API/design proposal is for RFC 
7748 only, and it does not include the API for RFC 8032 (EdDSA).


So you're expecting yet another set of APIs to cover those as well? 
Rather than one API to cover all of the new curves?  Seems to be a bit 
short sighted.


I'm expecting that we don't need to expose curve parameters/points in 
the API, so we won't need any new API for EdDSA, other than the 
algorithm name. If we decide we need to expose curve parameters, then we 
may want to back up and consider how EdDSA fits into this.




Of course, I expect many of the decisions that we make for RFC 7748 
will also impact RFC 8032.


First off, I think it is important to separate RFC 7748 from the 
existing ECDH API and implementation. RFC 7748 is a different standard,


It's still an elliptic curve.  Note that there is already a worked 
example here - F2m vs Fp curves.



it uses different encodings and algorithms,


From a JCA point of view, the public key gets encoded as a 
SubjectPublicKeyInfo and the private key gets encoded as a PKCS8 - 
that's for lots and lots of compatibility reasons.  The public point 
of the public key might be encoded (inside the SPKI) as a little endian 
OCTET STRING array vs a big endian ASN1 INTEGER, but it's still just 
an integer internally.
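
For illustration, the little-endian-to-integer conversion implied above can 
be sketched as follows (a standalone utility, not tied to any particular 
provider):

```java
import java.math.BigInteger;

public class LittleEndian {
    // RFC 7748 encodes field elements as little-endian byte strings, while
    // ASN.1 INTEGERs are big-endian: reverse the bytes and read them as an
    // unsigned (signum 1) integer.
    static BigInteger toBigInteger(byte[] le) {
        byte[] be = new byte[le.length];
        for (int i = 0; i < le.length; i++) {
            be[i] = le[le.length - 1 - i];  // reverse byte order
        }
        return new BigInteger(1, be);       // treat as unsigned
    }

    public static void main(String[] args) {
        // 0x0102 (= 258) encoded little-endian is {0x02, 0x01}.
        System.out.println(toBigInteger(new byte[] {0x02, 0x01}));
    }
}
```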


The algorithms are Key Agreement and Signature - those are at least 
what JCA will see them as.  The actual 
KeyAgreement.getInstance("name") is of course going to be different 
than KeyAgreement.getInstance("ECDH") for example.




and it has different properties.


Details please?  Or do you mean that you can't use a given type of key 
for both key agreement and signature?


Specifically, RFC 7748 resists (some) side-channel attacks and 
invalid-point attacks. The "ECDH" algorithm in JCA (PKCS#3) does not 
have these properties, so I want to make sure programmers don't get them 
confused. This difference in security properties partially motivates the 
use of a new algorithm name, rather than reusing "ECDH" for RFC 7748.




Further, this separation will reduce the probability of programming 
errors (e.g. accidentally interpreting a Weierstrass point as an RFC 
7748 point).


Um.  What?   It actually won't.


This is the sort of problem I want to avoid:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("ECDH");
KeyPair kp = kpg.generateKeyPair();
KeyFactory eckf = KeyFactory.getInstance("ECDH");
ECPrivateKeySpec priSpec = eckf.getKeySpec(kp.getPrivate(), 
ECPrivateKeySpec.class);


KeyFactory xdhkf = KeyFactory.getInstance("XDH");
PrivateKey xdhPrivate = xdhkf.generatePrivate(priSpec);

// Now use xdhPrivate for key agreement, which uses the wrong algorithm 
and curve, and may leak information about the private key


This sort of thing can happen if we use the existing EC spec classes for 
RFC 7748 (e.g. redefining what "a" and "b" mean in EllipticCurve when 
used with a Montgomery curve, leaving "y" null in ECPoint). Of course, 
we can prevent it by tagging these objects and checking the tags to make 
sure they are used with the correct algorithms, but I would prefer to 
use separate classes (if necessary) and let the type checker do this.


My intention is that errors like the one above are impossible in my 
proposed design. We should be able to accomplish this by only using 
encoded key specs (which already have checking based on OID), or by 
using new classes, if this is necessary (and I hope it is not).





So I propose that we use distinct algorithm names for RFC 7748,

Yes.

and that we don't use any of the existing EC classes like 
EllipticCurve and ECPoint with RFC 7748.
No.   (My opinion but...) It's *hard* to add new meta classes for 
keys.  Just considering the EC stuff you have ECKey, ECPublicKey, 
ECPrivateKey, EllipticCurve, ECPublicKeySpec, ECPrivateKeySpec, 
ECPoint, ECParameterSpec, ECGenParameterSpec, EllipticCurve and 
ECField (with ECFieldF2m and ECFieldFp being the differentiator for 
all of the various keys within this space).




We can achieve this separation without duplicating a lot of code if 
we start with some simplifying assumptions. My goal is to remove 
functionality that nobody needs in order to simplify the design and 
API. If I am simplifying away something that you think you will need, 
please let me know.


There's a difference with what you do with the public API vs what you 
do with the plugin provider.  Throwing away all of the 
"functionality that nobody needs" will probably come back to bite 
those who come later with something that looks *almost* like 

Re: JCA design for RFC 7748

2017-08-08 Thread Anders Rundgren

On 2017-08-07 23:52, Michael StJohns wrote:

On 8/7/2017 4:37 PM, Adam Petcher wrote:



These two assumptions greatly simplify the API. We won't need classes
that mirror ECParameterSpec, EllipticCurve, ECPoint, ECField,
ECPublicKey, etc. for X25519/X448.


That assumption holds only if your various other assumptions hold. My
opinion is that they probably don't.  (BTW - I'm pretty sure, given that
every single asymmetric JCA crypto API takes a PublicKey or PrivateKey
you're going to need to mirror those classes at least; you'll also need
a ParameterSpec and a GenParameterSpec class with whatever underlying
supporting classes are required to deal with KeyFactories)


+1

There are virtually tons of third-party encryption libraries out there using a 
PublicKey
as input argument but internally do things differently depending on if it is an
RSAKey or ECKey.  This is also needed for JSON (JWK) serialization.

Anders
https://github.com/cyberphone/java-cfrg-spec


Re: JCA design for RFC 7748

2017-08-07 Thread Michael StJohns

On 8/7/2017 4:37 PM, Adam Petcher wrote:
I'm working on the Java implementation of RFC 7748 (Diffie-Hellman 
with X25519 and X448). I know some of you have been anxious to talk 
about how this would fit into JCA, and I appreciate your patience 
while I learned enough about JCA and existing crypto implementations 
to develop this API proposal. This API/design proposal is for RFC 7748 
only, and it does not include the API for RFC 8032 (EdDSA).


So you're expecting yet another set of APIs to cover those as well? 
Rather than one API to cover all of the new curves?  Seems to be a bit 
short sighted.


Of course, I expect many of the decisions that we make for RFC 7748 
will also impact RFC 8032.


First off, I think it is important to separate RFC 7748 from the 
existing ECDH API and implementation. RFC 7748 is a different standard,


It's still an elliptic curve.  Note that there is already a worked 
example here - F2m vs Fp curves.



it uses different encodings and algorithms,


From a JCA point of view, the public key gets encoded as a 
SubjectPublicKeyInfo and the private key gets encoded as a PKCS8 - 
that's for lots and lots of compatibility reasons.  The public point of 
the public key might be encoded (inside the SPKI) as a little endian OCTET 
STRING array vs a big endian ASN1 INTEGER, but it's still just an 
integer internally.


The algorithms are Key Agreement and Signature - those are at least what 
JCA will see them as.  The actual KeyAgreement.getInstance("name") is of 
course going to be different than KeyAgreement.getInstance("ECDH") for 
example.




and it has different properties.


Details please?  Or do you mean that you can't use a given type of key 
for both key agreement and signature?


Further, this separation will reduce the probability of programming 
errors (e.g. accidentally interpreting a Weierstrass point as an RFC 
7748 point).


Um.  What?   It actually won't.


So I propose that we use distinct algorithm names for RFC 7748,

Yes.

and that we don't use any of the existing EC classes like 
EllipticCurve and ECPoint with RFC 7748.
No.   (My opinion but...) It's *hard* to add new meta classes for keys.  
Just considering the EC stuff you have ECKey, ECPublicKey, ECPrivateKey, 
EllipticCurve, ECPublicKeySpec, ECPrivateKeySpec, ECPoint, 
ECParameterSpec, ECGenParameterSpec, EllipticCurve and ECField (with 
ECFieldF2m and ECFieldFp being the differentiator for all of the 
various keys within this space).




We can achieve this separation without duplicating a lot of code if we 
start with some simplifying assumptions. My goal is to remove 
functionality that nobody needs in order to simplify the design and 
API. If I am simplifying away something that you think you will need, 
please let me know.


There's a difference with what you do with the public API vs what you do 
with the plugin provider.  Throwing away all of the "functionality 
that nobody needs" will probably come back to bite those who come later 
with something that looks *almost* like what you did, but needs just one 
more parameter than you were kind enough to leave behind.




A) We don't need to expose actual curve parameters over the API. 
Curves can be specified using names (e.g. "X25519") or OIDs. The 
underlying implementation will likely support arbitrary Montgomery 
curves, but the JCA application will only be able to use the supported 
named curves.


Strangely, this hasn't turned out all that well.  There needs to be a 
name, OID in the public space (primarily for the encodings and PKIX 
stuff) and to be honest - you really want the parameters in public space 
as well (the ECParameterSpec and its ilk) so that a given key can be 
used with different providers or even to play around internally with new 
curves before giving them a name.


B) We don't need direct interoperability between different providers 
using opaque key representations. We can communicate with other 
providers using X509/PKCS8 encoding, or by using KeyFactory and key 
specs.
I don't actually understand that statement.  Keys of different providers 
generally don't interoperate anyway, but you can mostly take an 
"encoded" one and create a new one in a new provider via the 
Keyfactory.  KeySpecs provide you with a way of manually building a key 
- and that turns out to be VERY necessary, especially when you're 
dealing with adapting hardware modules to the JCA.




These two assumptions greatly simplify the API. We won't need classes 
that mirror ECParameterSpec, EllipticCurve, ECPoint, ECField, 
ECPublicKey, etc. for X25519/X448.


That assumption holds only if your various other assumptions hold. My 
opinion is that they probably don't.  (BTW - I'm pretty sure, given that 
every single asymmetric JCA crypto API takes a PublicKey or PrivateKey 
you're going to need to mirror those classes at least; you'll also need 
a ParameterSpec and a GenParameterSpec class with whatever underlying 
supporting classes are required to deal with