Re: RFR: JDK-8159544: Remove deprecated classes in com.sun.security.auth.**

2017-08-17 Thread Weijun Wang
Hi Sean

Change looks fine.

And I found another 4 references in comments in 
jdk/src/java.base/share/classes/javax/security/auth/Policy.java.

BTW, do we have a test to show what lines in various Resources.java files are 
used where and if one is useless?

Thanks
Max

> On Aug 18, 2017, at 3:08 AM, Sean Mullan  wrote:
> 
> Please review this JDK 10 change to remove the deprecated classes in 
> com.sun.security.auth.** that have been previously marked with 
> forRemoval=true in JDK 9.
> 
> webrev: http://cr.openjdk.java.net/~mullan/webrevs/8159544/
> 
> I have also copied Jan for reviewing a change in langtools, and also 
> build-dev for a change to one of the JDK Makefiles.
> 
> Thanks,
> Sean



Re: JCA design for RFC 7748

2017-08-17 Thread Xuelei Fan

On 8/17/2017 11:35 AM, Michael StJohns wrote:

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, 
old-provider-and-new-provider-wise, than is currently the case.  If you 
want the new curves, you have to specify the new providers.  If the 
new and old providers don't implement the same curves, you may need 
to deal with two different providers simultaneously - and that's not 
something that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but with more work) 
to find appropriate providers for appropriate signature/key agreement 
algorithms and curves.


I'm not sure how this applies to the compatibility impact concern.  See 
more in the example below.







I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that actually 
shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or not 
it's possible to integrate new EC curves?





This is an interesting bug.  At first, it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 
when the SunMSCAPI provider is present.  Later, someone found the code 
unusual: as SHA224 and the related signature algorithms are supported 
by the underlying providers, there looked to be no reason to limit the 
use of SHA224.  So SHA224 was added back, and the compatibility issues 
came back again.  Then we removed SHA224 again when the SunMSCAPI 
provider is present.  However, at the same time, another request asked 
to support SHA224 on Windows.  The API design itself put me in an 
either-or situation.  I would try to avoid that if possible in a new design.
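
A minimal sketch of the capability check at the heart of that either-or 
(the algorithm name and the decision logic are illustrative, not the 
actual JSSE code):

import java.security.Provider;
import java.security.Security;

public class Sha224Check {
    public static void main(String[] args) {
        // Does any installed provider implement a SHA224 signature?
        Provider[] withSha224 =
                Security.getProviders("Signature.SHA224withRSA");
        boolean mscapiPresent = Security.getProvider("SunMSCAPI") != null;
        // The either-or: advertise SHA224 and break connections that land
        // on SunMSCAPI keys, or suppress it globally and penalize the
        // providers that do support it.
        boolean enableSha224 = (withSha224 != null) && !mscapiPresent;
        System.out.println("SHA224 enabled: " + enableSha224);
    }
}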


This appears to be an MSCAPI issue vice a JSSE issue.

MSCAPI is fine, as it does not support SHA224.  JSSE is then in a bad 
position because it cannot support SHA224 in a general way when one of 
the underlying providers supports SHA224 but another one does not.


And the JCA 
specifically disclaims the guaranteed ability to use cryptographic 
objects from one provider in another provider.

It's not the real problem.  The real problem is that there is a provider 
that supports the requested algorithms, so why not use it?  We can say 
the spec does not guarantee the behavior, but the application still has 
a problem.  We can also say we have a design flaw, but the question is 
still there.


Secondary users like 
the JSSE probably need to stick to a single provider for a given connection.

Sticking to a single provider will open other windows for different 
problems.  The JCE spec does not guarantee that one provider can 
implement all services.





Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.


I would like to treat it as two things.  One is to support new curves 
for new forms.  The other is to support named curves [1].  For the 
support of new forms, there are still significant problems to solve. 
For the support of named curves (including the current EC form), it 
looks like we are in a not-that-bad situation right now.  Will the named 
curves solution impact the support of new-curve APIs in the future?  
I don't see the impact yet.  I may be missing something, but I see no 
reason to opt out of the named curves support.




I'm not sure why you think this (the example in [1]) can't be done?

I gave the example elsewhere, but let me expand it with my comment above 
(possibly faking the hash key names - sorry):



HashMap<String, String> neededAlgs = new HashMap<>();
neededAlgs.put("Signature.EdDSA", "");
neededAlgs.put("AlgorithmParameters.EC SupportedCurves", "ed25519");

Provider[] p = Security.getProviders(neededAlgs);
if (p == null) throw new Exception("Oops");

AlgorithmParameters parameters =
    AlgorithmParameters.getInstance("EC", p[0]);
parameters.init(new ECGenParameterSpec("ed25519"));
ECParameterSpec ecParameters =
    parameters.getParameterSpec(ECParameterSpec.class);

return KeyFactory.getInstance("EC", p[0]).generatePublic(
    new ECPublicKeySpec(new ECPoint(x, y), ecParameters));



Hm, good example!

But it is really too heavyweight to use for general application development.

In the example, two crypto operations ("EC" AlgorithmParameters and "EC" 

RFR: JDK-8159544: Remove deprecated classes in com.sun.security.auth.**

2017-08-17 Thread Sean Mullan
Please review this JDK 10 change to remove the deprecated classes in 
com.sun.security.auth.** that have been previously marked with 
forRemoval=true in JDK 9.


webrev: http://cr.openjdk.java.net/~mullan/webrevs/8159544/

I have also copied Jan for reviewing a change in langtools, and also 
build-dev for a change to one of the JDK Makefiles.


Thanks,
Sean


Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, 
old-provider-and-new-provider-wise, than is currently the case.  If you 
want the new curves, you have to specify the new providers.  If the 
new and old providers don't implement the same curves, you may need 
to deal with two different providers simultaneously - and that's not 
something that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but with more work) 
to find appropriate providers for appropriate signature/key agreement 
algorithms and curves.






I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that actually 
shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or not 
it's possible to integrate new EC curves?





This is an interesting bug.  At first, it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 
when the SunMSCAPI provider is present.  Later, someone found the code 
unusual: as SHA224 and the related signature algorithms are supported 
by the underlying providers, there looked to be no reason to limit the 
use of SHA224.  So SHA224 was added back, and the compatibility issues 
came back again.  Then we removed SHA224 again when the SunMSCAPI 
provider is present.  However, at the same time, another request asked 
to support SHA224 on Windows.  The API design itself put me in an 
either-or situation.  I would try to avoid that if possible in a new design.


This appears to be an MSCAPI issue vice a JSSE issue.  And the JCA 
specifically disclaims the guaranteed ability to use cryptographic 
objects from one provider in another provider.   Secondary users like 
the JSSE probably need to stick to a single provider for a given connection.





Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.


I would like to treat it as two things.  One is to support new curves 
for new forms.  The other is to support named curves [1].  For the 
support of new forms, there are still significant problems to solve. 
For the support of named curves (including the current EC form), it 
looks like we are in a not-that-bad situation right now.  Will the named 
curves solution impact the support of new-curve APIs in the future?  
I don't see the impact yet.  I may be missing something, but I see no 
reason to opt out of the named curves support.




I'm not sure why you think this (the example in [1]) can't be done?

I gave the example elsewhere, but let me expand it with my comment above 
(possibly faking the hash key names - sorry):



HashMap<String, String> neededAlgs = new HashMap<>();
neededAlgs.put("Signature.EdDSA", "");
neededAlgs.put("AlgorithmParameters.EC SupportedCurves", "ed25519");

Provider[] p = Security.getProviders(neededAlgs);
if (p == null) throw new Exception("Oops");

AlgorithmParameters parameters =
    AlgorithmParameters.getInstance("EC", p[0]);
parameters.init(new ECGenParameterSpec("ed25519"));
ECParameterSpec ecParameters =
    parameters.getParameterSpec(ECParameterSpec.class);

return KeyFactory.getInstance("EC", p[0]).generatePublic(
    new ECPublicKeySpec(new ECPoint(x, y), ecParameters));


If you're talking more generally, NamedCurves should be a form of 
ECParameterSpec so you can read the name from the key, but there's no 
support for adding names to that spec.  Maybe extend it?  E.g.:


package java.security.spec;

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;

public class NamedECParameterSpec extends ECParameterSpec {

    private Collection<String> names;
    private Collection<String> oids;

    public NamedECParameterSpec(EllipticCurve curve, ECPoint g,
            BigInteger n, int h,
            Collection<String> names, Collection<String> oids) {
        super(curve, g, n, h);
        if (names != null) {
            this.names = new ArrayList<>(names);
        }
        if (oids != null) {
            this.oids = new ArrayList<>(oids);
        }
    }

    public Collection<String> getNames() {
        if (names == null)
            return Collections.emptyList();
        else
            return Collections.unmodifiableCollection(names);
    }

etc

   This makes it easier to get exactly what you want from a key. 
Assuming the provider 
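
A hedged usage sketch of the NamedECParameterSpec idea above (the 
ecPublicKey variable and the curve name are hypothetical):

// Code that receives an ECKey could select by name instead of
// comparing domain parameters field by field.
ECParameterSpec spec = ecPublicKey.getParams();
if (spec instanceof NamedECParameterSpec
        && ((NamedECParameterSpec) spec).getNames().contains("ed25519")) {
    // pick a provider/algorithm known to handle this curve
}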

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:

On 8/16/2017 3:17 PM, Michael StJohns wrote:


On 8/16/2017 11:18 AM, Adam Petcher wrote:


My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate 
representation for the algorithms involved (so there isn't a lot of 
unnecessary conversion). This is the case for public/private keys in 
RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)."  (ENC is a function from integers to 
little-endian bit strings.)


Oops, minor correction. Here A is a point, so ENC is a function from 
points to little-endian bit strings.


2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A = X25519(a, 9) to Bob..." The X25519 and X448 
functions, as described in the RFC, take bit strings as input and 
produce bit strings as output.


Thanks for making my point for me.  The internal representation of 
the public point is an integer.  It's only when encoding or decoding 
that it gets externally represented as an array of bytes.  (And yes, 
I understand that the RFC defines an algorithm using little-endian 
byte array representations of the integers - but that's the 
implementation's call, not the API's.)

With respect to the output of the KeyAgreement algorithm - your (2) 
above - the transmission representation (e.g. the encoded public key) 
is a little-endian byte array representation of an integer.  The 
internal representation is - wait for it - an integer.


I have no problems at all with any given implementation using 
little-endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary.  Implement the conversion methods 
for the factory and for dealing with the existing EC classes.  Maybe 
get BigInteger extended to handle (natively) little-endian 
representation (as well as the fixed-length outputs necessary for things 
like ECDH).
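
A minimal sketch of the little-endian conversion being discussed - the 
RFC 7748 wire format is a little-endian byte string, while BigInteger's 
byte-array constructor assumes big-endian (class and method names are 
illustrative):

import java.math.BigInteger;

public final class LittleEndian {

    // Decode a little-endian byte string (e.g. an X25519 public key)
    // into a non-negative BigInteger.
    static BigInteger decode(byte[] le) {
        byte[] be = new byte[le.length];
        for (int i = 0; i < le.length; i++) {
            be[i] = le[le.length - 1 - i];    // reverse to big-endian
        }
        return new BigInteger(1, be);         // signum 1: always positive
    }

    // Encode a non-negative BigInteger as a fixed-length little-endian
    // string (32 bytes for curve25519), as the wire format requires.
    static byte[] encode(BigInteger v, int len) {
        byte[] be = v.toByteArray();          // big-endian, minimal length
        byte[] le = new byte[len];            // high bytes stay zero
        for (int i = 0; i < len && i < be.length; i++) {
            le[i] = be[be.length - 1 - i];
        }
        return le;
    }
}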




All good points, and I think BigInteger may be a reasonable 
representation to use for public/private key values. I'm just not sure 
that it is better than byte arrays. I'll share some relevant 
information that affects this decision.


First off, one of the goals of RFC 7748 and 8032 is to address some of 
the implementation challenges related to ECC. These algorithms are 
designed to eliminate the need for checks at various stages, and to 
generally make implementation bugs less likely. These improvements are 
motivated by all the ECC implementation bugs that have emerged in the 
last ~20 years. I mention this because I think it is important that we 
choose an API and implementation that allows us to benefit from these 
improvements in the standards. That means we shouldn't necessarily 
follow all the existing ECC patterns in the API and implementation.


No - it means that the authors of the RFCs have a bias to have their 
code be the only code.  As I note below, I don't actually think they got 
everything right.  The underlying math is really what matters, and the 
API should be able to handle any implementation that gets the math correct.




Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 1 
to p-1 for the field (with some additional pruning).  32-byte strings 
(or 256-bit integers) do not map 1-1 into that space.  E.g. there are 
some actual canonical keys to which multiple (at least 2) 32-byte strings 
map.  (See the pruning and clamping algorithms.)  The NIST private key 
generation for EC private keys mitigates this bias by either 
(a) repeatedly generating random keys until you get one in the range, or 
(b) generating a key stream with extra (64) bits and reducing that mod 
the order of the curve.
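
A hedged sketch of the two mitigations described above, in the style of 
FIPS 186-4 key-pair generation (class and method names are illustrative; 
n is the curve's group order):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ScalarGen {

    // (a) Rejection sampling: retry until the candidate falls in [1, n-1].
    static BigInteger byRejection(BigInteger n, SecureRandom rng) {
        BigInteger d;
        do {
            d = new BigInteger(n.bitLength(), rng); // uniform in [0, 2^bits)
        } while (d.signum() == 0 || d.compareTo(n) >= 0);
        return d;
    }

    // (b) Draw 64 extra bits, then reduce; the surplus bits make the
    // modulo bias negligible.
    static BigInteger byExtraBits(BigInteger n, SecureRandom rng) {
        BigInteger wide = new BigInteger(n.bitLength() + 64, rng);
        return wide.mod(n.subtract(BigInteger.ONE)).add(BigInteger.ONE);
    }
}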



If we use representations other than byte strings in the API, then we 
should ensure that our representations have the same properties (e.g. 
every BigInteger is a valid public key).


It's best to talk about each type on its own. Of course, one of the 
benefits of using bit strings is that we may have the option of using 
the same class/interface in the API to hold all of these.


RFC 7748 public keys: I think we can reasonably use BigInteger to hold 
public key values. One minor issue is that we need to specify how 
implementations should handle non-canonical values (numbers that are 
less than 0 or greater than p-1). This does not seem like a huge 
issue, though, and the existing ECC API has the same issue. Another 
minor issue is that modeling this as a BigInteger may encourage 
implementations to use BigInteger in the RFC 7748 Montgomery 

Re: JCA design for RFC 7748

2017-08-17 Thread Xuelei Fan

On 8/17/2017 8:25 AM, Michael StJohns wrote:

On 8/16/2017 12:31 PM, Xuelei Fan wrote:

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", and could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve 
in the algorithm characteristic, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default 
contract for public keys is X.509 and the default for private keys 
is PKCS#8.  Almost all uses of the encoded formats are related to 
PKIX functions.  (See the javadoc for PublicKey for details.)


I'm concerned about this, too.  Ideally, we want PKI code to handle 
these new keys without modification.  The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.

I have not reached a conclusion, but I was wondering: if almost all uses 
of the encoded formats are related to PKIX functions, is it still a 
priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how 
to *simplify* the translation between formats (Raw, PKIX, XML and 
JSON) in applications.  I don't worry about the translation too much, 
as it is doable with our current proposal - just not as 
straightforward as exposing the RAW public key.


If we want to simplify the translation, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define a 
new key spec.  The encoded form for X.509 is as follows:


 SubjectPublicKeyInfo ::= SEQUENCE {
     algorithm         AlgorithmIdentifier,
     subjectPublicKey  BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.
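
A hedged illustration of such a key spec: for the new-curve algorithms 
the DER prefix of the SubjectPublicKeyInfo is fixed, so the raw form is 
just the trailing subjectPublicKey bytes.  This assumes a well-formed 
X25519 SPKI with a 32-byte key; a real implementation would parse the 
BIT STRING properly:

import java.security.PublicKey;
import java.util.Arrays;

public class RawKey {
    // Assumes an X25519 SubjectPublicKeyInfo with the raw key in the
    // last 32 bytes of the DER encoding.
    static byte[] rawSubjectPublicKey(PublicKey key) {
        byte[] spki = key.getEncoded();  // "X.509" format: SPKI DER
        return Arrays.copyOfRange(spki, spki.length - 32, spki.length);
    }
}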


However, I'm not very sure how commonly the translation is required, 
and whether it rises to the threshold of being a public API in the JRE.


There's a proposed format for PKIX keys on the new curves - the 
IETF CURDLE working group is working through that.  JCA should use that 
and not invent something new.






>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if it's handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new provider 
together at the same time.  The JDK is a multiple-providers-coexisting 
environment.  We cannot guarantee that old code only uses old curves (old 
providers) and new code only uses new curves (new providers).


I did see this email and commented on it.  I think you're still missing 
the point that not every provider implements (or is required to 
implement) every curve.  And in fact I argued that it was somewhat 
stupid that the underlying C code in the SunEC provider was at odds with 
the code on the SunEC Java side with respect to known curves and how 
they were handled.  (Subject was "RFR 8182999: SunEC throws 
ProviderException on invalid curves".)



Consider - currently, if I don't specify a provider, and there are two 
providers, and the lower-priority provider implements "curveFoobar" but 
the higher-priority provider does not, and I use an ECGenParameterSpec 
of "curveFoobar" to try and generate a key, I get a failure, because I 
got the higher-priority provider (which gets selected because JCA 
doesn't yet know that I want curveFoobar) and it doesn't do that curve.  
I need to specify the same provider throughout the entire process to make 
sure things work as expected, AND I need to specify the provider that 
implements the curve I want.
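
A hedged sketch of that failure mode and the workaround ("curveFoobar" 
and the provider name are hypothetical):

import java.security.KeyPairGenerator;
import java.security.spec.ECGenParameterSpec;

public class CurveSelection {
    public static void main(String[] args) throws Exception {
        // Without naming a provider, JCA binds to the highest-priority
        // "EC" implementation before it knows which curve is wanted...
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        // ...so this throws InvalidAlgorithmParameterException if that
        // provider lacks the curve, even though a lower-priority
        // provider supports it.
        kpg.initialize(new ECGenParameterSpec("curveFoobar"));

        // Workaround: pin the provider that implements the curve, and
        // use it for every step of the process.
        KeyPairGenerator pinned =
                KeyPairGenerator.getInstance("EC", "ProviderWithFoobar");
        pinned.initialize(new ECGenParameterSpec("curveFoobar"));
    }
}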



This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, 
old-provider-and-new-provider-wise, than is currently the case.  If you want 
the new curves, you have to specify the new providers.  If the new and 
old providers don't implement the same curves, you may need to deal with 
two different providers simultaneously - and that's not something that 
just happens.


I see your points.  Not binding to a provider causes problems; binding to 
a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.



I don't think your concerns are valid.  I may still be missing something 
here - but would ask for a real-world example that actually shows breakage.



I happened to have a real-world example.  See
   https://bugs.openjdk.java.net/browse/JDK-8064330

This is an 

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/16/2017 12:31 PM, Xuelei Fan wrote:

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", and could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve 
in the algorithm characteristic, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default 
contract for public keys is X.509 and the default for private keys 
is PKCS#8.  Almost all uses of the encoded formats are related to 
PKIX functions.  (See the javadoc for PublicKey for details.)


I'm concerned about this, too.  Ideally, we want PKI code to handle 
these new keys without modification.  The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.

I have not reached a conclusion, but I was wondering: if almost all uses 
of the encoded formats are related to PKIX functions, is it still a 
priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how 
to *simplify* the translation between formats (Raw, PKIX, XML and 
JSON) in applications.  I don't worry about the translation too much, 
as it is doable with our current proposal - just not as 
straightforward as exposing the RAW public key.


If we want to simplify the translation, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define a 
new key spec.  The encoded form for X.509 is as follows:


 SubjectPublicKeyInfo ::= SEQUENCE {
     algorithm         AlgorithmIdentifier,
     subjectPublicKey  BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.


However, I'm not very sure how commonly the translation is required, 
and whether it rises to the threshold of being a public API in the JRE.


There's a proposed format for PKIX keys on the new curves - the 
IETF CURDLE working group is working through that.  JCA should use that 
and not invent something new.






>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if it's handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new provider 
together at the same time.  The JDK is a multiple-providers-coexisting 
environment.  We cannot guarantee that old code only uses old curves (old 
providers) and new code only uses new curves (new providers).


I did see this email and commented on it.  I think you're still missing 
the point that not every provider implements (or is required to 
implement) every curve.  And in fact I argued that it was somewhat 
stupid that the underlying C code in the SunEC provider was at odds with 
the code on the SunEC Java side with respect to known curves and how 
they were handled.  (Subject was "RFR 8182999: SunEC throws 
ProviderException on invalid curves".)



Consider - currently, if I don't specify a provider, and there are two 
providers, and the lower-priority provider implements "curveFoobar" but 
the higher-priority provider does not, and I use an ECGenParameterSpec 
of "curveFoobar" to try and generate a key, I get a failure, because I 
got the higher-priority provider (which gets selected because JCA 
doesn't yet know that I want curveFoobar) and it doesn't do that curve.  
I need to specify the same provider throughout the entire process to 
make sure things work as expected, AND I need to specify the provider 
that implements the curve I want.



This is the same for ANY current publicly known curve - different 
providers may implement all, some, or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different, 
old-provider-and-new-provider-wise, than is currently the case.  If you want 
the new curves, you have to specify the new providers.  If the new and 
old providers don't implement the same curves, you may need to deal with 
two different providers simultaneously - and that's not something that 
just happens.


I don't think your concerns are valid.  I may still be missing something 
here - but would ask for a real-world example that actually shows breakage.


Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.



Mike







There is an example in my August 10 reply:

http://mail.openjdk.java.net/pipermail/security-dev/2017-August/016199.html 



Xuelei





Re: JCA design for RFC 7748

2017-08-17 Thread Adam Petcher

On 8/16/2017 3:17 PM, Michael StJohns wrote:


On 8/16/2017 11:18 AM, Adam Petcher wrote:


My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate 
representation for the algorithms involved (so there isn't a lot of 
unnecessary conversion). This is the case for public/private keys in 
RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)."  (ENC is a function from integers to 
little-endian bit strings.)


Oops, minor correction. Here A is a point, so ENC is a function from 
points to little-endian bit strings.


2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A = X25519(a, 9) to Bob..." The X25519 and X448 functions, 
as described in the RFC, take bit strings as input and produce bit 
strings as output.


Thanks for making my point for me.  The internal representation of the 
public point is an integer.  It's only when encoding or decoding that 
it gets externally represented as an array of bytes.  (And yes, I 
understand that the RFC defines an algorithm using little-endian byte 
array representations of the integers - but that's the 
implementation's call, not the API's.)

With respect to the output of the KeyAgreement algorithm - your (2) 
above - the transmission representation (e.g. the encoded public key) 
is a little-endian byte array representation of an integer.  The 
internal representation is - wait for it - an integer.


I have no problems at all with any given implementation using 
little-endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary.  Implement the conversion methods for 
the factory and for dealing with the existing EC classes.  Maybe get 
BigInteger extended to handle (natively) little-endian 
representation (as well as the fixed-length outputs necessary for things 
like ECDH).




All good points, and I think BigInteger may be a reasonable 
representation to use for public/private key values. I'm just not sure 
that it is better than byte arrays. I'll share some relevant information 
that affects this decision.


First off, one of the goals of RFC 7748 and 8032 is to address some of 
the implementation challenges related to ECC. These algorithms are 
designed to eliminate the need for checks at various stages, and to 
generally make implementation bugs less likely. These improvements are 
motivated by all the ECC implementation bugs that have emerged in the 
last ~20 years. I mention this because I think it is important that we 
choose an API and implementation that allows us to benefit from these 
improvements in the standards. That means we shouldn't necessarily 
follow all the existing ECC patterns in the API and implementation.


Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1] If we use 
representations other than byte strings in the API, then we should 
ensure that our representations have the same properties (e.g. every 
BigInteger is a valid public key).


It's best to talk about each type on its own. Of course, one of the 
benefits of using bit strings is that we may have the option of using 
the same class/interface in the API to hold all of these.


RFC 7748 public keys: I think we can reasonably use BigInteger to hold 
public key values. One minor issue is that we need to specify how 
implementations should handle non-canonical values (numbers that are 
less than 0 or greater than p-1). This does not seem like a huge issue, 
though, and the existing ECC API has the same issue. Another minor issue 
is that modeling this as a BigInteger may encourage implementations to 
use BigInteger in the RFC 7748 Montgomery ladder. This would be 
unfortunate because it would leak sensitive information through timing 
channels.


RFC 7748 private keys: This one is a bit more difficult. RFC 7748 
defines a "clamping" operation that ensures that the integers 
corresponding to bit strings have certain properties (e.g. they are a 
multiple of the cofactor). So if we use BigInteger for private keys in 
the API, we need to specify whether the value is clamped or unclamped. 
If an unclamped value is treated as clamped, then this can result in 
security and correctness issues. Also, the RFC treats private keys as 
bit strings---they are not used in any integer operations. So modeling 
them with byte arrays seems just as valid as modeling them with BigInteger.
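
For reference, a minimal sketch of the clamping operation from RFC 7748, 
Section 5 (decodeScalar25519), applied to a 32-byte little-endian 
private key string:

// Clamp an X25519 private key per RFC 7748.
static byte[] clampX25519(byte[] k) {
    byte[] c = k.clone();
    c[0]  &= (byte) 0xF8;   // clear the low 3 bits: multiple of cofactor 8
    c[31] &= (byte) 0x7F;   // clear the top bit
    c[31] |= (byte) 0x40;   // set the second-highest bit
    return c;
}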


RFC 8032 public keys: The analysis here is similar to RFC 7748 public 
keys, except we also need to store the (probably compressed) x 
coordinate. So if we don't use byte arrays, we would need to use 
something