On 8/16/2017 11:18 AM, Adam Petcher wrote:
On 8/15/2017 7:05 PM, Michael StJohns wrote:
On 8/15/2017 1:43 PM, Xuelei Fan wrote:
On 8/11/2017 7:57 AM, Adam Petcher wrote:

I'm also coming to the conclusion that using X.509 encoding for this sort of interoperability is too onerous, and we should come up with something better. Maybe we should add a new general-purpose interface that exposes some structure in an algorithm-independent way. Something like this:

package java.security.interfaces;

import java.security.spec.AlgorithmParameterSpec;

public interface ByteArrayValue {
    String getAlgorithm();
    AlgorithmParameterSpec getParams();
    byte[] getValue();
}
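
To illustrate how a key class might expose its raw material through this interface, here is a minimal sketch (the class and parameter-spec names below are hypothetical, purely for illustration, not part of any proposal in this thread):

import java.security.spec.AlgorithmParameterSpec;

// Hypothetical parameter spec that carries only a curve name.
final class NamedCurveSpec implements AlgorithmParameterSpec {
    private final String name;
    NamedCurveSpec(String name) { this.name = name; }
    String getName() { return name; }
}

// Hypothetical X25519 public key exposing its raw u-coordinate.
final class XDHRawPublicKey implements ByteArrayValue {
    private final NamedCurveSpec params;
    private final byte[] u;   // little-endian u-coordinate, per RFC 7748

    XDHRawPublicKey(String curve, byte[] u) {
        this.params = new NamedCurveSpec(curve);
        this.u = u.clone();
    }

    @Override public String getAlgorithm() { return params.getName(); }
    @Override public AlgorithmParameterSpec getParams() { return params; }
    @Override public byte[] getValue() { return u.clone(); }
}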

I'm not sure how to use the above interface in an application.

This is sort of the moral equivalent of using the TXT RR in DNS, and the arguments are similar.

This is a bad idea.

I'm not a DNS expert, so I apologize in advance if I misunderstood your argument. What I think you are saying is that it is bad to store something in a string or byte array which has no semantics, and it is better to store information in a more structured way that has a clear semantics. I agree with this.

My intention with this ByteArrayValue is to only use it for information that has a clear semantics when represented as a byte array, and a byte array is a convenient and appropriate representation for the algorithms involved (so there isn't a lot of unnecessary conversion). This is the case for public/private keys in RFC 7748/8032:

1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA public key is ENC(A)." (ENC is a function from integers to little-endian bit strings.)

2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and transmits K_A = X25519(a, 9) to Bob..." The X25519 and X448 functions, as described in the RFC, take bit strings as input and produce bit strings as output.
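
As a sketch of what those conversions look like, assuming BigInteger is the internal integer type (the helper names here are mine, not from either RFC):

import java.math.BigInteger;

final class LittleEndian {

    // ENC: non-negative integer -> little-endian byte string of exactly `length` bytes.
    static byte[] encode(BigInteger x, int length) {
        byte[] be = x.toByteArray();   // big-endian, possibly with a leading sign byte
        byte[] out = new byte[length];
        int start = Math.max(0, be.length - length);   // skip the sign byte if present
        for (int i = be.length - 1, j = 0; i >= start; i--, j++) {
            out[j] = be[i];            // reverse the byte order
        }
        return out;
    }

    // DEC: little-endian byte string -> non-negative integer.
    static BigInteger decode(byte[] b) {
        byte[] be = new byte[b.length + 1];   // leading 0x00 keeps the value non-negative
        for (int i = 0; i < b.length; i++) {
            be[be.length - 1 - i] = b[i];
        }
        return new BigInteger(be);
    }
}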

Thanks for making my point for me. The internal representation of the public point is an integer. It's only when encoding or decoding that it gets externally represented as an array of bytes. (And yes, I understand that the RFC defines an algorithm using little-endian byte-array representations of the integers - but that's the implementation's call, not the API's.)

With respect to the output of the KeyAgreement algorithm - your (2) above - the transmission representation (e.g. the encoded public key) is a little-endian byte-array representation of an integer. The internal representation is - wait for it - an integer.

I have no problems at all with any given implementation using little-endian math internally. For the purposes of using the JCA, stick with BigInteger to represent your integers. Use your provider's encoding methods to translate between the internal math representation and the external bits if necessary. Implement the conversion methods for the factory and for dealing with the existing EC classes. Maybe get BigInteger extended to handle little-endian representation natively (as well as the fixed-length outputs necessary for things like ECDH).
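
The fixed-length output Mike mentions has the same wrinkle even in the big-endian case: BigInteger.toByteArray() can add a sign byte or drop leading zeros, so a helper along these lines (the name is mine) is needed:

import java.math.BigInteger;

final class FixedLength {

    // Big-endian encoding padded to exactly `length` bytes, e.g. for an
    // ECDH shared secret or a fixed-length curve coordinate.
    static byte[] toBigEndian(BigInteger x, int length) {
        byte[] be = x.toByteArray();   // may carry an extra 0x00 sign byte
        byte[] out = new byte[length];
        int copy = Math.min(be.length, length);
        // copy the least-significant bytes, right-aligned; high bytes stay zero
        System.arraycopy(be, be.length - copy, out, length - copy, copy);
        return out;
    }
}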

So I think that a byte array is the correct representation for public/private keys in these two RFCs.

Nope.

I don't worry about this issue any more. At present, each java.security.Key has three characteristics (see the API javadoc):
. an algorithm
. an encoded form
. a format

The format could be "X.509", or it could be "RAW" (like ByteArrayValue.getValue()). I would suggest putting the named curve in the algorithm name and using "RAW" as the encoding format.
If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
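
A minimal sketch of a key along those lines (the class name and curve are illustrative only, not a concrete proposal from this thread):

import java.security.PublicKey;

// Named curve as the algorithm, raw RFC 7748 byte string as the encoding.
final class RawX25519PublicKey implements PublicKey {
    private final byte[] u;   // 32-byte little-endian u-coordinate

    RawX25519PublicKey(byte[] u) { this.u = u.clone(); }

    @Override public String getAlgorithm() { return "X25519"; }
    @Override public String getFormat()    { return "RAW"; }
    @Override public byte[] getEncoded()   { return u.clone(); }
}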
Um... I think that doesn't make a lot of sense. The default contract for public keys is X.509, and the default for private keys is PKCS#8. Almost all uses of the encoded formats are related to PKIX functions. (See the javadoc for PublicKey for more info.)

I'm concerned about this, too. Ideally, we want PKI code to handle these new keys without modification. The javadoc wording makes it a little unclear whether public keys *must* use X.509 encoding, but using other encodings for public keys would probably be surprising.

There are two different things going on here. There are many encodings: e.g., for EC public keys there are the fixed-length X and fixed-length Y big-endian big-integer byte arrays, as well as the X9.62 representation covering both compressed and uncompressed points, and then the wrapping of those in SubjectPublicKeyInfo; for signatures there's the raw form vs. the ASN.1 SEQUENCE OF INTEGER version. JCA uses the X.509 stuff quite a bit to deal with all of the certificate path validation and certificate validation things. But the "rawer" stuff such as DH sometimes uses bare points, with the curve being understood by context (see for example the Javacard KeyAgreement classes).
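
For contrast, the standard JCA route on the X.509 side is the existing X509EncodedKeySpec path (the byte array here is assumed to be a DER-encoded SubjectPublicKeyInfo):

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;

final class Spki {
    // Decode a SubjectPublicKeyInfo blob the way PKIX code expects to;
    // the encoding is self-describing, so the curve comes from the blob itself.
    static PublicKey decode(byte[] spki) throws Exception {
        KeyFactory kf = KeyFactory.getInstance("EC");
        return kf.generatePublic(new X509EncodedKeySpec(spki));
    }
}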

In the current case, we still need a way to "sign" and "carry" the new public points and to stay interoperable with all of the "old" stuff. I guess you could come up with a whole new set of ideas in this space, but you would be hated both by the implementers and by the folks who actually had to try and use it.

If you were just going to use this for crypto protocols (TLS and IPsec), I might agree that X.509 wasn't necessary - but I don't think you want to say that either.

To be JCA compliant you need all of: <snip>

Are you describing hard compliance requirements or more informal expectations? If it's the former, where are these requirements documented?

AFAICT this is implicit in the way the JCA is structured. A given set of key classes is supposed to be convertible and usable across all the appropriate APIs (Signature, KeyAgreement, Cipher, KeyPair, KeyPairGenerator, KeyFactory, etc.), and all of the classes I mentioned appear to be necessary to make that possible.
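
As a sketch of that implicit contract using the existing EC classes: a generated key is expected to survive a KeyFactory round trip and remain usable by the other APIs.

import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;

public class KeyRoundTrip {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        KeyPair kp = kpg.generateKeyPair();

        // Convert to a transparent spec and back through the factory.
        KeyFactory kf = KeyFactory.getInstance("EC");
        X509EncodedKeySpec spec = kf.getKeySpec(kp.getPublic(), X509EncodedKeySpec.class);
        PublicKey roundTripped = kf.generatePublic(spec);

        System.out.println(roundTripped.equals(kp.getPublic()));   // expected: true
    }
}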

Happy to have an argument against that point of view.

Mike

