CVE-2022-21449: Psychic Signatures in Java

2022-04-26 Thread Michael StJohns

Hi -

FYI - This is currently getting some play time on the Crypto Forum 
Research Group (related to the IETF): 
https://neilmadden.blog/2022/04/19/psychic-signatures-in-java/ The 
thread starts here: 
https://mailarchive.ietf.org/arch/msg/cfrg/wlIuVws-pmccvbGbBrIBVBhN2GQ/


It's probably covered by an existing patch, but I thought the thread was 
a useful pointer to some tools.


Later, Mike







Re: RFR: 8285404: RSA signature verification should follow RFC 8017 8.2.2 Step 4

2022-04-22 Thread Michael StJohns

On 4/22/2022 1:21 PM, Weijun Wang wrote:

Compare encoded instead of decoded digest in RSA signature verification.

-

Commit messages:
  - RFC 8017 8.2.2 Step 4

 Changes: https://git.openjdk.java.net/jdk/pull/8365/files
  Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=8365&range=00
   Issue: https://bugs.openjdk.java.net/browse/JDK-8285404
   Stats: 30 lines in 2 files changed: 3 ins; 26 del; 1 mod
   Patch: https://git.openjdk.java.net/jdk/pull/8365.diff
   Fetch: git fetch https://git.openjdk.java.net/jdk pull/8365/head:pull/8365

PR: https://git.openjdk.java.net/jdk/pull/8365


This is a weird one.  AFAICT the way it was being done is valid and 
allowed by RFC8017 - I would have closed the bug report as notabug.  
Here's the relevant text from  RFC8017:



  4.  Compare the encoded message EM and the second encoded message
   EM'.  If they are the same, output "valid signature";
   otherwise, output "invalid signature".

   Note: Another way to implement the signature verification operation is to
apply a "decoding" operation (not specified in this document) to the
encoded message to recover the underlying hash value, and then compare
it to a newly computed hash value.

   This has the advantage that it requires less intermediate storage
   (two hash values rather than two encoded messages), but the
   disadvantage that it requires additional code.


Re: RFR: 8284553: Deprecate the DEFAULT static field of OAEPParameterSpec

2022-04-11 Thread Michael StJohns

On 4/11/2022 9:34 PM, Valerie Peng wrote:

This trivial change is to deprecate the DEFAULT static field of 
OAEPParameterSpec class. Wordings are mostly the same as the previous 
PSSParameterSpec deprecation change. Rest are just minor code re-factoring.

The CSR will be filed once review is somewhat finished.

Thanks,
Valerie

-

Commit messages:
  - 8284553: Deprecate the DEFAULT static field of OAEPParameterSpec

 Changes: https://git.openjdk.java.net/jdk/pull/8191/files
  Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=8191&range=00
   Issue: https://bugs.openjdk.java.net/browse/JDK-8284553
   Stats: 42 lines in 1 file changed: 13 ins; 10 del; 19 mod
   Patch: https://git.openjdk.java.net/jdk/pull/8191.diff
   Fetch: git fetch https://git.openjdk.java.net/jdk pull/8191/head:pull/8191

PR: https://git.openjdk.java.net/jdk/pull/8191


Hi Valerie -

I think deprecating DEFAULT  is wrong.  RFC8017 still has this definition:


RSAES-OAEP-params ::= SEQUENCE {
hashAlgorithm  [0] HashAlgorithm DEFAULT sha1,
maskGenAlgorithm   [1] MaskGenAlgorithm  DEFAULT mgf1SHA1,
pSourceAlgorithm   [2] PSourceAlgorithm  DEFAULT pSpecifiedEmpty
}
and DEFAULT is what you should be getting if you see an empty sequence 
in the param field.  It's DEFAULT in ASN1 terms, not DEFAULT in terms of 
what you should use going forward  to create signatures, and the ASN1 
DEFAULT won't change.
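For reference, here's a minimal sketch (my own illustration, not anything in the 
JDK or the RFC) of the parameter set that an absent or empty RSAES-OAEP-params 
value denotes under those ASN.1 DEFAULTs, spelled out with the existing API:

    // Equivalent of the ASN.1 DEFAULT values above, written out explicitly
    OAEPParameterSpec asn1Defaults = new OAEPParameterSpec(
            "SHA-1",                      // hashAlgorithm    DEFAULT sha1
            "MGF1",                       // maskGenAlgorithm DEFAULT mgf1SHA1
            MGF1ParameterSpec.SHA1,
            PSource.PSpecified.DEFAULT);  // pSourceAlgorithm DEFAULT pSpecifiedEmpty

A decoder that sees an empty parameter SEQUENCE still has to produce exactly that 
set, which is why the field keeps its meaning regardless of what new code is 
encouraged to use.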


In any event, I can't find where RFC8017 says anything about deprecating 
the defaults.  AFAICT, although there's general guidance to go away from 
SHA1,  the math suggests that SHA1 is still sufficient for OAEP,  and 
there's no guidance specific to OAEP's use of SHA1 that I can find other 
than the requirement in NIST SP800-56B rev 2 to use "Approved Hash 
Functions" for OAEP. If there's a section in 8017 that you're looking at 
for this guidance that I missed, you may want to update your comment to 
point to it.


Take care - Mike




Re: [Internet]Re: "Pluggable" key serialization in JCE/JCA

2022-03-26 Thread Michael StJohns

On 3/26/2022 11:05 AM, xueleifan(XueleiFan) wrote:

Hi Anders,

I would like to have a look at the COSE/JOSE specs.  If it is convenient for you, 
any suggestions about where I could start from? RFC 8812?  Do you know where 
(areas and products) the COSE/JOSE specs are used in practice?

Thanks,
Xuelei


Hi Xuelei -

Just for clarification - JOSE/COSE are data description languages with 
specific provisions for encoding various types of cryptographic key 
material.  E.g. think of ASN1 as roughly equivalent to JOSE or COSE, and the 
RFCs that Anders is pointing you at as approximately equal to PKCS8 and X.509 
plus the key type specific stuff (e.g. PKCS1 for RSA key encodings, X9.63 for 
EC key encodings, later IETF RFCs for newer encodings).


This isn't about math so much as it is about encodings.

Mike




On Mar 25, 2022, at 11:56 AM, Anders Rundgren  
wrote:

Hi Michael & the JDK crypto team,

Although it might be cool writing a JEP it is not really my job.  There are 
also multiple ways of addressing this issue.

BTW, the COSE/JOSE folks who are about to introduce new algorithms want to 
overload RFC 8037 which defines a key type OKP.
I'm not in favor of this idea since it breaks existing OKP code.
I'm suggesting that each new crypto system should get its own name space and 
thus long-term corresponding Java interfaces.

Since this is happening NOW and there is no turning back, it would be useful to 
get some early feedback from the JDK folks.  In fact, this is the origin of 
this request.

It seems that nobody representing JDK crypto is involved in COSE/JOSE.

Thanx,
Anders


On 2022-03-25 18:23, Michael StJohns wrote:

On 3/25/2022 12:33 PM, Anders Rundgren wrote:

On 2022-03-25 17:12, Anthony Scarpino wrote:

When you say "construct and EC key", do you mean creating an EC key from
an existing set of values via PKCS8 or X509 encoding?  Or are you
talking about EC key generation?

I was talking about creating keys from parameter data supplied by for
example JOSE:
   {
 "kty": "EC",
 "crv": "P-256",
 "x": "6BKxpty8cI-exDzCkh-goU6dXq3MbcY0cd1LaAxiNrU",
 "y": "mCbcvUzm44j3Lt2b5BPyQloQ91tf2D2V-gzeUxWaUdg"
   }

Apparently this particular issue has a solution (as Michael StJohns
showed), although it is not particularly intuitive, as well as being
undocumented.

Another take on this issue:
https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/security/Key.html#getEncoded()

"Returns the key in its primary encoding format, or null if this key
does not support encoding"

With COSE/JOSE there is no longer an obvious primary encoding format.

Of course there is.  You may not like that it's not COSE or JOSE, but
the encoding spec remains as is and 10s of 1000s of implementations that
use that encoding would be annoyed if you tried to claim a new "primary
encoding format".
The SubjectPublicKeyInfo encoding for PublicKeys, the PKCS8 encoding for
private keys, and RAW encoding for SecretKeys is what's there and I'm
pretty sure won't change.  I occasionally wished for a getEncoded()
method that took an encoding type as an argument (e.g.
getEncoded("bareXY") or some such), but that's not in the API.
What I'd suggest is that you write a JEP for adding EncodedKeySpec
classes for COSE and JOSE to the API.   I'd probably also add a
GenericKeyEncodingSpec class.  That should be simple enough as a first step.
The second step would be to write and contribute a Jose and Cose
KeyFactory implementation that uses those classes.
As I noted, it should be possible to externalize any preferred encoding
by using the getKeySpec method of KeyFactory rather than just the key
type's default encoding.
Later, Mike

Anders


Tony






Re: "Pluggable" key serialization in JCE/JCA

2022-03-25 Thread Michael StJohns

On 3/25/2022 12:33 PM, Anders Rundgren wrote:

On 2022-03-25 17:12, Anthony Scarpino wrote:

When you say "construct and EC key", do you mean creating an EC key from
an existing set of values via PKCS8 or X509 encoding?  Or are you
talking about EC key generation?


I was talking about creating keys from parameter data supplied by for 
example JOSE:

  {
    "kty": "EC",
    "crv": "P-256",
    "x": "6BKxpty8cI-exDzCkh-goU6dXq3MbcY0cd1LaAxiNrU",
    "y": "mCbcvUzm44j3Lt2b5BPyQloQ91tf2D2V-gzeUxWaUdg"
  }

Apparently this particular issue has a solution (as Michael StJohns 
showed), although it is not particularly intuitive, as well as being 
undocumented.


Another take on this issue:
https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/security/Key.html#getEncoded() 

"Returns the key in its primary encoding format, or null if this key 
does not support encoding"


With COSE/JOSE there is no longer an obvious primary encoding format.


Of course there is.  You may not like that it's not COSE or JOSE, but 
the encoding spec remains as is and 10s of 1000s of implementations that 
use that encoding would be annoyed if you tried to claim a new "primary 
encoding format".


The SubjectPublicKeyInfo encoding for PublicKeys, the PKCS8 encoding for 
private keys, and RAW encoding for SecretKeys is what's there and I'm 
pretty sure won't change.  I occasionally wished for a getEncoded() 
method that took an encoding type as an argument (e.g. 
getEncoded("bareXY") or some such), but that's not in the API.


What I'd suggest is that you write a JEP for adding EncodedKeySpec 
classes for COSE and JOSE to the API.   I'd probably also add a 
GenericKeyEncodingSpec class.  That should be simple enough as a first step.
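To make that first step concrete, here's a rough sketch of what one of those 
classes could look like (hypothetical only - neither the class nor the "JWK" 
format name exists in the JDK today):

    import java.nio.charset.StandardCharsets;
    import java.security.spec.EncodedKeySpec;

    public class JOSEEncodedKeySpec extends EncodedKeySpec {
        public JOSEEncodedKeySpec(String jwkJson) {
            super(jwkJson.getBytes(StandardCharsets.UTF_8));
        }
        @Override
        public String getFormat() {
            return "JWK";   // a provider's KeyFactory would dispatch on this name
        }
    }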


The second step would be to write and contribute a Jose and Cose 
KeyFactory implementation that uses those classes.


As I noted, it should be possible to externalize any preferred encoding 
by using the getKeySpec method of KeyFactory rather than just the key 
type's default encoding.


Later, Mike




Anders



Tony 




Re: "Pluggable" key serialization in JCE/JCA

2022-03-25 Thread Michael StJohns

On 3/25/2022 12:07 PM, Michael StJohns wrote:
AFAIK, there is still no support for using named curves to construct 
an EC key. Named curves are MANDATORY in JOSE/COSE.


Use AlgorithmParameterGenerator with ECGenParameterSpec.  Works like a 
charm. 



Sorry - Got that slightly wrong.  Use this instead:

AlgorithmParameters parameters = AlgorithmParameters.getInstance("EC", "SunEC");
parameters.init(new ECGenParameterSpec("secp128r1"));
ECParameterSpec ecParameters = parameters.getParameterSpec(ECParameterSpec.class);
ECPublicKeySpec pubKeySpec = new ECPublicKeySpec(point, ecParameters);
PublicKey key = KeyFactory.getInstance("EC", "SunEC").generatePublic(pubKeySpec);


Re: "Pluggable" key serialization in JCE/JCA

2022-03-25 Thread Michael StJohns

On 3/25/2022 4:03 AM, Anders Rundgren wrote:

Hi Mike & the JDK crypto team,

What I'm saying is that key serialization in Java hasn't gotten 
enough attention.  Examples:


AFAIK, there is still no support for using named curves to construct 
an EC key.  Named curves are MANDATORY in JOSE/COSE.


Use AlgorithmParameterGenerator with ECGenParameterSpec.  Works like a 
charm.




The gap between EdDSA keys as expressed in JOSE/COSE and the Java API 
is simply put GIGANTIC:
https://mail.openjdk.java.net/pipermail/security-dev/2020-August/022379.html 


The only obvious solution is rolling your own:
https://github.com/cyberphone/openkeystore/blob/master/library/src/org/webpki/crypto/OkpSupport.java 


This code [probably] stinks but it is hardly my fault...


TL;DR - but if you're talking about EdDSA and the equivalent ECDH 
functions and their related key ilk - well, I tried, but lost the battle to 
use the same API as the existing key material.  I'd call that a 
self-inflicted wound.




The JDK crypto team is the only party who can clean up this mess.

Since importing X.509 certificates through PKCS12/KeyStore works out 
of the box without any a prior knowledge of key algorithms, there is 
already a precedent.  It is almost just copy and paste :)


Yeah, but even then that model has problems.  Cf the discussion on this 
list about an RSA key that was marked with RSA-OAEP as its algorithm and 
the issues I encountered being able to use it as Java key material.   I 
got back a PublicKey, but one that I couldn't coerce into an 
RSAPublicKey without re-encoding the key and changing the algorithm OID 
- it just wasn't supported.  And that model only knows about ASN1 - or 
did someone update the CertificateFactory for JOSE/COSE format 
certificates (if such a thing exists)?




The new crypto systems that are on the way in (BLS, Post Quantum), 
accentuates this issue.

And the "converter" as you call it will be far behind.


Regarding HSMs, I gave it one more thought and I didn't come up with 
any connections except maybe in Keytool, but HSMs appear to be 
X.509-oriented, which is not the case with COSE/JOSE.


Think PKCS11, the PKCS11 Java provider, and the ability to retrieve a 
public key from an HSM for various purposes.  The ability to place a 
private key on the HSM to secure it (there are a number of use cases 
where generating an unextractable private key on the HSM - the usual 
model - can't be used).  HSMs are cryptographically oriented not X509 
oriented and new mechanisms and key types are added all the time.  They 
have tools to do the key loading directly - from well known formats, or 
programmatically through things like Java PKCS11 or C PKCS11.


Later, Mike




Regards,
Anders

On 2022-03-25 3:12, Michael StJohns wrote:

On 3/24/2022 12:28 PM, Anders Rundgren wrote:

On 2022-03-24 16:59, Michael StJohns wrote:

On 3/24/2022 2:46 AM, Anders Rundgren wrote:

Hi List,

I find it a bit strange that every user of crypto either has to write
or install specific software for converting JOSE/COSE/PEM keys
back-and-forth to Java's internal representation. This reduces the
value of the abstract types as well.

Now there is a whole bunch of new algorithms coming to the Java platform
making this task even less attractive.

So my question is simply: How about including this part in an enhanced
JCE/JCA?  That is, making the encoder/decoder "pluggable" and hiding
this complexity from users and library developers?


Hi Anders -


Hi Mike,



Isn't that the exact purpose for KeyFactory and its plugins?


They presume that you know not only the key algorithm but the
associated parameters as well.  Why should users or library developers
have to deal with that?


Um - no.  They presume you know the key algorithm, but (using EC public
keys for example), you can use either the X509EncodedKeySpec (to
decode an encoded key) or the ECPublicKeySpec (to deal with building a
key from scratch using the parameters).  You can notionally (haven't
tried it, but should work) use KeyFactory.getKeySpec() to convert
between the two formats or use Key.getEncoded() to get the default
encoding for the key.

The equivalent on the ECPrivateKey side are the ECPrivateKeySpec and
PKCS8EncodedKeySpec classes.

PEM is actually not an encoding, but a wrapping of an encoding. The only
part of Java that deals with it natively (I believe) is the
CertificateFactory for X.509 certificates.  You may have an extra step
to add or remove a PEM style envelope (or the equivalent BASE64 bare
string), but that's reasonable.



The requirement to know key algorithm in advance forces you into ASN.1
decoding for dealing with PEMs.  Been there done that.


Ah - not sure why you wouldn't know the key algorithm, BUT:  It should
theoretically be possible to write a KeyFactory provider that does that
decoding for you from the X509EncodedKeySpec (or PKCS8EncodedKeySpec) and
returns an appropriate PublicKey. Check the actual return type

Re: "Pluggable" key serialization in JCE/JCA

2022-03-24 Thread Michael StJohns

On 3/24/2022 12:28 PM, Anders Rundgren wrote:

On 2022-03-24 16:59, Michael StJohns wrote:

On 3/24/2022 2:46 AM, Anders Rundgren wrote:

Hi List,

I find it a bit strange that every user of crypto either has to write
or install specific software for converting JOSE/COSE/PEM keys
back-and-forth to Java's internal representation. This reduces the
value of the abstract types as well.

Now there is a whole bunch of new algorithms coming to the Java platform
making this task even less attractive.

So my question is simply: How about including this part in an enhanced
JCE/JCA?  That is, making the encoder/decoder "pluggable" and hiding
this complexity from users and library developers?


Hi Anders -


Hi Mike,



Isn't that the exact purpose for KeyFactory and its plugins?


They presume that you know not only the key algorithm but the 
associated parameters as well.  Why should users or library developers 
have to deal with that?


Um - no.  They presume you know the key algorithm, but (using EC public 
keys for example), you can use either the X509EncodedKeySpec (to 
decode an encoded key) or the ECPublicKeySpec (to deal with building a 
key from scratch using the parameters).  You can notionally (haven't 
tried it, but should work) use KeyFactory.getKeySpec() to convert 
between the two formats or use Key.getEncoded() to get the default 
encoding for the key.
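As a minimal sketch of that round trip (my illustration; 'encoded' is assumed to 
hold a DER SubjectPublicKeyInfo blob for an EC key):

    KeyFactory kf = KeyFactory.getInstance("EC");
    PublicKey pub = kf.generatePublic(new X509EncodedKeySpec(encoded));
    ECPublicKeySpec spec = kf.getKeySpec(pub, ECPublicKeySpec.class);  // point W plus domain parameters
    byte[] der = pub.getEncoded();                                     // back to SubjectPublicKeyInfo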


The equivalent on the ECPrivateKey side are the ECPrivateKeySpec and 
PKCS8EncodedKeySpec classes.


PEM is actually not an encoding, but a wrapping of an encoding. The only 
part of Java that deals with it natively (I believe) is the 
CertificateFactory for X.509 certificates.  You may have an extra step 
to add or remove a PEM style envelope (or the equivalent BASE64 bare 
string), but that's reasonable.




The requirement to know key algorithm in advance forces you into ASN.1 
decoding for dealing with PEMs.  Been there done that.


Ah - not sure why you wouldn't know the key algorithm, BUT:  It should 
theoretically be possible to write a KeyFactory provider that does that 
decoding for you from the X509EncodedKeySpec (or PKCS8EncodedKeySpec) and 
returns an appropriate PublicKey. Check the actual return type to figure 
out what type of key.  E.g.:


KeyFactory kf = KeyFactory.getInstance("GenericDecoder");

As I said below, you'll need to define the equivalent opaque format 
KeySpec classes to handle COSE and JOSE.  I think that's a class per 
type of encoding.
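A rough sketch of how such a hypothetical "GenericDecoder" factory might be used 
(nothing by this name exists today, and 'derBytes' just stands in for some 
encoded key; the point is only that the caller branches on the concrete key type 
it gets back):

    KeyFactory kf = KeyFactory.getInstance("GenericDecoder");
    PublicKey pub = kf.generatePublic(new X509EncodedKeySpec(derBytes));
    if (pub instanceof ECPublicKey ecPub) {
        // EC key - curve parameters available via ecPub.getParams()
    } else if (pub instanceof RSAPublicKey rsaPub) {
        // RSA key - modulus/exponent available via rsaPub
    }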






You might need to add KeySpec types for Jose and Cose
(JOSEEncodedKeySpec and COSEEncodedKeySpec) assuming both of those cover
all of the secret, public and private key specifications.

Or hmm... GenericEncodedKeySpec (String alg, byte[] encoded key) or
GenericEncodedKeySpec (String alg, String encodedKey).




Eventually you could end up with something like:

PrivateKey privateKey = new KeyConverter().readPrivateKey(byte[] or
stream);

You would not even have to know in which format the key is supplied,
since this could be accomplished by simple "sniffing".


Nope.  This isn't safe as someone might come up with yet another
representation that looks like one of the "sniffable" ones.  You could
build a private implementation that takes its best shot, but 


Finding out if the container is COSE, JOSE, or PEM is (AFAICT) 
trivial.  If the guess is incorrect a properly designed decoder should 
simply fail.


It's trivial NOW because it's a closed set of possibilities.  And even 
then, you're assuming you don't have to detect ASN1 vs raw key encodings 
vs string encodings.  Simply a) detecting whether a file is character data, 
b) detecting the character encoding, and c) not accidentally treating a 
character file as binary or vice versa is fragile.


The addition of the "new" EC variant keys required a substantial 
reworking of the API to support them and the arguments were fierce.  The 
additions were not free.


I deal with no fewer than 3 encodings for an EC public key.  Two or 
three for an EC private key. 2 or 3 text versions of an RSA public key 
along with the X509-encoded binary version and the TPM binary version.  
Etc.  I usually know what type of key I'm dealing with when I get it, 
but I would love a magic DWIM class that did the right thing on all inputs.









To make "pluggability" feasible, I'm trying to convince the JOSE/COSE
folks to give each new crypto system a separate namespace as an
alternative to overloading OKP (RFC 8037), even if the parameters
match technically.  AFAICT, X.509 public keys essentially already
adhere to this notion.

I would exclude private key import and export in HSMs since these are
specific and rare occasions.


Again, no.  Don't do this in an incomplete way - it will come back to
bite you.  You don't have to implement the plugin that talks to the HSM,
but you have to define the mechanism/API so that the HSM vendors don't
curse your first born child later.



This is 

Re: "Pluggable" key serialization in JCE/JCA

2022-03-24 Thread Michael StJohns

On 3/24/2022 2:46 AM, Anders Rundgren wrote:

Hi List,

I find it a bit strange that every user of crypto either has to write 
or install specific software for converting JOSE/COSE/PEM keys 
back-and-forth to Java's internal representation. This reduces the 
value of the abstract types as well.


Now there is a whole bunch of new algorithms coming to the Java platform 
making this task even less attractive.


So my question is simply: How about including this part in an enhanced 
JCE/JCA?  That is, making the encoder/decoder "pluggable" and hiding 
this complexity from users and library developers?


Hi Anders -

Isn't that the exact purpose for KeyFactory and its plugins?

You might need to add KeySpec types for Jose and Cose 
(JOSEEncodedKeySpec and COSEEncodedKeySpec) assuming both of those cover 
all of the secret, public and private key specifications.


Or hmm... GenericEncodedKeySpec (String alg, byte[] encoded key) or 
GenericEncodedKeySpec (String alg, String encodedKey).
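A hypothetical sketch of that generic variant (no such class exists in the JDK; 
the format string is whatever the matching KeyFactory is taught to accept):

    public class GenericEncodedKeySpec extends java.security.spec.EncodedKeySpec {
        private final String format;
        public GenericEncodedKeySpec(String format, byte[] encodedKey) {
            super(encodedKey);
            this.format = format;        // e.g. "COSE", "JWK", "PEM"
        }
        @Override
        public String getFormat() {
            return format;
        }
    }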





Eventually you could end up with something like:

PrivateKey privateKey = new KeyConverter().readPrivateKey(byte[] or 
stream);


You would not even have to know in which format the key is supplied, 
since this could be accomplished by simple "sniffing".


Nope.  This isn't safe as someone might come up with yet another 
representation that looks like one of the "sniffable" ones.  You could 
build a private implementation that takes its best shot, but 




To make "pluggability" feasible, I'm trying to convince the JOSE/COSE 
folks to give each new crypto system a separate namespace as an 
alternative to overloading OKP (RFC 8037), even if the parameters 
match technically.  AFAICT, X.509 public keys essentially already 
adhere to this notion.


I would exclude private key import and export in HSMs since these are 
specific and rare occasions.


Again, no.  Don't do this in an incomplete way - it will come back to 
bite you.  You don't have to implement the plugin that talks to the HSM, 
but you have to define the mechanism/API so that the HSM vendors don't 
curse your first born child later.


Mike




WDYT?

Thanx,
Anders





Re: RFR: 8277976: Break up SEQUENCE in X509Certificate::getSubjectAlternativeNames and X509Certificate::getIssuerAlternativeNames in otherName [v8]

2022-02-18 Thread Michael StJohns

On 2/18/2022 3:05 PM, Weijun Wang wrote:

On Thu, 17 Feb 2022 16:00:46 GMT, Weijun Wang  wrote:


The enhancement adds two extra items in the `getSubjectAlternativeNames()` 
output for an OtherName.

It also fixes several errors:
1. In `OtherName.java`, `nameValue` should be the value inside `CONTEXT [0]` 
without the tag and length bytes.
2. The argument in constructor `extClass.getConstructor(Object.class)` is 
suspicious. Maybe it meant `byte[]`.

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

   s/can/may/
_Mailing list message from Michael StJohns...

I'll use your exception message on not a `[0]`.

For dealing with the error while parsing `nameValue`, we've discussed it 
and my last comment is at 
https://github.com/openjdk/jdk/pull/7167#discussion_r81016. I'd rather not 
fail.

-

PR: https://git.openjdk.java.net/jdk/pull/7167


Hi Weijun -

You're already adding an error condition for a badly encoded OtherName 
that wasn't there before.  Extending that error condition check to cover 
the wrapped ASN1 object doesn't seem to me to be much of a stretch.   
Looking at the OtherName specifications, there shouldn't be any 
ambiguity here - ANY is required to be a valid ASN1 encoded object.


Hmm... wouldn't these (both) errors be masked by the location in 
X509CertImpl you were discussing?  E.g. you're still going to have the 
raw bytes, but the IOException(s) will just be captured (and maybe 
logged per Sean's suggestion) and ignored.


Later, Mike







Re: RFR: 8277976: Break up SEQUENCE in X509Certificate::getSubjectAlternativeNames and X509Certificate::getIssuerAlternativeNames in otherName [v5]

2022-02-18 Thread Michael StJohns


OtherName.java @93,97

PR: https://git.openjdk.java.net/jdk/pull/7167

    if (derValue1.isContextSpecific((byte) 0) && derValue1.isConstructed()) {
        nameValue = derValue1.data.toByteArray();
    } else {
        throw new IOException("value is not [0]");
    }
That exception string isn't correct (the value is usually not just the 
tag), nor very descriptive.  How about instead:


throw new IOException ("value is not EXPLICTly tagged [0]");

Also, the derValue1.data should be parseable into a DerValue itself. 
Should that be checked here as well and an error thrown if invalid?  
I.e.,  add this after nameValue = ...


   try {
       new DerValue(nameValue);
   } catch (IOException ex) {
       throw new IOException("Body of OtherName is not a valid BER or DER value", ex);
   }


Thanks - Mike



Re: RFR: 8255739: x509Certificate returns � for invalid subjectAlternativeNames [v2]

2022-01-24 Thread Michael StJohns

On 1/24/2022 2:23 PM, Weijun Wang wrote:

On Fri, 14 Jan 2022 11:18:23 GMT, Masanori Yano  wrote:


Could you please review the JDK-8255739 bug fix?

I think sun.security.x509.SubjectAlternativeNameExtension() should throw an 
exception for incorrect SubjectAlternativeNames instead of returning the 
substituted characters, which is explained in the description of BugDB.

I modified DerValue.readStringInternal() not to read incorrect 
SubjectAlternativeNames and throw an IOException. 
sun.security.x509.X509CertInfo.parse() catches the IOException and ignores it if 
SAN is a non-critical extension, like the behavior of the IOException in 
readStringInternal(). So I added a test with -Djava.security.debug=x509 to 
confirm that.

Masanori Yano has updated the pull request incrementally with one additional 
commit since the last revision:

   8255739: x509Certificate returns � for invalid subjectAlternativeNames

Hi Mike, we don't support an OAEP key (i.e. RSA keys with OAEP parameters). If 
you want OAEP encryption, just use a plain RSA key and pass an 
OAEPParameterSpec to the RSA Cipher's init method.

-

PR: https://git.openjdk.java.net/jdk/pull/6928


Hi -

The RSA key just used a different AlgorithmIdentifier from the normal 
parameterless rsaEncryption AlgorithmIdentifier.   This issue basically: 
https://github.com/openssl/openssl/issues/7907. The TCG originally 
specified that, in the EK certificate containing the TPM's endorsement public 
key, the public key would be identified with the RSAES-OAEP { pkcs-1 7 } object 
identifier instead of the rsaEncryption { pkcs-1 1 } OID, since the key was only 
supposed to be used for decrypting a credential encrypted under OAEP.  
e.g. the cert contained this sequence for the SubjectPublicKeyInfo:



  SEQUENCE {
 175   34:   SEQUENCE {
 177    9: OBJECT IDENTIFIER rsaOAEP (1 2 840 113549 1 1 7)
 188   21: SEQUENCE {
 190   19:   [2] {
 192   17: SEQUENCE {
 194    9:   OBJECT IDENTIFIER
 : rsaOAEP-pSpecified (1 2 840 113549 1 1 9)
 205    4:   OCTET STRING 'TCPA'
 :   }
 : }
 :   }
 : }
 211  271:   BIT STRING, encapsulates {
 216  266: SEQUENCE {
 220  257:   INTEGER
 : 00 8A 80 D6 D1 A9 AD 54 92 A4 D2 AA F1 8A C8 D3
 : 48 12 2E 17 E4 2C FC 75 CF 57 A3 E4 33 EC C8 FD
 : 7D 2D 5E B4 13 F9 5D A9 45 51 E1 F1 22 F9 3C DA
 : 39 69 AB 8D 85 72 74 D9 97 9C A2 6E D1 BE 9B 93
 : 0A 83 1A 9D 2F 30 AA B1 F8 B8 89 04 34 1B 2C 72
 : 31 BF 1F 67 44 79 B9 9E 35 A8 99 06 56 7B 92 28
 : C4 21 B9 90 7E C8 41 09 F5 88 C2 23 28 DE 25 CA
 : AF C2 2B 2E 61 DB 8B 17 83 BB 74 9A 4F A8 D4 A3
 : [ Another 129 bytes skipped ]
 481    3:   INTEGER 65537
 :   }
 : }
 :   }


Except for the weird AlgorithmIdentifier this is a bog standard RSA 
public key.


I don't know that it's worth the time to fix this - it's already been 
deprecated by the TCG and I haven't seen an endorsement certificate with 
this particular encoding in a long while.  The certificate this key is 
from was issued 26 April 2012 and - for some strange reason - expires 
the same date this year.


Later, Mike




Re: RFR: 8255739: x509Certificate returns � for invalid subjectAlternativeNames

2022-01-24 Thread Michael StJohns

On 1/24/2022 9:51 AM, Sean Mullan wrote:

On Sat, 22 Jan 2022 22:48:29 GMT, Michael StJohns  wrote:


I originally started using the BC certificate factory
because the SUN factory didn't understand RSA-OAEP as a key type in
SubjectPublicKeyInfo and I was getting a few of those from a group of TPMs.

Is that still an issue? I would have expected this to be fixed since JDK 11 
when we added support for RSASSA-PSS.
*sigh* doing archeology on my code, that may not have been the reason I 
started using the BC factory.  I *think* I ran into another rogue 
certificate that the SUN library couldn't parse, but the BC library 
could.  However, the RSA-OAEP thing does exist.  See below.


-

PR: https://git.openjdk.java.net/jdk/pull/6928


Looks like it probably wasn't fixed.  This was actually an OID in the 
SubjectPublicKeyInfo meant to indicate that the RSA key was solely to be used 
for OAEP.  I.e. 1.2.840.113549.1.1.7.  The place this gets rejected is 
in (looking at the current github) - sun/crypto/provider/RSACipher.java 
@line 261: RSAKey rsaKey = RSAKeyFactory.toRSAKey(key);


That calls sun/security/rsa/RSAKeyFactory.java::toRSAKey(Key key), which 
in turn calls 
sun/security/rsa/RSAUtil.KeyType::lookup(key.getAlgorithm()) @line 128.  
That fails because the key is neither the rsaEncryption OID 
(1.2.840.113549.1.1.1) nor RSASSA_PSS_oid ( .10).



Here's the function I use to fix the encoding. Basically, it changes the 
last octet of the key's algorithmID from a '7' to a '1' and regenerates 
the key.



  public static PublicKey fixPubKey(PublicKey k) throws PrivacyCAException {
      if (!k.getAlgorithm().equals("1.2.840.113549.1.1.7"))
          return k;  // it's not a problem
      // Flip the last octet of the AlgorithmIdentifier OID from .7 (rsaOAEP)
      // back to .1 (rsaEncryption), then regenerate the key.
      byte[] encodedKey = k.getEncoded();
      encodedKey[16] = (byte) 1;
      X509EncodedKeySpec kspec = new X509EncodedKeySpec(encodedKey);

      try {
          KeyFactory kf = KeyFactory.getInstance("RSA");
          return kf.generatePublic(kspec);
      } catch (GeneralSecurityException ex) {
          throw new PrivacyCAException("Unable to convert an OAEP RSA public key "
                  + "to a normal RSA public key", ex);
      }
  }


The fixed public key is fed to this code, which worked even in JDK8:

    Cipher rsaoaep = Cipher.getInstance("RSA/ECB/oaepwithsha1andmgf1padding", "SunJCE");

    String hashName = "SHA-256"; // name alg hash for EK?
    String hmacName = "HmacSHA256";
    int hashLen = 256;
    byte[] identityLabel = Charset.forName("UTF-8").encode("IDENTITY").array();

    // zero terminate
    identityLabel = Arrays.copyOf(identityLabel, identityLabel.length + 1);

    logBuffer("Identity label", identityLabel);
    OAEPParameterSpec oaepspec =
        new OAEPParameterSpec(hashName, "MGF1", new MGF1ParameterSpec(hashName),
                              new PSource.PSpecified(identityLabel));
    rsaoaep.init(Cipher.ENCRYPT_MODE, ekPubKey, oaepspec);
    byte[] encryptedSeed = rsaoaep.doFinal(seed);


If I get a chance, I'll try and take a properly formatted RSA (.1) key 
and turn it into an RSAOAEP  (.7) key and see what happens with the 
above code.  It may take a few days.


AFAIK, this convention was only used by the TPM1.2 space, and only 
briefly as it was realized it was a backwards compatibility nightmare.  
It was actually in an early version of the TPM spec and then removed.  I 
haven't seen any of these keys in any of the TPM2.0s I've played with.


Strangely, I haven't seen any PSS (.10) (vs OAEP) encoded keys.

Mike





Re: RFR: 8255739: x509Certificate returns � for invalid subjectAlternativeNames [v2]

2022-01-22 Thread Michael StJohns

Hi Sean -

Inline.

On 1/21/2022 11:33 AM, Sean Mullan wrote:

On Fri, 14 Jan 2022 11:18:23 GMT, Masanori Yano  wrote:


Could you please review the JDK-8255739 bug fix?

I think sun.security.x509.SubjectAlternativeNameExtension() should throw an 
exception for incorrect SubjectAlternativeNames instead of returning the 
substituted characters, which is explained in the description of BugDB.

I modified DerValue.readStringInternal() not to read incorrect 
SubjectAlternativeNames and throw an IOException. 
sun.security.x509.X509CertInfo.parse() catches the IOException and ignores it if 
SAN is a non-critical extension, like the behavior of the IOException in 
readStringInternal(). So I added a test with -Djava.security.debug=x509 to 
confirm that.

Masanori Yano has updated the pull request incrementally with one additional 
commit since the last revision:

   8255739: x509Certificate returns � for invalid subjectAlternativeNames
Mailing list message from Michael StJohns on security-dev:

On 1/18/2022 4:10 PM, Sean Mullan wrote:

Hi -

Bouncycastle started enforcing properly encoded INTEGERs a while back and that caused 
one of my programs to start failing due to a TPM X509 endorsement certificate just having 
the TPM serial number stuffed into the certificate serial number without normalizing it 
to the appropriate INTEGER encoding.  BC provided a work around (setting 
"org.bouncycastle.asn1.allow_unsafe_integer") which gave me time to re-code 
around the problem.  If you're going to break things, it may be useful to provide a work 
around similar to what they did.

Do you know the behavior of the JDK X.509 CertificateFactory implementation? 
Did it accept or reject this serial number?


It accepted the serial number:


[
[
  Version: V3
  Subject:
  Signature Algorithm: SHA256withECDSA, OID = 1.2.840.10045.4.3.2

  Key:  Sun RSA public key, 2048 bits
  modulus: 
244030580540745092613654475993648434932553330564847517407727188658248

32223177660143311229959664585582409529625785574450992069681603560492386013716136
32364832338166792867574600908839414339721021812629840173006767022634407898831293
85934831807840338360685996173024268943864659869938519459993447390570376982441341
45952422087437605410761245329340711833296479315725697546185458730724484238852701
80315770789789435860018869046480700786553465339467127182438028195537091676054789
84722755281453369500125757853796162260084162669462853871135753998130894343437638
04195331153338279046418652746376364246586098919744901616926689536893
  public exponent: 65537
  Validity: [From: Sun Nov 26 20:52:10 EST 2017,
   To: Thu Dec 30 19:00:00 EST 2049]
  Issuer: CN=www.intel.com, OU=TPM EK intermediate for SPTH_EPID_PROD_RK_0, O=Intel Corporation, L=Santa Clara, ST=CA, C=US
  SerialNumber: [    048fe61d 2882d3cd 488ab130 b94fbc89 284b32]

Certificate Extensions: 9
[1]: ObjectId: 2.5.29.9 Criticality=false
Extension unknown: DER encoded OCTET string =
0000: 04 1A 30 18 30 16 06 05   67 81 05 02 10 31 0D 30 ..0.0...g1.0
0010: 0B 0C 03 32 2E 30 02 01   00 02 01 5D ...2.0.]


[2]: ObjectId: 1.3.6.1.5.5.7.1.1 Criticality=false
AuthorityInfoAccess [
  [
   accessMethod: caIssuers
   accessLocation: URIName: http://upgrades.intel.com/content/CRL/ekcert/SPTH_EPID_PROD_RK_0.cer
]
]

[3]: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 6C A9 DF 62 A1 AA E2 3E   0F EB 7C 3F 5E B8 E6 1E l..b...>...?^...
0010: CA C1 7C B7    
]
]

[4]: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:false
  PathLen: undefined
]

[5]: ObjectId: 2.5.29.31 Criticality=false
CRLDistributionPoints [
  [DistributionPoint:
 [URIName: http://upgrades.intel.com/content/CRL/ekcert/SPTH_EPID_PROD_RK_0.crl]
]]

[6]: ObjectId: 2.5.29.32 Criticality=false
CertificatePolicies [
  [CertificatePolicyId: [1.2.840.113741.1.5.2.1]
[PolicyQualifierInfo: [
  qualifierID: 1.3.6.1.5.5.7.2.1
  qualifier: 0000: 16 46 68 74 74 70 3A 2F   2F 75 70 67 72 61 64 65  .Fhttp://upgrade
0010: 73 2E 69 6E 74 65 6C 2E   63 6F 6D 2F 63 6F 6E 74 s.intel.com/cont
0020: 65 6E 74 2F 43 52 4C 2F   65 6B 63 65 72 74 2F 45 ent/CRL/ekcert/E
0030: 4B 63 65 72 74 50 6F 6C   69 63 79 53 74 61 74 65 KcertPolicyState
0040: 6D 65 6E 74 2E 70 64 66 ment.pdf

], PolicyQualifierInfo: [
  qualifierID: 1.3.6.1.5.5.7.2.2
  qualifier: 0000: 30 2A 0C 28 54 43 50 41   20 54 72 75 73 74 65 64  0*.(TCPA Trusted
0010: 20 50 6C 61 74 66 6F 72   6D 20 4D 6F 64 75 6C 65 Platform Module
0020: 20 45 6E 64 6F 72 73 65   6D 65 6E 74 Endorsement

]]  ]
]

[7]: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  2.23.133.8.1
]

[8]: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  Key_Encipherment
]

[9]: ObjectId: 2.5.29.17 Criticality=true
SubjectAlternativeName [
  OID.2.23.133.2.3=id:0002, OID.2.23.133.2.2=SPT, 
OID.2.23.133.2.1=

Re: RFR: 8255739: x509Certificate returns � for invalid subjectAlternativeNames

2022-01-20 Thread Michael StJohns

On 1/18/2022 4:10 PM, Sean Mullan wrote:

On Thu, 6 Jan 2022 20:28:22 GMT, Sean Mullan  wrote:


Could you please review the JDK-8255739 bug fix?

I think sun.security.x509.SubjectAlternativeNameExtension() should throw an 
exception for incorrect SubjectAlternativeNames instead of returning the 
substituted characters, which is explained in the description of BugDB.

I modified DerValue.readStringInternal() not to read incorrect 
SubjectAlternativeNames and throw an IOException. 
sun.security.x509.X509CertInfo.parse() catches the IOException and ignores it if 
SAN is a non-critical extension, like the behavior of the IOException in 
readStringInternal(). So I added a test with -Djava.security.debug=x509 to 
confirm that.

I understand the reasons for making the code more robust and detecting invalid 
DER encodings, but this may have a non-trivial compatibility risk. In general, 
I think detecting invalid encodings is generally the right thing to do, but 
compatibility needs to be considered. Sometimes other implementations have 
encoding bugs that we need to workaround, etc. This change affects not only 
certificates but other security components that use DER in the JDK. 
Certificates already treat non-critical extensions that are badly encoded as 
not a failure, so there is some compatibility built-in already. But I think it 
makes sense to look at other code that calls into the DerValue methods and 
evaluate the potential compatibility risk. At a minimum, a CSR must be filed. 
As a compromise, it may make sense to (at least initially) reduce the 
compatibility risk by allowing the caller (ex: `sun.security.x509.DNSName`) to 
decide if it wants a stricter parsing of the DER-encoded string.

I would like @wangweij or @valeriepeng to also review this.

With respect to the test, it seems like overkill to launch a java process 
inside the test to run each test. Instead, I would just have separate methods 
for each test and run them in the same process as the main test.
@seanjmullan @wangweij I have commented on what you pointed out, so could you 
please reply?

I need another couple of days to look at this issue again before I can reply.

-

PR: https://git.openjdk.java.net/jdk/pull/6928


Hi -

Bouncycastle started enforcing properly encoded  INTEGERs a while back 
and that caused one of my programs to start failing due to a TPM X509 
endorsement certificate just having the TPM serial number stuffed into 
the certificate serial number without normalizing it to the appropriate 
INTEGER encoding.  BC provided a work around (setting 
"org.bouncycastle.asn1.allow_unsafe_integer") which gave me time to 
re-code around the problem.  If you're going to break things, it may be 
useful to provide a work around similar to what they did.


In any event, DerValue.java uses "new String(byteArrayValue, 
charsetType)" to produce the various String values including in 
getIA5String().  I.e.,



public String(byte[] bytes, Charset charset)

Constructs a new String by decoding the specified array of bytes 
using the specified charset. The length of the new String is a 
function of the charset, and hence may not be equal to the length of 
the byte array.


This method always replaces malformed-input and unmappable-character 
sequences with this charset's default replacement string. The 
CharsetDecoder class should be used when more control over the 
decoding process is required.


Perhaps it might make sense to update the various places where this is 
used in DerValue to CharsetDecoder and to use 
charsetDecoder.onMalformedInput() and 
charsetDecoder.onUnmappableCharacter() to set the appropriate action 
related to parsing the byte array into a String of a given charset?  
That action could be set based on the presence of the bypass property.
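A minimal sketch of that strict alternative (my illustration, not a proposed 
patch to DerValue; assumes the usual java.nio.charset imports):

    // Throws CharacterCodingException instead of substituting U+FFFD
    static String decodeStrict(byte[] value) throws CharacterCodingException {
        return StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT)
                .decode(ByteBuffer.wrap(value))
                .toString();
    }

Flipping REPORT back to REPLACE (or IGNORE) based on a system property would 
give the sort of escape hatch the BC workaround provided.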


I don't think the change as proposed is safe enough from a 
backward-compatibility standpoint, nor does it really address the general 
issue of poorly encoded DER String values in a certificate.


Mike



Re: RFR: 8279066: Still see private key entries without certificates in a PKCS12 keystore

2021-12-21 Thread Michael StJohns

On 12/21/2021 1:26 PM, Sean Mullan wrote:

On Tue, 21 Dec 2021 16:31:57 GMT, Weijun Wang  wrote:


Before password-less PKCS12 keystores are supported, certificates in a PKCS12 
file are always encrypted. Therefore if one loads the keystore with a null 
pass, it contains `PrivateKeyEntry`s without certificates. This has always been 
awkward (and most likely useless) so when JDK-8076190 introduced the 
password-less feature I also added a line to remove such an entry.

https://github.com/openjdk/jdk/blob/a729a70c0119ed071ff490b0dfd4e3e2cb1a5ae4/src/java.base/share/classes/sun/security/pkcs12/PKCS12KeyStore.java#L2272

Unfortunately, the line is not coded correctly, it should have been 
`remove(key)` but here it's `remove(value)`.

This code change correctly removes the entry.

That said, this behavior, although weird, has been there from the beginning 
since the PKCS12 keystore was introduced. If you can find a usage of a private 
key entry without any certificate and think it's worth keeping that way, I can 
simply remove the `remove` call and leave the entry there.

I still think it's useful even if I can't see the certificate chain. I'd rather 
see the entry if it actually exists in the keystore and I think removing it is 
odd because it still exists in the keystore. Also, sometimes I use keytool 
without a storepass just to see what is in it, and then if I see the 
certificates are not showing up, I can try again with the password.

-

PR: https://git.openjdk.java.net/jdk/pull/6910


I got curious and took a look at P11KeyStore.java - 
https://github.com/openjdk/jdk/blob/master/src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11KeyStore.java 
- and it appears to do exactly the opposite of what you've got here.  
E.g. if there's no certificate that matches up with the private key, 
then it returns a null - that's in engineGetEntry() around line 963.


Speaking personally, I've always found it a bit annoying that a 
KeyStore.PrivateKeyEntry required a certificate(chain).  It actually 
made dealing with PKCS11 painful as the Java conventions didn't always 
conform to the actual structure of the PKCS11 contents.


Not suggesting that a change necessarily needs to be made, but perhaps 
you might want to have a consistent behavior between the two?



Mike





Re: RFR: 8225181: KeyStore should have a getAttributes method [v5]

2021-12-01 Thread Michael StJohns

On 11/30/2021 10:07 PM, Wei-Jun Wang wrote:

My understanding is that Java's PKCS12KeyStore will fabricate an alias string 
if there is no friendlyName, since every entry inside a KeyStore object must 
have an alias. I'll take some look tomorrow.


Ah - I see it now in PKCS12KeyStore - it assigns an "unfriendlyName" as 
an alias if "friendlyName" is missing - basically "0", "1", etc.


Thanks - Mike



Thanks,
Max


On Nov 30, 2021, at 10:01 PM, Michael StJohns  wrote:

Hi -

Generically, PKCS12 doesn't require an alias (friendlyName) for a particular Bag, but 
does permit it. Which means that getAttributes(String alias) could fail on a legal 
PKCS12.  It may be worthwhile to add a Set<KeyStore.Entry.Attribute> 
KeyStore::getAttributes(int bagNumber) method.

Mike

On 11/30/2021 8:15 PM, Weijun Wang wrote:

Add `KeyStore::getAttributes` so that one can get the attributes of an entry 
without retrieving the entry first. This is especially useful for a private key 
entry which can only be retrieved with a password.

Weijun Wang has updated the pull request with a new target base due to a merge 
or a rebase. The incremental webrev excludes the unrelated changes brought in 
by the merge/rebase. The pull request contains seven additional commits since 
the last revision:

  - final spec change
  - Merge branch 'master' into 8225181
  - Merge branch 'master' into 8225181
  - redine spec
  - more clear and precise spec
  - clarification on protected attributes
  - 8225181: KeyStore should have a getAttributes method
8225181: KeyStore should have a getAttributes method

-

Changes:
   - all: https://git.openjdk.java.net/jdk/pull/6026/files
   - new: https://git.openjdk.java.net/jdk/pull/6026/files/901bdf83..702168db

Webrevs:
  - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=6026&range=04
  - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=6026&range=03-04

   Stats: 929909 lines in 2271 files changed: 483097 ins; 432951 del; 13861 mod
   Patch: https://git.openjdk.java.net/jdk/pull/6026.diff
   Fetch: git fetch https://git.openjdk.java.net/jdk pull/6026/head:pull/6026

PR: https://git.openjdk.java.net/jdk/pull/6026






Re: RFR: 8225181: KeyStore should have a getAttributes method [v5]

2021-11-30 Thread Michael StJohns

Hi -

Generically, PKCS12 doesn't require an alias (friendlyName) for a 
particular Bag, but does permit it. Which means that 
getAttributes(String alias) could fail on a legal PKCS12.  It may be 
worthwhile to add a Set<KeyStore.Entry.Attribute> 
KeyStore::getAttributes(int bagNumber) method.


Mike

On 11/30/2021 8:15 PM, Weijun Wang wrote:

Add `KeyStore::getAttributes` so that one can get the attributes of an entry 
without retrieving the entry first. This is especially useful for a private key 
entry which can only be retrieved with a password.

Weijun Wang has updated the pull request with a new target base due to a merge 
or a rebase. The incremental webrev excludes the unrelated changes brought in 
by the merge/rebase. The pull request contains seven additional commits since 
the last revision:

  - final spec change
  - Merge branch 'master' into 8225181
  - Merge branch 'master' into 8225181
  - redine spec
  - more clear and precise spec
  - clarification on protected attributes
  - 8225181: KeyStore should have a getAttributes method

8225181: KeyStore should have a getAttributes method


-

Changes:
   - all: https://git.openjdk.java.net/jdk/pull/6026/files
   - new: https://git.openjdk.java.net/jdk/pull/6026/files/901bdf83..702168db

Webrevs:
  - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=6026&range=04
  - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=6026&range=03-04

   Stats: 929909 lines in 2271 files changed: 483097 ins; 432951 del; 13861 mod
   Patch: https://git.openjdk.java.net/jdk/pull/6026.diff
   Fetch: git fetch https://git.openjdk.java.net/jdk pull/6026/head:pull/6026

PR: https://git.openjdk.java.net/jdk/pull/6026





Re: RFR: 8277246: No need to check about KeyUsage when validating a TSA certificate [v2]

2021-11-16 Thread Michael StJohns

On 11/16/2021 7:46 PM, Weijun Wang wrote:

On Tue, 16 Nov 2021 21:00:12 GMT, Weijun Wang  wrote:


There is no need to check for the KeyUsage extension when validating a TSA 
certificate.

A test is modified where a TSA cert has a KeyUsage but without the 
DigitalSignature bit.

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

   clarify RFC requirement

Really? The TSA is http://timestamp.digicert.com and the cert chain is

CN=DigiCert Timestamp 2021, O="DigiCert, Inc.", C=US
KeyUsage: DigitalSignature
ExtendedKeyUsages: timeStamping

CN=DigiCert SHA2 Assured ID Timestamping CA, OU=www.digicert.com, O=DigiCert 
Inc, C=US
KeyUsage: DigitalSignature,  Key_CertSign, Crl_Sign
ExtendedKeyUsages: timeStamping

You mean this CA can be used for time stamping as well? I understand that when 
KU is using you can find out its usage in EKU (vice versa), but here it's a CA 
that can sign cert and CRLs. Does it really need to act as the end entity cert 
of a TSA server?

-

PR: https://git.openjdk.java.net/jdk/pull/6416


It doesn't need to act as an EE of a TSA server, but with those markings 
it can.


Whoever issued these over-marked them.   I think their intent was to say 
that this CA chain would issue time stamp issuing certificates, but 
extendedKeyUsage contents are not transitive to the subordinate 
certificates, so that extension is pretty much extraneous in a CA.  That 
said, if you got a timestamp verifiable by the public key in this CA 
certificate it would be valid (based on the certificate only).    The 
TSA RFC doesn't actually prohibit having a basicConstraints ca=true 
extension.   If I were verifying a timestamp, I'd probably filter out 
any signatures from certificates that are claiming to be CAs, but that's 
not strictly according to standards.



If I were issuing this chain, there would be no extendedKeyUsage 
extensions in the intermediate certificate(s), and the keyUsage would 
only be keyCertSign or keyCertSign,cRLSign depending.  The EE 
certificate would have eku {id-kp-timestamping} and ku { 
digitalSignature } as I probably couldn't ensure non-repudiation for the 
time stamp (not the data being wrapped by the timestamp, which is what 
the Rekor chain is trying to claim, I think).



Hmm... while I was researching - I found this in RFC5280 - 4.2.1.12 
defining extendedKeyUsage oids:



   This extension indicates one or more purposes for which the certified
   public key may be used, in addition to or in place of the basic
   purposes indicated in the key usage extension.  In general, this extension
   will appear only in end entity certificates.  This
   extension is defined as follows:


and


id-kp-timeStamping             OBJECT IDENTIFIER ::= { id-kp 8 }
-- Binding the hash of an object to a time
-- Key usage bits that may be consistent: digitalSignature
-- and/or nonRepudiation


I hope that helps.


Mike



Re: RFR: 8277246: No need to check about KeyUsage when validating a TSA certificate [v2]

2021-11-16 Thread Michael StJohns

On 11/16/2021 6:37 PM, Weijun Wang wrote:

On Tue, 16 Nov 2021 21:00:12 GMT, Weijun Wang  wrote:


There is no need to check for the KeyUsage extension when validating a TSA 
certificate.

A test is modified where a TSA cert has a KeyUsage but without the 
DigitalSignature bit.

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

   clarify RFC requirement

I did see an issuer of TSA certs whose own certificate has EKU with 
id-kp-timeStamping and KU with both DigitalSignature and keyCertSign. This 
cert should be rejected if it signed a timestamp response.

-

PR: https://git.openjdk.java.net/jdk/pull/6416


Not quite.   The rule is that if there are both ExtendedKeyUsage and 
KeyUsage extensions, for any given OID in the EKU there has to be at 
least one bit in the KeyUsage extension that's compatible - there may 
be more than one.  If there's an EKU, and no KeyUsage, then only the EKU 
needs to have an OID for the key usage purpose - in this case signing a 
timestamp.
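A minimal sketch of that rule as it applies here (my illustration, not the JDK's 
EndEntityChecker code):

    // Usable as a TSA signer if EKU contains id-kp-timeStamping and, when a
    // KeyUsage extension is present, digitalSignature (0) or nonRepudiation (1) is set.
    static boolean usableForTimestamping(X509Certificate cert)
            throws CertificateParsingException {
        List<String> eku = cert.getExtendedKeyUsage();
        if (eku == null || !eku.contains("1.3.6.1.5.5.7.3.8")) {
            return false;
        }
        boolean[] ku = cert.getKeyUsage();   // null means no KeyUsage extension
        return ku == null || ku[0] || ku[1];
    }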


The cert you cite would be valid for timestamping.

Mike




Re: RFR: 8277246: No need to check about KeyUsage when validating a TSA certificate [v2]

2021-11-16 Thread Michael StJohns

On 11/16/2021 5:58 PM, Michael StJohns wrote:

On 11/16/2021 4:05 PM, Weijun Wang wrote:

On Tue, 16 Nov 2021 21:00:12 GMT, Weijun Wang  wrote:


There is no need to check for the KeyUsage extension when validating a TSA 
certificate.

A test is modified where a TSA cert has a KeyUsage but without the 
DigitalSignature bit.

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

   clarify RFC requirement

Hi Michael. Thanks for the comment. That was also our previous understanding 
but we are seeing timestamps returned by sigstore.dev (see the `rekor 
timestamp` command at https://github.com/sigstore/rekor) whose cert does not 
have the DigitalSignature bit set.

-BEGIN CERTIFICATE-
MIIBvzCCAWWgAwIBAgICBnowCgYIKoZIzj0EAwIwHzEdMBsGA1UEChMUaHR0cHM6
Ly9zaWdzdG9yZS5kZXYwHhcNMjExMTAyMTgxODI5WhcNMzExMTAyMTgxODI5WjAi
MSAwHgYDVQQKExdSZWtvciBUaW1lc3RhbXBpbmcgQ2VydDBZMBMGByqGSM49AgEG
CCqGSM49AwEHA0IABJIsOXOZUdgXhnNmvue9AAsxSYWdp+1HvEQQMYuZUsU0Ylf2
bKqIp3zVrc0a58pGJZvwGjyOHgY5lRevPP1huuOjgY0wgYowDgYDVR0PAQH/BAQD
AgZAMAwGA1UdEwEB/wQCMAAwDgYDVR0OBAcEBQECAwQGMB8GA1UdIwQYMBaAFIiV
AzEbE/rHgP3CA3x7tofqTtIcMCEGA1UdEQQaMBiHBH8AAAGHEAAA
AAEwFgYDVR0lAQH/BAwwCgYIKwYBBQUHAwgwCgYIKoZIzj0EAwIDSAAwRQIg
XdDBMPMTrtXiHmhFJOgQW8DDz/IaHkNZ+hXNL19dmuICIQCw3lE5+52F41kpY3B/
sJaPjuKmeIuEyYZDnMnlhHSusg==
-END CERTIFICATE-

-

PR: https://git.openjdk.java.net/jdk/pull/6416


The certificate is either incorrect, or weird and correct. I actually 
think it's incorrect, but let's assume the latter and that either of 
these two key purposes can be used.  Change the check to permit 
either of the two KUs in a KeyUsageExtension:



// insert around line 109.

private static final int KU_NON_REPUDIATION = 1;

// replace line 359.

if (checkKeyUsage(cert, KU_SIGNATURE) == false && checkKeyUsage(cert, 
KU_NON_REPUDIATION) == false) {


From RFC5280:


The digitalSignature bit is asserted when the subject public key
   is used for verifying digital signatures, other than signatures on
   certificates (bit 5) and CRLs (bit 6), such as those used in an
   entity authentication service, a data origin authentication
   service, and/or an integrity service.

   The nonRepudiation bit is asserted when the subject public key is
   used to verify digital signatures, other than signatures on
   certificates (bit 5) and CRLs (bit 6), used to provide a non-
   repudiation service that protects against the signing entity
   falsely denying some action.  In the case of later conflict, a
   reliable third party may determine the authenticity of the signed
   data.  (Note that recent editions of X.509 have renamed the
   nonRepudiation bit to contentCommitment.)


In any event, if you have a KU extension and it includes neither of 
these bits, the certificate is invalid for timestamping.  So simply 
deleting the check is wrong.


I'll reach out again to my expert and let you know what I find out.

Thanks - Mike


According to the guy who wrote RFC5280, nonRepudiation (aka 
contentCommitment) is a valid alternative for a keyUsage that pairs with 
an extended key usage of id-kp-timestamping.  I'd add a check that 
requires one or the other or both if a KeyUsage extension exists.


I added a note to the Rekor CA repository, hopefully at the correct 
location suggesting they set both bits going forward. This was code they 
published only back in May.


Mike




Re: RFR: 8277246: No need to check about KeyUsage when validating a TSA certificate [v2]

2021-11-16 Thread Michael StJohns

On 11/16/2021 4:05 PM, Weijun Wang wrote:

On Tue, 16 Nov 2021 21:00:12 GMT, Weijun Wang  wrote:


There is no need to check for the KeyUsage extension when validating a TSA 
certificate.

A test is modified where a TSA cert has a KeyUsage but without the 
DigitalSignature bit.

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

   clarify RFC requirement

Hi Michael. Thanks for the comment. That was also our previous understanding 
but we are seeing timestamps returned by sigstore.dev (see the `rekor 
timestamp` command at https://github.com/sigstore/rekor) whose cert does not 
have the DigitalSignature bit set.

-BEGIN CERTIFICATE-
MIIBvzCCAWWgAwIBAgICBnowCgYIKoZIzj0EAwIwHzEdMBsGA1UEChMUaHR0cHM6
Ly9zaWdzdG9yZS5kZXYwHhcNMjExMTAyMTgxODI5WhcNMzExMTAyMTgxODI5WjAi
MSAwHgYDVQQKExdSZWtvciBUaW1lc3RhbXBpbmcgQ2VydDBZMBMGByqGSM49AgEG
CCqGSM49AwEHA0IABJIsOXOZUdgXhnNmvue9AAsxSYWdp+1HvEQQMYuZUsU0Ylf2
bKqIp3zVrc0a58pGJZvwGjyOHgY5lRevPP1huuOjgY0wgYowDgYDVR0PAQH/BAQD
AgZAMAwGA1UdEwEB/wQCMAAwDgYDVR0OBAcEBQECAwQGMB8GA1UdIwQYMBaAFIiV
AzEbE/rHgP3CA3x7tofqTtIcMCEGA1UdEQQaMBiHBH8AAAGHEAAA
AAEwFgYDVR0lAQH/BAwwCgYIKwYBBQUHAwgwCgYIKoZIzj0EAwIDSAAwRQIg
XdDBMPMTrtXiHmhFJOgQW8DDz/IaHkNZ+hXNL19dmuICIQCw3lE5+52F41kpY3B/
sJaPjuKmeIuEyYZDnMnlhHSusg==
-END CERTIFICATE-

-

PR:https://git.openjdk.java.net/jdk/pull/6416


The certificate is either incorrect, or weird and correct. I actually 
think it's incorrect, but let's assume the latter and that either of 
these two key purposes can be used. Change the check to permit 
either of the two KUs in a KeyUsageExtension:



// insert around line 109.

private static final int KU_NON_REPUDIATION = 1;

// replace line 359.

if (checkKeyUsage(cert, KU_SIGNATURE) == false && checkKeyUsage(cert, 
KU_NON_REPUDIATION) == false) {


From RFC5280:


The digitalSignature bit is asserted when the subject public key
   is used for verifying digital signatures, other than signatures on
   certificates (bit 5) and CRLs (bit 6), such as those used in an
   entity authentication service, a data origin authentication
   service, and/or an integrity service.

   The nonRepudiation bit is asserted when the subject public key is
   used to verify digital signatures, other than signatures on
   certificates (bit 5) and CRLs (bit 6), used to provide a non-
   repudiation service that protects against the signing entity
   falsely denying some action.  In the case of later conflict, a
   reliable third party may determine the authenticity of the signed
   data.  (Note that recent editions of X.509 have renamed the
   nonRepudiation bit to contentCommitment.)


In any event, if you have a KU extension and it includes neither of 
these bits, the certificate is invalid for timestamping.  So simply 
deleting the check is wrong.


I'll reach out again to my expert and let you know what I find out.

Thanks - Mike



Re: RFR: 8277246: No need to check about KeyUsage when validating a TSA certificate

2021-11-16 Thread Michael StJohns

On 11/16/2021 2:43 PM, Weijun Wang wrote:

There is no need to check for the KeyUsage extension when validating a TSA 
certificate.

A test is modified where a TSA cert has a KeyUsage but without the 
DigitalSignature bit.

-

Commit messages:
  - 8277246: No need to check about KeyUsage when validating a TSA certificate

Changes:https://git.openjdk.java.net/jdk/pull/6416/files
  Webrev:https://webrevs.openjdk.java.net/?repo=jdk=6416=00
   Issue:https://bugs.openjdk.java.net/browse/JDK-8277246
   Stats: 7 lines in 2 files changed: 0 ins; 6 del; 1 mod
   Patch:https://git.openjdk.java.net/jdk/pull/6416.diff
   Fetch: git fetchhttps://git.openjdk.java.net/jdk  pull/6416/head:pull/6416

PR:https://git.openjdk.java.net/jdk/pull/6416


I just ran this by one of the PKIX folk and this fix isn't correct.   
Basically, if a certificate has both an Extended Key Usage and a Key 
Usage extension  then they both have to be consistent with the actual 
purpose and both have to be checked. The former for the specific use 
described in the TSA document, and the latter for general use (RFC3280 
and 5280).


In this case, checkKeyUsage attempts to find  keyUsageExtension and if 
it's not present returns true (e.g. key usage is acceptable).  Otherwise 
it checks to see if there's a digitalSignature bit set, and if it's not 
set checkKeyUsage returns false.   The code as written (before the 
change) is correct.  Here's the utility method in EndEntityChecker.java



    /**
     * Utility method checking if bit 'bit' is set in this certificates
     * key usage extension.
     * @throws CertificateException if not
     */
    private boolean checkKeyUsage(X509Certificate cert, int bit)
            throws CertificateException {
        boolean[] keyUsage = cert.getKeyUsage();
        if (keyUsage == null) {
            return true;
        }
        return (keyUsage.length > bit) && keyUsage[bit];
    }



It would be acceptable to have a certificate without a 
keyUsageExtension, but if the KUE is present, then the digitalSignature 
bit needs to be set.


Recommend backing out the change and closing the bug report as mistaken.


Mike



Re: RFR: 8273977: Reduce unnecessary BadPaddingExceptions in RSAPadding

2021-11-04 Thread Michael StJohns

On 11/4/2021 9:13 PM, Michael StJohns wrote:

On 11/3/2021 3:03 PM, Lari Hotari wrote:
On Mon, 20 Sep 2021 09:35:57 GMT, Lari Hotari  
wrote:



### Motivation

When profiling an application that uses JWT token authentication, 
it was noticed that a high number of 
`javax.crypto.BadPaddingException`s were created. When 
investigating the code in RSAPadding, one can see that 
BadPaddingException is created in all cases, also on the success path:
https://github.com/openjdk/jdk/blob/dc7f452acbe3afa5aa6e31d316bd5e669c86d6f6/src/java.base/share/classes/sun/security/rsa/RSAPadding.java#L369-L375 



### Modifications

Inline the unnecessary local variable to prevent creating the 
exception on the success path.
For anyone interested, there's an explanation of the 
[Bleichenbacher's CCA attack on PKCS#1 v1.5 on 
Stackexchange](https://crypto.stackexchange.com/questions/12688/can-you-explain-bleichenbachers-cca-attack-on-pkcs1-v1-5). 
The original paper is ["Chosen Ciphertext Attacks Against Protocols 
Based on the RSA Encryption Standard PKCS #1" 
](http://archiv.infsec.ethz.ch/education/fs08/secsem/bleichenbacher98.pdf).


The reason for constant time is to not leak information about a 
possible bad padding to the attacker based on the difference in 
response time between a valid and bad padding. The attacker can use 
this information to narrow the search to find the pre-master secret.
Hi @lhotari, please submit an OCA at 
https://oca.opensource.oracle.com/ if you are contributing on your 
own behalf. If you are contributing on your employers behalf, please 
send me an e-Mail at 
[dalibor.to...@oracle.com](mailto:dalibor.to...@oracle.com) so that 
I can verify your account.
@robilad This is a contribution on my own behalf. I have signed [OCA 
in 2014 while contributing to 
Btrace](https://github.com/btraceio/btrace/pull/101#issuecomment-6404). 
Is that sufficient? I cannot sign OCA again online, it gives me an 
error message "The provided GitHub username lhotari does already 
appear in an existing OCA, please use another one.".


-

PR: https://git.openjdk.java.net/jdk/pull/5581


Hi -

If you're trying for constant time, you might be defeated by the "if" 
at line 460 and the blank line ("// continue;") at 462, as well as the 
if clause at 450.


Maybe:

int zeroCount = 0;
 int msgZeroCount = 0;

 int mlenCount = 0;
 int msgOne = 0;

 int oneFlag = 0;

 int bpcount = -1;

 boolean logicNotBp = false;

 // substitute for 450-451
 if (lHash[i] != EM[dbStart + i]) {
   bp = true;
 } else {
   logicNotBp = true;
 }

// add at line 454

 if (logicNotBp) {
   bpcount = 0;
 }

 if (bp) {
   bpcount= 1;
 }

The above is a bit convoluted, but makes sure you have the same number 
and type of operations regardless of whether or not there is a match 
at any given position. bpcount will be set to 1 if any of the bytes 
don't match.  This shouldn't be optimized out



 // substitute for 458-469

 for (int i = padStart; i < EM.length; i++) {
   int value = EM[i];
   if (oneFlag == 0) {  // corrected: was "oneFlag != 0" - sorry about that.

switch (value) {
    case 0x00:
  zeroCount++;
  break;
    case 0x01:
  oneFlag++;
  break;
    default:
  bpcount++;
  }

   } else {
 switch (value) {
   case 0x00:
 msgZeroCount++;
 break;
   case 0x01:
 msgOne++;
 break;
   default:
 mlenCount++;
 }
   }
 }



 bp = (bpcount >= 1);
 mlenCount = msgZeroCount + msgOne + mlenCount;

This allows you to add an additional check for consistency - mlenCount 
+ zeroCount + 1 should equal EM.length - padStart. Checking those will 
prevent the optimizer from optimizing out the code above.


I used switch instead of if/else/else because it's usually closer to 
constant time and each of the branches is an increment.
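On the comparison at 450-451 itself, the usual branch-free idiom is to 
accumulate differences with XOR/OR instead of branching per byte. A small 
sketch of that idiom (illustration only, not the RSAPadding code):

final class ConstantTime {
    // Returns true only if len bytes of a (from aOff) equal len bytes of b (from bOff).
    // No data-dependent branches: the loop always runs to completion.
    static boolean equals(byte[] a, int aOff, byte[] b, int bOff, int len) {
        int diff = 0;
        for (int i = 0; i < len; i++) {
            diff |= (a[aOff + i] ^ b[bOff + i]);
        }
        return diff == 0;
    }
}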


FYI - I do have a signed contributor agreement from oracle days, but 
lack time to do this against a build environment.


Mike






Re: RFR: 8273977: Reduce unnecessary BadPaddingExceptions in RSAPadding

2021-11-04 Thread Michael StJohns

On 11/3/2021 3:03 PM, Lari Hotari wrote:

On Mon, 20 Sep 2021 09:35:57 GMT, Lari Hotari  wrote:


### Motivation

When profiling an application that uses JWT token authentication, it was 
noticed that a high number of `javax.crypto.BadPaddingException`s were created. 
When investigating the code in RSAPadding, one can see that BadPaddingException 
is created in all cases, also on the success path:
https://github.com/openjdk/jdk/blob/dc7f452acbe3afa5aa6e31d316bd5e669c86d6f6/src/java.base/share/classes/sun/security/rsa/RSAPadding.java#L369-L375

### Modifications

Inline the unnecessary local variable to prevent creating the exception on the 
success path.

For anyone interested, there's an explanation of the [Bleichenbacher's CCA attack on 
PKCS#1 v1.5 on 
Stackexchange](https://crypto.stackexchange.com/questions/12688/can-you-explain-bleichenbachers-cca-attack-on-pkcs1-v1-5).
 The original paper is ["Chosen Ciphertext Attacks Against Protocols Based on the 
RSA Encryption Standard PKCS #1" 
](http://archiv.infsec.ethz.ch/education/fs08/secsem/bleichenbacher98.pdf).

The reason for constant time is to not leak information about a possible bad 
padding to the attacker based on the difference in response time between a 
valid and bad padding. The attacker can use this information to narrow the 
search to find the pre-master secret.
Hi @lhotari, please submit an OCA at https://oca.opensource.oracle.com/ if you 
are contributing on your own behalf. If you are contributing on your employers 
behalf, please send me an e-Mail at 
[dalibor.to...@oracle.com](mailto:dalibor.to...@oracle.com) so that I can 
verify your account.

@robilad This is a contribution on my own behalf. I have signed [OCA in 2014 while 
contributing to 
Btrace](https://github.com/btraceio/btrace/pull/101#issuecomment-6404). Is that 
sufficient? I cannot sign OCA again online, it gives me an error message "The 
provided GitHub username lhotari does already appear in an existing OCA, please use 
another one.".

-

PR: https://git.openjdk.java.net/jdk/pull/5581


Hi -

If you're trying for constant time, you might be defeated by the "if" at 
line 460 and the blank line ("// continue;") at 462, as well as the if 
clause at 450.


Maybe:

int zeroCount = 0;
 int msgZeroCount = 0;

 int mlenCount = 0;
 int msgOne = 0;

 int oneFlag = 0;

 int bpcount = -1;

 boolean logicNotBp = false;

 // substitute for 450-451
 if (lHash[i] != EM[dbStart + i]) {
   bp = true;
 } else {
   logicNotBp = true;
 }

// add at line 454

 if (logicNotBp) {
   bpcount = 0;
 }

 if (bp) {
   bpcount= 1;
 }

The above is a bit convoluted, but makes sure you have the same number 
and type of operations regardless of whether or not there is a match at 
any given position. bpcount will be set to 1 if any of the bytes don't 
match.  This shouldn't be optimized out



 // substitute for 458-469

 for (int i = padStart; i < EM.length; i++) {
   int value = EM[i];
   if (oneFlag != 0) {
 switch (value) {
    case 0x00:
  zeroCount++;
  break;
    case 0x01:
  oneFlag++;
  break;
    default:
  bpcount++;
  }

   } else {
 switch (value) {
   case 0x00:
 msgZeroCount++;
 break;
   case 0x01:
 msgOne++;
 break;
   default:
 mlenCount++;
 }
   }
 }



 bp = (bpcount >= 1);
 mlenCount = msgZeroCount + msgOne + mlenCount;

This allows you to add an additional check for consistency - mlenCount + 
zeroCount + 1 should equal EM.length - padStart. Checking those will 
prevent the optimizer from optimizing out the code above.


I used switch instead of if/else/else because it's usually closer to 
constant time and each of the branches is an increment.


FYI - I do have a signed contributor agreement from oracle days, but 
lack time to do this against a build environment.


Mike




Re: RFR: 8248268: Support KWP in addition to KW [v7]

2021-05-24 Thread Michael StJohns

Some more general comments - related to the restructuring.

In AESKeyWrap at 152-155 - that check probably should be moved to W().   
KWP should do the formatting prior to passing the data to W().  Also at 
185-187 - move that to W_INV().


AESKeyWrap at 158 - shouldn't you be returning the cipher text length?  
That's what the comment in FeedbackCipher says.  W() should probably be 
returning the final length.


The length of the final ciphertext data should be 8 bytes longer than 
the plaintext. decryptFinal() seems to do the right thing by decreasing 
the length returned.   But again - shouldn't that be the purview of W_INV()?


The call in KeyWrapCipher.engineGetOutputSize() should probably also be 
passed into KWUtil rather than being  done in KeyWrapCipher.  And the 
more I look at this, the more I think that all of the engineUpdate 
operations should throw UnsupportedOperationException - it would 
certainly simplify the logic.  That would make the call return either  
inputLength + 8 or inputLength - 8 depending on mode.
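My reading of what engineGetOutputSize() would reduce to under that change - 
a sketch only, with a hypothetical helper name, not the actual KeyWrapCipher 
code:

import javax.crypto.Cipher;

final class KwOutputSize {
    // Wrapping adds the 8-byte integrity block; unwrapping removes it.
    static int outputSize(int opmode, int inputLen) {
        boolean wrapping = (opmode == Cipher.WRAP_MODE || opmode == Cipher.ENCRYPT_MODE);
        return wrapping ? inputLen + 8 : Math.max(inputLen - 8, 0);
    }
}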


KWUtil.W() - should probably check that in.length >= inLen + 8 and throw 
a ShortBufferException if not.
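Something along these lines (a sketch with a hypothetical helper, not the 
actual KWUtil code):

import javax.crypto.ShortBufferException;

final class KwLengthCheck {
    // Guard at the top of W(): the working buffer must have room for the
    // 8-byte integrity block in addition to the inLen bytes of input.
    static void checkWrapBuffer(byte[] in, int inLen) throws ShortBufferException {
        if (in.length < inLen + 8) {
            throw new ShortBufferException(
                "need " + (inLen + 8) + " bytes, have " + in.length);
        }
    }
}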


Would it be useful to add a comment in KeyWrapCipher that  warns 
maintainers that  AESKeyWrap(Padded).encryptFinal() and decryptFinal() 
uses the input buffer as the output buffer? That's a reasonable approach 
for decryption, but I'm wondering if you might want to support in-place 
encryption as there's no spec prohibition requiring data to be held 
until the encryption is complete.


Mike









On 5/24/2021 7:01 PM, Valerie Peng wrote:

On Fri, 21 May 2021 20:44:57 GMT, Xue-Lei Andrew Fan  wrote:


Valerie Peng has updated the pull request with a new target base due to a merge 
or a rebase. The pull request now contains seven commits:

  - Merge master into JDK-8248268
  - Minor update to address review comments.
  - Changed AESParameters to allow 4-byte, 8-byte IVs and removed
KWParameters and KWPParameters.
  - Refactor code to reduce code duplication
Address review comments
Add more test vectors
  - Changed AlgorithmParameters impls to register under AES/KW/NoPadding and
AES/KWP/NoPadding
  - Restored Iv algorithm parameters impl.
  - 8248268: Support KWP in addition to KW

Updated existing AESWrap support with AES/KW/NoPadding cipher

transformation. Added support for AES/KWP/NoPadding and
AES/KW/PKCS5Padding support to SunJCE provider.

src/java.base/share/classes/com/sun/crypto/provider/AESParameters.java line 50:


48:
49: public AESParameters() {
50: core = new BlockCipherParamsCore(AESConstants.AES_BLOCK_SIZE, 4, 8);

A cipher object may not take different IV sizes at the same time.  I was just 
wondering how it could be used in practice.  Maybe something like:


AlgorithmParameters algParams = AlgorithmParameters.getInstance("AES");
algParams.init(ivParameterSpec);

The IV parameter is given with the init() method.  Then, it may not be 
necessary to construct the BlockCipherParamsCore object with all potential IV 
sizes.  See the comments in BlockCipherParamsCore.

Cipher objects normally take just one iv size. BlockCipherParamsCore is used 
by various impls of AlgorithmParametersSpi class which may be used with 
different block cipher algorithms. The iv parameter is given with the 
AlgorithmParametersSpi.init() method and invalid iv will lead to exceptions. 
Since there are iv size checks built in BlockCipherParamsCore already, it seems 
safer to relax the check a bit for AES (4 and 8 for KWP and KW). The other 
choice is to just remove the size check from BlockCipherParamsCore for AES 
algorithm, but then we'd not be able to reject invalid iv lengths as before.
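In other words, something like this relaxed length check (a sketch only, not 
the actual BlockCipherParamsCore code):

final class IvSizeCheck {
    // Accept the block size plus any explicitly allowed extra IV sizes
    // (e.g. 8 for AES/KW and 4 for AES/KWP).
    static boolean ivLengthOk(int len, int blockSize, int... extraSizes) {
        if (len == blockSize) {
            return true;
        }
        for (int allowed : extraSizes) {
            if (len == allowed) {
                return true;
            }
        }
        return false;
    }
}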


src/java.base/share/classes/com/sun/crypto/provider/BlockCipherParamsCore.java 
line 52:


50: private byte[] iv = null;
51:
52: private int[] moreSizes = null;

The moreSizes field is not used other than in the init() method.  It may not be 
easy to check the specific size if we cache all supported sizes in the object.  
For example, if the required IV size is 8 bytes, it may be a problem how 
to make sure the iv size is exactly 8 bytes for a specific algorithm.

Maybe, we could just have an ivSize field.  The default value is block_size, 
which could be set with the init(ivParameterSpec) method.


 
 private int ivSize;
 ...
BlockCipherParamsCore(int blkSize) {
block_size = blkSize;
ivSize = blkSize;
 }
 ...
void init(AlgorithmParameterSpec paramSpec) {
 ivSize = ...;  // reset IV size.
 }

 // use ivSize while encoding and decoding.

The problem with this is that this assumes that the specified paramSpec 
argument of the init() method is always valid.

-

PR: https://git.openjdk.java.net/jdk/pull/2404





Re: RFR: 8248268: Support KWP in addition to KW [v7]

2021-05-22 Thread Michael StJohns

On 5/22/2021 1:57 PM, Xue-Lei Andrew Fan wrote:

On Fri, 14 May 2021 00:33:12 GMT, Valerie Peng  wrote:


This change updates SunJCE provider as below:
- updated existing AESWrap support with AES/KW/NoPadding cipher transformation.
- added support for AES/KWP/NoPadding and AES/KW/PKCS5Padding.

Existing AESWrap impl, i.e. AESWrapCipher class, is re-factored and renamed to 
KeyWrapCipher class. The W and W_inverse functions are moved to KWUtil class. 
The KW and KWP support are in the new AESKeyWrap and AESKeyWrapPadded classes 
which extend FeedbackCipher and used in KeyWrapCipher class. To minimize data 
copying, AESKeyWrap and AESKeyWrapPadded will do the crypto operation over the 
same input buffer which is allocated and managed by KeyWrapCipher class.

Also note that existing AESWrap impl does not take IV. However, the 
corresponding PKCS#11 mechanisms do, so I added support for accepting IVs to 
both KW and KWP.

Thanks,
Valerie

Valerie Peng has updated the pull request with a new target base due to a merge 
or a rebase. The pull request now contains seven commits:

  - Merge master into JDK-8248268
  - Minor update to address review comments.
  - Changed AESParameters to allow 4-byte, 8-byte IVs and removed
KWParameters and KWPParameters.
  - Refactor code to reduce code duplication
Address review comments
Add more test vectors
  - Changed AlgorithmParameters impls to register under AES/KW/NoPadding and
AES/KWP/NoPadding
  - Restored Iv algorithm parameters impl.
  - 8248268: Support KWP in addition to KW

Updated existing AESWrap support with AES/KW/NoPadding cipher

transformation. Added support for AES/KWP/NoPadding and
AES/KW/PKCS5Padding support to SunJCE provider.

Good points, Mike!  Thank you!


_Mailing list message from [Michael StJohns](mailto:mstjo...@comcast.net) on 
[security-dev](mailto:security-...@mail.openjdk.java.net):_

src/java.base/share/classes/com/sun/crypto/provider/AESParameters.java line 50:

48:
49: public AESParameters() {
50: core = new BlockCipherParamsCore(AESConstants.AES_BLOCK_SIZE, 4, 8);
A cipher object may not take different IV sizes at the same time.  I was just 
wondering how it could be used in practice.  Maybe something like:

The mode is KW - it has a fixed length 8 byte non-iv integrity tag.
KWP is a special case of KW where there's still an 8 byte tag, but part
of it is interpreted by KWP to figure out how much padding was
included. KW (AKA RFC3394) permits user (actually specification
specified) IV values. KWP (aka RFC5649) does not.


Hm, I missed this point.  KW (RFC 3394) supports Alternative IV (AIV), while 
KWP is just a case about how to construct the alternative IV for KWP operations.


Yes.  Sorry if I wasn't clearer in earlier emails.




I'd treat KWP as a final (in the Java final sense) extension to KW with
a fixed AIV flag value and a defined interpretation for the 8 byte AIV
tag. E.g. if you try to specify an IV for KWP, it should fail. If
someone else wants to do something like KWP or even twiddle with the 4
byte AIV, let them do their own KW wrap around - which they should be
able to do via the KW/NoPadding model by specifying their own AIV.
That should improve interoperability by preventing some monkey see
monkey do errors.


I agreed.  Maybe, we could do:
1. Support Alternative IV for KW, for the flexibility as described in section 
2.2.3.2 of RFC 3394.


I think that's acceptable.  Hmm... is it possible to make the set ICV 
call protected rather than public?   Generally, the defined ICVs match 
up with specific algorithm OIDS.  You shouldn't see a third one unless 
someone defines a brand new variant.  But you want to make it possible 
for someone to do an extension.



2. Use default IV if no AIV specified for KW, as described in section 2.2.3.1 
of RFC 3394.

Works.  Or see the above.

3. Use a fixed IV for KWP, as described in section 3of RFC 5649.

Yes.



This is sort of one reason I was arguing for AES/KW/KWPPadding rather
than AES/KWP/NoPadding.


I thought of the "AES/KW/KWPPadding" style as well.  The "AES/KWP/NoPadding" looks a 
little bit weird to me because there is padding while we call it with "NoPadding".

KWP is not exactly like the traditional AES/CBC/PKCS5Padding, where the 
components could be treated separately.  The padding scheme of KWP impact the 
IV as well.  So it was arguable to me.


We keep referring to it as an IV  - but it isn't - it's an Integrity 
Check Value.  Unlike an IV, it doesn't inform the decryption stage, it's 
only checked after the decryption is complete.  (E.g. in a mode with an 
IV, you need to know the IV before you encrypt and before you decrypt).  
Then there's this from Section 6.3 of SP800-38F.



Let S = ICV2 || [len(P)/8]32 || P || PAD



Step 5 of the KWP-AE algorithm basically passes the A65959A6 value, 
prepended to the plaintext octet length, both prepended to the padded data, 
through to W().
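For reference, a small sketch of that construction (illustration only, not 
the SunJCE code):

import java.nio.ByteBuffer;

final class KwpInput {
    // S = ICV2 || [len(P)/8]32 || P || PAD, per SP 800-38F section 6.3.
    private static final byte[] ICV2 = { (byte) 0xA6, 0x59, 0x59, (byte) 0xA6 };

    static byte[] buildS(byte[] p) {
        int padLen = (8 - (p.length % 8)) % 8;   // pad P up to a multiple of 8
        ByteBuffer s = ByteBuffer.allocate(8 + p.length + padLen);
        s.put(ICV2);
        s.putInt(p.length);   // [len(P)/8]32: octet length of P as a 32-bit value
        s.put(p);             // PAD octets remain zero (ByteBuffer is zero-filled)
        return s.array();
    }
}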


The 

Re: RFR: 8248268: Support KWP in addition to KW [v7]

2021-05-22 Thread Michael StJohns

In line

On 5/21/2021 5:01 PM, Xue-Lei Andrew Fan wrote:

On Fri, 14 May 2021 00:33:12 GMT, Valerie Peng  wrote:


This change updates SunJCE provider as below:
- updated existing AESWrap support with AES/KW/NoPadding cipher transformation.
- added support for AES/KWP/NoPadding and AES/KW/PKCS5Padding.

Existing AESWrap impl, i.e. AESWrapCipher class, is re-factored and renamed to 
KeyWrapCipher class. The W and W_inverse functions are moved to KWUtil class. 
The KW and KWP support are in the new AESKeyWrap and AESKeyWrapPadded classes 
which extend FeedbackCipher and used in KeyWrapCipher class. To minimize data 
copying, AESKeyWrap and AESKeyWrapPadded will do the crypto operation over the 
same input buffer which is allocated and managed by KeyWrapCipher class.

Also note that existing AESWrap impl does not take IV. However, the 
corresponding PKCS#11 mechanisms do, so I added support for accepting IVs to 
both KW and KWP.

Thanks,
Valerie

Valerie Peng has updated the pull request with a new target base due to a merge 
or a rebase. The pull request now contains seven commits:

  - Merge master into JDK-8248268
  - Minor update to address review comments.
  - Changed AESParameters to allow 4-byte, 8-byte IVs and removed
KWParameters and KWPParameters.
  - Refactor code to reduce code duplication
Address review comments
Add more test vectors
  - Changed AlgorithmParameters impls to register under AES/KW/NoPadding and
AES/KWP/NoPadding
  - Restored Iv algorithm parameters impl.
  - 8248268: Support KWP in addition to KW

Updated existing AESWrap support with AES/KW/NoPadding cipher

transformation. Added support for AES/KWP/NoPadding and
AES/KW/PKCS5Padding support to SunJCE provider.

src/java.base/share/classes/com/sun/crypto/provider/AESParameters.java line 50:


48:
49: public AESParameters() {
50: core = new BlockCipherParamsCore(AESConstants.AES_BLOCK_SIZE, 4, 8);

A cipher object may not take different IV sizes at the same time.  I was just 
wondering how it could be used in practice.  Maybe something like:


The mode is KW - it has a fixed length 8 byte non-iv integrity tag.    
KWP is a special case of KW where there's still an 8 byte tag, but part 
of it is interpreted by KWP to figure out how much padding was 
included.   KW (AKA RFC3394) permits user (actually specification 
specified) IV values.  KWP (aka RFC5649) does not.


I'd treat KWP as a final (in the Java final sense) extension to KW with 
a fixed AIV flag value and a defined interpretation for the 8 byte AIV 
tag.  E.g. if you try to specify an IV for KWP, it should fail.   If 
someone else wants to do something like KWP or even twiddle with the 4 
byte AIV, let them do their own KW wrap around - which they should be 
able to do via the KW/NoPadding model by specifying their own AIV.  
That should improve interoperability by preventing some monkey see 
monkey do errors.


This is sort of one reason I was arguing for AES/KW/KWPPadding rather 
than AES/KWP/NoPadding.



Mike






AlgorithmParameters algParams = AlgorithmParameters.getInstance("AES");
algParams.init(ivParameterSpec);

The IV parameter is given with the init() method.  Then, it may not be 
necessary to construct the BlockCipherParamsCore object with all potential IV 
sizes.  See the comments in BlockCipherParamsCore.

src/java.base/share/classes/com/sun/crypto/provider/BlockCipherParamsCore.java 
line 52:


50: private byte[] iv = null;
51:
52: private int[] moreSizes = null;

The moreSizes field is not used other than in the init() method.  It may not be 
easy to check the specific size if we cache all supported sizes in the object.  
For example, if the required IV size is 8 bytes, it may be a problem how 
to make sure the iv size is exactly 8 bytes for a specific algorithm.

Maybe, we could just have an ivSize field.  The default value is block_size, 
which could be set with the init(ivParameterSpec) method.


 
 private int ivSize;
 ...
BlockCipherParamsCore(int blkSize) {
block_size = blkSize;
ivSize = blkSize;
 }
 ...
void init(AlgorithmParameterSpec paramSpec) {
 ivSize = ...;  // reset IV size.
 }

 // use ivSize while encoding and decoding.

src/java.base/share/classes/com/sun/crypto/provider/BlockCipherParamsCore.java 
line 81:


79: expectedLen + " bytes long");
80: }
81: iv = tmpIv.clone();

The moreSizes field is not used after initialization.  The iv/tmpIv could be a value 
other than the block_size.   The getEncoded() method would use the iv value for 
the encoding.  But in the decoding init(byte[]) method, IV sizes 
other than block_size are not considered, and an IOE will be thrown.  Could this be a 
problem?

-

PR: https://git.openjdk.java.net/jdk/pull/2404





Re: Tentative suggestion: Make X509Key.getKey() non-protected

2021-05-04 Thread Michael StJohns

Hi Tim -

Pardon the top posting, and I'm only speaking for myself, but your 
suggestion is unlikely to get any traction.


First and most importantly, sun.security.x509 et al are non-public 
classes (e.g. not part of the JDK API, rather than a reference to their 
java tagging), and generally align their public behaviors with the API, 
and modify their private behaviors as necessary to make them more 
efficient etc.   E.g. you can't even count on X509Key to exist from JDK 
to JDK as it's possible that the plugin maintainer may completely 
refactor those classes at any time.


Secondly, the API and the encoding of the public and private keys 
generally reflects what the various RFCs and other standards documents 
specify.  Most of the key material has multiple ways to encode it, but 
that's not the responsibility of either the JDK or the underlying 
plugins - they all support a common PKIX encoding basis and expect that 
"foreign" keys (e.g. keys originating outside of a given provider) will 
be encoded in the common fashion when they enter or leave the 
provider. It's left to the programmer to do whatever other encodings 
are necessary and to do the translations to and from those.  (I have 
personal code for translating between raw EC keys and at least two 
different forms of secure element native forms, as well as moving public 
key material in and out of smart cards in various formats).
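As a concrete example of the kind of translation that's left to the programmer 
- here, for the Ed25519 case raised below - a sketch (it assumes a well-formed 
Ed25519 SubjectPublicKeyInfo, where the raw 32 key bytes are simply the last 32 
bytes of getEncoded(); a robust version would parse the ASN.1 instead of 
relying on the fixed layout):

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;
import java.util.Base64;

final class Ed25519Raw {
    // Raw key bytes -> Base64, by stripping the fixed 12-byte SPKI prefix.
    static String rawBase64(PublicKey edKey) {
        byte[] spki = edKey.getEncoded();               // 44 bytes for Ed25519
        byte[] raw = Arrays.copyOfRange(spki, spki.length - 32, spki.length);
        return Base64.getEncoder().encodeToString(raw);
    }

    // And back: a full SPKI can be turned into a PublicKey via the KeyFactory.
    static PublicKey fromSpki(byte[] spki) throws Exception {
        return KeyFactory.getInstance("Ed25519")
                .generatePublic(new X509EncodedKeySpec(spki));
    }
}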


Third, yes it is 80s technology or earlier, but for better or worse, it's 
what the infrastructure supports.


You may find a better place to make this suggestion over with the 
BouncyCastle folk.  There are a host of utilities included with their 
ASN1 libraries and crypto Providers that can be used to get what you 
want - but I see you've already noted that.


You could always build your own provider classes to do what you want.

Later, Mike



On 5/4/2021 2:48 PM, Tim Bray wrote:
Pardon the interruption.  I'm working on a social-media identity 
federation project involving zero-knowledge proofs where you post 
public-key/nonce/sig combinations to multiple places to prove the 
poster identities' shared ownership of a private key. Details (not 
really relevant to the following): 
https://www.tbray.org/ongoing/When/202x/2020/12/01/Bluesky-Identity 



The representation of the public key needs to be textual. The most 
straightforward way to do that is a one-liner like so: 
Base64.getEncoder().encodeToString(key.getEncoded()) which gives you 
Base64'ed PKIX.


Problem is, people are wondering why they need PKIX. They want to use 
EdDSA and they don't see the need for algorithm agility (there's a 
faction that thinks it's actively harmful) and point out that ed25519 
keys are just 32 bytes worth of bits and don't see why they need to 
use Eighties technologies to express them in ASCII and ask "why can't 
I just base64 those bytes?"  Which is straightforward in Go and JS and 
so on.


It is however not straightforward in Java (unless I'm missing 
something obvious, wouldn't be surprising) 
because sun.security.x509.X509Key.getKey() is protected; see 
https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/sun/security/x509/X509Key.java#L131 



I have verified, via abuse of the reflection APIs, that the BitArray 
that call returns does indeed represent the appropriate 32 ed25519 
bytes. So, I have an object that has the data that I need inside but 
for reasons of policy it is declining to reveal them.  BTW you can do 
this in BouncyCastle and friends.


Rather than changing protection on a long-established method, maybe a 
new public byte[] getKeyBytes() would be the thing to do?  Or maybe it 
would be better to make this an EdDSA-specific operation? Maybe the 
EdDSA classes already provide a way to do this that I couldn't find?


So, with an apology for possibly wasting your time, two questions:

1. Is there a principled Java 15 way to get the key bytes that I'm not 
smart enough to have found?
2. Failing that, would a proposal to allow access to un-PKIX'ed EdDSA 
key bytes be ruled out in principle for some good reason that I don't 
know about?






Re: RFR: 8248268: Support KWP in addition to KW [v4]

2021-04-07 Thread Michael StJohns

*sigh* Minor correction in line.

On 4/7/2021 2:49 PM, Michael StJohns wrote:

On 4/7/2021 1:28 PM, Greg Rubin wrote:

Mike,

Yes, this was in response to your comment.

I'm aware that the IV really serves more as an integrity check and 
mode signalling mechanism than anything else. My concern is that in 
the past few years I've seen various issues related to "in band 
signalling" where something about the ciphertext (or directly 
associated metadata) changes how the data is decrypted and 
authenticated. This has reached the level where several cryptographic 
forums I participate in are starting to consider it a full anti-pattern.


The proposed "AutoPadding" mode is an example of in-band signalling 
in which an externally provided ciphertext changes how it is 
interpreted. While I cannot personally think of a specific risk in 
this case, I would be inclined not to include this mode unless there 
is a strong driving need from our users. While I have definitely seen 
people not knowing if their data was encrypted with KW or KW+PKCS5/7, 
I haven't personally seen uncertainty between KW and KWP. (I also 
haven't worked with all possible HSMs, just a few of them.)  So, from 
a position of caution, I'd avoid "AutoPadding", but this is a 
preference based on current best-practice rather than a strong 
objection based on specific concerns or risks.



I sent a note off to the original mode inventor - Russ Housley:

Can you think of any reason why there might be an issue with 
providing an autopadding mode for KW/KWP (e.g. select which to use 
based on the input data for encrypt and determine which was used 
after running the unwrap function but before removing the initial 
block and any padding)?


I got back:

As long as every party supports both modes, you could use KW id [sic 
- I think he meant "if", not "is" as I wrote earlier] the inout is a 
multiple of 64 bits, otherwise use KWP.  Of course, the algorithm 
identifier needs to be set appropriately.


Which sort of confirms what I thought, but added a question: Are there 
algorithm OIDs for KW with PKCS5 padding or do people just use the KW 
OID( 2.16.840.1.101.3.4.1.{5,25,45}?  As far as I can tell, there are 
no OIDs for KW with PKCS5.


Does there need to be an autopad OID?

If it were me, I'd be avoiding implementing the PKCS5 padding mode 
here.  I can't actually find a specification that includes it and it 
looks like a hack that was fixed by the specification of KWP.  I'd 
prefer not to extend the hack's lifetime, given that  RFC5649 is 10+ 
years old.


WRT HSM uncertainty, I ran into problems especially trying to wrap 
RSA private keys.  Turned out that some encoded as 8 byte multiples 
and some did not.  In any event, I mentioned HSMs, but I really care 
about the general model for the JCE. I'd *really* like to avoid having 
to first figure out the private key encoding length (which may 
be difficult as a provider may not choose to export an unwrapped 
private key even if it's a software provider) before choosing the 
wrapping algorithm.   Doing it that way just fits the JCE model better.


At some point, there needs to be an RFC written that specifies the 
default encodings for keys wrapped by this algorithm.


Later, Mike




Thank you,
Greg

On Sat, Apr 3, 2021 at 4:38 PM Michael StJohns <mailto:mstjo...@comcast.net>> wrote:


On 4/3/2021 11:35 AM, Greg Rubin wrote:
> I'd advise against the AutoPadding scheme without more careful
analysis and discussion. Have we seen either KW or KWP
specifications which recommend that behavior?
>
> My concern is that we've seen cases before where two different
cryptographic algorithms could be selected transparently upon
decryption and it lowers the security of the overall system. (A
variant of in-band signalling.) The general consensus that I've
been seeing in the (applied) cryptographic community is strongly
away from in-band signalling and towards the decryptor fully
specifying the algorithms and behavior prior to attempting
decryption.

I think this is in response to my comment?

The wrap function can take a Key as an input and can have the unwrap
method produce a Key as an output - indeed it should be used
primarily
for this rather than the more general encrypt/decrypt functions. 
The
problem is that the encoding of the key may not be known prior to
the
attempt to wrap it - hence it's not known whether or not padding
need be
applied.  This is especially problematic with HSMs. Providing an
AutoPadding mode would allow the wrapping algorithm to decide
whether to
use either of the RFC 3394 (AKA KW) Integrity Check Value (ICV)
or the
RFC5649 (aka KWP) value and padding length.

The key thing to remember here is that the IV (initial value - RFC
language) / ICV (integrity check value - NIST language) actually
 

Re: RFR: 8248268: Support KWP in addition to KW [v4]

2021-04-07 Thread Michael StJohns

On 4/7/2021 1:28 PM, Greg Rubin wrote:

Mike,

Yes, this was in response to your comment.

I'm aware that the IV really serves more as an integrity check and 
mode signalling mechanism than anything else. My concern is that in 
the past few years I've seen various issues related to "in band 
signalling" where something about the ciphertext (or directly 
associated metadata) changes how the data is decrypted and 
authenticated. This has reached the level where several cryptographic 
forums I participate in are starting to consider it a full anti-pattern.


The proposed "AutoPadding" mode is an example of in-band signalling in 
which an externally provided ciphertext  changes how it is 
interpreted. While I cannot personally think of a specific risk in 
this case, I would be inclined not to include this mode unless there 
is a strong driving need from our users. While I have definitely seen 
people not knowing if their data was encrypted with KW or KW+PKCS5/7, 
I haven't personally seen uncertainty between KW and KWP. (I also 
haven't worked with all possible HSMs, just a few of them.) So, from a 
position of caution, I'd avoid "AutoPadding", but this is a preference 
based on current best-practice rather than a strong objection based on 
specific concerns or risks.



I sent a note off to the original mode inventor - Russ Housley:

Can you think of any reason why there might be an issue with providing 
an autopadding mode for KW/KWP (e.g. select which to use based on the 
input data for encrypt and determine which was used after running the 
unwrap function but before removing the initial block and any padding)?


I got back:

As long as every party supports both modes, you could use KW id [sic - 
I think he meant "is"] the inout is a multiple of 64 bits, otherwise 
use KWP.  Of course, the algorithm identifier needs to be set 
appropriately.


Which sort of confirms what I thought, but added a question:  Are there 
algorithm OIDs for KW with PKCS5 padding, or do people just use the KW 
OIDs (2.16.840.1.101.3.4.1.{5,25,45})?  As far as I can tell, there are no 
OIDs for KW with PKCS5.


Does there need to be an autopad OID?

If it were me, I'd be avoiding implementing the PKCS5 padding mode 
here.  I can't actually find a specification that includes it and it 
looks like a hack that was fixed by the specification of KWP.  I'd 
prefer not to extend the hack's lifetime, given that RFC5649 is 10+ 
years old.


WRT HSM uncertainty, I ran into problems especially trying to wrap 
RSA private keys.  Turned out that some encoded as 8 byte multiples and 
some did not.  In any event, I mentioned HSMs, but I really care about 
the general model for the JCE.  I'd *really* like to avoid having to 
first figure out the private key encoding length (which may be 
difficult as a provider may not choose to export an unwrapped private 
key even if it's a software provider) before choosing the wrapping 
algorithm.   Doing it that way just fits the JCE model better.


At some point, there needs to be an RFC written that specifies the 
default encodings for keys wrapped by this algorithm.


Later, Mike




Thank you,
Greg

On Sat, Apr 3, 2021 at 4:38 PM Michael StJohns <mailto:mstjo...@comcast.net>> wrote:


On 4/3/2021 11:35 AM, Greg Rubin wrote:
> I'd advise against the AutoPadding scheme without more careful
analysis and discussion. Have we seen either KW or KWP
specifications which recommend that behavior?
>
> My concern is that we've seen cases before where two different
cryptographic algorithms could be selected transparently upon
decryption and it lowers the security of the overall system. (A
variant of in-band signalling.) The general consensus that I've
been seeing in the (applied) cryptographic community is strongly
away from in-band signalling and towards the decryptor fully
specifying the algorithms and behavior prior to attempting decryption.

I think this is in response to my comment?

The wrap function can take a Key as an input and can have the unwrap
method produce a Key as an output - indeed it should be used
primarily
for this rather than the more general encrypt/decrypt functions.  The
problem is that the encoding of the key may not be known prior to the
attempt to wrap it - hence it's not known whether or not padding
need be
applied.  This is especially problematic with HSMs.  Providing an
AutoPadding mode would allow the wrapping algorithm to decide
whether to
use either of the RFC 3394 (AKA KW) Integrity Check Value (ICV) or
the
RFC5649 (aka KWP) value and padding length.

The key thing to remember here is that the IV (initial value - RFC
language) / ICV (integrity check value - NIST language) actually
isn't an
IV (initialization vector) in the ordinary meaning, it's a flag,
padding
and integrity indicator and 

Re: RFR: 8248268: Support KWP in addition to KW [v4]

2021-04-03 Thread Michael StJohns

On 4/3/2021 11:35 AM, Greg Rubin wrote:

I'd advise against the AutoPadding scheme without more careful analysis and 
discussion. Have we seen either KW or KWP specifications which recommend that 
behavior?

My concern is that we've seen cases before where two different cryptographic 
algorithms could be selected transparently upon decryption and it lowers the 
security of the overall system. (A variant of in-band signalling.) The general 
consensus that I've been seeing in the (applied) cryptographic community is 
strongly away from in-band signalling and towards the decryptor fully 
specifying the algorithms and behavior prior to attempting decryption.


I think this is in response to my comment?

The wrap function can take a Key as an input and can have the unwrap 
method produce a Key as an output - indeed it should be used primarily 
for this rather than the more general encrypt/decrypt functions.  The 
problem is that the encoding of the key may not be known prior to the 
attempt to wrap it - hence it's not known whether or not padding need be 
applied.  This is especially problematic with HSMs.  Providing an 
AutoPadding mode would allow the wrapping algorithm to decide whether to 
use either of the RFC 3394 (AKA KW) Integrity Check Value (ICV) or the 
RFC5649 (aka KWP) value and padding length.


The key thing to remember here is that the IV (initial value - RFC 
language) / ICV (integrity check value - NIST language) actually isn't an 
IV (initialization vector) in the ordinary meaning, it's a flag, padding 
and integrity indicator and will be fixed for all keys of the same 
length that use the specified values.   E.g. unlike other modes that 
require an initialization vector, you don't need to know the ICV to 
decrypt the underlying key stream, but you can  (and for that matter 
MUST) easily test the recovered first block against the expected ICV to 
determine whether the output needs padding removed or not.
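A sketch of that unwrap-side decision (illustration only, not proposed 
SunJCE code):

import java.util.Arrays;

final class KwIcvCheck {
    private static final byte[] KW_ICV = {
        (byte) 0xA6, (byte) 0xA6, (byte) 0xA6, (byte) 0xA6,
        (byte) 0xA6, (byte) 0xA6, (byte) 0xA6, (byte) 0xA6 };
    private static final byte[] KWP_ICV = { (byte) 0xA6, 0x59, 0x59, (byte) 0xA6 };

    // After W^-1, look at the recovered first block: the RFC 3394 default ICV
    // means no padding to strip, the RFC 5649 AIV means bytes 4..7 carry the
    // plaintext length. Anything else fails the integrity check.
    static boolean needsPaddingRemoved(byte[] firstBlock) {
        if (Arrays.equals(Arrays.copyOf(firstBlock, 8), KW_ICV)) {
            return false;
        }
        if (Arrays.equals(Arrays.copyOf(firstBlock, 4), KWP_ICV)) {
            return true;
        }
        throw new IllegalArgumentException("unrecognized ICV - integrity check failed");
    }
}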


FWIW, the actual cryptographic operations between padded data and 
non-padded data (of the right multiple length) are identical. It's only 
the pre or post processing that's looking for different data.


Obviously, this doesn't work if someone provides their own IV - but 
that's fairly unlikely.  CF CCM and its non-normative example formatting 
function appendix A -  each and every implementation I've seen uses that 
formatting function, even though it isn't actually required by the 
standard.  I'd be surprised if anyone decided to use a different set of 
non-standard IV values.


If an AutoPadding mode were implemented, I'd throw exceptions if someone 
tried to set the IV.


Later, Mike




Re: RFR: 8248268: Support KWP in addition to KW [v3]

2021-03-23 Thread Michael StJohns

On 3/22/2021 5:43 PM, Valerie Peng wrote:

This change updates SunJCE provider as below:
- updated existing AESWrap support with AES/KW/NoPadding cipher transformation.
- added support for AES/KWP/NoPadding and AES/KW/PKCS5Padding.

Existing AESWrap impl, i.e. AESWrapCipher class, is re-factored and renamed to 
KeyWrapCipher class. The W and W_inverse functions are moved to KWUtil class. 
The KW and KWP support are in the new AESKeyWrap and AESKeyWrapPadded classes 
which extend FeedbackCipher and used in KeyWrapCipher class. To minimize data 
copying, AESKeyWrap and AESKeyWrapPadded will do the crypto operation over the 
same input buffer which is allocated and managed by KeyWrapCipher class.

Also note that existing AESWrap impl does not take IV. However, the 
corresponding PKCS#11 mechanisms do, so I added support for accepting IVs to 
both KW and KWP.

Thanks,
Valerie



PR: https://git.openjdk.java.net/jdk/pull/2404


KeyWrapCipher.java:


    /**
     * Sets the padding mechanism of this cipher. Currently, only
     * "NoPadding" scheme is accepted for this cipher.
     *
     * @param padding the padding mechanism
     *
     * @exception NoSuchPaddingException if the requested padding mechanism
     * does not match the supported padding scheme
     */
    @Override
    protected void engineSetPadding(String padding)
            throws NoSuchPaddingException {
        if ((this.padding == null && !"NoPadding".equalsIgnoreCase(padding)) ||
                this.padding instanceof PKCS5Padding &&
                "PKCS5Padding".equalsIgnoreCase(padding)) {
            throw new NoSuchPaddingException();
        }
    }


Shouldn't this be rejecting PKCS5Padding?

Actually, I keep wondering why this isn't  AES/KW/NoPadding, 
AES/KW/KWPPadding and AES/KW/AutoPadding where the latter doesn't take a 
user provided IV, but figures out which of the two default IV values to 
use based on whether the input is a multiple of the blocksize or not?


Decrypting is similar - do the decryption, and check which IV you get to 
figure out padded or not padded.


Mike




Re: RFR: 8248268: Support KWP in addition to KW [v3]

2021-03-23 Thread Michael StJohns

On 3/23/2021 4:15 PM, Greg Rubin wrote:

177: System.out.println("Testing " + ALGO);
178: c = Cipher.getInstance(ALGO, "SunJCE");
179: for (int i = 0; i < MAX_KWP_PAD_LEN; i++) {

I see that here (and earlier) we do test all padding lengths. I'd still like 
some KATs generated by a known good implementation to ensure that we are not 
just compatible with ourselves.



http://csrc.nist.gov/groups/STM/cavp/documents/mac/kwtestvectors.zip has 
the NIST test vectors.  See 
https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Algorithm-Validation-Program/documents/mac/KWVS.pdf 
for details.


Mike




Re: RFR: 8263404: RsaPrivateKeySpec is always recognized as RSAPrivateCrtKeySpec in RSAKeyFactory.engineGetKeySpec [v4]

2021-03-20 Thread Michael StJohns

On 3/20/2021 2:46 PM, SalusaSecondus wrote:

On Sat, 20 Mar 2021 17:52:00 GMT, SalusaSecondus 
 wrote:


@valeriepeng Sorry for the delay. There were unknown Windows build failure 
during the pre-submit tests that I have to rebase my commits on top of the  
master tip. This new revision should cover all comments you left before. Thank 
you!

Mike,

 From what I can find, if you try to get a spec from a non-extractable key 
you'll get an `InvalidKeySpecException`.
1. `C_GetAttributeValue`will throw a `PKCS11Exception`
2. The `PKCS11Exception` gets caught in 
[P11KeyFactory](https://github.com/openjdk/jdk/blob/master/src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11KeyFactory.java#L98-L99)
 which rethrows it as an `InvalidKeySpecException`.

We seem to have a choice and I'm not sure the best way to approach this.

1. We trust the properties in `P11Key` and just ask it if the values are both 
sensitive and extractable. [1]
2. But if we already trust P11Key, why not also trust that it properly 
implements the RSAPrivateKey interfaces [2]. This is the strategy used by the 
snippet I posted earlier (delegating to `implGetSoftwareFactory()`)
3. We don't trust P11Key except to use getKeyId(), this yields the current 
design where we pull the attributes every time the factory needs them.

We should probably reduce calls to `C_GetAttributeValue` as they may be very 
slow. At the least they cross the JNI boundary and at worst they interact with 
a slow piece of hardware (possibly over a network). The current design will 
have two calls in a worst case, but is likely to have only one call the vast 
majority of the time.

[1] 
https://github.com/openjdk/jdk/blob/master/src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11Key.java#L92
[2] 
https://github.com/openjdk/jdk/blob/master/src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11Key.java#L375-L406

-

PR: https://git.openjdk.java.net/jdk/pull/2949


Actually, the important lines are 63-66 and 365-373:

 * If the components are not accessible, we use a generic class that
 * only implements PrivateKey (or SecretKey). Whether the components of a
 * key are extractable is automatically determined when the key object is
 * created.

attributes = getAttributes(session, keyID, attributes, new CK_ATTRIBUTE[] {
        new CK_ATTRIBUTE(CKA_TOKEN),
        new CK_ATTRIBUTE(CKA_SENSITIVE),
        new CK_ATTRIBUTE(CKA_EXTRACTABLE),
});
if (attributes[1].getBoolean() || (attributes[2].getBoolean() == false)) {
    return new P11PrivateKey
            (session, keyID, algorithm, keyLength, attributes);
}


If the key is non-extractable, then the only attributes will be these 
three and the underlying type will be P11Key.P11PrivateKey rather than 
one of the RSA variants.


Simple check at the top.
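I.e. something like this at the top of engineGetKeySpec (a sketch with my own 
naming, not the actual P11RSAKeyFactory code):

import java.security.Key;
import java.security.interfaces.RSAPrivateKey;
import java.security.spec.InvalidKeySpecException;

final class SensitiveKeyCheck {
    // A sensitive/non-extractable P11 private key comes back as the generic
    // P11Key.P11PrivateKey, which does not implement RSAPrivateKey, so we can
    // fail fast with a clearer message than a CKR_ATTRIBUTE_SENSITIVE error.
    static void requireExtractable(Key key) throws InvalidKeySpecException {
        if (!(key instanceof RSAPrivateKey)) {
            throw new InvalidKeySpecException(
                "key is sensitive or non-extractable; cannot build an RSAPrivate(Crt)KeySpec");
        }
    }
}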

Mike




Re: RFR: 8263404: RsaPrivateKeySpec is always recognized as RSAPrivateCrtKeySpec in RSAKeyFactory.engineGetKeySpec [v4]

2021-03-20 Thread Michael StJohns

On 3/20/2021 1:54 PM, SalusaSecondus wrote:

On Thu, 18 Mar 2021 20:25:59 GMT, Ziyi Luo  wrote:


This looks to cover the cases and fixes we talked about.

@valeriepeng Sorry for the delay. There were unknown Windows build failure 
during the pre-submit tests that I have to rebase my commits on top of the  
master tip. This new revision should cover all comments you left before. Thank 
you!

Mike,

 From what I can find, if you try to get a spec from a non-extractable key 
you'll get an `InvalidKeySpecException`.
1. `C_GetAttributeValue`will throw a `PKCS11Exception`
2. The `PKCS11Exception` gets caught in 
[P11KeyFactory](https://github.com/openjdk/jdk/blob/master/src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11KeyFactory.java#L98-L99)
 which rethrows it as an `InvalidKeySpecException`.

-

PR: https://git.openjdk.java.net/jdk/pull/2949


Given that, I'd refactor the code to pull the CKA_SENSITIVE and 
CKA_EXTRACTABLE attributes first and throw a more specific message if the 
key is not extractable rather than having to fail twice before throwing 
the error.  (I.e., you try both combos of the attributes and both are 
failing on the inability to pull the private exponent).


Either that or fail early by checking the error code of the first thrown 
PKCS11Exception against CKR_ATTRIBUTE_SENSITIVE.



  } catch (final PKCS11Exception ex) {
      if (ex.getErrorCode() == PKCS11Constants.CKR_ATTRIBUTE_SENSITIVE) {
          throw new InvalidKeySpecException("Sensitive key may not be extracted", ex);
      }
      // bubble this up if RSAPrivateCrtKeySpec is specified
      // otherwise fall through to RSAPrivateKeySpec
      if (!keySpec.isAssignableFrom(RSAPrivateKeySpec.class)) {
          throw ex;
      }
  } finally {
      key.releaseKeyID();
  }


Later, Mike



Re: RFR: 8263404: RsaPrivateKeySpec is always recognized as RSAPrivateCrtKeySpec in RSAKeyFactory.engineGetKeySpec [v4]

2021-03-19 Thread Michael StJohns

On 3/19/2021 2:24 PM, Valerie Peng wrote:


some* reason (even if I cannot figure out why).
Well, for `P11RSAKeyFactory`, I think some minor modification may be needed 
given the additional P11PrivateKey type.
I'd expect it to be something like:
 // must be either RSAPrivateKeySpec or RSAPrivateCrtKeySpec
 if (keySpec.isAssignableFrom(RSAPrivateCrtKeySpec.class)) {
 session[0] = token.getObjSession();
 CK_ATTRIBUTE[] attributes = new CK_ATTRIBUTE[] {
 new CK_ATTRIBUTE(CKA_MODULUS),
 new CK_ATTRIBUTE(CKA_PUBLIC_EXPONENT),
 new CK_ATTRIBUTE(CKA_PRIVATE_EXPONENT),


If the PKCS11 private key has the CKA_SENSITIVE attribute set to true or 
CKA_EXTRACTABLE set to false, you can't retrieve the above attributes.  
IIRC, the contract for getting a Key from an unextractable PKCS11 
private key is to return a key that implements both PrivateKey and 
RSAKey, but doesn't implement either of the RSAPrivateKey interfaces.   
I don't know what the contract is for producing KeySpec's from 
unextractable keys.


Mike



 new CK_ATTRIBUTE(CKA_PRIME_1),
 new CK_ATTRIBUTE(CKA_PRIME_2),
 new CK_ATTRIBUTE(CKA_EXPONENT_1),
 new CK_ATTRIBUTE(CKA_EXPONENT_2),
 new CK_ATTRIBUTE(CKA_COEFFICIENT),
 };
 long keyID = key.getKeyID();
 try {
 token.p11.C_GetAttributeValue(session[0].id(), keyID, 
attributes);
 KeySpec spec = new RSAPrivateCrtKeySpec(
 attributes[0].getBigInteger(),
 attributes[1].getBigInteger(),
 attributes[2].getBigInteger(),
 attributes[3].getBigInteger(),
 attributes[4].getBigInteger(),
 attributes[5].getBigInteger(),
 attributes[6].getBigInteger(),
 attributes[7].getBigInteger()
 );
 return keySpec.cast(spec);
 } catch (final PKCS11Exception ex) {
 // bubble this up if RSAPrivateCrtKeySpec is specified
 // otherwise fall through to RSAPrivateKeySpec
 if (!keySpec.isAssignableFrom(RSAPrivateKeySpec.class)) {
 throw ex;
 }
 }  finally {
 key.releaseKeyID();
 }

 attributes = new CK_ATTRIBUTE[] {
 new CK_ATTRIBUTE(CKA_MODULUS),
 new CK_ATTRIBUTE(CKA_PRIVATE_EXPONENT),
 };
 keyID = key.getKeyID();
 try {
 token.p11.C_GetAttributeValue(session[0].id(), keyID, 
attributes);
 } finally {
 key.releaseKeyID();
 }

 KeySpec spec = new RSAPrivateKeySpec(
 attributes[0].getBigInteger(),
 attributes[1].getBigInteger()
 );
 return keySpec.cast(spec);
 } else { // PKCS#8 handled in superclass
 throw new InvalidKeySpecException("Only RSAPrivate(Crt)KeySpec "
 + "and PKCS8EncodedKeySpec supported for RSA private keys");
 }
 }

-

PR: https://git.openjdk.java.net/jdk/pull/2949





Re: Integrated: 8261472: BasicConstraintsExtension::toString shows "PathLen:2147483647" if there is no pathLenConstraint

2021-02-09 Thread Michael StJohns

On 2/9/2021 9:02 PM, Weijun Wang wrote:

On Wed, 10 Feb 2021 01:39:15 GMT, Weijun Wang  wrote:


Print out "no limit" instead. This is the words RFC 5280 uses: "Where 
pathLenConstraint does not appear, no limit is imposed".

No regression test. Trivial.

This pull request has now been integrated.

Changeset: 4619f372
Author:Weijun Wang 
URL:   https://git.openjdk.java.net/jdk/commit/4619f372
Stats: 11 lines in 1 file changed: 8 ins; 1 del; 2 mod

8261472: BasicConstraintsExtension::toString shows "PathLen:2147483647" if 
there is no pathLenConstraint

Reviewed-by: jnimeh

-

PR: https://git.openjdk.java.net/jdk/pull/2493



Sorry - not quite right, it's not quite that trivial a fix.

The definition for BasicConstraints is


BasicConstraints ::= SEQUENCE {
 cA  BOOLEAN DEFAULT FALSE,
 pathLenConstraint   INTEGER (0..MAX) OPTIONAL }


If pathLenConstraint is not present, then the path length is infinite.   
The flag value for that looks like it was encoded as both any negative 
number (and set to -1 to start) and Integer.MAX_VALUE.  I'm not quite 
sure why it was done that way, but there's a problem doing that - 
actually a bunch of them.


You really ought to get the same encoding coming and going (e.g. 
creating an object from DER should encode back to the exact same DER).  
The current code does not do that.


1) It's not valid to encode or decode pathLenConstraint in the DER as a 
negative number.   This isn't a problem for encoding, but the 
BasicConstraintsExtension(Boolean critical, Object value) constructor needs a value 
check after line 157 to make sure that the decoded pathLenConstraint 
field is non-negative - I'd throw an IOException on failure.    I'd also 
change line 149 to just return - the initial value of pathLen is -1 and 
that's an appropriate flag for a missing field.


2) I'm not sure why the set/get methods were added.  I think it was to 
provide access for the PathValidation stuff? Given that they are 
present, I'd insert a line after line 222 (set) : "if (pathLen < 0) 
pathLen = -1;" // treat any negative value as unconstrained path length


3) And since the only place pathLen is available externally is in the 
get method, I'd change line 237 to "return (pathLen < 0) ? 
Integer.MAX_VALUE : Integer.valueOf(pathLen);"   I think this is more 
correct than returning -1.


4) And to fix the problem that led to this discussion, change line 176 
to 'pathLenAsString = " unconstrained"' and delete lines 177-178.  E.g. 
there's no such thing as undefined here - both a negative number and 
MAX_VALUE mean unconstrained length in the previous version of the code.


5) One more - in the other constructor, change line 108 to "this.pathLen 
= (len < 0 || len == Integer.MAX_VALUE) ? -1 : len;"


6) *sigh* Delete lines 197-201.  I have no idea why they are overriding 
the specified value of critical based on whether ca is true or not, but 
it's wrong.    Conversely, the call to the constructor at line 95 is 
correct.
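Taken together, the pathLen handling I have in mind looks roughly like this (a 
sketch only, not the actual BasicConstraintsExtension source; -1 is the 
internal flag for "no pathLenConstraint present"):

final class PathLenSketch {
    private int pathLen = -1;   // -1 == unconstrained (pathLenConstraint absent)

    void set(int len) {
        // treat any negative value (or MAX_VALUE) as unconstrained
        pathLen = (len < 0 || len == Integer.MAX_VALUE) ? -1 : len;
    }

    Integer get() {
        return (pathLen < 0) ? Integer.valueOf(Integer.MAX_VALUE)
                             : Integer.valueOf(pathLen);
    }

    String pathLenAsString() {
        return (pathLen < 0) ? " unconstrained" : (" PathLen:" + pathLen);
    }
}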


Mike




Re: RFR: 8023980: JCE doesn't provide any class to handle RSA private key in PKCS#1 [v3]

2021-01-18 Thread Michael StJohns

On 1/17/2021 9:29 PM, Valerie Peng wrote:

On Fri, 15 Jan 2021 01:45:07 GMT, Valerie Peng  wrote:


Marked as reviewed by weijun (Reviewer).

_Mailing list message from [Michael StJohns](mailto:mstjo...@comcast.net) on 
[security-dev](mailto:security-dev@openjdk.java.net):_

Sorry - I'm coming to this a bit late.

Any chance of adding the logic for generatePublic() from a PKCS8 RSA
private key?  RFC3447 has the PKCS1 RSAPrivateKey ASN1 which includes
the modulus and publicExponent - so it should be a pretty
straightforward add to generate a public key.

PKCS11 2.40 started requiring that the publicExponent be stored with the
private key to allow for the public key to be regenerated from a private
key object.  Going forward, it might be a good idea to modify the
RSAPrivate(Crt)KeyImpl class to store the publicExponent if provided.

Mike

You are correct that for RSA private CRT keys the necessary values are there 
for figuring out its corresponding public keys.

This change is about adding support for PKCS#1 encoded RSA keys and be able to 
translate them into PKCS#8 encoded keys and/or extract various key specs out of 
them. If you already have PKCS#8 RSAPrivateCrtKey obj from SunRsaSign provider, 
you can call its getPublicExponent() method and use that to create a 
RSAPublicKeySpec and generate RSA public key with it. If you are using 3rd 
party impl which does not return the public exponent value somehow, then you 
can translate it using the RSA key factory impl from SunRsaSign provider and 
then repeat the aforementioned step. Will this address your need? If not, could 
you elaborate on the usage that you have in mind? I am not sure if you are suggesting a 
new KeyFactory.generatePublic() method which takes a PrivateKey, or something else.
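
For reference, a minimal sketch of that recovery path (the class and method 
names below are illustrative, not part of the JDK):

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.interfaces.RSAPrivateCrtKey;
import java.security.spec.RSAPublicKeySpec;

public class RecoverRsaPublic {
    // Rebuild the RSA public key from a PKCS#8 RSAPrivateCrtKey using the
    // SunRsaSign KeyFactory, as described above.
    public static PublicKey recover(RSAPrivateCrtKey priv) throws Exception {
        RSAPublicKeySpec spec =
            new RSAPublicKeySpec(priv.getModulus(), priv.getPublicExponent());
        return KeyFactory.getInstance("RSA", "SunRsaSign").generatePublic(spec);
    }
}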

Mike,
We can continue your feedback with a separate RFE since this RFE is just about 
adding support for PKCS#1 encoding.
I need to wrap this up before my upcoming trip this Wed, hope that's ok with 
you.

Thanks! Valerie

-

PR: https://git.openjdk.java.net/jdk/pull/1787


No worries - I got busy with other things for a few days.

To answer your other question, I've had a few cases where the public key 
for a private key has been misplaced and where we needed it back.  
Ideally,  it should be possible to take a PrivateKey object (in whatever 
form including just PrivateKey) and run it through the KeyFactory to 
extract a PublicKey.  I've used the technique you suggest above before 
(and the related multiply the private scalar against G for EC keys) to 
retrieve the data needed to build a public key, but each of these is a 
bit ad hoc. I'd really prefer to have a standard pattern for all key types.


About 4 years or so ago (e.g. when 2.40 was released), the PKCS11 group 
started requiring that the private keys include the attributes necessary 
to retrieve the public key - for RSA it was the public exponent, and for 
EC it was the public point (which could be stored or regenerated upon 
demand).  It may be time to think about doing something similar here and 
going forward for any given asymmetric key type.


That's a more general RFE than just updating the current implementing 
classes, but as far as I can tell, doesn't change any of the APIs, but 
may change the factory class implementation guidance.


An interesting addition would be to have the Impl classes implement both 
the appropriate public key and private key interfaces.





Re: RFR: 8023980: JCE doesn't provide any class to handle RSA private key in PKCS#1 [v3]

2021-01-14 Thread Michael StJohns

Sorry - I'm coming to this a bit late.

Any chance of adding the logic for generatePublic() from a PKCS8 RSA 
private key?   RFC3447 has the PKCS1 RSAPrivateKey ASN1 which includes 
the modulus and publicExponent - so it should be a pretty 
straightforward add to generate a public key.


PKCS11 2.40 started requiring that the publicExponent be stored with the 
private key to allow for the public key to be regenerated from a private 
key object.   Going forward,  it might be a good idea to modify the 
RSAPrivate(Crt)KeyImpl class to store the publicExponent if provided.



Mike


On 1/14/2021 4:06 PM, Valerie Peng wrote:

On Wed, 13 Jan 2021 17:00:36 GMT, Weijun Wang  wrote:


Valerie Peng has updated the pull request incrementally with one additional 
commit since the last revision:

   Update copyright year from 2020 to 2021.

test/jdk/sun/security/rsa/TestKeyFactory.java line 159:


157: throw new Exception("Encodings not equal");
158: }
159: }

Can we combine the 2 blocks above into one? That is to say, when key1 and key2 
have the same format, we check the equality of both getEncoded() and 
themselves. Same for the P11 test.

Sure, will do.

-

PR: https://git.openjdk.java.net/jdk/pull/1787





Re: Missing documentation for EdDSA key serialization

2020-08-31 Thread Michael StJohns

On 8/31/2020 2:00 PM, Anthony Scarpino wrote:

On 8/31/20 8:16 AM, Anders Rundgren wrote:

On
https://tools.ietf.org/html/rfc8032#page-24
you can find test vectors that are also used by rfc8037 (JOSE).

However, there seems to be no information on how to create an EdDSA 
public key from such a vector.
Apparently you must be an expert on the inner workings of EdDSA in 
order to use this API.


I have, though, managed (...), but 1) it looks strange and 2) it may be 
incorrect.


Steps
1. Convert the hex-code to a byte[] array.
2. Reverse (!) all the bytes in the byte[] array.
3. publicKey = kf.generatePublic(
     new EdECPublicKeySpec(new NamedParameterSpec(alg),
       new EdECPoint(false, new BigInteger(1, theByteArray))));
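
For what it's worth, here is a self-contained sketch of those three steps.  
The hex string is intended to be the public key from RFC 8032 section 7.1, 
TEST 1 (please verify against the RFC), and per RFC 8032 section 5.1.3 the 
top bit of the final byte carries the x parity, which the steps above ignore:

import java.math.BigInteger;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.EdECPoint;
import java.security.spec.EdECPublicKeySpec;
import java.security.spec.NamedParameterSpec;

public class EdDsaVectorKey {
    static final String PUB_HEX =
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a";

    public static void main(String[] args) throws Exception {
        byte[] raw = parseHex(PUB_HEX);                    // step 1
        // top bit of the last byte = x parity; remaining bits = y, little-endian
        boolean xOdd = (raw[raw.length - 1] & 0x80) != 0;
        raw[raw.length - 1] &= 0x7f;
        byte[] be = new byte[raw.length];                  // step 2: reverse to big-endian
        for (int i = 0; i < raw.length; i++) {
            be[i] = raw[raw.length - 1 - i];
        }
        BigInteger y = new BigInteger(1, be);
        KeyFactory kf = KeyFactory.getInstance("Ed25519"); // step 3
        PublicKey pk = kf.generatePublic(
            new EdECPublicKeySpec(NamedParameterSpec.ED25519, new EdECPoint(xOdd, y)));
        System.out.println(pk);
    }

    static byte[] parseHex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}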


Ideally, EdECPoint should have a constructor that does whatever it 
takes based on a byte[] array.


It is equally fuzzy in the other direction.  A "getByteArray()" on 
EdECPoint would have been great.


Thanx,
Anders


Hi,

It does seem like a constructor and/or helper methods would be a good 
addition.  I filed a bug to track this:


https://bugs.openjdk.java.net/browse/JDK-8252595

thanks

Tony



Generically, maybe this should be added not to the ED based stuff, but 
to BigInteger:


public static BigInteger fromLittleEndianEncoding(int signum, byte[] 
magnitude);


public  byte[] getMagnitude(boolean bigEndian, int sizeInBytes); // if 
size in bytes < actual magnitude size, returns the magnitude bytes, 
otherwise returns the magnitude bytes left or right padded to 
sizeInBytes depending on endianness.


Almost all of the crypto stuff related to converting between the various 
encodings of signatures and keys would benefit from a standard version 
of the left/right padding stuff.
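
As a strawman, the proposed helpers could be written today as static 
utilities along these lines (these are not existing BigInteger methods, just 
an illustration of the intended behavior):

import java.math.BigInteger;

public class LittleEndianBigIntegers {
    // Interpret a little-endian magnitude as a BigInteger.
    public static BigInteger fromLittleEndianEncoding(int signum, byte[] magnitude) {
        byte[] be = magnitude.clone();
        reverse(be);
        return new BigInteger(signum, be);
    }

    // Return the magnitude, zero-padded out to sizeInBytes, in the requested order.
    public static byte[] getMagnitude(BigInteger v, boolean bigEndian, int sizeInBytes) {
        byte[] be = v.toByteArray();
        int start = (be.length > 1 && be[0] == 0) ? 1 : 0;   // strip the sign byte
        int len = be.length - start;
        byte[] out = new byte[Math.max(sizeInBytes, len)];
        System.arraycopy(be, start, out, out.length - len, len);
        if (!bigEndian) {
            reverse(out);
        }
        return out;
    }

    private static void reverse(byte[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            byte t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
}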


Mike

ps - there's a long thread maybe a year or two ago about internal vs 
external representations of the EDDSA and EDDH keys.  I'm actually kind 
of surprised that the internal representation turned out to be 
BigInteger.   I'm glad it did, but its now kind of strange that we have 
6 extra interface classes when we could probably have gotten away with 
folding them in under the EC* classes.



pps - the alternate way of doing this (and probably the most correct 
way) would be to encode the bytes from the test vector into a 
SubjectPublicKeyInfo public key (see RFC8410) and run that through the 
key factory.  That should do all the reversing and generating.  You can 
confirm that by doing a getEncoded() on your built public key and 
running it back through the key factory.






Re: RFR: 8225068: Remove DocuSign root certificate that is expiring in May 2020

2020-04-30 Thread Michael StJohns

On 4/30/2020 2:11 PM, Rajan Halade wrote:
Please review this patch to remove Keynectis CA certificate from 
cacerts store.


Webrev: http://cr.openjdk.java.net/~rhalade/8225068/webrev.00/

Thanks,
Rajan


Hi -

There are three other certificates expiring a week after the one you're 
removing - why aren't they going too?


Mike




Re: [15] RFR 8172680: Support SHA-3 based Hmac algorithms

2020-04-03 Thread Michael StJohns

Hi Valerie -

In line

On 4/3/2020 5:32 PM, Valerie Peng wrote:


Hi Mike,

We can update SunPKCS11 provider when the PKCS#11 official header 
files are updated with SHA3 and Hmac w/ SHA3.


I agree with you on the ideal case is to have no lagging in JCA and 
the SunPKCS11 provider.


The main reason for the lagging is that we need to test and make sure 
the added functionality works. I checked NSS which is what existing 
PKCS11 regression tests use and it does not have any SHA3 support. Do 
you know other PKCS11 vendors which supports SHA3 and Hmac w/ SHA3? If 
there are many, it'll help me justifying this when the official 
headers are not updated yet.


I've got an include file from Utimaco dated 27 March 2017 that includes 
the SHA3 assignments from PKCS11 - and their collateral says they 
implement SHA3 (this is all of the message digest, hmac and signature 
mechanisms, and key derivation mechanisms specified for PKCS11 3.0.


Safenet ProtectServer has it 
https://data-protection-updates.gemalto.com/2018/04/27/product-release-safenet-protecttoolkit-5-6/


I can't find anything that says nCipher has it.

That's two out of three of the big ones.

I am not sure if I understand your suggestion of PKCS11 specific 
mechanism naming convention. Is it about duplicating the pending SHA3 
mechanism definitions in SunPKCS11 provider? It's trivial to add the 
SHA3 related mechanism definitions to SunPKCS11 provider, but the 
convention is to add things only after they are official as it may be 
hard to change due to backward compatibility concern.


Something like MessageDigest.getInstance ("SHA3_256", pkcs11provider) 
ends up mapping to an underlying call "CK_MECHANISM m = new CK_MECHANISM 
(CKM_SHA3_256);" where CKM_SHA3_256 is  "public static long CKM_SHA3_256 
= 0x02b0L;"


There are at times a number of proprietary or provider specific 
algorithms that the underlying PKCS11 dll might support, but for which 
the Java PKCS11 provider doesn't have the string (name) to mechanism 
number mapping, but for which the API is the same as for any other 
algorithm of its class.


For example, the Utimaco PKCS11 definitions include

#define CKM_DES3_RETAIL_MAC    0x8135    // Retail-MAC with 
0-Padding


Which is unlikely to be part of any PKCS11 standard, but could be 
accessed by


Mac.getInstance ("PKCS11_MAC_16_8135", pkcs11provider); // 16 is the 
mac length.


So this is an escape mechanism to permit access to provider extensions 
without having to reflect them back into the Java PKCS11 provider.
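
A rough sketch of how such an escape-style name could be parsed inside a 
provider (the name pattern and return shape are purely illustrative - this is 
not something SunPKCS11 actually does today):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Pkcs11EscapeNames {
    private static final Pattern MAC_NAME =
        Pattern.compile("PKCS11_MAC_(\\d+)_([0-9A-Fa-f]+)");

    // Returns {mechanism, macLengthBytes} for a name like "PKCS11_MAC_16_8135".
    public static long[] parse(String algorithm) {
        Matcher m = MAC_NAME.matcher(algorithm);
        if (!m.matches()) {
            throw new IllegalArgumentException("not an escape-style name: " + algorithm);
        }
        long macLen = Long.parseLong(m.group(1));          // MAC output length in bytes
        long mechanism = Long.parseLong(m.group(2), 16);   // raw CKM_* mechanism number
        return new long[] { mechanism, macLen };
    }
}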


(When support for EC algorithms were being kept from various software - 
including NSS - due to nervousness about patent claims, I ended up using 
the PKCS11 wrapper classes directly specifically because I couldn't do 
an ECDSA via the PKCS11 provider.  That hasn't been the case for a 
while, but it's always bothered me that the JCA got in the way of the 
underlying provider.)


I don't know that is doable given the current architecture (which 
usually requires an OID for a mechanism to register it for SunPKCS11), 
but something to think about.



Thanks & hope you get enough sleep during this difficult time...

*laugh* I'm doing better thanks.  I wrenched something in my shoulder 
and it kept me awake or woke me up when I was sleeping. Much better now.


Thanks! Mike



Valerie

On 3/31/2020 10:15 AM, Michael StJohns wrote:

Sorry - this one got past me.

For PKCS11 - the assignment of mechanism numbers can happen at any 
time and doesn't necessarily result in a new version of the 
specification.  In this case, the API won't change, so there's no 
reason - since the mechanism numbers have been assigned since last 
May at least - to wait for V3.  Among other things, I would expect 
that various vendors have already implemented these in their 2.xx 
implementations.


One of the reasons I ended up using the SunPKCS11 wrapper classes 
directly quite a while ago was that the PKCS11 spec hadn't been 
updated, but that my PKCS11 provider was supplying various EC 
mechanisms that I needed. Ideally, the JCA and SunPKCS11 provider 
support should *precede* any given underlying PKCS11 device support, 
not trail it by 6-12 months.


The assignment of mechanism numbers is exactly equivalent to the 
assignment of TLS cipher suite numbers - the underlying protocol 
doesn't change, so this is mostly a change to the mapping tables and 
enclosed classes.


In any event, any given PKCS11 implementation may or may not support 
any given set of mechanisms - the provider really ought to be calling 
C_GetMechanismList() and using that as the list of supported JNA 
mechanisms.


Sorry - I'm dealing with a bit of lack of sleep here so I may be 
babbling, but I'm wondering if it might not be a bad idea to add some 
sort of PKCS11 specific mechanism naming convention to allow for the 
lag/lead problem?   E.g   PKCS11_DIGEST_02B0 would be PKCS11's 
CKM_SHA3_256 hashing func

Re: [15] RFR 8172680: Support SHA-3 based Hmac algorithms

2020-03-31 Thread Michael StJohns

Sorry - this one got past me.

For PKCS11 - the assignment of mechanism numbers can happen at any time 
and doesn't necessarily result in a new version of the specification.  
In this case, the API won't change, so there's no reason - since the 
mechanism numbers have been assigned since last May at least - to wait 
for V3.  Among other things, I would expect that various vendors have 
already implemented these in their 2.xx implementations.


One of the reasons I ended up using the SunPKCS11 wrapper classes 
directly quite a while ago was that the PKCS11 spec hadn't been updated, 
but that my PKCS11 provider was supplying various EC mechanisms that I 
needed.   Ideally, the JCA and SunPKCS11 provider support should 
*precede* any given underlying PKCS11 device support, not trail it by 
6-12 months.


The assignment of mechanism numbers is exactly equivalent to the 
assignment of TLS cipher suite numbers - the underlying protocol doesn't 
change, so this is mostly a change to the mapping tables and enclosed 
classes.


In any event, any given PKCS11 implementation may or may not support any 
given set of mechanisms - the provider really ought to be calling 
C_GetMechanismList() and using that as the list of supported JNA mechanisms.


Sorry - I'm dealing with a bit of lack of sleep here so I may be 
babbling, but I'm wondering if it might not be a bad idea to add some 
sort of PKCS11 specific mechanism naming convention to allow for the 
lag/lead problem?   E.g PKCS11_DIGEST_02B0 would be PKCS11's 
CKM_SHA3_256 hashing function given



#define CKM_SHA3_256    0x02B0


Just a thought.

Thanks - Mike


On 3/19/2020 5:27 PM, Valerie Peng wrote:

Hi Mike,

Thanks for heads up. From what I can gather, SHA3 inclusion is part of 
PKCS#11 v3. Is this the next release which you are referring to? Or 
will there be an update to v2.40? Is there any schedule info for these 
update/release do you know?


Following the convention, we normally don't add something which the 
underlying library has no support yet. With the new 6-month JDK 
release cycle, it's much faster for the added functionality to be 
available. So, I'd still prefer to update SunPKCS11 provider with 
SHA-3 once they are officially included.


Valerie

On 3/18/2020 4:07 PM, Michael StJohns wrote:

On 3/18/2020 6:57 PM, Valerie Peng wrote:


Anyone has time to help review this straight forward RFE? It's to 
add SHA-3 support to Hmac.


RFE: https://bugs.openjdk.java.net/browse/JDK-8172680

Webrev: http://cr.openjdk.java.net/~valeriep/8172680/webrev.00/

Mach5 run is clean.

Thanks,
Valerie


Valerie -

I know the RFE says no PKCS11 because 2.40 doesn't include those 
items, but OASIS PKCS11 has proposed  SHA3 identifiers at 
https://github.com/oasis-tcs/pkcs11/blob/master/working/identifier_db/sha3.result 
- maybe reach out and ask if these have been allocated pending the 
next release?


Mike


#define CKM_SHA3_256  0x02b0UL
 #define CKM_SHA3_256_HMAC 0x02b1UL
 #define CKM_SHA3_256_HMAC_GENERAL 0x02b2UL
 #define CKM_SHA3_224  0x02b5UL
 #define CKM_SHA3_224_HMAC 0x02b6UL
 #define CKM_SHA3_224_HMAC_GENERAL 0x02b7UL
 #define CKM_SHA3_384  0x02c0UL
 #define CKM_SHA3_384_HMAC 0x02c1UL
 #define CKM_SHA3_384_HMAC_GENERAL 0x02c2UL
 #define CKM_SHA3_512  0x02d0UL
 #define CKM_SHA3_512_HMAC 0x02d1UL
 #define CKM_SHA3_512_HMAC_GENERAL 0x02d2UL






Re: [15] RFR 8172680: Support SHA-3 based Hmac algorithms

2020-03-18 Thread Michael StJohns

On 3/18/2020 6:57 PM, Valerie Peng wrote:


Anyone has time to help review this straight forward RFE? It's to add 
SHA-3 support to Hmac.


RFE: https://bugs.openjdk.java.net/browse/JDK-8172680

Webrev: http://cr.openjdk.java.net/~valeriep/8172680/webrev.00/

Mach5 run is clean.

Thanks,
Valerie


Valerie -

I know the RFE says no PKCS11 because 2.40 doesn't include those items, 
but OASIS PKCS11 has proposed  SHA3 identifiers at 
https://github.com/oasis-tcs/pkcs11/blob/master/working/identifier_db/sha3.result 
- maybe reach out and ask if these have been allocated pending the next 
release?


Mike


#define CKM_SHA3_256  0x02b0UL
 #define CKM_SHA3_256_HMAC 0x02b1UL
 #define CKM_SHA3_256_HMAC_GENERAL 0x02b2UL
 #define CKM_SHA3_224  0x02b5UL
 #define CKM_SHA3_224_HMAC 0x02b6UL
 #define CKM_SHA3_224_HMAC_GENERAL 0x02b7UL
 #define CKM_SHA3_384  0x02c0UL
 #define CKM_SHA3_384_HMAC 0x02c1UL
 #define CKM_SHA3_384_HMAC_GENERAL 0x02c2UL
 #define CKM_SHA3_512  0x02d0UL
 #define CKM_SHA3_512_HMAC 0x02d1UL
 #define CKM_SHA3_512_HMAC_GENERAL 0x02d2UL




Re: RFR 8237218: Support NIST Curves verification in java implementation

2020-02-28 Thread Michael StJohns

On 2/28/2020 3:05 PM, Michael StJohns wrote:

Sorry - running behind on this thread.

In ECUtil.decodePoint(),

Since this code is open, I'm wondering if it might be useful to add 
the checks specified in NIST SP800-186 Appendix D.1.  or SP800-56Ar1 
5.6.2.3 E.g.



D.1.2 Montgomery Curves
D.1.2.1 Partial Public Key Validation

Inputs:
1. Montgomery curve M_(A,B) defined over the prime field GF(p)
2. Point Q = (u, v)
Output: ACCEPT or REJECT Q as an affine point on M_(A,B).
Process:
1. If Q is the point at infinity ∅, output REJECT.
2. Verify that both u and v are integers in the interval [0, p−1].
   Output REJECT if verification fails.
3. Verify that (u, v) is a point on M_(A,B) by checking that (u, v)
   satisfies the defining equation v^2 = u (u^2 + A*u + 1), where
   computations are carried out in GF(p). Output REJECT if verification
   fails.
4. Otherwise output ACCEPT.



D.1.2.2 Full Public Key Validation
Inputs:
1. Montgomery curve M_(A,B) defined over the prime field GF(p)
2. Point Q
Output: ACCEPT or REJECT Q as a point on M_(A,B) of order n.
Process:
1. Perform partial public key validation on Q using the procedure of
   Appendix D.1.2.1.  Output REJECT if this procedure outputs REJECT.
2. Verify that n Q = ∅. Output REJECT if verification fails.
3. Otherwise output ACCEPT.


This mainly ensures that the X/Y provided is actually a point on the 
curve.   The threat to receiving a bad public key is more on the ECDH 
side, but this appears to be the code that would need to be modified 
so...


Later, Mike

Here's the function I use in my application code - the last check 
(needed for full verification) uses BouncyCastle as I didn't have access 
to the internal methods in the SunEC provider.  Would have to be 
refactored slightly to be used in ECUtil.decodePoint().


  /**
   *  This function performs the checks described in NIST SP800-56A,
   *  section 5.6.2.5 over an ECPublicKey.  It throws a
   *  GeneralSecurityException if the key does not validate
   *
   * @param k the key to validate
   *
   * @throws InvalidKeyException if the key is invalid
   */
  public static void checkECPublicKey (ECPublicKey k)
    throws InvalidKeyException {

    // Step 0 - not in the SP document, but we don't support F2M
    // curves
    if (!((k.getParams().getCurve().getField()) instanceof ECFieldFp)) {
      throw new InvalidKeyException(
          "ECPublicKey is not on a Prime Curve - not supported");
    }

    ECPoint point = k.getW();
    // Step 1:
    if (point.equals(ECPoint.POINT_INFINITY)) {
  throw new InvalidKeyException ("ECPublic key is point at Infinity");
    }
    // Step 2:
    EllipticCurve curve = k.getParams().getCurve();
    BigInteger p = ((ECFieldFp)curve.getField()).getP();
    BigInteger x = point.getAffineX();
    BigInteger y = point.getAffineY();
    if (x.compareTo(BigInteger.ZERO) <= 0 || x.compareTo(p) >= 0)
  throw new InvalidKeyException ("ECPublicKey X out of Range");
    if (y.compareTo(BigInteger.ZERO) <= 0 || y.compareTo(p) >= 0)
  throw new InvalidKeyException ("ECPublicKey Y out of Range");

    // Step 3:
    BigInteger y2 = y.pow(2).mod(p);
    BigInteger x3 = x.pow(3);
    BigInteger ax = curve.getA().multiply(x);
    BigInteger sum = x3.add(ax).add(curve.getB()).mod(p);
    if (!y2.equals(sum))
  throw new InvalidKeyException ("ECPublicKey Point is not on Curve");
    // Step 4:  check nQ == INFINITY
    BigInteger n = k.getParams().getOrder();

    org.bouncycastle.math.ec.ECPoint bcPoint =
  //  EC5Util.convertPoint(k.getParams(), point, false); A/O 1.64
  EC5Util.convertPoint(k.getParams(), point);
    org.bouncycastle.math.ec.ECPoint testPoint =
  bcPoint.multiply(n);
    if (!testPoint.isInfinity())
  throw new InvalidKeyException ("ECPublicKey invalid order");


  }
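
For completeness, a quick way to exercise the helper against a freshly 
generated key (this assumes checkECPublicKey() above is pasted into the same 
class and that its BouncyCastle dependency is on the class path):

import java.security.KeyPairGenerator;
import java.security.interfaces.ECPublicKey;
import java.security.spec.ECGenParameterSpec;

public class CheckKeyDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        ECPublicKey pub = (ECPublicKey) kpg.generateKeyPair().getPublic();
        checkECPublicKey(pub);   // throws InvalidKeyException if any check fails
        System.out.println("public key validated");
    }

    // checkECPublicKey(ECPublicKey) as defined above
}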



Re: RFR 8237218: Support NIST Curves verification in java implementation

2020-02-28 Thread Michael StJohns

Sorry - running behind on this thread.

In ECUtil.decodePoint(),

Since this code is open, I'm wondering if it might be useful to add the 
checks specified in NIST SP800-186 Appendix D.1.  or SP800-56Ar1 5.6.2.3 
E.g.



D.1.2 Montgomery Curves
D.1.2.1 Partial Public Key Validation

Inputs:
1. Montgomery curve M_(A,B) defined over the prime field GF(p)
2. Point Q = (u, v)
Output: ACCEPT or REJECT Q as an affine point on M_(A,B).
Process:
1. If Q is the point at infinity ∅, output REJECT.
2. Verify that both u and v are integers in the interval [0, p−1].
   Output REJECT if verification fails.
3. Verify that (u, v) is a point on M_(A,B) by checking that (u, v)
   satisfies the defining equation v^2 = u (u^2 + A*u + 1), where
   computations are carried out in GF(p). Output REJECT if verification
   fails.
4. Otherwise output ACCEPT.



D.1.2.2 Full Public Key Validation
Inputs:
1. Montgomery curve M_(A,B) defined over the prime field GF(p)
2. Point Q
Output: ACCEPT or REJECT Q as a point on M_(A,B) of order n.
Process:
1. Perform partial public key validation on Q using the procedure of
   Appendix D.1.2.1.  Output REJECT if this procedure outputs REJECT.
2. Verify that n Q = ∅. Output REJECT if verification fails.
3. Otherwise output ACCEPT.


This mainly ensures that the X/Y provided is actually a point on the 
curve.   The threat to receiving a bad public key is more on the ECDH 
side, but this appears to be the code that would need to be modified so...


Later, Mike


On 2/20/2020 11:03 PM, Anthony Scarpino wrote:

I'm ok with this update

Tony

On 2/19/20 5:35 AM, Weijun Wang wrote:

New webrev at

   http://cr.openjdk.java.net/~weijun/8237218/webrev.04/

Only test change. For each signature, modify it a little and check if 
verify fails.


Thanks,
Max

On Feb 18, 2020, at 2:09 AM, Anthony Scarpino 
 wrote:


The change looks fine.  I'm trusting that the existing Known Answer 
Tests are proving your verifySignedDigest() is correct.


You may want to comment in the code that your test depends on these 
method names.  I was going to suggest simplifying the all the 
verifySigned*() methods until I saw the test was dependent on it.


Tony


On 2/13/20 3:06 AM, Weijun Wang wrote:

Webrev updated at
    http://cr.openjdk.java.net/~weijun/8237218/webrev.03/
The test is modified. Instead of adding a hacked ECDSASignature I'm 
using JDI to detect if the Java impl or the native impl is used. 
Two method names in ECDSASignature are modified to ease method 
lookup in the test.

Thanks,
Max
On Feb 11, 2020, at 7:52 PM, Weijun Wang  
wrote:


Please take a review at

   http://cr.openjdk.java.net/~weijun/8237218/webrev.02/

A test is added that uses a patched ECDSASignature.java that 
exposes how the signature is verified.


BTW, I also updated ECDSASignature.java a little to accept non 
SunEC keys, so that I can do some interop testing. If you believe 
this is unnecessary I can revert the change.


Thanks,
Max











Re: RFR 8163251 : Hard coded loop limit prevents reading of smart card data greater than 8k

2020-02-12 Thread Michael StJohns

Hi -

I needed to go take a quick look at 7816-3 to figure out what's what.

Basically, this code is a bit problematic.

1) Since this code doesn't support extended length APDUs for T=0, you 
should never have to do multiple calls to get the responses - e.g. the 
max response handled here should be 256 bytes.  But this is the only 
case in which the GetResponse code should actually be active???  T=1 
uses low level mapping of I-Blocks rather than an APDU mapping.  And 
looking at pcsc.c - you've got 8192 as the maximum size for a per-call 
return from SCardTransmit, and I'm not sure what SCardTransmit will do - 
I'm wondering if its returning the 6Cxx error code.


2) The iteration count should probably be set from the Le value in the 
transmitted CommandAPDU rather than set to a fixed value. Or even 
better, don't use an iteration count, but simply count down from the 
specified Le and stop when negative.


I'm very confused now.

What protocol is the customer using?   What's the actual APDU they're 
sending to get the data? (Mostly curious about Lc and Le here).


If you changed the iteration loop to parse the Le field and use that to 
count down received data, would that break things?
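
Roughly, I mean something like this - an Le-driven GET RESPONSE loop with no 
fixed iteration cap (a sketch only, with illustrative names; this is not the 
actual CardImpl code):

import java.io.ByteArrayOutputStream;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardException;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;

public class LeDrivenTransmit {
    // Chain GET RESPONSE until the caller's Ne is satisfied or SW1 != 0x61.
    public static byte[] transmitAll(CardChannel ch, CommandAPDU cmd) throws CardException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int remaining = cmd.getNe();                     // what the caller asked for (Le/Ne)
        ResponseAPDU r = ch.transmit(cmd);
        out.write(r.getData(), 0, r.getData().length);
        remaining -= r.getData().length;
        while (r.getSW1() == 0x61 && remaining > 0) {
            int avail = (r.getSW2() == 0) ? 256 : r.getSW2();   // bytes the card says are waiting
            int ne = Math.min(remaining, avail);
            // ISO 7816-4 GET RESPONSE: CLA=00, INS=C0, P1=P2=00, Le=ne
            r = ch.transmit(new CommandAPDU(0x00, 0xC0, 0x00, 0x00, ne));
            if (r.getData().length == 0) {
                break;                                   // defensive: avoid spinning
            }
            out.write(r.getData(), 0, r.getData().length);
            remaining -= r.getData().length;
        }
        return out.toByteArray();
    }
}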


Mike


On 2/12/2020 11:23 PM, Valerie Peng wrote:


Hi, Ivan,

I share your thought/confusion on the current impl as I am also not 
familiar with ISO/IEC 7816-4.


Based on my reading of this standard and the CardImpl code, it looks 
like the while-true loop is for retrieving additional response data. 
Per the standard and javadoc, max length of the response data is 
65536. Given that SW2 is only one byte, it would probably only return 
at most 256 byte response data at a time. Thus, the iteration count 
would be at most 256.


So far we are on the same page.

With the latest webrev (webrev.01), it seems that the loop will only 
be run 255 instead of 256 times as k is incremented before comparison. 
Thus, I think we should fix the check.


Valerie

On 2/11/2020 12:18 PM, Ivan Gerasimov wrote:

Hi Valerie!

To be honest, all these limitations are not quite clear to me.

If the command is using an extended Le word to specify the expected 
length of the response data, then this length can be at most 65536.


If a short Le was used, then the length can be at most 256.

However, if we received 0x61 in the SW1, that means that more data 
bytes are available and they can be retrieved by issuing another 
transmit call with a short Le.  This next call can again potentially 
result in 0x61 in SW1, and so on.


In the standard, I cannot see any explicit limitations on the number 
of retries.  So, I see it as it might be possible to retrieve more 
data than 65536 bytes.


On the other hand, in the specification for CommandAPDU [1] we have 
hardcoded limit for the maximum response length, which is 65536.  So, 
even if it were possible to retrieve larger data, there's no point to 
try, as the current API prohibits it.


Assuming that the 0x61 response can only be received when a short Le 
is used, the maximum RESPONSE_ITERATOR should be set to 256, and an 
exception should be thrown once that number is exceeded.


I've updated the webrev in-place accordingly.

[0] http://cr.openjdk.java.net/~igerasim/8163251/01/webrev/

[1] 
https://docs.oracle.com/en/java/javase/13/docs/api/java.smartcardio/javax/smartcardio/CommandAPDU.html#%3Cinit%3E(int,int,int,int,int)


With kind regards,

Ivan


On 2/10/20 6:40 PM, Valerie Peng wrote:

Hi Ivan,

You removed the "=", so the actual iteration count is reduced by one.

Should the iteration count be 256 or 257? If the actual count should 
be 257, then may be the check needs to be changed to k++ from ++k?


Valerie

On 2/10/2020 5:07 PM, Ivan Gerasimov wrote:

Thank you Michael!

It's a good point about maximum length.

Here's the updated webrev with the new System property dropped and 
the increased number of iterations:


http://cr.openjdk.java.net/~igerasim/8163251/01/webrev/

With kind regards,
Ivan


On 2/10/20 4:18 PM, Michael StJohns wrote:

On 2/10/2020 6:49 PM, Ivan Gerasimov wrote:

Hello!

Current implementation of the method 
javax.smartcardio.CardChannel.transmit() has an internal 
limitation on the maximum allowed amount of the transmitted data.


This limitation is dictated by the maximum number of iterations 
to transmit data from a card:  Each iteration can transmit up to 
256 bytes of data, and we have a hardcoded limit of 32 iterations.


Over time, we've received requests to increase this limit, as 
there are occasions when the effective limit of 8k is not enough.


Would you please help review a proposal:  First, it is proposed 
to increase the default limit of iteration to 128 (so that up to 
32k of data can be transmitted); Second, the limit of iterations 
is made configurable via a System property. This limit can be 
increased up to 4096 (so that the total amount of transmitted 
data can be made up to 1m).


BUGURL: https://bugs.openjdk.java.net/browse/JDK-8163

Re: RFR 8163251 : Hard coded loop limit prevents reading of smart card data greater than 8k

2020-02-10 Thread Michael StJohns

On 2/10/2020 6:49 PM, Ivan Gerasimov wrote:

Hello!

Current implementation of the method 
javax.smartcardio.CardChannel.transmit() has an internal limitation on 
the maximum allowed amount of the transmitted data.


This limitation is dictated by the maximum number of iterations to 
transmit data from a card:  Each iteration can transmit up to 256 
bytes of data, and we have a hardcoded limit of 32 iterations.


Over time, we've received requests to increase this limit, as there 
are occasions when the effective limit of 8k is not enough.


Would you please help review a proposal:  First, it is proposed to 
increase the default limit of iteration to 128 (so that up to 32k of 
data can be transmitted);  Second, the limit of iterations is made 
configurable via a System property.  This limit can be increased up to 
4096 (so that the total amount of transmitted data can be made up to 1m).


BUGURL: https://bugs.openjdk.java.net/browse/JDK-8163251
WEBREV: http://cr.openjdk.java.net/~igerasim/8163251/00/webrev/

If there is an agreement on the proposal, I'll file a CSR to document 
a new System property.


Thanks in advance!

Given that the maximum length for an extended APDU is 64K (65536) (hmm 
+7 for the header and +2 for LE), and for its return is 64K + 2 bytes,  
I'm not quite sure why you'd up the number to 4096/1M - I'd set the 
default and fixed value to 257 iterations (257 * 256 bytes covers the full 
64K + 2 byte response) and leave it at that.


Mike







Re: RFR [14] 8214483: Remove algorithms that use MD5, DES, or ECB from security requirements

2019-11-06 Thread Michael StJohns

On 11/6/2019 11:27 AM, Sean Mullan wrote:
Please remove this change to remove the Java SE requirements to 
implement security algorithms based on DES, MD5, or ECB. It makes 
sense to periodically review these requirements and remove algorithms 
or modes that are known to be weak and of which usage has declined 
significantly and thus compatibility risk is much lower.


Note that we are not removing the actual implementations of these 
algorithms from the JDK. This just means that an SE implementation is 
not required to support these algorithms.


webrev: https://cr.openjdk.java.net/~mullan/webrevs/8214483/webrev.00/
CSR: https://bugs.openjdk.java.net/browse/JDK-8233607

Thanks,
Sean



I don't have a problem with removing  DES or MD5 from the must-implement 
list, but ECB is a fundamental building block mode.  It's going to be 
how you implement a new mode before there's specific support for that 
mode.   Pretty much any mode can be implemented using ECB as its only 
real crypto operation.   E.g. CBC, CTR, CCM, GCM, CFB, OFB etc are all 
wrapped around ECB in some form.   Please continue to require that it be 
implemented. Policy MAY restrict the use of the mode for a given key, 
but that's a provider issue.


Mike




Re: Is Entry.Attribute meant to be encrypted for a PrivateKeyEntry?

2019-06-03 Thread Michael StJohns

On 6/3/2019 7:27 PM, Weijun Wang wrote:



On Jun 4, 2019, at 1:34 AM, Michael StJohns  wrote:

On 6/3/2019 11:05 AM, Weijun Wang wrote:

The storepass and the keypass are usually the same but they are independent. If 
the Mac is there and a non-null storepass is provided at loading, then 
integrity checking is performed. We do allow storepass being null in 
KeyStore::load and the method spec says:

* @param password the password used to check the integrity of
* the keystore, the password used to unlock the keystore,
* or {@code null}

And now we support password-less pkcs12, i.e. key is protected, by cert is not, 
and there is no Mac.

Right - and compare and contrast to PKCS11, you don't generally have a password 
on the private key entry, but you do on the key store.  Hmm...

Rather than change the API for KeyStore, or maybe in addition to changing the 
API,  you do:

If you pass in a null password  (0 length password, known password 
'attributesonly'?) in a PasswordProtection class in getEntry() for a PKCS12 key 
store,  you (optionally) get a KeyStore.Entry of an internal concrete type that 
contains only the attributes.

This looks a little too hackish to me. We'll need to describe how 
PrivateKeyEntry::getPrivateKey behaves and apps would need to check for the 
result, etc.


Well, you'd need to describe how the JDK implementation of PKCS12 
behaves, which is possibly going to be different from another 
provider's implementation.  But I take your point about hackish.



  


This allows you not to have to retrofit other KeyStore implementations to 
support a getAttributes() method.

And I *think* that works well with PKCS11.

It always works. The Entry::getAttributes will still be there.


KeyStore depends on KeyStoreSpi and that's going to need the change too 
- but it's going to need to be a concrete method there - correct - or it 
breaks other implementations?   But Entry.Attribute is defined in 
KeyStore and KeyStore references KeyStoreSpi and I kind of hate circular 
dependencies even if java can work around them.


so

// in KeyStoreSpi - concrete default so existing SPI implementations keep working
public Set<KeyStore.Entry.Attribute> engineGetAttributes(String alias) {
    return engineContainsAlias(alias) ? Collections.emptySet() : null;
}

and

// in KeyStore
public final Set<KeyStore.Entry.Attribute> getAttributes(String alias)
        throws KeyStoreException {
    if (!initialized) {
        throw new KeyStoreException("Uninitialized keystore");
    }
    return keyStoreSpi.engineGetAttributes(alias);
}


WFM.

Mike






Re: Is Entry.Attribute meant to be encrypted for a PrivateKeyEntry?

2019-06-03 Thread Michael StJohns

On 6/3/2019 11:05 AM, Weijun Wang wrote:

The storepass and the keypass are usually the same but they are independent. If 
the Mac is there and a non-null storepass is provided at loading, then 
integrity checking is performed. We do allow storepass being null in 
KeyStore::load and the method spec says:

* @param password the password used to check the integrity of
* the keystore, the password used to unlock the keystore,
* or {@code null}

And now we support password-less pkcs12, i.e. key is protected, by cert is not, 
and there is no Mac.


Right - and compare and contrast to PKCS11, you don't generally have a 
password on the private key entry, but you do on the key store.  Hmm...


Rather than change the API for KeyStore, or maybe in addition to 
changing the API,  you do:


If you pass in a null password  (0 length password, known password 
'attributesonly'?) in a PasswordProtection class in getEntry() for a 
PKCS12 key store,  you (optionally) get a KeyStore.Entry of an internal 
concrete type that contains only the attributes.


This allows you not to have to retrofit other KeyStore implementations 
to support a getAttributes() method.


And I *think* that works well with PKCS11.

Mike





--Max


On Jun 3, 2019, at 10:53 PM, Michael StJohns  wrote:

On 6/3/2019 10:18 AM, Sean Mullan wrote:

On 6/2/19 10:28 PM, Weijun Wang wrote:

I am playing with KeyStore.Entry.Attribute and found out it's only retrievable 
with Entry::getAttributes. This means for a PrivateKeyEntry that is protected 
with a password, you will have to provide that password to get the entry first 
to get the attributes.

Is this by design? So there is no way to add public attributes to these 
entries? If I read PKCS12 correctly, the attributes is out of the bag value. 
Therefore even if I cannot decrypt a pkcs8ShroudedKeyBag, the attributes are 
still visible.

Or am I missing something?

Probably not. Maybe we need something like a KeyStore.getAttributes(String 
alias) method?

--Sean

 From what I remember, the whole thing of the PKCS12 is protected by a MAC 
which is calculated from the PKCS12 password which is generally the same as the 
PKCS8 password for the entry.   This is somewhat the same model that was in 
place for the JKS version of the key store.Is it possible that the password 
is necessary to validate the attribute integrity and that's why it's there?

Mike






Re: Is Entry.Attribute meant to be encrypted for a PrivateKeyEntry?

2019-06-03 Thread Michael StJohns

On 6/3/2019 10:18 AM, Sean Mullan wrote:

On 6/2/19 10:28 PM, Weijun Wang wrote:
I am playing with KeyStore.Entry.Attribute and found out it's only 
retrievable with Entry::getAttributes. This means for a 
PrivateKeyEntry that is protected with a password, you will have to 
provide that password to get the entry first to get the attributes.


Is this by design? So there is no way to add public attributes to 
these entries? If I read PKCS12 correctly, the attributes is out of 
the bag value. Therefore even if I cannot decrypt a 
pkcs8ShroudedKeyBag, the attributes are still visible.


Or am I missing something?


Probably not. Maybe we need something like a 
KeyStore.getAttributes(String alias) method?


--Sean


From what I remember, the whole thing of the PKCS12 is protected by a 
MAC which is calculated from the PKCS12 password which is generally the 
same as the PKCS8 password for the entry.   This is somewhat the same 
model that was in place for the JKS version of the key store.    Is it 
possible that the password is necessary to validate the attribute 
integrity and that's why it's there?


Mike




Re: RFR CSR for 8213400: Support choosing curve name in keytool keypair generation

2018-11-07 Thread Michael StJohns

Inline below.

On 11/6/2018 2:18 AM, Weijun Wang wrote:



On Nov 6, 2018, at 1:06 PM, Xuelei Fan  wrote:

On 11/5/2018 8:37 PM, Weijun Wang wrote:

On Nov 6, 2018, at 12:12 PM, Xuelei Fan  wrote:

On 11/5/2018 7:13 PM, Weijun Wang wrote:

Please take a review at the CSR at
https://bugs.openjdk.java.net/browse/JDK-8213401
As for implementation, I intend to report an error when -keyalg is not EC but 
-curvename is provided. If both -curvename and -keysize are provided, I intend 
to ignore -keysize no matter if they match or not.

Why not use a strict mode: fail if not match.  It might be misleading if 
ignoring unmatched options.

We can do that, but misleading to what? That we treat -curvename and -keysize 
the same important?

If the option "-keysize 256 -curvename sect163k1" work, I may think that the 
key size if 256 bits.  I want to create a 256 bits sect163k1 EC key, and the tool allows 
this behavior, so I should get a 256 bits sect163k1 EC key.  Sure, that's incorrect, but 
I don't know it is incorrect as the tool ignore the key size.  What's the problem of the 
command, I don't know either unless I clearly understand sect163k1 is not 256 bits.  The 
next question to me, what's the key size actually is?  256 bits or 163 bits?  which 
option are used?  It adds more confusing to me.

Well explained. I've updated the CSR and this will be an error.


Sorry to drop in late.

Basically, for EC private keys - either binary or prime curves, you will 
reduce whatever initial random value you generate mod n of the curve to 
get the final private key.  The generation logic should take care of 
this.    You could use key size as a way of controlling how many extra 
bits are generated(see FIPS 186-4, section B.4.1) and error only if key 
size was less than the size of the curve's n.


So 1) generate a random value of keysize length or if not specified the 
length of the N of the curve plus 64, 2) reduce mod N.
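
In code, the FIPS 186-4 B.4.1 "extra random bits" approach looks roughly like 
this (a sketch, not the SunEC implementation):

import java.math.BigInteger;
import java.security.SecureRandom;
import java.security.spec.ECParameterSpec;

public class ExtraRandomBits {
    // Draw bitLength(n) + 64 random bits and reduce into [1, n-1].
    public static BigInteger generatePrivateScalar(ECParameterSpec params, SecureRandom rng) {
        BigInteger n = params.getOrder();
        BigInteger c = new BigInteger(n.bitLength() + 64, rng);   // the "keysize" worth of bits
        return c.mod(n.subtract(BigInteger.ONE)).add(BigInteger.ONE);
    }
}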


Mike




We can ignore the -keysize option, but it is complicated to me to use the tool.


Another question: in sun.security.util.CurveDB, we have
 // Return EC parameters for the specified field size. If there are known
 // NIST recommended parameters for the given length, they are returned.
 // Otherwise, if there are multiple matches for the given size, an
 // arbitrary one is returns.
 // If no parameters are known, the method returns null.
 // NOTE that this method returns both prime and binary curves.
 static NamedCurve lookup(int length) {
 return lengthMap.get(length);
 }
FIPS 186-4 has 2 recommendations (K- and B-) for a binary curve field size. Do 
we have a choice?
In fact, CurveDB.java seems to have a bug when adding the curves:
 add("sect163k1 [NIST K-163]", "1.3.132.0.1", BD,...
 add("sect163r2 [NIST B-163]", "1.3.132.0.15", BD,... // Another default?
 add("sect233k1 [NIST K-233]", "1.3.132.0.26", BD,...
 add("sect233r1 [NIST B-233]", "1.3.132.0.27", B,...
and now 163 is sect163r2 and 233 is sect233k1.
I assume we should always prefer the K- one?

TLS 1.3 uses secp256r1/secp384r1/secp521r1, no K- curves.

There is no ambiguity for prime curves.

Do you mean if no -curvename option, there is a need to choose a named curve?

ECKeyPairGenerator::initialize(int) will choose one and keytool will use it. I 
just meant if we have a bug here and if we should be more public on what curve 
is chosen.

I see your concerns.

It might be a potential issue if we use a named curve if no curvename 
specified.  If the compatibility is not serious, I may suggest supported named 
curves only, or use arbitrary curves but with a warning.

If people only want prime curves then -keysize still works. A warning is enough since in 
the CSR I've also said "we recommend".

Thanks
Max


Xuelei





Re: RFR 8026953: Add support for MS Cryptography next generation (CNG) (step 1)

2018-10-29 Thread Michael StJohns

On 10/25/2018 11:09 PM, Weijun Wang wrote:

Hi Mike

Thanks for the feedback.

I understand the current SunMSCAPI implementation recognizes RSA keys only and 
it's certainly incorrect to put something like getModulus() and 
getPublicExponent() in a general CKey class. They will be fixed later. When I 
have more sub class names, I'll definitely use them. You can see I've moved 
some CSignature methods into CSignature$RSA. I just haven't done it everywhere.

OK.


We'll still need a base CKey for a CNG key, no matter what the underlying 
algorithm is. Since CNG uses the same NCRYPT_KEY_HANDLE type for different 
types of keys, we will do something similar on the Java side. Since the current 
CPublicKey and CPrivateKey are very light, it will be easy to move the content 
into algorithm-specific classes.

This is where I think you need to fix the structure:

abstract class CKey

public class CRSAPublicKey extends CKey implements RSAKey, RSAPublicKey, 
PublicKey
public class CRSAExtractablePrivateKey extends CKey implements RSAKey, 
RSAPrivateKey, PrivateKey [, RSAMultiPrimePrivateCrtKey | RSAPrivateCrtKey]

public class CRSAPrivateKey extends CKey implements RSAKey, PrivateKey

public class CECPublicKey extends CKey implements ECKey, ECPublicKey, 
PublicKey
public class CECExtractablePrivateKey extends CKey implements ECKey, 
ECPrivateKey, PrivateKey
public class CECPrivateKey extends CKey implements ECKey, PrivateKey


Note the two different versions of the private keys to match up with the 
key handling bits as well as some additional interfaces that may be 
needed to be added to support the underlying provider's requirements for 
the RSA keys.


Also, I'm looking ahead a little bit and thinking about how the JCA 
would use the windows PCP (Platform Crypto Provider) which uses the TPM 
to enforce hardware security.  It would be useful if you didn't have to 
re-write everything just because of a different underlying Windows 
provider. (There's a PCP development kit that's got some useful sample 
code that might help a little bit with refactoring the JCA MSCAPI 
provider even for the existing code).   E.g. eventually supporting an 
MSCAPI-PCP provider shouldn't require all new code.




The main reason I want to take this first step is that after some study on CNG 
I make some progress and also see some blockers. For example, I am able to 
expose a EC CNG key stored in Windows-MY now and use it to sign/verify. 
However, I still don't know how to import external keys inside there 
(certmgr.exe can so it's possible). Until now, the most requested function is 
to use existing keys in signatures and I want to start working on it. The first 
thing I noticed then is that the current class names are unsuitable and I think 
a refactoring will make them look better.
AFAICT, you're not going to be able to use any external key without 
importing it or running it through a key factory.  The main CNG call you're 
going to be using is NCryptImportKey (or alternately BCryptImportKeyPair).




Once I start working on the next step, I'll need to have different sub classes 
in CKey and CSignature. I even have an alternative plan to ditch CPublicKey and 
use the PublicKey classes in SunEC and SunRsaSign. This was actually already 
used in the RSASSA-PSS signature handling in SunMSCAPI we added into JDK 11 in 
the last minute.


So you just use software classes in another provider for 
encrypting/verifying?  To be honest this sounds messy and may come back 
to bite you down the road.




As for KeyFactory, we do not have an urgent requirement to use external keys in 
a CNG Signature object or store them into Windows-MY. Also, we can use the one 
in SunRsaSign.
Hmm... one of the more common things is to move around .p12 files with 
your certs and keys.  They can be imported by the Windows tools - it 
would be *really* nice if you can do the same thing with the Java provider.




Thanks again.

--Max


On Oct 26, 2018, at 1:25 AM, Michael StJohns  wrote:

Hi Max -

For the same reason I was pushing back on Adam's EC provider I think I need to 
push back here.  You're recasting an RSAPublicKey to just a PublicKey and 
making it difficult to move key material in and out of the MSCAPI proivider.   
Same thing with the private key.

For example, in the CPublicKey class you still have "getModulus()" and 
"getPublicExponent()", but to get at them you'd have to use CPublicKey rather than 
PublicKey.

And looking forward, I'm not sure how you actually implement the EC classes 
here using this model.

I'd suggest not making the change this way and overloading the existing 
classes, but instead adding the appropriate provider classes for new key types 
as you implement support for them.  E.g. Keep CRSAKey, CRSAPublicKey and 
CRSAPrivateKey as distinct classes, add CECKey, CECPublicKey and CECPrivateKey 
when you get there.

Are you missing a KeyFactory class as well?

Lastly, you may want to change the subclass/me

Re: RFR 8026953: Add support for MS Cryptography next generation (CNG) (step 1)

2018-10-25 Thread Michael StJohns

Hi Max -

For the same reason I was pushing back on Adam's EC provider I think I 
need to push back here.  You're recasting an RSAPublicKey to just a 
PublicKey and making it difficult to move key material in and out of the 
MSCAPI proivider.   Same thing with the private key.


For example, in the CPublicKey class you still have "getModulus()" and 
"getPublicExponent()", but to get at them you'd have to use CPublicKey 
rather than PublicKey.


And looking forward, I'm not sure how you actually implement the EC 
classes here using this model.


I'd suggest not making the change this way and overloading the existing 
classes, but instead adding the appropriate provider classes for new key 
types as you implement support for them.  E.g. Keep CRSAKey, 
CRSAPublicKey and CRSAPrivateKey as distinct classes, add CECKey, 
CECPublicKey and CECPrivateKey when you get there.


Are you missing a KeyFactory class as well?

Lastly, you may want to change the subclass/methods for CSignature (and 
probably other classes) to reflect the type of Signature you're returning:



   if (algo.equals("NONEwithRSA")) {
- return new RSASignature.Raw();
+ return new CSignature.Raw();


Instead:   "return new CSignature.RSARaw()"

And this definitely isn't going to work if you have even one other 
Cipher you'll be returning:

  if (algo.equals("RSA") ||
  algo.equals("RSA/ECB/PKCS1Padding")) {
- return new RSACipher();
+ return new CCipher();
  }



Later, Mike





On 10/25/2018 4:38 AM, Weijun Wang wrote:

Please review the change at

https://cr.openjdk.java.net/~weijun/8026953/webrev.00/

(I will use a sub-task id for this change but currently JBS is down).

The major change is renaming classes. Since we are going to support algorithms 
other than RSA, I've renamed the classes like RSAPrivateKey -> CPrivateKey. 
Classes that have the same name as JCA classes (like Key, KeyStore) are also 
renamed (to CKey, CKeyStore) so it's easy to tell them apart.

Others are not about renaming but they are also related to supporting other 
algorithms, and there is no behavior change. They include:

- CKey (plus its child classes CPublicKey and CPrivateKey) has a new field 
"algorithm". This field is used by CKeyStore::generateRSAKeyAndCertificateChain 
and its value is obtained from the public key algorithm in a cert [1].

- Child class named "RSA" of CKeyPairGenerator.

- Child class named "RSA" of CSignature. I also moved some RSA-related methods 
into this child class as overridden methods.

- CKeyStore::setPrivateKey's key parameter has a new type Key, but it still 
only accepts RSAPrivateCrtKey now.

Noreg-cleanup.

Thanks
Max

[1] 
https://docs.microsoft.com/en-gb/windows/desktop/api/wincrypt/ns-wincrypt-_crypt_algorithm_identifier





Re: RFR 8205476: KeyAgreement#generateSecret is not reset for ECDH based algorithm

2018-10-17 Thread Michael StJohns

On 10/17/2018 4:45 PM, Adam Petcher wrote:

Webrev: http://cr.openjdk.java.net/~apetcher/8205476/webrev.00/
Bug: https://bugs.openjdk.java.net/browse/JDK-8205476
CSR: https://bugs.openjdk.java.net/browse/JDK-8212051

Please review the following change for a conformance bug in the ECDH 
service. The KeyAgreement is supposed to reset itself after the call 
to generateSecret, but it is not doing that. I'm also clarifying the 
spec, and this change has the new wording. The CSR was just submitted, 
and it will also need to be approved before this code change is pushed.


In ECDHKeyAgreement.java, I suggest instead using a try-catch-finally 
construct and placing the nulling of the publicValue in the finally block 
rather than adding the two additional steps of assigning the result to a 
temp array and nulling publicValue before returning the result.  AFAICT, 
there is no instance on which you would not erase the publicValue even 
on error.
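
Roughly this shape (a standalone sketch with stand-in fields and a 
placeholder derivation - not the actual ECDHKeyAgreement code):

public class ResetOnDerive {
    private byte[] privateValue;   // stand-ins for the engine's state
    private byte[] publicValue;

    byte[] generateSecret() {
        if (publicValue == null) {
            throw new IllegalStateException("Key agreement has not been completed");
        }
        try {
            return deriveSecret(privateValue, publicValue);
        } finally {
            publicValue = null;    // always reset, even if derivation throws
        }
    }

    private byte[] deriveSecret(byte[] priv, byte[] pub) {
        return new byte[0];        // placeholder for the actual ECDH derivation
    }
}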


Mike



Re: DSA default algorithm for keytool -genkeypair. Bad choice?

2018-10-12 Thread Michael StJohns
I wasn't thinking so much a re-write as just forking the code and fixing 
the things that need to be fixed while leaving the old version to wither 
on the vine.  The usage page for the "new" program would indicate that 
there are no defaults  anymore and that users should use the conf file 
approach if they want to establish some.


This is more about not having to deal with the backwards compatibility 
issues.


Later, Mike


On 10/12/2018 8:24 AM, Weijun Wang wrote:

At least, this single annoyance does not deserve a rewrite; the compatibility 
impact should be quite low.

So far, I've heard these requests related to keytool:

1. Make it GNU-style, say, "keytool -list -keystore cacerts" to "keytool list 
--keystore=cacerts".

2. Add more functions, say, -importprivatekey.

3. Make some functions as APIs, say, -genkeypair, -certreq.

but still need no rewrite.

--Max




On Oct 12, 2018, at 12:06 AM, Michael StJohns  wrote:

Any thought to just deprecating keytool as such and adding a new tool with more modern 
semantics?e.g. don't mess with what people are using (except for including a 
deprecation message), but fork the keytool source tree and do some developments to 
"ktts" - Key tool - the sequel.   A lot more freedom to rethink the syntax and 
semantics of key pair and key store generation.

Mike

On 10/11/2018 11:44 AM, Sean Mullan wrote:

I think if we all really think we are better off in the long run not having 
defaults, we probably want to do this over 2 releases and give an advance 
warning that the change is coming. In JDK 12, we could emit a warning, ex:

$ keytool -genkeypair ...
Warning: the default keypair alg (DSA) is a legacy algorithm and is no longer 
recommended. In the next release of the JDK, defaults will be removed and the 
-keyalg option must be specified.

(that's a bit wordy, but you get the idea)

--Sean

On 10/11/18 9:30 AM, Adam Petcher wrote:

On 10/10/2018 5:05 PM, Anthony Scarpino wrote:


On 10/10/2018 07:42 AM, Weijun Wang wrote:

If not DSA, should RSA be the new default? Or maybe RSASSA-PSS (I wonder if 
RSASSA-PSS signature can always use legacy RSA keys) or EC? We don't have an 
option to specify ECCurve in keytool yet (a string -keysize).

--Max




I would rather get rid of the default completely.

+1

In addition to the usual problems with defaults, there is also the issue that 
the user doesn't specify how the key pair can be used. The current default 
produces a key that can only be used with signatures, but if we change the 
default, then the key may also be used for encryption (RSA) or key agreement 
(EC). I worry about the problems that can arise if we change the default in a 
way that increases the capability of the key pair that is produced.





Re: DSA default algorithm for keytool -genkeypair. Bad choice?

2018-10-11 Thread Michael StJohns
Any thought to just deprecating keytool as such and adding a new tool 
with more modern semantics?    e.g. don't mess with what people are 
using (except for including a deprecation message), but fork the keytool 
source tree and do some developments to "ktts" - Key tool - the 
sequel.   A lot more freedom to rethink the syntax and semantics of key 
pair and key store generation.


Mike

On 10/11/2018 11:44 AM, Sean Mullan wrote:
I think if we all really think we are better off in the long run not 
having defaults, we probably want to do this over 2 releases and give 
an advance warning that the change is coming. In JDK 12, we could emit 
a warning, ex:


$ keytool -genkeypair ...
Warning: the default keypair alg (DSA) is a legacy algorithm and is no 
longer recommended. In the next release of the JDK, defaults will be 
removed and the -keyalg option must be specified.


(that's a bit wordy, but you get the idea)

--Sean

On 10/11/18 9:30 AM, Adam Petcher wrote:

On 10/10/2018 5:05 PM, Anthony Scarpino wrote:


On 10/10/2018 07:42 AM, Weijun Wang wrote:


If not DSA, should RSA be the new default? Or maybe RSASSA-PSS (I 
wonder if RSASSA-PSS signature can always use legacy RSA keys) or 
EC? We don't have an option to specify ECCurve in keytool yet (a 
string -keysize).


--Max





I would rather get rid of the default completely.


+1

In addition to the usual problems with defaults, there is also the 
issue that the user doesn't specify how the key pair can be used. The 
current default produces a key that can only be used with signatures, 
but if we change the default, then the key may also be used for 
encryption (RSA) or key agreement (EC). I worry about the problems 
that can arise if we change the default in a way that increases the 
capability of the key pair that is produced.






Re: DSA default algorithm for keytool -genkeypair. Bad choice?

2018-10-10 Thread Michael StJohns

On 10/10/2018 10:42 AM, Weijun Wang wrote:



On Oct 10, 2018, at 7:59 PM, Sean Mullan  wrote:

There is really no other reason other than DSA keys have been the default 
keypairs generated by keytool for a long time, so there are some compatibility 
issues we would have to think through before changing it to another algorithm 
such as RSA. Weijun might have more insight into that.

Not really. It was the default before I join Sun Microsystems many many years 
ago. Maybe it was a NIST standard?
US government FIPS.  It still is.  But mostly the US gov't is doing EC these 
days... at least until all the quantum fear and doubt started creeping in.




As for compatibility, as long as someone is still using DSA then they might not 
be specifying the -keyalg option.

If not DSA, should RSA be the new default? Or maybe RSASSA-PSS (I wonder if 
RSASSA-PSS signature can always use legacy RSA keys) or EC? We don't have an 
option to specify ECCurve in keytool yet (a string -keysize).


I'm away from the source code - but isn't it possible to configure the 
default in java.security?   Maybe what you do is add a warning about the new 
default unless it's disabled in java.security or explicitly set there?


Mike



--Max






Re: Conceptual feedback on new ECC JEP

2018-09-25 Thread Michael StJohns

On 9/25/2018 12:06 PM, Xuelei Fan wrote:



On 9/25/2018 8:34 AM, Adam Petcher wrote:
Yes, it is possible, at the expense of some assurance related to 
security against side-channel attacks. This interoperable 
implementation will be available by default in SunEC. A 
higher-assurance form of the same implementation will be available in 
the new provider. The additional effort required to put this 
implementation in both providers is expected to be relatively small.
Can we have the same security level impl in SunEC in some 
circumstances?  For example, when the key is not imported for the 4 
named curves. Using a new provider means we force applications to 
choose between weak and interop, just because we cannot provide both 
at the same time.


Thanks,
Xuelei


There's a hole in my desk from where I've been pounding my head on this 
subject.


Basically, for the opaque operations - signing, verifying, deriving 
shared secrets - Adam can meet his branchless goal.  For the non-opaque 
operations - importing and exporting the private key according to the 
interface specifications, he can't or thinks he can't.   And even with 
the latter - with the use of PKCS12 as the standard key store format - 
the import of a private key from a key store (e.g. internally stored as 
PKCS8) should be able to be implemented branchless without too much effort.


It's only when moving key material to/from his implementation where 
there are interoperability requirements that might involve branching 
(and not even then if he makes his own version of BigInteger to avoid 
those branches in the new() methods).  Given that once the key leaves 
his implementation he has no control over whether it's used branchless, 
it's utterly disingenuous to claim that the process of exporting also 
must be branchless.


So to clarify: all of his opaque code will be "high assurance" - and 
that will be what's used about 99.999% of the time. The rest of this 
is just religious dogma that doesn't consider the actual use cases and 
costs related to moving keys to/from his implementation.


Later, Mike






Re: Conceptual feedback on new ECC JEP

2018-09-19 Thread Michael StJohns

On 9/19/2018 11:45 AM, Adam Petcher wrote:

On 9/18/2018 4:24 PM, Michael StJohns wrote:



Adam -

Basically, the JCE is all about plugging in not about the 
implementations.  If this is truly an EC library, I should be able to 
get the benefit of your library with very minimal changes - e.g. 
specifying your provider in the various getInstance() calls.   As it 
stands, I doubt this will end up in anyone's "must use" category 
because it will break existing code.


Is this standard of being "truly an EC library" met by PKCS11 
providers that don't export keys?


This is what you want to hang your hat on?   If you want to implement 
your code inside an HSM and follow the PKCS11 rules, feel free.  
Software libraries are not hardware libraries and the mapping between 
them is pretty good, but not perfect.


And to be clear, it's not the provider that doesn't export the keys - 
its the underlying HSM and not for all keys.  It's perfectly OK to 
generate an exportable key on an HSM, but the usual default is the roach 
motel model.  The provider humors this reality by following the various 
interface contracts and only providing a PrivateKey object for 
non-exportable private keys, regardless of type.






You want folks to convince you that interoperability is a significant 
problem when what we (or at least I) want is for you to convince us 
that these interop breaks are warranted due to how wonderful your 
approach is and that they're absolutely necessary due the secret 
sauce of wonderfulness.  You're not there yet.


I don't agree with this logic. 


Color me surprised.

Different providers have different levels of interoperability, 
No - different providers mostly provide different suites of crypto, but 
build to a well-defined interface or set of interfaces.   E.g. I really 
don't give a damn about what you do inside your own country, but you 
need to have the same type of passport as everyone else once you leave 
those boundaries.



and this interoperability is a feature that requires effort to develop 
and maintain. I don't want to commit myself or this community to this 
effort unless I/we think it is worthwhile. Some of the proposals to 
increase interoperability for this effort (e.g. subclassing 
BigInteger) would require a significant amount of additional effort, 
and I think this requires justification.


Implementing to a defined interface is somewhat different than 
"maintaining interoperability" - at least if you do it right. AFAICT, 
you're balking at providing the getS() method of the ECPrivateKey 
interface because you're afraid that conversion to/from BigInteger will 
leak something.  Guess what... any time you border cross you leak 
something.  Even
 moving key material from software to hardware will cause you to leak 
something.





Of course, if you know of a way to have more interoperability without 
reducing assurance or significantly increasing effort, I would love to 
hear about it. For assurance, an important property is that the 
implementation should not branch on secrets after the developer has 
put it in branchless mode (by selecting the branchless provider or by 
using some other interface to switch modes).


*bleah*  This is not a useful discussion - you've drawn a line in the 
sand without actually understanding the cost/benefit of that line.   And 
I gave you several approaches - simplest is to clone BigInteger and 
de-branch it for the methods you need.  Actually, simplest is to use the 
existing BigInteger, because if the provider user is exporting the key 
material, they're probably going to be putting it into a whole different 
provider with "weaker" branchless protections.






As for PKCS11 - there are exportable and non-exportable private keys 
(e.g. PKCS11 with an accelerator vs an HSM for example). The 
exportable ones show up as ECPrivateKeys, the non-exportable ones as 
PrivateKeys (and I think with an underlying type of PKCS11Key...).  
So it follows the model for exportable keys. And every PKCS11 
provider I've used at least has a way of IMPORTING ECPrivateKeys.


My goal is for the new provider to be at least as interoperable as 
PKCS11 providers with non-exportable keys. Do all the PKCS11 providers 
that you have used allow importing private keys over JCA, or over some 
other API? Do you have to put the HSM in some sort of "import mode" in 
order for this import to be allowed? Is there any difference in the 
way that you can use keys that were loaded over this API vs keys that 
were generated on the device (or loaded securely)?



Again - you seriously want to hang your hat on this?:

1) Yes.  Over the JCA.  (Umm.. PKCS11 provider is a JCA/JCE provider so 
of course this works).


2) No.  In fact, the store operation of the PKCS11 keystore 
implementation does this just fine.


3) Depends.  If you generate or load the HSM PKCS11 keys according to 
the JCA PKCS11 guidance (e.g. with the right set of at

Re: Conceptual feedback on new ECC JEP

2018-09-18 Thread Michael StJohns

On 9/18/2018 4:01 PM, Adam Petcher wrote:

On 9/11/2018 11:07 AM, Adam Petcher wrote:



I still haven't been convinced that this lack of interoperability is 
a significant problem. In the proposed design, the new KeyFactory 
will not support ECPrivateKeySpec, and the implementation will 
produce private keys that inherit from PrivateKey, but not 
ECPrivateKey. Specifically, what problems in JCE are introduced by 
this design? How are these interoperability issues different from the 
ones you encounter with a PKCS11 provider that doesn't export private 
keys? If the developer wants more interoperability, why not use 
SunEC? If we decide that we want the new implementation to have 
better interoperability in the future, does something prevent us from 
enhancing it? These questions are for anyone who can help me 
understand the objections that have been raised related to 
interoperability.


A week has passed since I asked these questions to the mailing list, 
and I haven't gotten any answers. Without additional information, I'm 
not motivated to change the interoperability goals in the draft JEP.



Adam -

Basically, the JCE is all about plugging in not about the 
implementations.  If this is truly an EC library, I should be able to 
get the benefit of your library with very minimal changes - e.g. 
specifying your provider in the various getInstance() calls.   As it 
stands, I doubt this will end up in anyone's "must use" category because 
it will break existing code.


You want folks to convince you that interoperability is a significant 
problem when what we (or at least I) want is for you to convince us that 
these interop breaks are warranted due to how wonderful your approach is 
and that they're absolutely necessary due the secret sauce of 
wonderfulness.  You're not there yet.


As for PKCS11 - there are exportable and non-exportable private keys 
(e.g. PKCS11 with an accelerator vs an HSM for example).  The exportable 
ones show up as ECPrivateKeys, the non-exportable ones as PrivateKeys 
(and I think with an underlying type of PKCS11Key...).  So it follows 
the model for exportable keys.  And every PKCS11 provider I've used at 
least has a way of IMPORTING ECPrivateKeys.


Later, Mike




Re: Conceptual feedback on new ECC JEP

2018-09-07 Thread Michael StJohns

On 9/6/2018 9:53 AM, Adam Petcher wrote:

On 9/5/2018 5:53 PM, Michael StJohns wrote:



BigInteger is not required to encode a PKCS8 key, but its trivial to 
translate from BigInteger to PKCS8 and vice versa. Note that 
internally, the current BigInteger stores the magnitude as an array 
of int in big endian order and stores the sign separately. The 
BigInteger (int sigNum, byte[] magnitude) constructor mostly just 
copies the magnitude bytes over (and converts 4 bytes to an integer) 
for positive integers.   While the current BigInteger doesn't have a 
byte[] BigInteger.getMagnitude() call, it would be trivial to 
subclass BigInteger to copy the magnitude bytes (without branching) out.


In any event, for NewEC to OldEc - you do use the sign/magnitude call 
to create the BigInteger - no leaking on the part of NewEC, and once 
it gets to OldEC its not your problem.  For OldEc to NewEc you copy 
the BigInteger to NewBigInteger (trivial wrapper class) and do a 
getMagnitude().


See the code for BigInteger[1]. The sign/magnitude constructor calls 
stripLeadingZeroBytes (line 405) on the magnitude to convert it to an 
int array. This method compares the leading values in the byte array 
to zero in a short-circuit expression in a for loop (line 4298). So 
this constructor branches on the value that is supplied to it, and it 
cannot be used in a branchless implementation.


I don't know how to write this branchless getMagnitude method that you 
are proposing. I would need to convert from a variable-length int 
array to a fixed-length byte array. In doing this, I can't branch on 
the length of the variable-length array, because doing so would leak 
whether the high-order bits of the key are zero. If you know how to do 
this, then please provide some code to illustrate how it is done.


Simple way to do this is to extend BigInteger and fix these perceived 
problems.  You may have to replace/replicate pretty much everything, but 
as long as it has a BigInteger signature when output, you're good to 
go.  Actually, do that - clone BigInteger.java as SafeBigInteger.java, 
have it extend java.math.BigInteger and change out the things that bother 
you.   I checked - BigInteger is not a final class so this should work.


 E.g. as long as *I* can call the functions I need to call on the 
object you exported, I'm fine with it.
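
Something like this rough sketch is what I have in mind (hypothetical class 
name, and only the one accessor that matters here is shown - the point is just 
that the caller-supplied fixed-length magnitude can be carried alongside the 
normal BigInteger view):

    import java.math.BigInteger;

    // Hypothetical sketch only - keeps the original fixed-length magnitude
    // so it can be handed back out without branching on leading zero bytes.
    public class SafeBigInteger extends BigInteger {
        private final byte[] fixedMagnitude;   // e.g. always the field length of the curve

        public SafeBigInteger(byte[] magnitude) {
            super(1, magnitude.clone());       // positive scalar, normal BigInteger view
            this.fixedMagnitude = magnitude.clone();
        }

        // Returns the magnitude at its original fixed length, leading zeros and all.
        public byte[] getMagnitude() {
            return fixedMagnitude.clone();
        }
    }

The super() call still strips leading zero bytes internally, but the 
fixed-length copy is what gets handed back out.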


In any event, you're still missing the point here.  You're EXPORTING the 
key from your provider to other providers which probably already don't 
care that much about the roughly one-in-256 (slightly larger, actually) chance 
that exporting the key might leak the fact of leading zero bits.




[1] 
http://hg.openjdk.java.net/jdk/jdk/file/805807f15830/src/java.base/share/classes/java/math/BigInteger.java



While the contract for getEncoded() explicitly allows for a null 
result, getS() of ECPrivateKey does not.    This more than any other 
reason appears to be why the PKCS 11 provider represents the 
ECPrivateKey as a simple PrivateKey object.


Good point. There is no need for the private keys for this provider to 
implement ECPrivateKey, since there is only one method that is 
effectively unimplemented. We should use a new implementation class 
(not ECPrivateKeyImpl) that implements PrivateKey, not ECPrivateKey.


*pounds head on table*  Sure.  And then you can't use any translation 
features and then you break all the code.


So implementing your provider requires other providers to change? 
Because?   Do you expect BouncyCastle and NSS to change as well?


I think the situation is improved by not having the new private keys 
implement ECPrivateKey. Then the JDK ECKeyFactory code does not need 
to change, and neither do other providers, assuming they behave in a 
similar way. Though I think it is acceptable if other providers don't 
behave this way, and therefore cannot translate private keys from this 
new provider. I've updated the JEP to indicate that interoperability 
with these providers is a non-goal.


OK.  At this point you're no longer calling this an EC key. Make sure 
you catch all the places where ECPrivateKey is used instead of 
PrivateKey.  I think you're incredibly short sighted and not a lot of 
folks will implement this, but go for it.  I think it's a *really* bad 
idea and that you're fixing the wrong problems.








The only way to get private keys in or out of the new provider is 
through a PKCS#8 encoding. Moving keys to/from another provider that 
supports ECPrivateKeySpec but not PKCS#8 encoding can be 
accomplished by translating the specs---possibly with the help of a 
KeyFactory that supports both, like the one in SunEC.


*pounds head against table*   Bits are bits are bits.  Creating a 
PKCS8EncodedKeySpec gets you a byte array which you can convert to a 
BigInteger by decoding the PKCS8 structure and taking the PKCS8 
PrivateKey octets and doing new BigInteger (1, privateKeyOctets).


That doesn't require ASN1 integer representation (e.g. leading byte 
is zero if high bit

Re: RFR 6913047: SunPKCS11 memory leak

2018-09-05 Thread Michael StJohns

On 9/4/2018 9:59 PM, Valerie Peng wrote:
These sun.security.pkcs11.wrapper classes are internal and subject to 
changes without notice.
No arguments there.  But that interface has been stable since the 
initial contribution and to be blunt - the PKCS11 provider only works 
well if you use the keys you created through the provider. There's a set 
of idiosyncratic choices for how to map keys to certs to keys that make 
keys created on the token by non-provider calls (e.g. C code or other 
non-Java libs) difficult to use through the provider and vice versa.  That 
led to us using the wrapper classes directly.


Maybe it's time to provide a PKCS11AttributeSpec of some sort for key 
creation and for looking things up?  The current model is literally 
12-15 years old AFAICT.




For the time being, maybe we can leave these method intact and add a 
comment about calling the new methods which use P11Key argument 
instead of the key handle argument.


That should work.  You may want to think about deprecating those methods 
and target removing them for a later release in a couple of years.


Thanks - Mike




Regards,
Valerie

On 9/1/2018 11:18 AM, Michael StJohns wrote:

Hi Valerie -

I may be speaking only for myself, but there are probably a number of 
folk using sun.security.pkcs11.wrapper.* in lieu of the PKCS11 
provider for a number of reasons including an ability to manage the 
key storage and wrapping efficiently. Looking at this I'm thinking 
there will be a large number of things breaking because of the way 
you refactored things.


While I understand that none of the sun.security.* classes are 
"public" - these were mostly taken directly from the IAIK 
contribution way back when and the folks I worked with used them in 
lieu of external libraries to have a consistent PKCS11 interface 
java-to-java.


Perhaps instead you'd consider doing something like leaving the Java 
to Native public methods intact?


Mike




On 8/31/2018 7:53 PM, Valerie Peng wrote:


Hi Martin,

With the new model of "creating the key handle as needed", I think 
we should not allow the direct access of keyID field outside of 
P11Key class. This field should be made private and accessed through 
methods. In addition, existing PKCS11.C_XXX() methods with P11 keyID 
arguments can be made private and we can add wrapper methods with 
P11Key object argument to ensure proper usage of the P11Key object 
and its keyID field.


I have also refactored the keyID management code into a separate 
holder class. The prototype code can be found at: 
http://cr.openjdk.java.net/~valeriep/6913047Exp/webrev.00/


The main changes that I made are in P11Key.java and PKCS11.java. The 
other java classes are updated to call the wrapper methods in 
PKCS11.java.


Thanks & have a great Labor day weekend,

Valerie


On 8/16/2018 5:28 PM, Valerie Peng wrote:


Hi Martin,


Since there is no regression test for this test, you will need to 
add a "noreg-xxx" label to the bug record. For this issue, it'll 
probably be "noreg-hard".


To understand the changes better, please find my questions and some 
review comments below:



- line 397: It's incorrect to change initialize() to 
ensureInitialized(). If the cipher object were to be initialized 
twice with two different keys, you need to re-initialize again.

- line 471: Why do you change it to Throwable from PKCS11Exception?


- line 99: comment is somewhat unclear. typo: "temporal" should be 
"temporary".
- line 148-149: JDK-8165996 has been fixed. This does not apply 
now, does it?
- You changed the constructors of all the P11Key-related classes to 
take an additional boolean argument "tmpNativeKey". However, this 
boolean argument is only used when the underlying token is "NSS". 
Why limiting to NSS only? It seems that all callers except for 
PKCS11 KeyStore pass true for "tmpNativeKey". When "tmpNativeKey" 
is true, the keyID handle first destroyed in constructor and later 
created again (ref count == 1) and destroyed (when ref count drops 
to 0). This replaced the PhantomReference approach in SessionKeyRef 
class, right? Now both approaches seem to be used and key 
destruction is taking place at different methods. I think we 
should clarify the cleanup model and perhaps refactor the code so 
it's easier to tell which approach is in use. Or grouping all these 
cleanup-related fields/variables into a separate class for 
readability/clarity.
- line 157-175: nativeKeyWrapperKeyID is a static variable, 
shouldn't it be synchronized on a class level object?
- line 1343: the impl doesn't look right. Why do you call both 
removeObject() and addObject() here with the same check?
- line 1363: the change seems incorrect, you should still call 
session.removeObject(). the call setKeyIDAndSession(0, null) does 
not lower the session object count.


Thanks,
Valerie

On 8/7/2018 5:41 PM, Valerie Peng wrote:


Re: Conceptual feedback on new ECC JEP

2018-09-04 Thread Michael StJohns

On 9/4/2018 3:19 PM, Adam Petcher wrote:
I think what you are suggesting is that the implementation should 
convert between BigInteger and the internal representation when 
necessary. The problem with this approach is that it is too easy to 
inadvertently supply a BigInteger to the implementation, and this 
would result in a branch. I understand that this branch may be 
acceptable in some circumstances, but we would need something in the 
API to tell the implementation whether it is okay to branch or not. I 
think the simplest way to do that is to have a provider that never 
branches. If branching is okay, then SunEC can be used.


Basically yes.

But I don't understand what you mean by "inadvertently supply a 
BigInteger"?  AFAICT, you "supply" a BigInteger in an ECPrivateKeySpec 
and you retrieve one when you call getEncoded() or getS().    Your 
implementation would convert between the BigInteger and internal 
representation during the use of the engineGeneratePrivate() method of 
the KeyFactorySpi and would convert from your internal representation 
when exporting S, or encoding something as PKCS8.


Again - you're wrongly conflating interface requirements with 
implementation requirements.


And how do you expect to move key material between SunEC and this 
implementation?  See below for my commentary on that.



That's essentially the plan. Calling PrivateKey::getEncoded will 
return null, which is a convention for non-extractable keys. Trying to 
convert from/to an ECPrivateKeySpec using the KeyFactory in the new 
provider will result in an exception---so you won't have an object to 
call getS() on.
That's not what PKCS11 does - it just gives you a "PrivateKey" object 
with an internal type of sun.security.pkcs11.P11Key.  While that object 
is not type safe exactly, it is provider safe.


You're still wanting to use the various EC classes of java.security, 
java.security.spec, and java.security.interfaces, but you're unwilling 
to actually meet their contracts for some really suspect reasons.





To create the key from stored information, the best way is to 
construct a PKCS8EncodedKeySpec using the encoded key. If you are 
starting with a BigInteger, and if branching is acceptable, you can 
use the KeyFactory from SunEC to convert an ECPrivateKeySpec to 
PrivateKey to get the encoded value. 


Umm... what?

If you were doing NewEC -> SunEC manually (getEncoded() -> KeySpec) - 
you'll need to end up emitting a PKCS8 blob using RFC5915, which - 
unsurprisingly has BigEndian INTEGERs (yes, it's an octet string, but 
the encoding is specified by RFC3447 as pretty much the big endian 
encoding of an integer).  E.g. it may look opaque from Java's point of 
view, but it's not really opaque. (See below)


Or have you got a different way of encoding the PKCS8 blob for the new 
provider?  E.g. point me at a specification please.


My head hurt when I tried to work through the various cases of 
translating a private key from your provider to SunEC or to BouncyCastle 
and vice versa.  Basically, if you don't support the getS() call, then 
KeyFactory.translateKey() will fail.  (Below from 
sun.security.ec.ECKeyFactory.java - the SunEC provider's implementation).



    private PrivateKey implTranslatePrivateKey(PrivateKey key)
            throws InvalidKeyException {
        if (key instanceof ECPrivateKey) {
            if (key instanceof ECPrivateKeyImpl) {
                return key;
            }
            ECPrivateKey ecKey = (ECPrivateKey)key;
            return new ECPrivateKeyImpl(
                ecKey.getS(),
                ecKey.getParams()
            );
        } else if ("PKCS#8".equals(key.getFormat())) {
            return new ECPrivateKeyImpl(key.getEncoded());
        } else {
            throw new InvalidKeyException("Private keys must be instance "
                + "of ECPrivateKey or have PKCS#8 encoding");
        }
    }
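
For reference, the manual round trip being discussed looks roughly like this 
with plain JCA calls - a sketch only, assuming a provider (SunEC, 
BouncyCastle) that exposes ECPrivateKey and getS(); provider selection and 
error handling omitted:

    import java.math.BigInteger;
    import java.security.KeyFactory;
    import java.security.interfaces.ECPrivateKey;
    import java.security.spec.ECParameterSpec;
    import java.security.spec.ECPrivateKeySpec;
    import java.security.spec.PKCS8EncodedKeySpec;

    public final class EcKeyRoundTrip {

        // PKCS#8 blob (RFC 5915 inside) -> scalar, letting the KeyFactory do the ASN.1 work.
        static BigInteger scalarFromPkcs8(byte[] pkcs8) throws Exception {
            KeyFactory kf = KeyFactory.getInstance("EC");
            ECPrivateKey priv =
                (ECPrivateKey) kf.generatePrivate(new PKCS8EncodedKeySpec(pkcs8));
            return priv.getS();
        }

        // Scalar plus existing curve parameters -> PKCS#8 encoding.
        static byte[] pkcs8FromScalar(BigInteger s, ECParameterSpec params) throws Exception {
            KeyFactory kf = KeyFactory.getInstance("EC");
            return kf.generatePrivate(new ECPrivateKeySpec(s, params)).getEncoded();
        }
    }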





  4.1  I2OSP



I2OSP converts a nonnegative integer to an octet string of a
specified length.

I2OSP (x, xLen)

Input:
xnonnegative integer to be converted
xLen intended length of the resulting octet string

Output:
Xcorresponding octet string of length xLen

Error: "integer too large"

Steps:

1. If x >= 256^xLen, output "integer too large" and stop.

2. Write the integer x in its unique xLen-digit representation in
   base 256:

  x = x_(xLen-1) 256^(xLen-1) + x_(xLen-2) 256^(xLen-2) + ...
  + x_1 256 + x_0,

   where 0 <= x_i < 256 (note that one or more leading digits will be
   zero if x is less than 256^(xLen-1)).

3. Let the octet X_i have the integer value x_(xLen-i) for 1 <= i <=
   xLen.  Output the octet string

  X = X_1 X_2 ... X_xLen.




Re: Conceptual feedback on new ECC JEP

2018-09-04 Thread Michael StJohns

Below

On 9/4/2018 8:57 AM, Adam Petcher wrote:

On 9/1/2018 2:03 PM, Michael StJohns wrote:


On 8/23/2018 1:50 PM, Adam Petcher wrote:
It will only support a subset of the API that is supported by the 
implementation in SunEC. In particular, it will reject any private 
keys with scalar values specified using BigInteger (as in 
ECPrivateKeySpec), and its private keys will not return scalar 
values as BigInteger (as in ECPrivateKey.getS()). 


Um... why?   EC Private keys are integers - I've said this multiple 
times and - with the single exception of EDDSA keys because of a very 
idiosyncratic (and IMHO short-sighted) RFC specification - all of the 
EC private keys of whatever curve can be expressed as integers.




The explanation is in the JEP:

"The existing API for ECC private keys has some classes that specify 
private scalar values using BigInteger. There is no way to get a value 
out of a BigInteger (into, for example, a fixed-length array) without 
branching."


There is no problem with making private keys integers in the API. The 
problem is specifically with BigInteger and its implementation. 
BigInteger stores the value in the shortest int array possible. To 
access the value, you need to branch on the length of the array, which 
leaks whether the high-order bits of the private key are 0.





*buzz* wrong answer.   Sorry.   The internal storage of the key can be 
anything you want it to be if you want to prevent a non-constant-time 
issue for normal calculation.  But the import/export of the key really 
isn't subject to the cargo cult "must not branch" dogma - hint - I'm 
moving across a security boundary line anyway.    So if I do a 
"getEncoded()" or a "getS()" on an ECPrivateKey object or provide a 
BigInteger on an ECPrivateKeySpec there is no additional threat over the 
presence of the private key bits in public space.


If you believe this to be such a problem, then I'd suggest instead 
updating BigInteger to allow for BigEndian or LittleEndian encoding 
input/output and fix the constant time issue there inside the 
implementation rather than breaking public APIs.


Alternately, I guess you could throw some sort of exception if someone 
tries to call getS() or getEncoded().  Or you could do what PKCS11 does 
for non-extractable private keys and only class the private key as a 
PrivateKey - make it opaque.  But then how do you create the key from 
stored information?


Later, Mike



Re: RFR 6913047: SunPKCS11 memory leak

2018-09-01 Thread Michael StJohns

Hi Valerie -

I may be speaking only for myself, but there are probably a number of 
folk using sun.security.pkcs11.wrapper.* in lieu of the PKCS11 provider 
for a number of reasons including an ability to manage the key storage 
and wrapping efficiently.   Looking at this I'm thinking there will be a 
large number of things breaking because of the way you refactored things.


While I understand that none of the sun.security.* classes are "public" 
- these were mostly taken directly from the IAIK contribution way back 
when and the folks I worked with used them in lieu of external libraries 
to have a consistent PKCS11 interface java-to-java.


Perhaps instead you'd consider doing something like leaving the Java to 
Native public methods intact?


Mike




On 8/31/2018 7:53 PM, Valerie Peng wrote:


Hi Martin,

With the new model of "creating the key handle as needed", I think we 
should not allow the direct access of keyID field outside of P11Key 
class. This field should be made private and accessed through methods. 
In addition, existing PKCS11.C_XXX() methods with P11 keyID arguments 
can be made private and we can add wrapper methods with P11Key object 
argument to ensure proper usage of the P11Key object and its keyID field.


I have also refactored the keyID management code into a separate 
holder class. The prototype code can be found at: 
http://cr.openjdk.java.net/~valeriep/6913047Exp/webrev.00/


The main changes that I made are in P11Key.java and PKCS11.java. The 
other java classes are updated to call the wrapper methods in PKCS11.java.


Thanks & have a great Labor day weekend,

Valerie


On 8/16/2018 5:28 PM, Valerie Peng wrote:


Hi Martin,


Since there is no regression test for this test, you will need to add 
a "noreg-xxx" label to the bug record. For this issue, it'll probably 
be "noreg-hard".


To understand the changes better, please find my questions and some 
review comments below:



- line 397: It's incorrect to change initialize() to 
ensureInitialized(). If the cipher object were to be initialized 
twice with two different keys, you need to re-initialize again.

- line 471: Why do you change it to Throwable from PKCS11Exception?


- line 99: comment is somewhat unclear. typo: "temporal" should be 
"temporary".
- line 148-149: JDK-8165996 has been fixed. This does not apply now, 
does it?
- You changed the constructors of all the P11Key-related classes to 
take an additional boolean argument "tmpNativeKey". However, this 
boolean argument is only used when the underlying token is "NSS". Why 
limiting to NSS only? It seems that all callers except for PKCS11 
KeyStore pass true for "tmpNativeKey". When "tmpNativeKey" is true, 
the keyID handle first destroyed in constructor and later created 
again (ref count == 1) and destroyed (when ref count drops to 0). 
This replaced the PhantomReference approach in SessionKeyRef class, 
right? Now both approaches seem to be used and key destruction is 
taking place at different methods. I think we should clarify the 
cleanup model and perhaps refactor the code so it's easier to tell which 
approach is in use. Or grouping all these cleanup-related 
fields/variables into a separate class for readability/clarity.
- line 157-175: nativeKeyWrapperKeyID is a static variable, shouldn't 
it be synchronized on a class level object?
- line 1343: the impl doesn't look right. Why do you call both 
removeObject() and addObject() here with the same check?
- line 1363: the change seems incorrect, you should still call 
session.removeObject(). the call setKeyIDAndSession(0, null) does not 
lower the session object count.


Thanks,
Valerie

On 8/7/2018 5:41 PM, Valerie Peng wrote:


Hi Martin,

Thanks for the update, I will resume the review on this one with 
your latest webrev.


BTW, there is no webrev.07 for your other fix, i.e. JDK-8029661, 
right? Just checking.


Regards,
Valerie

On 8/3/2018 2:13 PM, Martin Balao wrote:

Hi Valerie,

 * 
http://cr.openjdk.java.net/~mbalao/webrevs/6913047/6913047.webrev.10/ 

 * 
http://cr.openjdk.java.net/~mbalao/webrevs/6913047/6913047.webrev.10.zip 



New in webrev 10:

 * Bug fix when private DSA/EC keys are sensitive
  * I found this bug removing "attributes = compatibility" entry in 
NSS configuration file so keys were CKA_SENSITIVE.
  * This is really an NSS bug when unwrapping ephemeral keys 
(NSC_UnwrapKey function), because CKA_NETSCAPE_DB attribute is 
required but not used/needed. There was a similar bug when creating 
objects (NSC_CreateObject function), fixed many years ago [1].
  * In those cases in which the bug occurs (private keys, of DSA/EC 
type, sensitive and without CKA_NETSCAPE_DB attribute set), I store 
an empty CKA_NETSCAPE_DB attribute in the buffer that will later be 
used for key unwrapping. I'm not doing a C_SetAttributeValue call 
because keys 

Re: Conceptual feedback on new ECC JEP

2018-09-01 Thread Michael StJohns

On 8/23/2018 1:50 PM, Adam Petcher wrote:
It will only support a subset of the API that is supported by the 
implementation in SunEC. In particular, it will reject any private 
keys with scalar values specified using BigInteger (as in 
ECPrivateKeySpec), and its private keys will not return scalar values 
as BigInteger (as in ECPrivateKey.getS()). 


Um... why?   EC Private keys are integers - I've said this multiple 
times and - with the single exception of EDDSA keys because of a very 
idiosyncratic (and IMHO short-sighted) RFC specification - all of the EC 
private keys of whatever curve can be expressed as integers.


Mike




Re: Please review EdDSA API

2018-07-26 Thread Michael StJohns

On 7/26/2018 4:24 PM, Adam Petcher wrote:

On 7/26/2018 3:58 PM, Michael StJohns wrote:


On 7/25/2018 2:05 PM, Adam Petcher wrote:


Did you mean PrivateKey ::= OctetToInteger(random)? Setting/clearing 
bits here destroys information. If we don't prune here, then we can 
reverse this operation later to get the byte array back to give to 
the hash.


No - I meant what I wrote:

1) generate a private key from a random and store it as a big 
integer.  E.g. generate a byte array of the appropriate length (32), 
twiddle the bits as described in step 2 of section 5.1.5 of RFC8032 
and - interpreting that buffer as a little-endian encoding, save 's' 
(the secret scalar - aka - the actual private key) as a BigInteger.


That's the limit of what goes into the PrivateKey spec/interface.
2) When you do a signing, do SigningValue = 
HASH(IntegerToLittleEndianOctets(s)).
3) When done with signing, throw away the hash value - it doesn't 
need to be stored.


Does this produce the same result as the signing function described in 
sections 3.2 and 3.3 of the RFC? If I do as you suggest, will the test 
vectors in Section 7 pass? It's not obvious to me that the signing 
procedure that you are proposing is the same function.


Note that the signing value (e.g. prefix) is used as part of the 
formation of 'r' in signing, but is not recoverable from the 
signature.   s is calculated from whatever value of r you get and the 
two taken together (r,s) form the actual signature.   Note that 'prefix' 
could be a random value if you wanted non-deterministic signatures, but 
the inclusion of a fixed prefix value means that the same signature will 
be generated by the same private key over the same data.


The test vectors will not pass, because they are calling the byte array 
from which the private key and the signing value are derived as the 
private key.


However, each and every signature generated by the above approach (e.g. 
using a *real* private key and a signing value downstream derived from 
that private key) *will* verify, and each and every signature by that 
private key over the same data using the above approach will produce 
identical signatures.


Mike




Re: Please review EdDSA API

2018-07-26 Thread Michael StJohns

On 7/25/2018 2:05 PM, Adam Petcher wrote:

On 7/25/2018 11:24 AM, Michael StJohns wrote:



*sigh* Private keys are big integers.  There's an associated 
parameter used in signing that the implementation described in the 
RFC (*not a standard please note*) generates from a common random 
byte array - that byte array is NOT a (or the) private key.


E.g.   Private key ::= OctetToInteger(Adjust(Left (HASH(random), 
length))) and SigningValue ::= Right(HASH(random),length).


Instead, you can get the exact same result (deterministic signatures) 
- and store a bog standard EC private key - by


PrivateKey ::= OctetToInteger(Adjust(random));


Did you mean PrivateKey ::= OctetToInteger(random)? Setting/clearing 
bits here destroys information. If we don't prune here, then we can 
reverse this operation later to get the byte array back to give to the 
hash.


No - I meant what I wrote:

1) generate a private key from a random and store it as a big integer.  
E.g. generate a byte array of the appropriate length (32), twiddle the 
bits as described in step 2 of section 5.1.5 of RFC8032 and - 
interpreting that buffer as a little-endian encoding, save 's' (the 
secret scalar - aka - the actual private key) as a BigInteger.


That's the limit of what goes into the PrivateKey spec/interface.
2) When you do a signing, do SigningValue = 
HASH(IntegerToLittleEndianOctets(s)).
3) When done with signing, throw away the hash value - it doesn't need 
to be stored.


The *only* important characteristics of the signing value are a) it's 
confidential, and b) it's the same for each signature.  It could even be 
a random value generated and stored at the same time as the private key 
- but why?
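
To make the proposal concrete, here's a rough sketch of steps 1 and 2 
(hypothetical class and method names, and deliberately NOT the RFC 8032 key 
generation - as noted above, the RFC hashes the seed first, so the Section 7 
test vectors won't match):

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    public final class Ed25519ScalarSketch {

        // Step 1: clamp 32 random bytes per RFC 8032 5.1.5 step 2 and keep the
        // result as the scalar s (a plain BigInteger).
        static BigInteger generatePrivateScalar(SecureRandom rng) {
            byte[] k = new byte[32];
            rng.nextBytes(k);
            k[0]  &= (byte) 0xF8;              // clear the low 3 bits
            k[31] &= (byte) 0x7F;              // clear the top bit
            k[31] |= (byte) 0x40;              // set the next-highest bit
            byte[] be = new byte[32];
            for (int i = 0; i < 32; i++) {
                be[i] = k[31 - i];             // little-endian buffer -> big-endian for BigInteger
            }
            return new BigInteger(1, be);
        }

        // Step 2: SigningValue = HASH(IntegerToLittleEndianOctets(s)); recomputed
        // on demand, never stored with the key.
        static byte[] signingValue(BigInteger s) throws Exception {
            return MessageDigest.getInstance("SHA-512").digest(littleEndianOctets(s, 32));
        }

        private static byte[] littleEndianOctets(BigInteger s, int len) {
            byte[] be = s.toByteArray();       // big-endian, possibly with a sign octet
            byte[] le = new byte[len];
            for (int i = 0; i < len && i < be.length; i++) {
                le[i] = be[be.length - 1 - i];
            }
            return le;
        }
    }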







SigningValue ::= HASH (IntegerToOctet(PrivateKey)); // signing value 
may be regenerated at any time and need not be stored in the 
ECPrivateKey class.


With the modification above, I agree that this would give the value 
that can be split in half to produce the scalar value (after pruning 
and interpreting as an integer) and the prefix that is used in signing.


I think there may be some issues with this approach, but we need to 
start by agreeing on what you are proposing. Can you confirm that my 
understanding of your proposal is correct, or else clarify it for me?


What I'm trying to get to is make the java interfaces for EC private 
keys consistent regardless of which of the curve flavor you want to 
use.   In each and every case, looking behind the specified external 
representation, the scalar 's' can be represented as a BigInteger. 
Internal representation IN A SPECIFIC IMPLEMENTATION might use little 
endian internally, but that's irrelevant here for the purposes of java 
interfaces.    (At least in my opinion).


Later, Mike




Re: Please review EdDSA API

2018-07-25 Thread Michael StJohns

On 7/25/2018 10:29 AM, Adam Petcher wrote:
2) Similar to X25519/X448, private keys are byte arrays, and public 
keys coordinates. Though we can't get by with a single BigInteger 
coordinate for EdDSA, so I am using the new EdPoint class to hold the 
coordinates.


*sigh* Private keys are big integers.  There's an associated parameter 
used in signing that the implementation described in the RFC (*not a 
standard please note*) generates from a common random byte array - that 
byte array is NOT a (or the) private key.


E.g.   Private key ::= OctetToInteger(Adjust(Left (HASH(random), 
length))) and SigningValue ::= Right(HASH(random),length).


Instead, you can get the exact same result (deterministic signatures) - 
and store a bog standard EC private key - by


PrivateKey ::= OctetToInteger(Adjust(random));

SigningValue ::= HASH (IntegerToOctet(PrivateKey)); // signing value may 
be regenerated at any time and need not be stored in the ECPrivateKey class.


Adjust twiddles the bits in the byte array to make the byte array a 
valid little-endian private key before encoding for this set of curves.  
OctetToInteger turns that byte array into a BigInteger.


At this point in time, you've got the correct values to do your point 
math using common libraries if you don't happen to have implemented 
exactly what's in the RFC.



I know the above is a losing argument, but seriously, do we really need 
two new groups of EC classes simply because someone wrote an RFC that 
didn't consider existing representational structures?


Or will it be three or more?


Later, Mike




Re: EC weirdness

2018-07-19 Thread Michael StJohns

On 7/16/2018 4:42 PM, Adam Petcher wrote:
I think the reason for the hard-coded curves is largely historical, 
and I don't know of a security-related reason for it (other than the 
prevention of insecure parameters). Though it has the additional 
benefit that it simplifies the interface a lot, because the 
implementation for a curve is not completely determined by the domain 
parameters. 


Actually - they are.  An ECGenParameterSpec  (basically a named curve) 
is turned into an ECParameterSpec  (the actual curve and G stuff) which 
is turned into an AlgorithmSpecification of type (SunEC, EC) which 
strangely tries to do a reverse lookup of the parameter spec to get a 
name for the curve.   (That's not exactly it, but it's close in spirit).


The implementation may also need to know which optimized finite field 
implementation, precomputed tables, etc to use. Looking up all of 
these things at once can be a lot simpler than trying to specify them 
over an interface.


Nope - that's not it.  The underlying C code mostly uses generic bignum 
operations on the curve parameters rather than pre-computed things.


My guess is that there were some dependencies on named curves in the EC 
library and the EC related stuff in the PKCS11 library.




I'm trying to figure out if your problem motivates support for 
arbitrary base points, or if there is another way to get what you 
want. Is the base point a secret in your scheme? Either way, can you 
implement your scheme using KeyAgreement (e.g. using the "base point" 
as the public key and a random integer as the private key)? What if 
ECDH KeyAgreement supported three-party Diffie-Hellman, or you extract 
a point from generateSecret()? Would that help?


In the scheme, G' is secret.  This is basically the EC version of 
https://en.wikipedia.org/wiki/SPEKE.  Two key pairs generated on P-256' 
(e.g. same curve params, but with G'), can use ECDH to get to the same 
shared secret.   If your key pair wasn't generated with G', then you 
can't get to the same secret with someone who's key pair was.


This is sort of an IOT experiment - I don't want to have to do an 
explicit authorization via a certificate binding a public key.  I can 
generate a group G', and have all the various devices generate a key 
pair using that G'.   The ability to get a pairwise shared secret with 
another device proves the membership in the group to that other device 
and vice versa.


The problem with using ECDH is that generic ECDH operations throw away 
the Y coordinate.  I can do xG' -> X'  using the SunEC ECDH, but what I 
actually get as a result is the scalar X'.x.


I've been trying to avoid using external libraries, but I'll probably 
end up using the BC point math stuff to make this work.


I may do a bit more archeology and see whether there are actual 
dependencies on the built-in curve data (e.g. PKCS11).  If not, I may 
try and provide a patch refactoring out that limitation and doing a more 
thorough separation of the math from the packaging.


Later, Mike




On 7/13/2018 9:30 PM, Michael StJohns wrote:

Hi -

Every so often I run into some rather strange things in the way the 
Sun EC classes were built.  Most recently, I was trying to use the 
SunEC provider to do a PACE like protocol.  Basically, the idea was 
to be able to generate public key points on the P-256 curve, but with 
a different base point G (knowledge of G' or having a public key 
generated from G' would allow you to do a valid ECDH operation, keys 
with disjoint points would not).


I was able to generate a normal key pair using ECGenParameterSpec 
with a name, so I grabbed the ECParameterSpec from the public key, 
made a copy of most of the stuff (curve, cofactor), but substituted 
the public point W from the key pair I'd just generated, and passed 
that in as G to the new parameter spec.  I tried to initialize the 
KPG with my *almost* P-256 spec, and it failed with "unsupported curve".


Looking into the code and tracing through 
sun.security.ec.ECKeyPairGenerator to the native code, I'm once again 
surprised that all of the curves are hard coded into the C native 
code, rather than being passed in as data from the Java code.


Is there a good security reason for hard coding the curves at the C 
level that I'm missing?


This comes under the heading of unexpected behavior rather than a bug 
per se I think.   Although - I'd *really* like to be able to pass a 
valid ECParameterSpec in to the generation process and get whatever 
keys are appropriate for that curve and base point.


Later, Mike










EC weirdness

2018-07-13 Thread Michael StJohns

Hi -

Every so often I run into some rather strange things in the way the Sun 
EC classes were built.  Most recently, I was trying to use the SunEC 
provider to do a PACE like protocol.  Basically, the idea was to be able 
to generate public key points on the P-256 curve, but with a different 
base point G (knowledge of G' or having a public key generated from G' 
would allow you to do a valid ECDH operation, keys with disjoint points 
would not).


I was able to generate a normal key pair using ECGenParameterSpec with a 
name, so I grabbed the ECParameterSpec from the public key, made a copy 
of most of the stuff (curve, cofactor), but substituted the public point 
W from the key pair I'd just generated, and passed that in as G to the 
new parameter spec.  I tried to initialize the KPG with my *almost* 
P-256 spec, and it failed with "unsupported curve".
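
For anyone who wants to reproduce it, the construction was essentially this 
(sketch only - the class name is mine, and the final initialize() is where 
SunEC reports "unsupported curve"):

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.interfaces.ECPublicKey;
    import java.security.spec.ECGenParameterSpec;
    import java.security.spec.ECParameterSpec;
    import java.security.spec.ECPoint;

    public final class CustomBasePointDemo {
        public static void main(String[] args) throws Exception {
            // Generate a normal P-256 pair just to get the curve parameters and a point.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(new ECGenParameterSpec("secp256r1"));
            KeyPair kp = kpg.generateKeyPair();

            ECPublicKey pub = (ECPublicKey) kp.getPublic();
            ECParameterSpec p256 = pub.getParams();
            ECPoint gPrime = pub.getW();        // reuse W from the fresh pair as the new G'

            ECParameterSpec primeSpec = new ECParameterSpec(
                    p256.getCurve(), gPrime, p256.getOrder(), p256.getCofactor());

            kpg.initialize(primeSpec);          // SunEC rejects this with "unsupported curve"
            System.out.println(kpg.generateKeyPair().getPublic());
        }
    }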


Looking into the code and tracing through 
sun.security.ec.ECKeyPairGenerator to the native code, I'm once again 
surprised that all of the curves are hard coded into the C native code, 
rather than being passed in as data from the Java code.


Is there a good security reason for hard coding the curves at the C 
level that I'm missing?


This comes under the heading of unexpected behavior rather than a bug 
per se I think.   Although - I'd *really* like to be able to pass a 
valid ECParameterSpec in to the generation process and get whatever keys 
are appropriate for that curve and base point.


Later, Mike






Re: RFR CSR 8202590: Customizing the generation of a PKCS12 keystore

2018-05-11 Thread Michael StJohns

On 5/9/2018 6:06 AM, Weijun Wang wrote:

Hi Mike

Your comments make sense. However,

1. I don't think it's easy in JDK to break a PBE algorithm into PBKDF + Cipher. For example, I cannot create 
a SecretKey from SecretKeyFactory.getInstance("PBEWithSHA1AndDESede") and use it to init a 
Cipher.getInstance("DESede"). I still have to use 
Cipher.getInstance("PBEWithSHA1AndDESede").


No - you'd instantiate a KDF using PBE, derive the secret key from the 
password, and use it with DESede.   Right now, you're hiding a KDF under 
the cover of turning a secret key spec into a secret key.


And right now, all of this is actually hidden under the KeyStore 
covers.    If you're going to use compound algorithm names, what I'd 
rather do is


   KeyStore ks = KeyStore.getInstance("PKCS12/PBKDF2/HMAC-SHA256/AES-KW");

Or  ///

And have the generic KeyStore.getInstance("PKCS12")  return what you 
specify in the preferences, or a general parser for things you're 
reading in.





2. If I read correctly, MacData in pkcs12 (https://tools.ietf.org/html/rfc7292#page-11) 
always uses the old style KDF (https://tools.ietf.org/html/rfc7292#appendix-B). If we 
have a "pbkdf" setting, it might mislead a user to believe it's also used by 
MacData generation.


You are - unfortunately - correct that Appendix B is still used for the 
KDF for macdata keys.  That appears to be a bug as you shouldn't be 
using the same key (password) for different algorithms and also appears 
to be a backwards compatibility item from the previous versions of 
PKCS12.  I'll ask the CFRG  what they think about it.


But I disagree with the "mislead a user".  As you note, Appendix B is 
specific that it's the only source for macdata KDFs, and you could 
include a note in the spec that this only applies to deriving data 
encryption keys rather than the macdata key.




I looked at openssl and it also has independent -certpbe and -keypbe.

Also, setting multiple algorithms with a preference order might not be what we 
wish for. The choice of algorithms is first about security and second about 
interoperability. By assigning *only* one algorithm for each usage, we are 
clear what the default algorithms are. For example, Oracle's JDK release will 
set them to match our crypto roadmap. Or, an application vendor or system 
administrator can choose their algorithms if it can be guaranteed that all 
parties using the application or inside a certain enterprise support the 
algorithms.


You've got a fair point - but I'd disagree that assigning default 
algorithms is about security, it really is about interoperability - 
making sure that the greatest number of people can handle the PKCS12 object 
without installing additional providers.   But seriously,


|keystore.pkcs12.certProtectionAlgorithm = PBEWithSHA1AndRC2_40|

is not a security choice!

But, after thinking about it a bit more, the preference list isn't 
useful without additional providers - and at that point you're probably 
generating stuff by specifying algorithms and such so I agree with no 
preference list.  Could you please, please, please not use weak 
algorithms as the default here though?




Thanks
Max


On May 5, 2018, at 10:50 PM, Michael StJohns <mstjo...@comcast.net> wrote:

On 5/5/2018 3:38 AM, Weijun Wang wrote:

Please take a review of


https://bugs.openjdk.java.net/browse/JDK-8202590



This enhancement has two major purposes:

1. Provide a way to change encryption and Mac algorithms used in PKCS 12.

2. The ability to create a password-less PKCS 12 keystore containing 
unencrypted certificates and no Mac.

Especially, the long paragraph in the spec on behavior of an existing keystore 
makes sure that once a password-less keystore is generated (with 
-Dkeystore.pkcs12.certProtectionAlgorithm=NONE and 
-Dkeystore.pkcs12.macAlgorithm=NONE), one can add new certificates to it 
without any special setting and keep it password-less.

Thanks
Max



I think you want to break this into two parts - the first part specifies the 
algorithm used to convert a password into key material. The second defines the 
algorithms used for protection for the various parts.
# password to key material scheme
.pbkdf=PBKDF2withHMAC-SHA256  (Form is base function with the PRF)
# PKCS12 macData
.macAlgorithm=HMAC-SHA256  # this is the algorithm for the PKCS12 macData 
component, if NONE, this component is not present
# protection scheme for PKCS8ShroudedKeyBag; if NONE, then a PKCS8KeyBag is 
produced instead.
.keyProtectionAlgorithm=AES-KWA
#protection scheme for certificates - produces an encryptedData object encrypted under 
the scheme, or a certBag object if "NONE" is specified
.certProtectionAlgorithm=NONE


Second, you probably want to do this as multi-choice entries in the 
java.security file ala providers:

.pbkdf.0=PBKDF2withSHA256
.pbkdf.9=PBKDF1withSHA1 # the current default aka pbe

So that you can specify a som

Re: RFR CSR 8202590: Customizing the generation of a PKCS12 keystore

2018-05-05 Thread Michael StJohns

On 5/5/2018 3:38 AM, Weijun Wang wrote:

Please take a review of

https://bugs.openjdk.java.net/browse/JDK-8202590

This enhancement has two major purposes:

1. Provide a way to change encryption and Mac algorithms used in PKCS 12.

2. The ability to create a password-less PKCS 12 keystore containing 
unencrypted certificates and no Mac.

Especially, the long paragraph in the spec on behavior of an existing keystore 
makes sure that once a password-less keystore is generated (with 
-Dkeystore.pkcs12.certProtectionAlgorithm=NONE and 
-Dkeystore.pkcs12.macAlgorithm=NONE), one can add new certificates to it 
without any special setting and keep it password-less.

Thanks
Max



I think you want to break this into two parts - the first part 
specifies the algorithm used to convert a password into key material. 
The second defines the algorithms used for protection for the various parts.

# password to key material scheme
.pbkdf=PBKDF2withHMAC-SHA256  (Form is base function with the PRF)
# PKCS12 macData
.macAlgorithm=HMAC-SHA256  # this is the algorithm for the PKCS12 
macData component, if NONE, this component is not present
# protection scheme for PKCS8ShroudedKeyBag; if NONE, then a PKCS8KeyBag 
is produced instead.

.keyProtectionAlgorithm=AES-KWA
#protection scheme for certificates - produces an encryptedData object 
encrypted under the scheme, or a certBag object if "NONE" is specified

.certProtectionAlgorithm=NONE


Second, you probably want to do this as multi-choice entries in the 
java.security file ala providers:


.pbkdf.0=PBKDF2withSHA256
.pbkdf.9=PBKDF1withSHA1 # the current default aka pbe

So that you can specify a somewhat secure default, but still allow for 
providers that don't implement the stronger versions.


This requires a bit more work in figuring out what the embedded OIDs 
should be, and there is always the chance of mismatch, but it turns out 
there is the chance of mismatch even in the proposed version if you have 
protection algorithms coming from two different PBE schemes.


Specifying it this way is closer to the PKCS5 2.0 model than to 
PKCS12 and matches the recommendations in the IETF's version of PKCS12.  
You also *really* don't want to use two different KDFs with the same 
password.


Mike






Re: KDF API review, round 2

2017-11-29 Thread Michael StJohns

On 11/29/2017 8:38 AM, Jamil Nimeh wrote:



On 11/28/2017 9:34 AM, Michael StJohns wrote:

On 11/28/2017 1:04 AM, Jamil Nimeh wrote:
Hi Mike, I know I said you made arguments in favor of specifying the 
keys up front in init, but I'm still really uncomfortable with 
this.  It's been bothering me all day. Comments below:


Before I get to those:

1) Do you know of any protocol using a KDF where the key production 
information is not known before you'd need to call the .init()?
I honestly don't.  I think it's safe to say you probably don't need a 
KDF instance until you know at least the first object you want out of 
it.  But for the protocols I know of all the objects are known once a 
cipher suite or proposal is agreed upon.
2) If you do, couldn't you simply provide an empty or null list of 
key derivation spec's to .init()?
You could, but that would end up necessitating two models of 
operation.  One where we give a  list up front of all objects and call 
derive actions with no parameters, and a second model where you 
specify nothing and then provide object specs one-by-one. Each one has 
pros and cons, but trying to support both models I think would make 
the API even more confusing.


I'm tempted to say that this case (where you don't know what the 
productions will be before the init) doesn't exist.   This degenerate 
case is simply a keyed PRNG and could probably be handled by 
deriveBytes(int length).


If you really want to produce keys from a keyed PRNG where the key 
objects are not a mixin then support both deriveKey() and 
deriveKey(params), but have the second throw a RuntimeException if you 
call it with a KDF initialized with a set of production params.
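
In other words - and this is just a sketch of the shape I mean, not an 
existing JDK interface - something like:

    import java.security.GeneralSecurityException;
    import java.security.spec.AlgorithmParameterSpec;
    import java.util.List;
    import javax.crypto.SecretKey;

    // Hypothetical API shape only - none of this exists in the JDK today.
    public interface KdfSketch {

        // Productions may (or may not) be declared up front.
        void init(SecretKey secret, AlgorithmParameterSpec kdfParams,
                  List<? extends AlgorithmParameterSpec> productions)
            throws GeneralSecurityException;

        // Walks the list given to init(); fails if no list was provided.
        SecretKey deriveKey() throws GeneralSecurityException;

        // Ad-hoc production; throws if init() was given a production list.
        SecretKey deriveKey(AlgorithmParameterSpec spec) throws GeneralSecurityException;

        // The degenerate keyed-PRNG case.
        byte[] deriveBytes(int length) throws GeneralSecurityException;
    }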




3) If you're doing a multiobject production from a single call to 
.init() do you expect in all cases to NOT include the production data 
as mixins?
In all cases?  I can't honestly say that.  For the protocols I know 
of, the individual object attributes (like length) are not mixins.  
But you later go on to say that you know of a couple protocols where 
they do.  If we have real-world scenarios where individual object 
lengths or other attributes really affect the keystream then I guess 
we need to take that into account.


My problem is that I have use cases where ALL of my key production 
information is used as mixins to the key stream.  Now I could provide 
a List as part of the KDF init algorithm 
parameter spec (kdfParams), but that means that I have to provide a 
different APS for each different key schedule (consider TLS1.3s 
various calls). If you take out the List out 
of the .init() I'll end up having to do that and probably having to 
accept null values for the deriveKey calls.


More in line.




On 11/27/2017 10:09 AM, Michael StJohns wrote:

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:




HKDF and SP800-108 only deal with the creation of the key stream 
and ignore the issues with assigning the key stream to 
cryptographic objects.  In the TLS version of HKDF, the L value is 
mandatory and only a single object is assigned per init/call to the 
KDF.   An HSM can look at the HKDF label information and set the 
appropriate policies for the assigned cryptographic object (because 
if any of the label data changes, the entire key stream changes).  
That's not the case for the raw HKDF nor for any KDF that allows 
for multiple objects to be extracted out of a single key stream.  
Hence the per-component length values.
So enforce a no-zero-length key policy in your provider code. You 
probably can't affect the internals of the HSM, but you should be 
able to prevent it in the provider code.  I can't get away from the 
feeling that this could be dealt with in other ways besides 
specifying all this up-front.


The best way to understand this is to look at the PKCS11 TLS1.2 and 
before KDF stuff.  The key production schedule was for an encryption 
key, an integrity key and two IVs, all from the same key stream.  It 
turns out that NOTHING the HSM could do could prevent the extraction 
of key material because changing the boundaries between each object 
did not change the key stream. In the TLS case (and IPSec for that 
matter), it's a simple matter to move confidential key material into 
non-confidential IVs.  However, even if you limit the production to 
only confidential items, you still have a problem in that using the 
same key material for different algorithms (e.g. using part of an AES 
key as a single DES key) can lead to vulnerabilities.


TLS 1.3 fixed this problem by only doing single key productions for 
each call to the KDF (and by adding the length of the production to 
the mixins).  Because of this, an HSM can look at the mixin data and 
"do the right thing" with respect to policy. If TLS1.3 had kept the 
multiple object production model, they would have included the 
per-object lengths in the KDF mixin data.


The HSM can do the right thing because the bits it can depend upon 
(in the TLS 1.3 case

Re: KDF API review, round 2

2017-11-29 Thread Michael StJohns

Hi Jamil et al -

I think I finally understand the disconnect here - let me try and work 
through an explanation from another direction.


The TLS 1.3 KDF is NOT HKDF-Expand, it is HKDF-Expand-Label.

HKDF-Expand has a calling sequence of HKDF-Expand(Secret, info, 
Length).   That results in underlying calls to T(n) = PRF(Secret, 
T(n-1) || info || n) for each block of key stream (RFC 5869, Section 2.3).


Two key things to understand here are:  that the Length value in 
HKDF-Expand is not passed in as a mixin, only as a value to determine 
how much key stream to produce; and that the "info" parameter is just 
opaque data (remember this please - it's important later).


HKDF-Expand-Label(Secret, Label, HashValue, Length) is defined as 
HKDF-Expand(Secret, hkdfLabel, Length), where hkdfLabel is a structured 
value (NON-opaque) consisting of the Length, Label (from the top-level 
call) and Hash Value (also from the top-level call) (TLS 1.3 section 7.1, 
Key Schedule). The hkdfLabel ultimately becomes the "info" part of the call 
to the PRF.


A module creating a key from HKDF-Expand-Label knows that it MUST cut 
off the key stream after Length bytes (and if a new length value is 
provided, you get a whole different key stream).  It can read the 
hkdfLabel from the call, know whether this production is for a key or an 
IV, and set protections appropriately. HKDF-Expand-Label also knows that it 
will only produce a single object from each call.  In HKDF-Expand, the 
info value is opaque.  It may contain the Length value, it may contain 
key tagging information, it may contain your dog's name, but since it's 
not structured in any way, an implementer of the HKDF-Expand calling 
convention can't do anything useful with that data in figuring out the 
assignment of the key stream to a key object.
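
To make the structural difference concrete, here is a minimal sketch of 
assembling that hkdfLabel (illustrative only - the class and method names 
are invented, and the "TLS 1.3, " prefix and field widths follow the draft 
Key Schedule and may differ in detail from the final RFC):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Illustrative only: assembles the structured hkdfLabel that
// HKDF-Expand-Label hands to HKDF-Expand as its (otherwise opaque) info.
final class HkdfLabelSketch {
    static byte[] hkdfLabel(int length, String label, byte[] hashValue) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write((length >>> 8) & 0xFF);        // uint16 output length - a mandatory mixin
        out.write(length & 0xFF);
        byte[] labelBytes = ("TLS 1.3, " + label).getBytes(StandardCharsets.US_ASCII);
        out.write(labelBytes.length);            // opaque label<7..255>
        out.write(labelBytes, 0, labelBytes.length);
        out.write(hashValue.length);             // opaque hash_value<0..255>
        out.write(hashValue, 0, hashValue.length);
        return out.toByteArray();
    }
}

A module that is handed this structure, rather than an arbitrary byte 
string, can parse out the length and label and apply policy to the 
resulting object - which is exactly the property argued for above.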


I think what we have is a failure of language.  HKDF (and for that 
matter SP800-108) should be called key stream producers - not key 
derivation functions.  Mainly because they don't properly consider the 
assignment of the key stream bytes to the keys and how that interacts 
with the key stream production.


I need a way of making sure that 1) the key assignment/production 
specifications can be included in the mixins, and 2) a way of letting 
the module/provider be able to rely upon those specifications (e.g. by 
changing the key stream if the specifications change).  That means that 
the KeyDerivation calling convention needs to let me provide all of this 
data before the first call to derive a key.


This would have been more obvious had TLS1.3 kept the same multiple 
production per KDF call as was in previous versions - the HkdfLabel 
argument would have been an array of hkdfLabel structures.  They opted 
to clean it up slightly and limit it to a single production per KDF call.


Mike






Re: KDF API review, round 2

2017-11-28 Thread Michael StJohns

On 11/28/2017 1:04 AM, Jamil Nimeh wrote:
Hi Mike, I know I said you made arguments in favor of specifying the 
keys up front in init, but I'm still really uncomfortable with this.  
It's been bothering me all day.  Comments below:


Before I get to those:

1) Do you know of any protocol using a KDF where the key production 
information is not known before you'd need to call the .init()?
2) If you do, couldn't you simply provide an empty or null list of key 
derivation spec's to .init()?
3) If you're doing a multiobject production from a single call to 
.init() do you expect in all cases to NOT include the production data as 
mixins?


My problem is that I have use cases where ALL of my key production 
information is used as mixins to the key stream.  Now I could provide a 
List<DerivationParameterSpec> as part of the KDF init algorithm 
parameter spec (kdfParams), but that means that I have to provide a 
different APS for each different key schedule (consider TLS 1.3's various 
calls). If you take the List<DerivationParameterSpec> out of the 
.init() I'll end up having to do that and probably having to accept null 
values for the deriveKey calls.
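
For concreteness, the two calling models being weighed might look roughly 
like the following sketch (hypothetical API surface only - neither type 
exists in the JDK, and the names simply follow the draft under discussion):

import java.security.Key;
import java.util.List;

// Hypothetical surface only - it sketches the two init models being debated.
interface KeyDerivationModels {
    // Model A: every production spec is supplied up front (and so can be a
    // mixin to the key stream); deriveKey() then walks the schedule in order.
    void init(Key masterSecret, List<DerivationParameterSpec> productions);
    Key deriveKey();

    // Model B: init carries only the key; each derive call names its own
    // production, so the specs cannot influence the key stream at all.
    void init(Key masterSecret);
    Key deriveKey(DerivationParameterSpec production);
}

final class DerivationParameterSpec { }   // placeholder for the draft spec type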


More in line.




On 11/27/2017 10:09 AM, Michael StJohns wrote:

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:




HKDF and SP800-108 only deal with the creation of the key stream and 
ignore the issues with assigning the key stream to cryptographic 
objects.  In the TLS version of HKDF, the L value is mandatory and 
only a single object is assigned per init/call to the KDF.   An HSM 
can look at the HKDF label information and set the appropriate 
policies for the assigned cryptographic object (because if any of the 
label data changes, the entire key stream changes).  That's not the 
case for the raw HKDF nor for any KDF that allows for multiple 
objects to be extracted out of a single key stream.  Hence the 
per-component length values.
So enforce a no-zero-length key policy in your provider code.  You 
probably can't affect the internals of the HSM, but you should be able 
to prevent it in the provider code.  I can't get away from the feeling 
that this could be dealt with in other ways besides specifying all 
this up-front.


The best way to understand this is to look at the PKCS11 TLS1.2 and 
before KDF stuff.  The key production schedule was for an encryption 
key, an integrity key and two IVs, all from the same key stream.  It 
turns out that NOTHING the HSM could do could prevent the extraction of 
key material because changing the boundaries between each object did not 
change the key stream.  In the TLS case (and IPSec for that matter), 
it's a simple matter to move confidential key material into 
non-confidential IVs.  However, even if you limit the production to only 
confidential items, you still have a problem in that using the same key 
material for different algorithms (e.g. using part of an AES key as a 
single DES key) can lead to vulnerabilities.


TLS 1.3 fixed this problem by only doing single key productions for each 
call to the KDF (and by adding the length of the production to the 
mixins).  Because of this, an HSM can look at the mixin data and "do the 
right thing" with respect to policy.  If TLS1.3 had kept the multiple 
object production model, they would have included the per-object lengths 
in the KDF mixin data.


The HSM can do the right thing because the bits it can depend upon (in 
the TLS 1.3 case the label and the length) are included in the mixin and 
not simply as part of the added on key creation stuff. Without this, 
there is nothing an HSM can do for enforcement because changing these 
inputs wouldn't change the key stream.






Ideally, there should be a complete object spec for each object to be 
generated that is part of the mixins (label and context) for any 
KDF.   That allows an HSM to rely upon the object spec when setting 
policy controls for each generated object - and incidentally allows 
for a KDF to generate both public and non-public data in a secure way.
Between different generations of keystreams do you expect to have 
different sets of policy controls?  The KDF API has no way for you to 
set those things so I would assume those would be pretty static, or at 
least controlled outside the KDF API.  If so, why is the KDF API 
concerning itself with how some HSM sets its policy on objects it makes?


If I call a KDF with the same key but with different key productions, I 
*want* the key stream to be different.  If I call it with the same key 
but with same key productions, I *want* it to be the same.   Say I call 
the KDF to produce two objects -  an AES key of length 16 bytes and a 
HMAC-SHA256 key of also length 16 bytes. If I then call the same kdf 
with the same key to produce two AES keys of length 16 bytes (same 
overall length of the key stream, but different objects), I would 
*really* like it if the second object did not have the same key bytes as 
the HMAC-SHA256 key of the first call.   The only way I can ensure this 
is to prov

Re: KDF API review, round 2

2017-11-27 Thread Michael StJohns

On 11/27/2017 2:16 PM, Jamil Nimeh wrote:
See above with respect to set/getParameter.  But hopefully you'll be 
happy with the API after this next round.  I have one other change I 
will be making.  I'm removing deriveObject.  I'm uncomfortable right 
now with the idea of the API executing an arbitrary class' 
constructor.  This is something I'm definitely willing to examine in 
the future once the most pressing tasks both with this API, and 
projects that are immediately depending on it are taken care of.  It is 
easier to add calls to the API than it is to remove/modify/deprecate 
them if there's a problem.  I will file an RFE so that we can track 
this enhancement.


Modifications to the KeyAgreement API are beyond the scope of this 
JEP.  We can certainly discuss ideas you have, but this KDF JEP isn't 
going to be dependent on those discussions.



Fair enough.

The deriveObject stuff is a problem because it doesn't fit well in the 
JCA.  Mostly we've got KeyGenerator/KeyPairGenerator/KeyFactory that 
produce objects of a particular provider.  KeyDerivation is weird in 
that one provider will be producing the derived key stream and 
potentially others might need to provide key or cryptographic objects 
from that stream.   I can see the point in delaying this to a later rev 
though it might make something like [KDF is Bouncycastle, keys are 
PKCS11] a bit difficult to work around.


Last one -

Can I get you to buy into a MasterKey/MasterKeySpec that is not a 
subclass of SecretKey but has the same characteristics (and probably the 
same definitions) as those classes (and is what gets used in the .init() 
argument)?  This goes back to trying to prevent a SecretKey from being 
used both with a KDF and the underlying PRF of the KDF.  I know this is 
a don't-care for software-based providers but would be useful for 
security-module-based ones.


I'm really hoping to improve cryptographic type and use safety along the 
way.
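
A rough sketch of what that might look like (proposed types only - nothing 
here exists in javax.crypto today; note it deliberately mirrors SecretKey 
without extending it):

import java.security.Key;

// Proposed (not existing) types: MasterKey deliberately does NOT extend
// javax.crypto.SecretKey, so a key intended only to seed a KDF cannot also
// be fed to the KDF's underlying PRF via Mac.init() or Cipher.init().
interface MasterKey extends Key {
    // marker only - same shape as SecretKey, different type hierarchy
}

// A trivial software-only holder, analogous to SecretKeySpec.
final class MasterKeySpec implements MasterKey {
    private final byte[] keyBytes;
    private final String algorithm;

    MasterKeySpec(byte[] keyBytes, String algorithm) {
        this.keyBytes = keyBytes.clone();
        this.algorithm = algorithm;
    }
    @Override public String getAlgorithm() { return algorithm; }
    @Override public String getFormat()    { return "RAW"; }
    @Override public byte[] getEncoded()   { return keyBytes.clone(); }
}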


Thanks - Mike





Re: KDF API review, round 2

2017-11-27 Thread Michael StJohns

On 11/27/2017 1:03 AM, Jamil Nimeh wrote:






One additional topic for discussion: Late in the week we talked 
about the current state of the API internally and one item to 
revisit is where the DerivationParameterSpec objects are passed. It 
was brought up by a couple people that it would be better to provide 
the DPS objects pertaining to keys at the time they are called for 
through deriveKey() and deriveKeys() (and possibly deriveData).


Originally we had them all grouped in a List in the init method. One 
reason for needing it up there was to know the total length of 
material to generate.  If we can provide the total length through 
the AlgorithmParameterSpec passed in via init() then things like:


Key deriveKey(DerivationParameterSpec param);
List<Key> deriveKeys(List<DerivationParameterSpec> params);

become possible.  To my eyes at least it does make it more clear 
what DPS you're processing since they're provided at derive time, 
rather than the caller having to keep track in their heads where in 
the DPS list they might be with each successive deriveKey or 
deriveKeys calls.  And I think we could do away with 
deriveKeys(int), too.


See above - the key stream is logically produced in its entirety 
before any assignment of that stream is made to any cryptographic 
objects because the mixins (except for the round differentiator) are 
the same for each key stream production round.   Simply passing in 
the total length may not give you the right result if the KDF 
requires a per component length (and it should to defeat (5) or it 
should only produce a single key).
From looking at 800-108, I don't see any place where the KDF needs a 
per-component length.  It looks like it takes L (total length) as an 
input and that is applied to each round of the PRF.  HKDF takes L 
up-front as an input too, though it doesn't use it as an input to the 
HMAC function itself.  For TLS 1.3 that component length becomes part 
of the context info (HkdfLabel) through the HKDF-Expand-Label 
function...and it's only doing one key for a given label which is also 
part of that context specific info, necessitating an init() call.  
Seems like the length can go into the APS provided via init (for those 
KDFs that need it at least) and you shouldn't need a DPS list up-front.




HKDF and SP800-108 only deal with the creation of the key stream and 
ignore the issues with assigning the key stream to cryptographic 
objects.  In the TLS version of HKDF, the L value is mandatory and only 
a single object is assigned per init/call to the KDF.   An HSM can look 
at the HKDF label information and set the appropriate policies for the 
assigned cryptographic object (because if any of the label data changes, 
the entire key stream changes).  That's not the case for the raw HKDF 
nor for any KDF that allows for multiple objects to be extracted out of 
a single key stream.  Hence the per-component length values.


Ideally, there should be a complete object spec for each object to be 
generated that is part of the mixins (label and context) for any KDF.   
That allows an HSM to rely upon the object spec when setting policy 
controls for each generated object - and incidentally allows for a KDF 
to generate both public and non-public data in a secure way.


So as long as you allow for the specification of all of the production 
objects as part of the .init() I'm good.   A given KDF might not require 
this - but I can't see any way of fixing the current KDFs to work in 
HSMs without something like this.


As far as your (5) scenario goes, I can see how you can twiddle the 
lengths to get the keystream output with zero-length keys and large IV 
buffers.  But that scenario really glosses over what should be a big 
hurdle and a major access control issue that stands outside the KDF 
API: That the attacker shouldn't have access to the input keying 
material in the first place.  Protect the input keying material 
properly and their attack cannot be done.


Let me give you an example.   I'm running an embedded HSM - to protect 
TLS keys and to do all of the crypto.  An attacker compromises the TLS 
server and now has access to the HSM.  No problem - I'm going to notice 
if the attacker starts exfiltrating large amounts of data from the server 
(e.g. copies of the TLS plaintext, possibly re-encrypted) so this isn't a 
threat - or is it?  A smart attacker does an 
extraction attack on the TLS 1.2 and before KDF and turns all of the key 
stream material into IV material and exports it from the HSM.  The 
attacker now has the much smaller key material so he can send a few 
messages with those keys and allow for the passive external interception 
of the traffic and decryption thereof without the risk of detection of 
all that traffic being sent.  Alternately, I can place the key material 
in a picture via steganography and publish it as part of the server data.


The idea is to protect extraction of the key material from an HSM *even 
from authorized users of 

Re: KDF API review, round 2

2017-11-20 Thread Michael StJohns

On 11/20/2017 1:10 PM, Jamil Nimeh wrote:


You're missing the point that setParameter() provides information 
used in all future calls to the signature generation, while init() 
provides data specifically for a given key stream production.  In 
Signature() you call .setParameter() to set up the PSS parameters (or 
use the defaults).  Each subsequent call to initSign or initVerify 
uses those PSS parameters.  The  equivalent part of .init() in 
KeyDerivation is actually the calls to .update() in signature as they 
provide the specific information for the production of the output key 
stream.  In fact, setting up an HMAC signature instance and passing 
it the mixin data as part of a .update() is a way of producing the 
key stream round.


So equivalences:

KeyDerivation.getInstance(PRF) == Signature.getInstance(HMAC)
KeyDerivation.setParameters() == Signature.setParameters()
KeyDerivation.init(key, List<DerivationParameterSpec>) == concatenation of the 
results of multiple calls (each key stream round based on the needed 
output length) to [Signature.initSign(Key) followed by 
Signature.update(convertToByteArray(List<DerivationParameterSpec>)) followed by 
Signature.sign()] to produce the key stream
KeyDerivation.deriveKey() ==  various calls to key or object 
factories with parts of the key stream (signature).
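
As a concrete reading of that equivalence, a single key stream round in 
the SP 800-108 counter-mode shape can be produced today with nothing but 
javax.crypto.Mac.  This is a sketch of the mapping (class name invented, 
field widths chosen arbitrarily), not proposed provider code:

import javax.crypto.Mac;
import javax.crypto.SecretKey;

// One key-stream round: PRF(masterKey, counter || label || 0x00 || context || L),
// i.e. the "update with the mixins, then finalize" pattern described above.
final class KeyStreamRoundSketch {
    static byte[] round(SecretKey masterKey, int counter,
                        byte[] label, byte[] context, int bitsL) throws Exception {
        Mac prf = Mac.getInstance("HmacSHA256");
        prf.init(masterKey);
        prf.update((byte) counter);         // round differentiator
        prf.update(label);                  // label mixin
        prf.update((byte) 0x00);            // separator
        prf.update(context);                // context mixin
        prf.update((byte) (bitsL >>> 8));   // L as a big-endian uint16
        prf.update((byte) bitsL);
        return prf.doFinal();
    }
}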


Are you expecting that setParameters is called once per 
instantiation?  If so, then the parameters that would go into 
setParameter (an APS I assume) could just as easily go into the 
getInstance call.  It also removes the chance that someone would call 
it twice.


That was my original proposal.  .setParameter() was an alternative that 
matched the Signature pattern.


If you're expecting someone to call setParameter more than once, then 
I would expect an init must follow.  So why not place it in a form of 
init that allows you to change that particular set of params?  Either 
way it seems like we could coalesce this method into one of the calls 
that sandwich it in your proposed model.





I don't expect them to call it more than once.  The original (now 
deprecated) .setParameter (String, Object) method in Signature indicated 
it could be called only once and would throw an error if called again - 
I'm not sure why that wasn't brought forward to the 
Signature.setParameter(AlgorithmParameterSpec) method.


In any event, I'd rather do the parameter setting in the getInstance 
call than as a separate .setParameters call if it can be done without 
exploding the interface.


Hmm.. how does that map to the Spi?  Does the 
KeyDerivation.getInstance() code instantiate the object, call a 
setParameter() method on the SPI and then return the new object? Or 
what?  It may make more sense to just add the parameter related methods 
to both the KeyDerivationSpi and the KeyDerivation classes and leave the 
getInstance() method alone.


I'm sort of a don't-care as long as I have a way of tweaking the KDF 
before running the first derivation.




Mike






Re: KDF API review, round 2

2017-11-20 Thread Michael StJohns
Apologies in advance for top posting and a need to be a little pedantic 
about KDFs.  I'll have some comments inline below as well.


KDFs aren't well understood, but people think they are.  The key stream 
generation part is pretty straightforward (a keyed PRBG), but the 
interaction between how the key stream is generated and how the key stream is 
assigned to actual cryptographic objects is not.  Here's why:


1) KDFs are repeatable.  Given the exact same inputs (key, mixin data) 
they produce the same key stream.

2) Any change in the inputs changes ALL of the key stream.
3) Unless the overall length property is included, changing the 
length of the key stream will not change the prefix (e.g. if the 
original call was for 10 bytes and a second call was for 20, the first 
10 bytes of both calls will produce the exact same key stream data).
4) The general format of each round of key stream generation is 
something like PRF(master key, mixins), where the mixins are the 
concatenation of at least a label, a context, and a value to 
differentiate each round (a counter or the previous round's output, for 
example).  Including L in the mixins prevents the property described in 
(3) above.  Including a length for each subcomponent as a mixin prevents 
the property described in (5) below.
5) Unless the length for each derived object is included in the mixins, 
it is possible to move the assignment of key stream bytes between 
objects.  For example, both TLS (1.2 and before) and IPsec use KDFs that 
generate non-secret IV material along with secret session key 
material.  This is less important for software-only KDFs, as the 
secret key material and the IV material are both in the JVM memory 
domain.  It is very important if you're trying to keep your secret key 
material secure in an HSM.


Example:  a given TLS session may need two 256-bit AES keys and two 128-bit 
IVs.  That is a requirement for 96 bytes of key stream (if I've got my 
calculation correct).  We have the HSM produce this (see the PKCS11 
calling sequence for example) and we get out the IVs.  An attacker who 
has access to the HSM (which may or may not be on the same machine as 
the TLS instantiation) can call the derivation function with new output 
parameters (but with the same master key and mixins)  which specifies 
only IV material and have the function output the same key stream bytes 
that were previously assigned to the secret key material in the IV 
output.  A very easy key extraction attack.


This is why TLS1.3 only does single outputs per KDF call and makes the 
length of that output a mandatory mixin.  An HSM can also look at the 
labels and make a determination as to whether an object need be 
protected (key material) or in the clear (iv).


Given (3) and (5) I believe that both L and l[i] (subcomponent length) 
may need to be provided for BEFORE any key material is produced which 
argues for input during initialization phase.
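
To see properties (3) and (5) concretely, here is a self-contained sketch 
of RFC 5869 HKDF-Expand over HmacSHA256 (illustration only, not provider 
code).  Asking for 96 bytes instead of 64 reproduces the 64 bytes as a 
prefix, so re-slicing the stream can move "key" bytes into "IV" bytes 
without changing a single byte:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class ExpandPrefixDemo {
    // Minimal RFC 5869 HKDF-Expand: T(i) = HMAC(prk, T(i-1) || info || i).
    static byte[] hkdfExpand(byte[] prk, byte[] info, int length) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(prk, "HmacSHA256"));
        byte[] out = new byte[length];
        byte[] t = new byte[0];
        int off = 0;
        for (int i = 1; off < length; i++) {
            hmac.update(t);
            hmac.update(info);
            hmac.update((byte) i);
            t = hmac.doFinal();
            int n = Math.min(t.length, length - off);
            System.arraycopy(t, 0, out, off, n);
            off += n;
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] prk = new byte[32];                      // stand-in pseudorandom key
        byte[] info = "label||context".getBytes();      // length is NOT mixed in
        byte[] keysOnly   = hkdfExpand(prk, info, 64);  // e.g. two 256-bit keys
        byte[] keysAndIvs = hkdfExpand(prk, info, 96);  // same keys plus two IVs
        // prints true: the 64-byte production is a prefix of the 96-byte one
        System.out.println(Arrays.equals(keysOnly, Arrays.copyOf(keysAndIvs, 64)));
    }
}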



On 11/20/2017 5:12 AM, Jamil Nimeh wrote:



On 11/19/2017 12:45 PM, Michael StJohns wrote:

On 11/17/2017 1:07 PM, Adam Petcher wrote:

On 11/17/2017 10:04 AM, Michael StJohns wrote:


On 11/16/2017 2:15 PM, Adam Petcher wrote:
So it seems like they could all be supplied to init. 
Alternatively, algorithm names could specify more concrete 
algorithms that include the mode/PRF/etc. Can you provide more 
information to explain why these existing patterns won't work in 
this case?
What I need to do is provide a lifecycle diagram, but its hard to 
do in text.  But basically, the .getInstance() followed by 
.setParameters() builds a concrete engine while the .init() 
initializes that engine with a key and the derivation parameters. 
Think about a TLS 1.2 instance - the PRF is selected once, but the 
KDF may be used multiple times.


This is the information I was missing. There are two sets of 
parameters, and the first set should be fixed, but the second set 
should be changed on each init.




I considered the mode/PRF/etc stuff but that works for things like 
Cipher and Signature because most of those have exactly the same 
pattern.  For the KDF pattern we've got fully specified KDFs (e.g. 
TLS 1.1 and before, IPSEC), almost fully specified KDFs (TLS 1.2 
and HKDF need a PRF) and then the SP800 style KDFs which are 
defined to be *very* flexible.  So translating that into a naming 
convention is going to be restrictive and may not cover all of the 
possible approaches. I'd rather do it as an algorithmparameter 
instead.  With a given KDF implementation having a default if 
nothing is specified during instantiation.


I agree that this is challenging because there is so much variety in 
KDFs. But I don't think that SP 800-108 is a good example of 
something that should be exposed as an algorithm in JCA, because it 
is too broad. SP 800-108 is more of a toolbox that can be used to 
construct KDFs. Particular specializations of SP 800-108 are widely 
used, and they will get names that can be used

Re: KDF API review, round 2

2017-11-19 Thread Michael StJohns

On 11/17/2017 1:07 PM, Adam Petcher wrote:

On 11/17/2017 10:04 AM, Michael StJohns wrote:


On 11/16/2017 2:15 PM, Adam Petcher wrote:
So it seems like they could all be supplied to init. Alternatively, 
algorithm names could specify more concrete algorithms that include 
the mode/PRF/etc. Can you provide more information to explain why 
these existing patterns won't work in this case?
What I need to do is provide a lifecycle diagram, but its hard to do 
in text.  But basically, the .getInstance() followed by 
.setParameters() builds a concrete engine while the .init() 
initializes that engine with a key and the derivation parameters. 
Think about a TLS 1.2 instance - the PRF is selected once, but the 
KDF may be used multiple times.


This is the information I was missing. There are two sets of 
parameters, and the first set should be fixed, but the second set 
should be changed on each init.




I considered the mode/PRF/etc stuff but that works for things like 
Cipher and Signature because most of those have exactly the same 
pattern.  For the KDF pattern we've got fully specified KDFs (e.g. 
TLS 1.1 and before, IPSEC), almost fully specified KDFs (TLS 1.2 and 
HKDF need a PRF) and then the SP800 style KDFs which are defined to 
be *very* flexible.  So translating that into a naming convention is 
going to be restrictive and may not cover all of the possible 
approaches.  I'd rather do it as an algorithmparameter instead.  With 
a given KDF implementation having a default if nothing is specified 
during instantiation.


I agree that this is challenging because there is so much variety in 
KDFs. But I don't think that SP 800-108 is a good example of something 
that should be exposed as an algorithm in JCA, because it is too 
broad. SP 800-108 is more of a toolbox that can be used to construct 
KDFs. Particular specializations of SP 800-108 are widely used, and 
they will get names that can be used in getInstance. For example, 
HKDF-Expand is a particular specialization of SP 800-108.




So I think the existing pattern of using algorithm names to specify 
concrete algorithms should work just as well in this API as it does in 
the rest of JCA. Of course, more flexibility in the API is a nice 
feature, but supporting this level of generality may be out of scope 
for this effort.



The more I think about it the more I think you're mostly right.  But 
let's split this slightly as almost every KDF allows for the 
specification of the PRF.  So


<KDF>/<PRF> as the standard naming convention.

Or TLS13/HMAC-SHA256 and HKDF/HMAC-SHA256 (which are different because 
of the mandatory inclusion of "L" in the derivation parameters and each 
component object for TLS13)


Still - let's include the .setParameters() call as a failsafe as looking 
forward I can see the need for flexibility rearing its ugly head (e.g. 
adding PSS parameters to RSA signatures way late in the game.) and 
it does match the pattern for Signature so it's not a new concept. A 
given provider need not support the call, but it's there if needed.


Mike






Mike










Re: Draft design for Key Derivation API

2017-11-19 Thread Michael StJohns

Sorry - I've been on travel for the last few days... comments below.

On 11/17/2017 10:48 AM, Adam Petcher wrote:

On 11/17/2017 10:10 AM, Michael StJohns wrote:


On 11/16/2017 1:29 PM, Adam Petcher wrote:

On 11/8/2017 6:50 PM, Michael StJohns wrote:

What is the motivation behind this constructor that takes a byte 
array? It seems like this constructor wouldn't actually help in a 
hardware implementation. Would it be better to leave the 
construction of this object to the implementation?


This is a reasonable point, but misses a few things.   If you're 
calling the hardware implementation from software, you need to be 
able to pass data from the software domain to hardware domain.  If 
the KDF and the Object are both in hardware, then the provider 
implementation doesn't actually externalize the byte array from the 
KDF - it just returns the final pointer to the object.


The hardware/software boundary has some unique challenges - mostly 
these are handled OK in the JCA.  For this particular model, you need 
to be able to move bits from software to hardware which is the point 
of the constructor as specified. For hardware to hardware it happens 
under the hood.  For hardware to software it may be prohibited (e.g. 
you can't actually externalize the bits of the key stream), but if 
it's permitted then you need a simple way of translating key stream 
bytes into an object.


That behavior all sounds reasonable, I just have doubts that this 
belongs in the spec. Are you expecting KeyDerivation to contain the 
logic in your last paragraph? Something like this:


class KeyDerivation {
  Object deriveObject() {
    try {
      return spi.deriveObject();
    } catch (...) {
      Class<?> clazz = // get the class from the parameters
      return clazz.newInstance(deriveData(), length); // shorthand for
                // getting the right ctor and calling it
    }
  }
}
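
Expanded, that shorthand would amount to something like the following 
(illustrative only; the (byte[], int) constructor shape is the convention 
discussed in this thread and the helper class name is invented):

import java.lang.reflect.Constructor;

// Look up a (byte[], int) constructor on the class named in the derivation
// parameters and invoke it with the derived bytes.
final class DeriveObjectHelper {
    static Object construct(Class<?> clazz, byte[] derivedBytes, int length) throws Exception {
        Constructor<?> ctor = clazz.getConstructor(byte[].class, int.class);
        return ctor.newInstance(derivedBytes, length);
    }
}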

I would expect something like that to happen in the KeyDerivationSpi 
implementation instead, in which case it could construct the object 
any way it wants. So the spec would not need to place any requirements 
on the class of objects returned by deriveObject.




KDFs are somewhat problematic in that *they may not necessarily be 
producing objects from their own provider*. This unfortunately isn't 
obvious, but let me try and explain.


A KDF is basically a keyed pseudo-random number generator.  From the 
input key (and mixin context and label stuff) it produces a stream of 
bits.   Once that's done, the stream of bits is assigned to various 
output objects - secret keys, private keys, a byte array[] or a 
cryptographic object of some sort (cf TLS exporters for an example of 
this).  The current draft has an implicit assumption that the Key based 
objects will be formed from the current provider.  The byte array output 
is a providerless java.lang object.  The last type provides a model to 
allow for the production of objects not within the current provider.   
You *could* just punt on this and assume that you take the output of the 
deriveData() call and feed it to a factory of the other provider, but 
that means that the derivation production will necessarily be in the 
clear because the stream data will pass through the JVM.


Here's where it gets even trickier:

A given provider has a given security domain.  E.g. most software 
providers share the JVM memory domain.  HSM providers have a security 
domain representing pointers to objects within the HSM secure 
perimeter.  Mostly now, HSM providers do not share the same domains - 
but there are some cases where this might be possible and desirable 
(different providers leveraged on top the same HSM implementation, two 
SMs with a trusted path between them - TPMs and TEEPs for example).  I'd 
*really* like it if there is some way to keep  data from two different 
providers sharing the same or compatible security domains from having to 
pass their key stream data through the unsecure JVM memory domain.


Maybe there's a different way to do this - perhaps by changing the 
DerivationParameterSpec to include the output provider?  But then you 
still need a way of generating non-key secure cryptographic objects I 
think


Mike






Re: Draft design for Key Derivation API

2017-11-17 Thread Michael StJohns

On 11/16/2017 1:29 PM, Adam Petcher wrote:

On 11/8/2017 6:50 PM, Michael StJohns wrote:


On 11/3/2017 4:59 PM, Jamil Nimeh wrote:
Add a .deriveData() with a return class of byte[].   This gets a 
portion of the derived data stream in the clear. E.g. an IV.


Add a .deriveObject() with a return class of Object.  The returned 
object may not be an instance of java.security.Key.  This takes 
the derived data stream and converts it into the object type 
specified by the derivation parameter.  In a hardware security 
module, this might be a reference to a secured set of data or even 
a confidential IV.
Again, just want to make sure I understand fully: So in a case 
where I want a given output to be an Object, I would provide a 
DerivationParameterSpec with an alg of..."Object" (?), a byte 
length, and Object-specific parameters provided through the 
"params" argument to the DPS?


Working this through, but it should be a Class<?> being specified with 
a constructor of a byte array plus a length.


What is the motivation behind this constructor that takes a byte 
array? It seems like this constructor wouldn't actually help in a 
hardware implementation. Would it be better to leave the construction 
of this object to the implementation?


This is a reasonable point, but misses a few things.   If you're calling 
the hardware implementation from software, you need to be able to pass 
data from the software domain to hardware domain.  If the KDF and the 
Object are both in hardware, then the provider implementation doesn't 
actually externalize the byte array from the KDF - it just returns the 
final pointer to the object.


The hardware/software boundary has some unique challenges - mostly these 
are handled OK in the JCA.  For this particular model, you need to be 
able to move bits from software to hardware which is the point of the 
constructor as specified.  For hardware to hardware it happens under the 
hood.  For hardware to software it may be prohibited (e.g. you can't 
actually externalize the bits of the key stream), but if it's permitted 
then you need a simple way of translating key stream bytes into an object.


Mike




Re: KDF API review, round 2

2017-11-15 Thread Michael StJohns

On 11/15/2017 11:43 AM, Jamil Nimeh wrote:

Hello all,

Thanks to everyone who has given input so far.  I've updated the 
KeyDerivation API with the comments I've received.  The new 
specification is here:


Text: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/kdfspec.02.txt
Javadoc: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/javadoc.02/

In terms of high level changes:

  * Moved to a getInstance/init usage pattern similar to Mac,
KeyAgreement, Cipher, etc.  This allows KDF objects to be reused
with different parameters by reinitializing.
  * Name change: DerivedKeyParameterSpec --> DerivationParameterSpec
  * Keys returned by derivation methods are now java.security.Key
rather than SecretKey
  * Provided additional derivation methods to support non-key based
output: deriveData, deriveObject
  * Added a new constructor to DerivationParameterSpec to support the
Object return type.

Thanks,
--Jamil



This is pretty close, but I think you need to add an AlgorithmParameters 
argument to each of the getInstance calls in KeyDerivation - or require 
each KDF to specify a default model - not all KDFs are fully specified 
in a given document.


Alternately, you could use the .setParameter/.getParameter model of 
signature,  but it may be that underlying code will actually be creating 
a whole new instance.  (E.g. getInstance("NIST-SP800-108") vs 
getInstance("NIST-SP800-108-Counter") vs 
getInstance("NIST-SP800-108/Counter"))



Here's the model I'm thinking about:

   SP800-108 is a parameterized set of Key derivation functions which
   goes something like:

   Pick either Counter or Feedback

   Pick the PRF (e.g. HMAC-SHA256, AES-128-CMAC, etc)
   Pick the size of the counter and endianness:  (e.g. Big endian
   Uint16)

   Pick the size and endianness of L

   Pick whether the counter precedes or follows the fixed data (for
   counter mode).
   Pick whether the counter is included and whether it precedes or
   follows the fixed data (for feedback mode)


Taken together those instantiation parameters define a particular KDF model.
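
Captured as an AlgorithmParameterSpec, those instantiation-time choices 
might look roughly like the following sketch (every class, enum and field 
name here is hypothetical - nothing like it exists in the JDK):

import java.nio.ByteOrder;
import java.security.spec.AlgorithmParameterSpec;

// Hypothetical instantiation parameters for an SP 800-108 KDF "model":
// everything here is fixed when the engine is built, before any init().
public final class Sp800108ParameterSpec implements AlgorithmParameterSpec {
    public enum Mode { COUNTER, FEEDBACK }
    public enum CounterPlacement { BEFORE_FIXED_DATA, AFTER_FIXED_DATA, NONE }

    private final Mode mode;
    private final String prfAlgorithm;        // e.g. "HmacSHA256", "AES-128-CMAC"
    private final int counterBits;            // e.g. 16 or 32
    private final ByteOrder counterOrder;
    private final int lengthFieldBits;        // width of L in the mixins
    private final ByteOrder lengthFieldOrder;
    private final CounterPlacement counterPlacement;

    public Sp800108ParameterSpec(Mode mode, String prfAlgorithm,
                                 int counterBits, ByteOrder counterOrder,
                                 int lengthFieldBits, ByteOrder lengthFieldOrder,
                                 CounterPlacement counterPlacement) {
        this.mode = mode;
        this.prfAlgorithm = prfAlgorithm;
        this.counterBits = counterBits;
        this.counterOrder = counterOrder;
        this.lengthFieldBits = lengthFieldBits;
        this.lengthFieldOrder = lengthFieldOrder;
        this.counterPlacement = counterPlacement;
    }
    // getters omitted for brevity
}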

Then for the .init() call, the kdfParams is where the Label and Context 
data go (what HKDF calls 'info').  For most KDFs this could just be a 
byte array.


For HKDF the getInstance must specify an underlying hash function - by 
definition mode is feedback, the size of the counter is fixed, L is not 
included in the base calculation.  (TLS1.3 uses HKDF and makes L a 
mandatory part of the HKDF).


I want to do a worked example from instantiation to use to make sure 
this covers the corner cases.  Give me a day - I'm currently in 
Singapore.


Mike








Re: Draft design for Key Derivation API

2017-11-08 Thread Michael StJohns

On 11/7/2017 8:17 PM, Jamil Nimeh wrote:


Hi Mike, thank you for your comments and feedback.  I have a few 
comments and questions inline:




(I have a little bit more time before my flight than I thought, so 
see inline).



On 11/06/2017 05:25 PM, Michael StJohns wrote:

On 11/3/2017 4:59 PM, Jamil Nimeh wrote:

Hello all,

This is a review request for the draft of a new Key Derivation API.  
The goal of this API will be to provide a framework for KDF 
algorithms like HKDF, TLS-PRF, PBKDF2 and so forth to be publicly 
accessible.  We also plan to provide an SPI that let 3rd parties 
create their own implementations of KDFs in their providers, rather 
than trying to force them into KeyGenerators, SecretKeyFactories and 
the like.


Rather than stuff this email full of the specification text (since 
it is likely to get quite a few iterations of comments and 
comments-to-comments), I have placed the API both in simple text 
form and as a Javadoc at the following locations:


spec: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/kdfspec.01.txt

javadoc: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/javadoc.01/

They're both the same content, just use whichever is friendlier for 
your eyes.


In addition, I have opened up the JEP as well:

https://bugs.openjdk.java.net/browse/JDK-8189808

Thanks to those who have contributed to very early internal drafts 
of this so far, and thanks in advance to those who will be 
contributing comments going forward.


--Jamil


Most of the following suggestions (and please take them as such 
regardless of any directive language) represent things I've had to do 
manually that I'd really prefer to do in a real key derivation API.  
A few are related to how to keep things securely stored in an HSM.


Add a .reset() method to KeyDerivation.  Call this to clear the state 
of the KDF.


Add an .initialize(List<DerivationParameterSpec>, SecretKey 
masterSecret) method.  Remove the argument to deriveKey and 
deriveKeys.  This plays with the stuff to follow, but basically, a 
KDF may need all of the per-key derivation input to calculate the 
total length of the output key stream as an internal input to the KDF 
before ever emitting a single key.   Also - how exactly were you 
planning on keying the KDF? I guess you could pass that in in the 
KeyDerivation.getInstance() call or as part of the algorithmParameter 
but probably makes more sense to keep the KDF instance key-free 
to allow for reuse.
Well, let's get the easy one out of the way.  As you suspected I 
planned to pass the SecretKey in via AlgorithmParameterSpec.  The 
three classes unfortunately didn't show that.  Maybe on the next 
iteration I can put an HkdfParameterSpec in there just as a sample so 
folks can see where the key comes in.  The reason I went that way 
was because the goal was to provide all algorithm parameters at 
instantiation time, and the SecretKey was just another input.  I don't 
know if just making the KDF key-free would be enough for reuse, at 
least not for all cases.  Thinking about HKDF and TLS 1.3 for 
instance, the key is the same for a collection of keys (like the 
client and server app traffic master keys that come from the master 
secret, for instance) - what changes are the other inputs to HKDF.


Yup - but that's easily handled through the new initialization call - 
which again matches the way Cipher, Signature and KeyAgreement do 
things.   Simplifying (??) the interface just to make one use case 
easier is probably not a great tradeoff.





One issue that came up on an early internal rev of the API was that we 
didn't want to separate instantiation and initialization, so all the 
inputs to the KDF now come in at getInstance time through 
AlgorithmParameterSpecs, rather than doing getInstance/init/... like 
KeyAgreement does.  I wonder if it would be OK to still have an init 
(and a reset as you wanted) method so we can provide new inputs 
top-to-bottom into the KDF object.  All the getInstance forms would 
stay more or less the same, so there's no way to make a KDF object 
without it being in an initialized state.  But when you need new 
inputs you don't have to make a new object.  I like being able to 
reuse the object and reset it to its starting state.  I don't know if 
the folks that brought up the instance/init issue would have a problem 
with that.  I think we're still adhering to the spirit of what they 
wanted to see since getInstance still gives you a fully initialized 
object.


As I noted in my other email, that's not the general form of a contract 
in the JCA.




That's a bit different than what you're talking about with your 
initialize method, I kinda birdwalked a bit.  Let me ask a couple 
questions: When you proposed initialize(), were you envisioning that 
applications would always need to call it before derive*?


Yes, init would always need to be called before you begin derives. A KDF 
call would require an instantiation (where you pass the parameters of 
the mechanism

Re: Draft design for Key Derivation API

2017-11-06 Thread Michael StJohns

On 11/3/2017 4:59 PM, Jamil Nimeh wrote:

Hello all,

This is a review request for the draft of a new Key Derivation API.  
The goal of this API will be to provide a framework for KDF algorithms 
like HKDF, TLS-PRF, PBKDF2 and so forth to be publicly accessible.  We 
also plan to provide an SPI that let 3rd parties create their own 
implementations of KDFs in their providers, rather than trying to 
force them into KeyGenerators, SecretKeyFactories and the like.


Rather than stuff this email full of the specification text (since it 
is likely to get quite a few iterations of comments and 
comments-to-comments), I have placed the API both in simple text form 
and as a Javadoc at the following locations:


spec: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/kdfspec.01.txt

javadoc: http://cr.openjdk.java.net/~jnimeh/reviews/kdfspec/javadoc.01/

They're both the same content, just use whichever is friendlier for 
your eyes.


In addition, I have opened up the JEP as well:

https://bugs.openjdk.java.net/browse/JDK-8189808

Thanks to those who have contributed to very early internal drafts of 
this so far, and thanks in advance to those who will be contributing 
comments going forward.


--Jamil


Most of the following suggestions (and please take them as such 
regardless of any directive language) represent things I've had to do 
manually that I'd really prefer to do in a real key derivation API.  A 
few are related to how to keep things securely stored in an HSM.


Add a .reset() method to KeyDerivation.  Call this to clear the state of 
the KDF.


Add an .initialize(List<DerivationParameterSpec>, SecretKey 
masterSecret) method.  Remove the argument to deriveKey and deriveKeys.  
This plays with the stuff to follow, but basically, a KDF may need all 
of the per-key derivation input to calculate the total length of the 
output key stream as an internal input to the KDF before ever emitting a 
single key.   Also - how exactly were you planning on keying the KDF?  I 
guess you could pass that in in the KeyDerivation.getInstance() call or 
as part of the algorithmParameter but probably makes more sense to 
keep the KDF instance key-free to allow for reuse.


Rename DerivedKeyParameterSpec to DerivationParameterSpec and provide 
an algorithm name for "IV" or "Cleartext".  See below for .deriveData()


deriveKey() emits the next key in the sequence using the data stream to 
key conversion rules.


deriveKeys() emits as many keys left in the stream to the next data 
derivation or the defined end of stream based on the input specs.  
deriveKeys(int num) derives the next num keys.


Add a .deriveData() with a return class of byte[].   This gets a portion 
of the derived data stream in the clear. E.g. an IV.


Add a .deriveObject() with a return class of Object.  The returned 
object may not be an instance of java.security.Key.  This takes the 
derived data stream and converts it into the object type specified by 
the derivation parameter.  In a hardware security module, this might be 
a reference to a secured set of data or even a confidential IV.


All of the derive methods throw an InvalidParameterSpecException if the 
next derivation parameter doesn't match the calling method (e.g. trying 
to deriveData when the parameter spec says emit a key).


In KeyDerivation, change the output class of the deriveKey to 
java.security.Key; similar for deriveKeys change the output to 
List<Key>.   Basically, it's possible to use the output of a KDF stream 
to derive private keys and this should be supported. It's occasionally 
helpful (but not very often) for two devices to share a key pair that 
they create through a key agreement process (e.g. two HSMs acting as 
backup to each other).  Alternately, consider adding a "public KeyPair 
deriveKeyPair()" method.


Consider adding a marker interface  javax.crypto.MasterSecret (subclass 
of javax.crypto.SecretKey) and using that as class for the initialize 
call argument.
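
Pulled together, the suggestions above amount to an engine surface roughly 
like this sketch (nothing here exists in javax.crypto; DerivationParameterSpec 
and MasterSecret are the proposed types named above):

import java.security.Key;
import java.security.KeyPair;
import java.util.List;
import javax.crypto.SecretKey;

// Sketch of the proposed surface only; names follow the suggestions above.
interface KeyDerivationProposal {
    void reset();                                           // clear KDF state
    void initialize(List<DerivationParameterSpec> specs,
                    MasterSecret masterSecret);             // all productions up front
    Key deriveKey();                                        // next object in the schedule
    List<Key> deriveKeys();                                 // keys up to the next data item
    List<Key> deriveKeys(int num);                          // the next num keys
    byte[] deriveData();                                    // cleartext output, e.g. an IV
    Object deriveObject();                                  // provider-defined secured object
    KeyPair deriveKeyPair();                                // optional derived key pair
}

interface MasterSecret extends SecretKey { }                // proposed marker interface
final class DerivationParameterSpec { }                     // proposed per-object spec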


I'm happy to provide an edited .java file with these proposed changes - 
but not until at least next Monday; I'm on travel.


Mike









Re: AW: Arithmetic error in SunEC

2017-10-18 Thread Michael StJohns

On 10/18/2017 5:58 AM, Tobias Wagner wrote:

Hi,

yes, from what we know your understanding is correct. The NIST curve secp384r1 
is using these functions but seems not to be affected because of its prime. Any 
other curve will probably be affected.

Regards
Tobias
  


Stupid question. Given that this came over from NSS and that NSS 
had reported the error, has anyone checked the rest of the NSS-reported 
errors/fixes for EC for porting to the JDK?


Mike


-----Original Message-----

From: Adam Petcher 
Sent: Tuesday, 17 October 2017 22:26
To: security-dev@openjdk.java.net
Subject: Re: Arithmetic error in SunEC

On 10/17/2017 4:55 AM, Tobias Wagner wrote:


Hi,

we found an error in the GF(p)-arithmetics of SunEC, while adding
support for brainpool-curves in ECDHE for TLS connections as
suggested in RFC 7027.



Thanks! I created JDK-8189594[1] to track this issue. My understanding
is that this error doesn't cause any bugs in the existing JDK code, but
it may cause bugs if we add new curves that use this optimization. If I
am wrong about this, please let me know.

[1] https://bugs.openjdk.java.net/browse/JDK-8189594


Regards
Tobias







Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/18/2017 3:26 PM, Adam Petcher wrote:

On 8/17/2017 1:44 PM, Michael StJohns wrote:

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:



Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 
1 to p-1 for the field (with some additional pruning).   32 byte 
strings (or 256 bit integers) do not map 1-1 into that space.  E.g. 
there are some actual canonical keys where multiple (at least 2) 32 
byte strings map to them.  (See the pruning and clamping 
algorithms).  The NIST private key generation for EC private keys 
mitigates this bias by either (a) repeatedly generating random keys 
until you get one in the range or (b) generating a key stream with 
extra (64) bits and reducing that mod p of the curve.


If you are concerned about the distribution of private keys in these 
standards, then you may want to raise your concerns on the CFRG 
mailing list (c...@irtf.org). I don't have this concern, so I think we 
should implement the RFC as written.
 Basically, the RFC describes the form of the private key.  It also 
gives you an implementation for how to generate one.  BUT - any method 
that gives you an equivalent key and that has the same or better 
security guarantees is equally valid.  WRT the RFC, I missed this 
at the time it went through but noticed it after having to deal with 
generation of normal EC keys and a FIPS evaluation. The bias is small, 
but it exists.  The method of the RFC does not reduce the bias.  The 
NIST method does.   I know which one I would choose.


Regardless - this has no bearing on the fact that the output of the key 
generation process is an integer.








RFC 8032 private keys: These are definitely bit strings, and 
modeling them as integers doesn't make much sense. The only thing 
that is ever done with these private keys is that they are used as 
input to a hash function.


Again - no.   The actual private key is what you get after step 3 of 
section 5.1.5.  E.g. generate a random string of 32 bytes, hash it to 
help with bad random generators (*sheesh*), and interpret the hash 
after pruning as a little-endian integer.


I'm not sure I understand what you are suggesting. For Ed25519, the 
initial 32-byte secret is hashed to produce 64 bytes. The first 32 
bytes are pruned, interpreted as an integer, and used to produce the 
public key. The second 32 bytes are also used in the signing operation 
and contribute to a deterministic nonce. See RFC 8032, section 5.1.6, 
step 1. Are you suggesting that we should represent all 64 bytes as an 
integer?


Sorry - I did miss this.   Unfortunately, it screws up making this work 
cleanly.  There are two ways of doing this - I'd prefer (1) but could 
live with (2):


1) The private key is a BigInteger (and that's how it gets in and out of 
the ECPrivateKeySpec as 's'), derived as up to step 3.  The nonce is the 
leftmost 32 bytes of the SHA-512 of the private key (expressed as a 
little-endian integer).   The key thing about the deterministic nonce is 
that it's the same for the key for its signature lifetime (which I'm 
really not sure is a good thing - but its what we have).  The way the 
nonce is derived in the RFC shows one way.  Hashing the private key 
shows another way.  I'd prefer this because you can use the same 
input/output from the 'spec method for the ed25519 key as the curve25519 
key.   (do you really want two different spec classes for edxx and 
curvexxx keys???)


2) The private key is the 32-byte secret array of step 1 stored in the 
concrete class.  The input and output ECPrivateKeySpec defines a 
transform from that array to/from a BigInteger to that array.  The 
private key concrete class includes storage for both the internal 's' 
little endian integer value and the internal 'prefix' value. This isn't 
as clean because the scalar secret 's' isn't the same as the 
ECPrivateKeySpec.getS(), but allows for impedance matching between the 
existing classes and the strangeness of an ed25519 private key.  It 
would definitely divorce keys from the two curve uses by doing it this way.


*sigh* The private key is an integer and that's really what needs to be 
represented.
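
For reference, a minimal sketch of the steps being described (RFC 8032 
section 5.1.5, steps 1-3); the class and variable names are invented for 
illustration and this is not the proposed key spec handling:

import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.SecureRandom;

// The 32-byte seed is hashed with SHA-512; the first half is pruned
// (clamped) and read as a little-endian integer s - the scalar private
// key - while the second half ("prefix") later feeds the deterministic nonce.
public class Ed25519ScalarSketch {
    public static void main(String[] args) throws Exception {
        byte[] seed = new byte[32];
        new SecureRandom().nextBytes(seed);                            // step 1

        byte[] h = MessageDigest.getInstance("SHA-512").digest(seed);  // step 2

        byte[] scalarBytes = new byte[32];
        System.arraycopy(h, 0, scalarBytes, 0, 32);
        scalarBytes[0]  &= (byte) 0xF8;   // prune: clear the low 3 bits
        scalarBytes[31] &= (byte) 0x7F;   //        clear the high bit
        scalarBytes[31] |= (byte) 0x40;   //        set the second-highest bit

        byte[] be = new byte[32];         // little-endian -> big-endian for BigInteger
        for (int i = 0; i < 32; i++) be[i] = scalarBytes[31 - i];
        BigInteger s = new BigInteger(1, be);                           // step 3

        byte[] prefix = new byte[32];
        System.arraycopy(h, 32, prefix, 0, 32);   // used for the signing nonce
        System.out.println("s = " + s.toString(16));
    }
}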


Mike






Re: JCA design for RFC 7748

2017-08-18 Thread Michael StJohns

On 8/17/2017 7:01 PM, Xuelei Fan wrote:

On 8/17/2017 11:35 AM, Michael StJohns wrote:

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all some or none of them.  So extending 
this model for the curve25519 stuff isn't going to be any different 
old provider and new provider wise than is currently the case. If 
you want the new curves, you have to specify the new providers. If 
the new and old providers don't implement the same curves, you may 
need to deal with two different providers simultaneously - and 
that's not something that just happens.


I see your points.  Not binding to a provider causes problems; 
binding to a provider causes other problems.  There are a few 
complaints about the problems, and they impact real-world applications in 
practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map<String,String>) to good effect (but more 
work) to find appropriate providers for appropriate signature/key 
agreement algorithms and curves.


I'm not sure how this applies to the compatibility impact concern.  
See more in the example bellow.







I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that 
actually shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or 
not its possible to integrate new EC curves?





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up.  So we removed SHA224 
if SunMSCAPI is present.  Later, someone found the code unusual: 
SHA224 and the related signature algorithms are supported by the 
underlying providers, so there seemed to be no reason to limit the use of 
SHA224.  So SHA224 was added back, and then the compatibility issues 
came back again.  Then we removed SHA224 again if SunMSCAPI is 
present.  However, at the same time, another request is asking to 
support SHA224 on Windows.  The API design itself put me in an 
either-or situation.  I would try to avoid it if possible for a new 
design.


This appears to be an MSCAPI issue vice a JSSE issue.
MSCAPI is fine as it does not support SHA224.  JSSE is then in a bad 
position because it cannot support SHA224 in a general way when one of 
the underlying providers supports SHA224 but another one does not.


No - I don't think both are fine.   MSCAPI could outsource algorithms it 
doesn't have internally to another provider - for a very limited set of 
classes (e.g. MessageDigest and SecureRandom), but it would be smarter 
if JSSE just tried to see if the provider it was already using for other 
stuff had the available algorithm (e.g. using the long form of getInstance).


In the above case - MSCAPI should realize that it can't do a 
SHA224with... signature and just throw the appropriate error.  JSSE 
should know which TLS suites MSCAPI can support.  Or at least be able to 
figure out which ones it supports by checking which JCA algorithms are 
supported.
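
A sketch of that check using existing API (the SunMSCAPI lookup and the 
SHA224withRSA name are just illustrative, not a suite-to-algorithm mapping 
taken from JSSE):

import java.security.Provider;
import java.security.Security;

public class ProviderCapabilityCheck {
    // Ask the provider directly whether it implements a given signature
    // algorithm, instead of assuming it does.
    static boolean supports(Provider p, String sigAlg) {
        return p.getService("Signature", sigAlg) != null;
    }

    public static void main(String[] args) {
        Provider mscapi = Security.getProvider("SunMSCAPI"); // null off Windows
        if (mscapi != null) {
            System.out.println("SHA224withRSA: " + supports(mscapi, "SHA224withRSA"));
        }
    }
}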




And the JCA specifically disclaims the guaranteed ability to use 
cryptographic objects from one provider in another provider.
It's not the real problem.  The real problem is that there is a 
provider supporting the requested algorithms, so why not use it?  We can 
say, the spec does not guarantee the behavior, but the application 
still has a problem.  We also can say, we have a design flaw, but the 
question is still there.


There is no design flaw, there is only incorrect programming. Basically, 
the only non-key related crypto algorithms in the providers are hashing 
and random number generation.  While it's possible to use these across 
providers (and you might want to), special casing things to do this 
means (for example) that your FIPS approved signature mechanism might 
end up using a non-FIPS digest (hash) mechanism.


Every other crypto mechanism has or involves keys.  And there's no 
guarantee that keys are valid across providers.




Secondary users like the JSSE probably need to stick to a single 
provider for a given connection.


Sticking to a single provider will open other windows for different 
problems.  The JCE spec does not guarantee that one provider can implement all 
services.


An application that needs to use two different cryptographic algorithm 
providers should be doing that explicitly and not having it happen under 
the covers.   For example, I was using the BC provider to deal with 
elliptic curve stuff for a long while.  That meant I was also using BC 
to deal with certificates (because the Sun provider didn't understand 
how to deal with the BC's ECKey implementation).   It wa

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/17/2017 1:28 PM, Xuelei Fan wrote:
This is the same for ANY current publicly known curve - different 
providers may implement all some or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different old 
provider and new provider wise than is currently the case.   If you 
want the new curves, you have to specify the new providers. If the 
new and old providers don't implement the same curves, you may need 
to deal with two different providers simultaneously - and that's not 
something that just happens.


I see your points.  Not binding to a provider causes problems; binding 
to a provider causes other problems.  There are a few complaints about 
these problems, and they impact real-world applications in practice.


Basically, this is a failing of imagination when the various 
getInstance() methods were defined.  Now it's possible to use 
Security.getProviders(Map) to good effect (but more work) 
to find appropriate providers for appropriate signature/key agreement 
algorithms and curves.






I don't think your concerns are valid.  I may still be missing 
something here - but would ask for a real-world example that actually 
shows breakage.



I happened to have a real-world example.  See
https://bugs.openjdk.java.net/browse/JDK-8064330


I'm not sure how this applies to the current question of whether or not 
it's possible to integrate new EC curves?





This is an interesting bug.  At first it was requested to support 
SHA224 in the JSSE implementation, and SHA224 was added as a supported 
hash algorithm for TLS.  However, because SunMSCAPI does not support 
SHA224 signatures, compatibility issues came up, so we removed SHA224 
when SunMSCAPI is present.  Later, someone found the code unusual: 
SHA224 and the related signature algorithms are supported by the 
underlying providers, so there looked to be no reason to limit the use 
of SHA224.  So SHA224 was added back, and the compatibility issues came 
back again.  Then we removed SHA224 again when SunMSCAPI is present.  
However, at the same time, another request is asking to support SHA224 
on Windows.  The API design itself puts me in an either-or situation.  
I would try to avoid it if possible for a new design.


This appears to be an MSCAPI issue vice a JSSE issue.  And the JCA 
specifically disclaims the guaranteed ability to use cryptographic 
objects from one provider in another provider.   Secondary users like 
the JSSE probably need to stick to a single provider for a given connection.





Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.


I would like to treat it as two things.  One is to support new curves 
for new forms.  The other one is to support named curves [1].  For the 
support of new forms, there are still significant problems to solve.  
For the support of named curves (including the current EC form), it 
looks like we are in a not-that-bad situation right now.  Will the 
named-curves solution impact the support of new-curve APIs in the 
future?  I don't see the impact yet.  I may be missing something, but 
I see no reason to opt out of the named curves support.




I'm not sure why you think this (the example in [1]) can't be done?

I gave the example elsewhere, but let me expand it with my comment above 
(possibly faking the hash key names - sorry):



HashMap<String, String> neededAlgs = new HashMap<>();
neededAlgs.put("Signature.EdDSA", "");
neededAlgs.put("AlgorithmParameters.EC SupportedCurves", "ed25519");

Provider[] p = Security.getProviders(neededAlgs);
if (p == null) throw new Exception("Oops");

AlgorithmParameters parameters = AlgorithmParameters.getInstance("EC", p[0]);
parameters.init(new ECGenParameterSpec("ed25519"));
ECParameterSpec ecParameters =
    parameters.getParameterSpec(ECParameterSpec.class);

return KeyFactory.getInstance("EC", p[0]).generatePublic(
    new ECPublicKeySpec(new ECPoint(x, y), ecParameters));


If you're talking more generally, NamedCurves should be a form of 
ECParameterSpec so you can read the name from the key, but there's no 
support for adding names to that spec.  Maybe extend it?  E.g.:


package java.security.spec;

public class NamedECParameterSpec extends ECParameterSpec {

    private Collection<String> names;
    private Collection<String> oids;

    public NamedECParameterSpec(EllipticCurve curve, ECPoint g,
            BigInteger n, int h, Collection<String> names,
            Collection<String> oids) {
        super(curve, g, n, h);
        if (names != null) {
            this.names = new ArrayList<>(names);
        }
        if (oids != null) {
            this.oids = new ArrayList<>(oids);
        }
    }

    public Collection<String> getNames() {
        if (names == null)
            return Collections.emptyList();
        else
            return Collections.unmodifiableCollection(names);
    }

etc
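
A consumer of such a spec could then read the name straight off the key - 
a sketch, assuming a provider actually returned the hypothetical 
NamedECParameterSpec above from ECKey.getParams():

    // Hypothetical usage of the NamedECParameterSpec sketched above -
    // NamedECParameterSpec is not an existing JCA type.
    static Collection<String> curveNamesOf(ECPublicKey pub) {
        ECParameterSpec spec = pub.getParams();
        if (spec instanceof NamedECParameterSpec) {
            return ((NamedECParameterSpec) spec).getNames();
        }
        // Otherwise fall back to matching curve/order/generator against known tables.
        return Collections.emptyList();
    }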

   This makes it easier to get exactly what you want from a key. 
Assuming the provider 

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

See inline.

On 8/17/2017 11:19 AM, Adam Petcher wrote:

On 8/16/2017 3:17 PM, Michael StJohns wrote:


On 8/16/2017 11:18 AM, Adam Petcher wrote:


My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate 
representation for the algorithms involved (so there isn't a lot of 
unnecessary conversion). This is the case for public/private keys in 
RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to 
little-endian bit strings.)


Oops, minor correction. Here A is a point, so ENC is a function from 
points to little-endian bit strings.


2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 
functions, as described in the RFC, take bit strings as input and 
produce bit strings as output.


Thanks for making my point for me.  The internal representation of 
the public point is an integer.  It's only when encoding or decoding 
that it gets externally represented as an array of bytes.  (And yes, 
I understand that the RFC defines an algorithm using little endian 
byte array representations of the integers - but that's the 
implementation's call, not the API).


With respect to the output of the KeyAgreement algorithm - your (2) 
above, the transmission representation (e.g. the encoded public key) 
is a little-endian byte-array representation of an integer.  The 
internal representation is - wait for it - an integer.


I have no problems at all with any given implementation using little 
endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary. Implement the conversion methods 
for the factory and for dealing with the existing EC classes.   Maybe 
get BigInteger to be extended to handle (natively) littleEndian 
representation (as well as fixed length outputs necessary for things 
like ECDH).
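
For what it's worth, the little-endian, fixed-length conversion is only a 
few lines in either direction - a sketch using plain BigInteger (class and 
method names are mine):

    import java.math.BigInteger;

    public final class LittleEndian {
        // Encode a non-negative BigInteger as a fixed-length little-endian
        // array (e.g. 32 bytes for X25519).  High bytes beyond 'length' are
        // silently dropped, so callers must pass a value that fits.
        static byte[] toLittleEndian(BigInteger v, int length) {
            byte[] be = v.toByteArray();        // big-endian, may carry a leading 0x00
            byte[] le = new byte[length];
            for (int i = 0; i < be.length && i < length; i++) {
                le[i] = be[be.length - 1 - i];  // reverse the byte order
            }
            return le;
        }

        // Decode a little-endian array back into a (non-negative) BigInteger.
        static BigInteger fromLittleEndian(byte[] le) {
            byte[] be = new byte[le.length];
            for (int i = 0; i < le.length; i++) {
                be[i] = le[le.length - 1 - i];
            }
            return new BigInteger(1, be);       // treat as unsigned
        }
    }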




All good points, and I think BigInteger may be a reasonable 
representation to use for public/private key values. I'm just not sure 
that it is better than byte arrays. I'll share some relevant 
information that affects this decision.


First off, one of the goals of RFC 7748 and 8032 is to address some of 
the implementation challenges related to ECC. These algorithms are 
designed to eliminate the need for checks at various stages, and to 
generally make implementation bugs less likely. These improvements are 
motivated by all the ECC implementation bugs that have emerged in the 
last ~20 years. I mention this because I think it is important that we 
choose an API and implementation that allows us to benefit from these 
improvements in the standards. That means we shouldn't necessarily 
follow all the existing ECC patterns in the API and implementation.


No - it means that the authors of the RFCs have a bias to have their 
code be the only code.  As I note below I don't actually think they got 
everything right.   The underlying math is really what matters, and the 
API should be able to handle any implementation that gets the math correct.




Specifically, these standards have properties related to byte arrays 
like: "The Curve25519 function was carefully designed to allow all 
32-byte strings as Diffie-Hellman public keys."[1]


This statement is actually a problem.  Valid keys are in the range of 1 
to p-1 for the field (with some additional pruning).   32 byte strings 
(or 256 bit integers) do not map 1-1 into that space.  E.g. there are 
some actual canonical keys where multiple (at least 2) 32 byte strings 
map to them.  (See the pruning and clamping algorithms).  The NIST 
private key generation for EC private keys mitigates this bias by either 
(a) repeatedly generating random keys until you get one in the range or 
(b) generating a key stream with extra (64) bits and reducing that mod p 
of the curve.
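
To make the second mitigation concrete, here is a minimal sketch of the 
"extra random bits" approach for generating a scalar in [1, n-1] for a 
curve of order n (class, method and parameter names are mine, not an 
existing API):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public final class ScalarGen {
        // Draw bitLength(n) + 64 random bits and reduce mod (n - 1), then
        // add 1; the 64 extra bits make the modulo bias negligible.
        static BigInteger generateScalar(BigInteger n, SecureRandom rnd) {
            BigInteger c = new BigInteger(n.bitLength() + 64, rnd);
            return c.mod(n.subtract(BigInteger.ONE)).add(BigInteger.ONE);
        }
    }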



If we use representations other than byte strings in the API, then we 
should ensure that our representations have the same properties (e.g. 
every BigInteger is a valid public key).


It's best to talk about each type on its own. Of course, one of the 
benefits of using bit strings is that we may have the option of using 
the same class/interface in the API to hold all of these.


RFC 7748 public keys: I think we can reasonably use BigInteger to hold 
public key values. One minor issue is that we need to specify how 
implementations should handle non-canonical values (numbers that are 
less than 0 or greater than p-1). This does not seem like a huge 
issue, though, and the existing ECC API has the same issue. Another 
minor issue is that modeling this as a BigInteger may encourage 
implementations to use BigI

Re: JCA design for RFC 7748

2017-08-17 Thread Michael StJohns

On 8/16/2017 12:31 PM, Xuelei Fan wrote:

On 8/16/2017 8:18 AM, Adam Petcher wrote:


I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or it could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve 
in the algorithm name, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default 
contract for public keys is X.509 and the default for private keys 
is PKCS#8. Almost all uses of the encoded formats are related to 
PKIX related functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle 
these new keys without modification. The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.
I have not reached a conclusion, but I was wondering: if almost all uses 
of the encoded formats are related to PKIX functions, is it still a 
priority to support encodings other than X.509?


So far, I think our proposal works.  The concern is mainly about how 
to *simplify* the conversion between formats (Raw, PKIX, XML and 
JSON) in applications.  I don't worry about the conversion too much 
as it is doable with our current proposal, just not as 
straightforward as exposing the raw public key.


If we want to simplify the conversion, I see your concerns about my 
proposal above.  We may keep using "X.509" as the default, and define a 
new key spec.  The encoded form for X.509 is:


 SubjectPublicKeyInfo ::= SEQUENCE {
 algorithm AlgorithmIdentifier,
 subjectPublicKey BIT STRING
 }

The new encoded form could be just the subjectPublicKey field in the 
SubjectPublicKeyInfo above.


However, I'm not very sure how commonly the conversion is required, 
and whether it crosses the threshold for being a public API in the JRE.


There's a proposed defined format for PKIX keys on the new curves - the 
IETF CURDLE working group is working through that.  JCA should use that 
and not invent something new.






>
> The new keys remain tagged as ECKeys.  Old code won't notice (because
> old code is using old curves).  New code (new providers) will have to
> pay attention to EllipticCurve and ECField information if its handling
> both types of curves.  As is the case now, no provider need support
> every curve or even every field type.
>
My concern is about the case of using an old provider and a new provider 
together at the same time.  The JDK is an environment where multiple 
providers coexist.  We cannot guarantee that old code only uses old 
curves (old providers) and new code only uses new curves (new providers).


I did see this email and commented on it.  I think you're still missing 
the point that not every provider implements (or is required to 
implement) every curve.  And in fact I argued that it was somewhat 
stupid that the underlying C code in the SunEC provider was at odds with 
the code in the SunEC java side with respect to known curves and how 
they were handled. (Subject was RFR 8182999: SunEC throws 
ProviderException on invalid curves).



Consider - currently if I don't specify a provider, and there are two 
providers, and the lower priority provider implements "curveFoobar" but 
the higher priority provider does not, and I use an ECGenParameterSpec 
of "curveFoobar" to try and generate a key, I get a failure because I 
got the higher priority provider (which gets selected because JCA 
doesn't yet know that I want curveFoobar ) that doesn't do that curve.  
I need to specify the same provider throughout the entire process to 
make sure things work as expected AND I need to specify the provider 
that implements the curve I want.



This is the same for ANY current publicly known curve - different 
providers may implement all some or none of them.  So extending this 
model for the curve25519 stuff isn't going to be any different old 
provider and new provider wise than is currently the case.   If you want 
the new curves, you have to specify the new providers.  If the new and 
old providers don't implement the same curves, you may need to deal with 
two different providers simultaneously - and that's not something that 
just happens.


I don't think your concerns are valid.  I may still be missing something 
here - but would ask for a real-world example that actually shows breakage.


Treat these simply as new curves and let's move forward with very 
minimal changes to the public API.



Mike







There is an example in my August 10 reply:

http://mail.openjdk.java.net/pipermail/security-dev/2017-August/016199.html 



Xuelei





Re: JCA design for RFC 7748

2017-08-16 Thread Michael StJohns

On 8/16/2017 11:18 AM, Adam Petcher wrote:

On 8/15/2017 7:05 PM, Michael StJohns wrote:

On 8/15/2017 1:43 PM, Xuelei Fan wrote:

On 8/11/2017 7:57 AM, Adam Petcher wrote:


I'm also coming to the conclusion that using X.509 encoding for 
this sort of interoperability is too onerous, and we should come up 
with something better. Maybe we should add a new general-purpose 
interface that exposes some structure in an algorithm-independent 
way. Something like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.


This is sort of the moral equivalent of using the TXT RR record in 
DNS and the arguments are similar.


This is a bad idea.


I'm not a DNS expert, so I apologize in advance if I misunderstood 
your argument. What I think you are saying is that it is bad to store 
something in a string or byte array which has no semantics, and it is 
better to store information in a more structured way that has a clear 
semantics. I agree with this.





My intention with this ByteArrayValue is to only use it for 
information that has a clear semantics when represented as a byte 
array, and a byte array is a convenient and appropriate representation 
for the algorithms involved (so there isn't a lot of unnecessary 
conversion). This is the case for public/private keys in RFC 7748/8032:


1) RFC 8032: "An EdDSA private key is a b-bit string k." "The EdDSA 
public key is ENC(A)." (ENC is a function from integers to 
little-endian bit strings.)
2) RFC 7748: "Alice generates 32 random bytes in a[0] to a[31] and 
transmits K_A =X25519(a, 9) to Bob..." The X25519 and X448 functions, 
as described in the RFC, take bit strings as input and produce bit 
strings as output.


Thanks for making my point for me.  The internal representation of the 
public point is an integer.  It's only when encoding or decoding that it 
gets externally represented as an array of bytes.  (And yes, I 
understand that the RFC defines an algorithm using little endian byte 
array representations of the integers - but that's the implementation's 
call, not the API).


With respect to the output of the KeyAgreement algorithm - your (2) 
above, the transmission representation (e.g. the encoded public key) is 
a little-endian byte-array representation of an integer.  The internal 
representation is - wait for it - an integer.


I have no problems at all with any given implementation using little 
endian math internally.  For the purposes of using JCA, stick with 
BigInteger to represent your integers.  Use your provider encoding 
methods to translate between what the math is internally and what the 
bits are externally if necessary.  Implement the conversion methods for 
the factory and for dealing with the existing EC classes.   Maybe get 
BigInteger to be extended to handle (natively) littleEndian 
representation (as well as fixed length outputs necessary for things 
like ECDH).









So I think that a byte array is the correct representation for 
public/private keys in these two RFCs.


Nope.







I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or it could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm name, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8.  
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


I'm concerned about this, too. Ideally, we want PKI code to handle 
these new keys without modification. The javadoc wording makes it a 
little unclear whether public keys *must* use X.509 encoding, but 
using other encodings for public keys would probably be surprising.


There are two different things going on here - many encodings (e.g. for EC 
public keys there are the fixed-length X and fixed-length Y big-endian 
big-integer byte arrays, as well as the X9.62 representation covering both 
compressed and uncompressed points, and then the wrapping of those in 
SubjectPublicKeyInfo; similarly there's the raw signature vs. the ASN.1 
SEQUENCE OF INTEGER version).  JCA uses the X.509 stuff quite a bit to deal 
with all of the CertificatePathValidation and Certificate validation 
things.  But the "rawer" stuff such as DH sometimes uses bare points 
with the curve being understood by context (see for example the Javacard 
KeyAgreement classes).


In the current case, we still need a way to "sign" and "carry" the new 
publ

Re: JCA design for RFC 7748

2017-08-15 Thread Michael StJohns

On 8/15/2017 1:43 PM, Xuelei Fan wrote:

On 8/11/2017 7:57 AM, Adam Petcher wrote:

On 8/10/2017 9:46 PM, Michael StJohns wrote:


On 8/10/2017 7:36 PM, Xuelei Fan wrote:
Right now there are 3 major APIs (JCA, PKCS11 and Microsoft CSP) 
and at least 4 major representational domains (Raw, PKIX, XML and 
JSON).  In the current situation, I can take a JCA EC Public key 
and convert it to pretty much any of the other APIs or 
representations. For much of the hardware based stuff (ie, smart 
cards), I go straight from JCA into raw and vice versa. Assuming 
you left the "getEncoded()" stuff in the API and the encoding was 
PKIX, I'd have to encode to PKIX, decode the PKIX to extract the 
actual raw key or encode a PKIX blob and hope that the KeyFactory 
stuff actually worked.


It's not just support of arbitrary keys, but the ability to 
convert things without having to do multiple steps or stages.


Good point!  It would be nice if conversion between two formats 
could be done simply.  Using X.509 encoding is doable as you said 
above, but maybe there is room for improvement.


I need more time to think about it.  Please let me know if anyone 
has a solution to simplify the conversion while keeping the 
proposed named-curves solution.




I'm also coming to the conclusion that using X.509 encoding for this 
sort of interoperability is too onerous, and we should come up with 
something better. Maybe we should add a new general-purpose interface 
that exposes some structure in an algorithm-independent way. 
Something like this:


package java.security.interfaces;
public interface ByteArrayValue {

 String getAlgorithm();
 AlgorithmParameterSpec getParams();
 byte[] getValue();
}


I'm not sure how to use the above interface in an application.


This is sort of the moral equivalent of using the TXT RR record in DNS 
and the arguments are similar.


This is a bad idea.



I don't worry about this issue any more.  At present, each 
java.security.Key has three characteristics (see the API Java doc):

. an algorithm
. an encoded form
. a format

The format could be "X.509", or it could be "RAW" (like 
ByteArrayValue.getValue()).  I would suggest having the named curve in 
the algorithm name, and using "RAW" as the encoded format.

If X.509 encoding is required, KeyFactory.getKeySpec() could do it.
Um... I think that doesn't make a lot of sense.  The default contract 
for public keys is X.509 and the default for private keys is PKCS#8.  
Almost all uses of the encoded formats are related to PKIX related 
functions.   (See for info the javadoc for PublicKey).


Raw formats are used pretty much exclusively for symmetric keys.

The KeyFactory.getKeySpec() requires an actual KeySpec definition which 
is different than the ByteArrayValue stuff being proposed above.


To be JCA compliant you need all of:

1) A master interface (e.g. ECKey) that's a marker interface for key 
material

2) A public key interface that extends PublicKey and the master interface
3) A private key interface that extends PrivateKey and the master interface
4) A public key specification that implements KeySpec
5) A private key specification that implements KeySpec
6) A generation parameter specification that implements 
AlgorithmParameterSpec

7) A key parameter specification (if required by the master interface)
8) A factory class that implements KeyFactorySpi

9) A common encoding for each of the public and private keys
10) A transform to/from the public and private key specs


Here's what I think should happen:

1) ECPoint gets a document modification to handle compressed points.  
The X is the X value; the Y is -1, 0 or 1 depending on positive, 
negative, or don't-care for the sign of the Y value.   (This means that 
the byte-array public key gets handled as a BigInteger.)  Any value 
other than 0, -1 or 1 for Y indicates a normal X/Y point.


2) EllipticCurve gets a document modification to describe the mappings 
for Edwards and Montgomery curves as discussed previously.


3) Two classes are added to java.security.spec: 
java.security.spec.ECFieldEdwards and 
java.security.spec.ECFieldMontgomery - both of which implement ECField.


4) The ECFieldEdwards/Montgomery classes contain an indication of 
whether the curve is signature-only, key-agreement-only or both.  They 
also contain any parameters that can't be mapped in EllipticCurve 
(a rough sketch of what this might look like follows after this list).


5) Using the above, someone specifies the curve sets for the four new 
curves as ECParameterSpec's and we iterate until we're satisfied we've 
got a standard public representation that can be used for other than the 
4 curves.


6) Until the JCA is updated, a provider for the new curves can use its 
own concrete ECField classes and later make them be subclasses of the 
java.security.spec.ECFieldMontgomery etc.  It's not ideal, but it does 
let the guys who are chomping at the bit do an implementation while 
waiting 
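
For illustration, item 3 might look something like the following rough 
sketch (a hypothetical class, not an existing JCA type; the usage flag is 
one way to carry the item 4 indication):

    package java.security.spec;

    // Hypothetical field class for Edwards-form curves (items 3/4 above).
    public class ECFieldEdwards implements ECField {

        public enum Usage { SIGNATURE, KEY_AGREEMENT, BOTH }

        private final int fieldSize;   // size of the underlying prime field, in bits
        private final Usage usage;     // what the curve may be used for

        public ECFieldEdwards(int fieldSize, Usage usage) {
            this.fieldSize = fieldSize;
            this.usage = usage;
        }

        @Override
        public int getFieldSize() {
            return fieldSize;
        }

        public Usage getUsage() {
            return usage;
        }
    }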

Re: JCA design for RFC 7748

2017-08-10 Thread Michael StJohns

On 8/10/2017 7:36 PM, Xuelei Fan wrote:

Hi Michael,

Good points!  See comments inlines.

On 8/10/2017 3:20 PM, Michael StJohns wrote:


Instead of converting, I was thinking about mapping.  E.g. Montgomery 
A and B matches the A and B of the curve.  But the "x" of the 
Montgomery point is just the "x" of the ECPoint with the "y" left as 
null.  For Edwards, it looks like you would map "d" to A. For [3] I'd 
map "r" to A.  I'd leave B as null for both- no reason to throw an 
unsupported exception as the code generally has a clue about what 
types of keys they're dealing with (or we provide a marker so they 
can figure it out).


The conversion in and out for points is a conversion from little 
endian to big endian and vice versa, but that only has to be done if 
you're importing or exporting a parameter set and that's an 
implementation issue not an API issue.


Basically, all the math is BigIntegers under the hood.  The 
curve25519 RFC specifies an implementation that's little endian, but 
the actual math is just math and things like the public key is really 
just a BigInteger.


Old code would just continue to work - since it would not be using 
the new curves.  New code would have to look for the curve type 
marker (e.g. the ECField) if there was the possibility of confusion.


I understand your points.  The mapping may be confusing to application 
developers, but it is not a problem for new code that follows the new 
coding guideline.  I'm not so sure about old code, for reasons similar 
to those for the converting solution.


For example, an Edwards-curve form of the SubjectPublicKeyInfo field 
in an X.509 cert is parsed as an X509EncodedKeySpec, and the "EC" 
KeyFactory is used to generate the ECPublicKey.  The algorithm name of 
the ECPublicKey instance is "EC", and the parameter is an instance of 
ECParameterSpec.  Somehow, the ECPublicKey leaves the key generation 
environment, and the curve OID is unknown in the new environment.  
Then the public key could be used improperly.  In the past this was 
fine, as the only supported form was the Weierstrass form, so there was 
no need to tell the curve forms apart in a crypto implementation.  
However, when a new form is introduced, identifying the EC form of a 
key is an essential part of the subsequent crypto operations.  Old 
providers or code may not be able to tell the form, which may result in 
compatibility issues.


I don't think any of this is an issue.   An X509EncodedKeySpec for 
either type of key has an id-ecPublicKey OID identifying it (embedded in 
the SubjectPublicKeyInfo encoding).  In the key body, there's the 
EcpkParameters structure, which is a 'namedCurve' consisting of an 
OID.  The curve OIDs for 25519 and 447 are different from any of the 
Weierstrass keys.   When the KeyFactory implementation reads 
the byte stream it's going to build a JCA ECPublicKey that matches the 
OID, AND that's a concrete ECPublicKey class of the key factory provider.


If the factory implementation doesn't understand the oid, then the 
provider throws an error.  I forget which one.


The concrete class for the ECPublic key is specific to the provider.  
Some providers may support the new key forms, some may not.  There's no 
guarantee (and there never has been a guarantee) that an ECPublic key 
from one provider can be used with another provider (e.g. PKCS11 
provider vs a software provider) - you have to convert the key into a 
keyspec and then run the factory method on it.


So I don't think there's anything we have to worry about here - no 
violation of the API contract as far as I can tell.


(As a more complete example - consider what happens when you have an F2M 
EC provider and an Fp EC provider both generating public keys and 
encoding them.  Neither provider can decode the other's encoded key 
because they don't have the OIDs and the parameter sets).









However, I'm not very sure of the compatibility impact (see above).

3. Where we are not now?
Using named curves is popular.  There is a ECGenParameterSpec class 
using named curves:

 ECGenParameterSpec ecgp =
     new ECGenParameterSpec("secp256r1");
 KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
 kpg.initialize(ecgp);
 KeyPair kp = kpg.generateKeyPair();

 ECPublicKey pubKey = (ECPublicKey) kp.getPublic();
 String keyAlgorithm = pubKey.getAlgorithm();  // "EC"

However, it is used for key generation only.  Once the keys are 
generated, there is no public API to know the name of the curve in 
ECKey.  ECKey.getAlgorithm() will return "EC" only. If it is 
required to know whether a key is of a named curve, the solution is 
not straightforward.


This ties back to "getEncoded()" representations.  Under the hood, if 
you do a getEncoded() there's a "which name does this parameter set 
match up to" search which checks various tables for an OID and uses 
that in an X.509 

Re: JCA design for RFC 7748

2017-08-10 Thread Michael StJohns

On 8/10/2017 9:44 AM, Adam Petcher wrote:
Does anyone know of a particular use case (that we haven't discuss 
already) that would require a provider to support arbitrary curves? 
Any other arguments for or against this feature? 


There are uses for changing out the base point.  PAKE and SPAKE use 
similar math (e.g. G^s*sharedSecret is the equivalent of a new base point).
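
The existing API can already express that much, since ECParameterSpec 
accepts an arbitrary generator - a sketch (computing the replacement 
point, e.g. s*G + M in a SPAKE-style exchange, is up to the provider or 
application; the class and method names are mine):

    import java.security.spec.ECParameterSpec;
    import java.security.spec.ECPoint;

    public final class RebasedParams {
        // Same curve, order and cofactor - only the base point changes.
        static ECParameterSpec withNewBasePoint(ECParameterSpec orig, ECPoint newBase) {
            return new ECParameterSpec(orig.getCurve(), newBase,
                                       orig.getOrder(), orig.getCofactor());
        }
    }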


There are uses for private curves - e.g. when you want to actually be 
sure that the curve was randomly generated (sort of the same argument 
that got us to Curve25519 in the first place).


There is a whole set of Edwards curves, mostly not included 
in any provider (except possibly Microsoft's) as of yet.


Basically, you're trying to argue that there are no better curves (for 
the 'new' math) than have already been specified and there never will 
be.  I think that's a very shortsighted argument.


Later, Mike




Re: JCA design for RFC 7748

2017-08-08 Thread Michael StJohns

On 8/8/2017 5:00 PM, Adam Petcher wrote:

On 8/8/2017 12:50 PM, Michael StJohns wrote:



Further, this separation will reduce the probability of 
programming errors (e.g. accidentally interpreting a Weierstrass 
point as an RFC 7748 point).


Um.  What?   It actually won't.


This is the sort of problem I want to avoid:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("ECDH");
KeyPair kp = kpg.generateKeyPair();
KeyFactory eckf = KeyFactory.getInstance("ECDH");
ECPrivateKeySpec priSpec = eckf.getKeySpec(kp.getPrivate(), 
ECPrivateKeySpec.class);


KeyFactory xdhkf = KeyFactory.getInstance("XDH");
PrivateKey xdhPrivate = xdhkf.generatePrivate(priSpec);

// Now use xdhPrivate for key agreement, which uses the wrong 
algorithm and curve, and may leak information about the private key


This is setting up a strawman and knocking it down.  It's already 
possible to do the above with any software based key - either 
directly or by pulling out the data.  Creating the API as you suggest 
will still not prevent this as long as I can retrieve the private 
value from the key.
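
For example, nothing stops code from lifting the scalar straight out of a 
software key - a short sketch of the point being made (what the caller 
then does with the scalar is exactly what the API cannot police):

    import java.math.BigInteger;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.interfaces.ECPrivateKey;

    public final class ScalarLeak {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            KeyPair kp = kpg.generateKeyPair();

            // Any software ECPrivateKey hands back its scalar on request...
            BigInteger s = ((ECPrivateKey) kp.getPrivate()).getS();

            // ...and from here it can be re-imported under whatever algorithm
            // or curve the caller likes.
            System.out.println("private scalar bits: " + s.bitLength());
        }
    }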


If you want absolute protection from this - go to hardware based keys.


The goal is the prevention of common programming errors that lead to 
security issues, not absolute protection like you would get from 
hardware crypto. More like the kind of assurance you get from a type 
system.


The engine implementation (provider plugin) is responsible for 
preventing the "common programming errors", not the API.  The JCA really 
doesn't have support for a type system style assurance.









A) We don't need to expose actual curve parameters over the API. 
Curves can be specified using names (e.g. "X25519") or OIDs. The 
underlying implementation will likely support arbitrary Montgomery 
curves, but the JCA application will only be able to use the 
supported named curves.


Strangely, this hasn't turned out all that well.  There needs to be 
a name, OID in the public space (primarily for the encodings and 
PKIX stuff) and to be honest - you really want the parameters in 
public space as well (the ECParameterSpec and its ilk) so that a 
given key can be used with different providers or even to play 
around internally with new curves before giving them a name.


I don't understand why we need public curve parameters to allow keys 
to be used with different providers. It seems like this should work 
as long as the providers all understand the OIDs or curve names. Can 
you explain this part a bit more?
Because names and OIDs get assigned later than the curve parameters.  
There are two parts to the JCA - the general crypto part and then 
there's the PKIX part.  For the EC stuff, they sort of overlap 
because of a desire not to have to have everyone remember each of the 
parameter sets (curves) and those sets are tagged by name(s) and 
OID.  But it's still perfectly possible to do EC math on curves that 
were generated elsewhere (or even with a curve where everything but 
the basepoint is the same with a public curve).


What you need to be able to do is to pass to an "older" provider a 
"newer" curve - assuming the curve fits within the math already 
implemented.  There's really no good reason to implement a whole new 
set of API changes just to permit a single new curve.


Okay, thanks. If I am reading this right, this feature supports 
interoperability with providers that don't know about a specific curve 
name/OID, but support all curves within some family. Without a way to 
express the curve parameters in JCA, you would resort to using some 
provider-specific parameter spec, which would be unfortunate.


Pretty much.   This is why I've been pushing back so hard on "just one 
more key type" style arguments.







Related to tinkering with new curves that don't have a name: I don't 
think that this is a feature that JCA needs to have. In the common 
use case, the programmer wants to only use standard algorithms and 
curves, and I think we should focus on that use case.


The common use case is much wider than you think it is.  I find 
myself using the curve parameters much more than I would like - 
specifically because I use JCA in conjunction with PKCS11, HSMs and 
smart cards.   So no - focusing on a software only subset of things 
is really not the right approach.


I actually would have expected hardware crypto to have *less* support 
for arbitrary curves, and so this issue would come up more with 
software implementations. Why does this come up so frequently in 
hardware?


Because the hardware tends to work either very generally or very 
specifically.  A PKCS11 big iron HSM for example mostly doesn't do 
multiple curves and requires that you just give them the curve OID, but 
the JCOP smart cards work over a broad set of curves - as long as you 
give them the entire curve data set.  But the JCOP cards don't do ASN1 
so I keep getting to have to conver
