[PATCH] crypto: testmgr - add aead cbc des, des3_ede tests
Test vectors were taken from the existing test for CBC(DES3_EDE). Associated
data has been added to the test vectors, and the HMAC values were computed
with Crypto++. The following algorithms are covered:

(a) authenc(hmac(sha1),cbc(des))
(b) authenc(hmac(sha1),cbc(des3_ede))
(c) authenc(hmac(sha224),cbc(des))
(d) authenc(hmac(sha224),cbc(des3_ede))
(e) authenc(hmac(sha256),cbc(des))
(f) authenc(hmac(sha256),cbc(des3_ede))
(g) authenc(hmac(sha384),cbc(des))
(h) authenc(hmac(sha384),cbc(des3_ede))
(i) authenc(hmac(sha512),cbc(des))
(j) authenc(hmac(sha512),cbc(des3_ede))

Signed-off-by: Vakul Garg va...@freescale.com
[niteshnarayan...@freescale.com: added hooks for the missing algorithms test
and tested the patch]
Signed-off-by: Nitesh Lal niteshnarayan...@freescale.com
---
 crypto/tcrypt.c  |  31 ++-
 crypto/testmgr.c | 174 ++-
 crypto/testmgr.h | 666 ++-
 3 files changed, 848 insertions(+), 23 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 09c93ff2..ba247cf 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1519,7 +1519,36 @@ static int do_test(int m)
 	case 157:
 		ret += tcrypt_test("authenc(hmac(sha1),ecb(cipher_null))");
 		break;
-
+	case 181:
+		ret += tcrypt_test("authenc(hmac(sha1),cbc(des))");
+		break;
+	case 182:
+		ret += tcrypt_test("authenc(hmac(sha1),cbc(des3_ede))");
+		break;
+	case 183:
+		ret += tcrypt_test("authenc(hmac(sha224),cbc(des))");
+		break;
+	case 184:
+		ret += tcrypt_test("authenc(hmac(sha224),cbc(des3_ede))");
+		break;
+	case 185:
+		ret += tcrypt_test("authenc(hmac(sha256),cbc(des))");
+		break;
+	case 186:
+		ret += tcrypt_test("authenc(hmac(sha256),cbc(des3_ede))");
+		break;
+	case 187:
+		ret += tcrypt_test("authenc(hmac(sha384),cbc(des))");
+		break;
+	case 188:
+		ret += tcrypt_test("authenc(hmac(sha384),cbc(des3_ede))");
+		break;
+	case 189:
+		ret += tcrypt_test("authenc(hmac(sha512),cbc(des))");
+		break;
+	case 190:
+		ret += tcrypt_test("authenc(hmac(sha512),cbc(des3_ede))");
+		break;
 	case 200:
 		test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
 				speed_template_16_24_32);

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index dc3cf35..40c99c6 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1831,8 +1831,38 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.aead = {
 				.enc = {
-					.vecs = hmac_sha1_aes_cbc_enc_tv_template,
-					.count = HMAC_SHA1_AES_CBC_ENC_TEST_VECTORS
+					.vecs =
+					hmac_sha1_aes_cbc_enc_tv_temp,
+					.count =
+					HMAC_SHA1_AES_CBC_ENC_TEST_VEC
+				}
+			}
+		}
+	}, {
+		.alg = "authenc(hmac(sha1),cbc(des))",
+		.test = alg_test_aead,
+		.fips_allowed = 1,
+		.suite = {
+			.aead = {
+				.enc = {
+					.vecs =
+					hmac_sha1_des_cbc_enc_tv_temp,
+					.count =
+					HMAC_SHA1_DES_CBC_ENC_TEST_VEC
+				}
+			}
+		}
+	}, {
+		.alg = "authenc(hmac(sha1),cbc(des3_ede))",
+		.test = alg_test_aead,
+		.fips_allowed = 1,
+		.suite = {
+			.aead = {
+				.enc = {
+					.vecs =
+					hmac_sha1_des3_ede_cbc_enc_tv_temp,
+					.count =
+					HMAC_SHA1_DES3_EDE_CBC_ENC_TEST_VEC
 				}
 			}
 		}
@@ -1843,12 +1873,44 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.aead = {
 				.enc = {
-					.vecs = hmac_sha1_ecb_cipher_null_enc_tv_template,
-					.count = HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VECTORS
+					.vecs =
Crypto Fixes for 3.15
Hi Linus:

This push fixes a NULL pointer dereference on allocation failure in caam, as
well as a regression in ctr mode on s390 that was introduced by the recent
concurrency fixes.

Please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git

or

master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6.git

Harald Freudenberger (1):
      crypto: s390 - fix aes,des ctr mode concurrency finding.

Horia Geanta (1):
      crypto: caam - add allocation failure handling in SPRINTFCAT macro

 arch/s390/crypto/aes_s390.c |  3 +++
 arch/s390/crypto/des_s390.c |  3 +++
 drivers/crypto/caam/error.c | 10 +++---
 3 files changed, 13 insertions(+), 3 deletions(-)

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 1/6] crypto: SHA1 multibuffer map scatter gather walk's buffer address directly for x86_64
On Thu, May 15, 2014 at 11:12:03AM -0700, Tim Chen wrote:
> > In which case your patch would basically do kmap if ASYNC and
> > kmap_atomic otherwise. I'll try to make such a patch.

Please see attached.

> It will be nice if scatterwalk_map and unmap can also be made async
> aware. Right now it also uses kmap_atomic. This will allow
> multi-buffer encryption to make use of it.

Is this for ciphers? Could you use ablkcipher_walk, since it doesn't
actually map anything, meaning that you can map it yourself?

commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4
Author: Herbert Xu herb...@gondor.apana.org.au
Date:   Wed May 21 20:56:12 2014 +0800

    crypto: hash - Add real ahash walk interface

    Although the existing hash walk interface has already been used by a
    number of ahash crypto drivers, it turns out that none of them were
    really asynchronous. They were all essentially polling for completion.

    That's why nobody has noticed until now that the walk interface
    couldn't work with a real asynchronous driver since the memory is
    mapped using kmap_atomic.

    As we now have a use-case for a real ahash implementation on x86, this
    patch creates a minimal ahash walk interface. Basically it just calls
    kmap instead of kmap_atomic and does away with the crypto_yield call.
    Real ahash crypto drivers don't need to yield since by definition they
    won't be hogging the CPU.
Signed-off-by: Herbert Xu herb...@gondor.apana.org.au

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 6e72233..f2a5d8f 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -15,6 +15,7 @@
 #include <crypto/internal/hash.h>
 #include <crypto/scatterwalk.h>
+#include <linux/bug.h>
 #include <linux/err.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -46,7 +47,10 @@ static int hash_walk_next(struct crypto_hash_walk *walk)
 	unsigned int nbytes = min(walk->entrylen,
				  ((unsigned int)(PAGE_SIZE)) - offset);
 
-	walk->data = kmap_atomic(walk->pg);
+	if (walk->flags & CRYPTO_ALG_ASYNC)
+		walk->data = kmap(walk->pg);
+	else
+		walk->data = kmap_atomic(walk->pg);
 	walk->data += offset;
 
 	if (offset & alignmask) {
@@ -93,8 +97,16 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 		return nbytes;
 	}
 
-	kunmap_atomic(walk->data);
-	crypto_yield(walk->flags);
+	if (walk->flags & CRYPTO_ALG_ASYNC)
+		kunmap(walk->pg);
+	else {
+		kunmap_atomic(walk->data);
+		/*
+		 * The may sleep test only makes sense for sync users.
+		 * Async users don't need to sleep here anyway.
+		 */
+		crypto_yield(walk->flags);
+	}
 
 	if (err)
 		return err;
@@ -124,12 +136,31 @@ int crypto_hash_walk_first(struct ahash_request *req,
 	walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
 	walk->sg = req->src;
-	walk->flags = req->base.flags;
+	walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK;
 
 	return hash_walk_new_entry(walk);
 }
 EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
 
+int crypto_ahash_walk_first(struct ahash_request *req,
+			    struct crypto_hash_walk *walk)
+{
+	walk->total = req->nbytes;
+
+	if (!walk->total)
+		return 0;
+
+	walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
+	walk->sg = req->src;
+	walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK;
+	walk->flags |= CRYPTO_ALG_ASYNC;
+
+	BUILD_BUG_ON(CRYPTO_TFM_REQ_MASK & CRYPTO_ALG_ASYNC);
+
+	return hash_walk_new_entry(walk);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_walk_first);
+
 int crypto_hash_walk_first_compat(struct hash_desc *hdesc,
				  struct crypto_hash_walk *walk,
				  struct scatterlist *sg, unsigned int len)
@@ -141,7 +172,7 @@ int crypto_hash_walk_first_compat(struct hash_desc *hdesc,
 	walk->alignmask = crypto_hash_alignmask(hdesc->tfm);
 	walk->sg = sg;
-	walk->flags = hdesc->flags;
+	walk->flags = hdesc->flags & CRYPTO_TFM_REQ_MASK;
 
 	return hash_walk_new_entry(walk);
 }

diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 821eae8..9b6f32a 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -55,15 +55,28 @@ extern const struct crypto_type crypto_ahash_type;
 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err);
 int crypto_hash_walk_first(struct ahash_request *req,
			   struct crypto_hash_walk *walk);
+int crypto_ahash_walk_first(struct ahash_request *req,
+			    struct crypto_hash_walk *walk);
 int crypto_hash_walk_first_compat(struct hash_desc *hdesc,
				  struct crypto_hash_walk *walk,
				  struct
Re: [PATCH v7 1/6] SP800-90A Deterministic Random Bit Generator
On Wednesday, May 21, 2014 at 06:18:58, Stephan Mueller wrote:

Hi,

> +/*
> + * Tests as defined in 11.3.2 in addition to the cipher tests: testing
> + * of the error handling.
> + *
> + * Note: testing of failing seed source as defined in 11.3.2 is not
> + * applicable as seed source of get_random_bytes does not fail.
> + *
> + * Note 2: There is no sensible way of testing the reseed counter
> + * enforcement, so skip it.
> + */
> +static inline int __init drbg_healthcheck_sanity(void)
> +{
> +#ifdef CONFIG_CRYPTO_FIPS
> +	unsigned int len = 0;

This variable must be a signed int, as otherwise the BUG_ON checks will
always fail. This error is a leftover from the return code change of
drbg_generate(). It will be fixed in the next release.

Ciao
Stephan
-- 
| Cui bono? |
Re: [PATCH 1/6] crypto: SHA1 multibuffer map scatter gather walk's buffer address directly for x86_64
On Wed, 2014-05-21 at 21:05 +0800, Herbert Xu wrote:
> On Thu, May 15, 2014 at 11:12:03AM -0700, Tim Chen wrote:
> > > In which case your patch would basically do kmap if ASYNC and
> > > kmap_atomic otherwise. I'll try to make such a patch.
> 
> Please see attached.
> 
> > It will be nice if scatterwalk_map and unmap can also be made async
> > aware. Right now it also uses kmap_atomic. This will allow
> > multi-buffer encryption to make use of it.
> 
> Is this for ciphers? Could you use ablkcipher_walk, since it doesn't
> actually map anything, meaning that you can map it yourself?
> 
> commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4
> Author: Herbert Xu herb...@gondor.apana.org.au
> Date:   Wed May 21 20:56:12 2014 +0800
> 
>     crypto: hash - Add real ahash walk interface
> 
>     Although the existing hash walk interface has already been used by a
>     number of ahash crypto drivers, it turns out that none of them were
>     really asynchronous. They were all essentially polling for
>     completion.
> 
>     That's why nobody has noticed until now that the walk interface
>     couldn't work with a real asynchronous driver since the memory is
>     mapped using kmap_atomic.
> 
>     As we now have a use-case for a real ahash implementation on x86,
>     this patch creates a minimal ahash walk interface. Basically it just
>     calls kmap instead of kmap_atomic and does away with the
>     crypto_yield call. Real ahash crypto drivers don't need to yield
>     since by definition they won't be hogging the CPU.
> 
> Signed-off-by: Herbert Xu herb...@gondor.apana.org.au

Herbert,

Thanks for the patch. I'll test it out for multi-buffer sha1 and then send
out an update of the patch series.

Tim