Re: [PATCH 2/2] Doc:crypto: Fix typo in crypto-API.xml
On Thursday, 4 June 2015, 00:01:21, Masanari Iida wrote:

Hi Masanari,

> This patch fixes some typos found in crypto-API.xml. Since that file is
> generated from comments in the sources, I had to fix the typos in the
> sources.
>
> Signed-off-by: Masanari Iida standby2...@gmail.com

Acked-by: Stephan Mueller smuel...@chronox.de

--
Ciao
Stephan
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH v9 4/4] crypto: Add Allwinner Security System crypto accelerator
On 23/05/2015 16:35, Boris Brezillon wrote:

Hi Corentin,

On Sat, 23 May 2015 15:12:23 +0200, Corentin LABBE clabbe.montj...@gmail.com wrote:

On 17/05/2015 10:45, Boris Brezillon wrote:

Hi Corentin,

I started to review this new version, and I still think there's something wrong with the way you're processing crypto requests. From my POV this is not asynchronous at all (see my comments inline), but maybe Herbert can confirm that. I haven't reviewed the hash part yet, but I guess it has the same problem.

To resume your conversation with Herbert: I have removed all CRYPTO_ALG_ASYNC flags.

Okay. I really think you can easily deal with asynchronous requests (I had a look at the datasheet), but I'll let the maintainers decide whether this is important or not.

+
+	if (areq->src == NULL || areq->dst == NULL) {
+		dev_err(ss->dev, "ERROR: Some SGs are NULL\n");
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&ss->slock, flags);

Do you really need to take this lock so early? BTW, what is this lock protecting?

As I have written in the header, the spinlock protects the usage of the device. In this case, I need to lock just before writing the key into the device.

I'm not denying the fact that you need some locking, just the way this locking is done: you're preventing the whole system from receiving interrupts for the whole time you're doing your crypto request.

Here is a suggestion (if you still want to keep the synchronous model, which IMHO is not a good idea, but hey, that's not my call to make):

1/ wait for the device to be ready (using a waitqueue)
2/ take the lock
3/ check if the engine is busy (already handling another crypto request)
4.1/ if the engine is not busy, mark the engine as busy, release the lock, and proceed with the crypto request handling
4.2/ if the engine is busy, release the lock and go back to 1/

I have done a version with a crypto_queue and a kthread. This works perfectly, but... the performance is really, really bad.
Never more than 20k requests/s, while both the generic implementation and the synchronous SS driver can go beyond 80k. With this asynchronous driver, the Security System becomes useful only with requests larger than 2048 bytes. I have patched my driver to gather stats on request size, and this shows that real usage is smaller than that (512 bytes for cryptsetup/LUKS, for example). Furthermore, my current patch for using the PRNG cannot be used with the kthread, since it uses the hwrng interface. But perhaps by also using a crypto_rng alg it could be done. So I think that if I want to make the SS driver useful, I cannot make it asynchronous. Perhaps when the DMA engine becomes available, this will change. I have attached the patch that makes my driver asynchronous, for comments on possible improvements.

IMHO, taking a spinlock and disabling irqs for the whole time you're executing a crypto request is not a good idea (it will prevent all other irqs from running, and potentially introduce latencies in other parts of the kernel).

Since crypto operations can be called from software interrupt context, I need to disable them. (Confirmed by http://www.makelinux.net/ldd3/chp-5-sect-5, section 5.5.3.)

Hm, you're not even using the interrupts provided by the IP to detect when the engine is ready to accept new data chunks (which is another aspect that should be addressed, IMO), so I don't get why you need to disable HW interrupts. If you just want to disable SW interrupts, you can use spin_lock_bh() instead of spin_lock_irqsave().

Thanks, I use spin_lock_bh() now.

What you can do, though, is declare the following fields in your crypto engine struct (sun4i_ss_ctx):
- a crypto request queue (struct crypto_queue [1])
- a crypto_async_request variable storing the request being processed
- a lock protecting the queue and the current request variable

This way you'll only have to take the lock when queuing or dequeuing a request.
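The scheme suggested above — one lock that protects only the queue and the "current request" slot, never held across the actual hardware work — can be sketched with a userspace analogue (Python, illustrative names; not kernel code):

```python
import threading
from collections import deque

class Engine:
    """Userspace analogue of the suggested sun4i_ss_ctx layout:
    a request queue, a current-request slot, and one lock that
    protects only those two fields, never the whole operation."""

    def __init__(self):
        self.lock = threading.Lock()   # protects queue + current only
        self.queue = deque()
        self.current = None
        self.results = []

    def enqueue(self, req):
        with self.lock:                # short critical section
            self.queue.append(req)

    def process_one(self):
        with self.lock:                # short critical section
            if self.current is not None or not self.queue:
                return False
            self.current = self.queue.popleft()
        # The long-running "hardware" work runs with the lock dropped,
        # so other contexts can still queue requests meanwhile.
        self.results.append(self.current * 2)
        with self.lock:
            self.current = None
        return True

engine = Engine()
for req in range(5):
    engine.enqueue(req)
while engine.process_one():
    pass
print(engine.results)  # -> [0, 2, 4, 6, 8]
```

The point of the design is that contention is limited to the enqueue/dequeue instants, instead of serializing whole requests behind one long-held spinlock.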
Another comment: your implementation does not seem to be asynchronous at all, since you're blocking the caller until its crypto request is complete.

+
+	for (i = 0; i < op->keylen; i += 4)
+		writel(*(op->key + i / 4), ss->base + SS_KEY0 + i);
+
+	if (areq->info != NULL) {
+		for (i = 0; i < 4 && i < ivsize / 4; i++) {
+			v = *(u32 *)(areq->info + i * 4);
+			writel(v, ss->base + SS_IV0 + i * 4);
+		}
+	}
+	writel(mode, ss->base + SS_CTL);
+
+	sgnum = sg_nents(areq->src);
+	if (sgnum == 1)
+		miter_flag = SG_MITER_FROM_SG | SG_MITER_ATOMIC;
+	else
+		miter_flag = SG_MITER_FROM_SG;

Why does the ATOMIC flag depend on the number of sg elements? IMO it should only depend on the context you're currently in, and in your specific case you're always in atomic context, since you've taken a spinlock (and disabled irqs) a few lines above. Note that with the approach I previously proposed, you can even get rid of this ATOMIC flag (or always set it depending on
[PATCH RFC v3 2/3] crypto: RSA: KEYS: convert rsa and public key to new PKE API
Change the existing rsa and public key code to integrate it with the new
Public Key Encryption API.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/asymmetric_keys/Kconfig            |    1
 crypto/asymmetric_keys/Makefile           |    1
 crypto/asymmetric_keys/pkcs7_parser.c     |    2
 crypto/asymmetric_keys/pkcs7_trust.c      |    2
 crypto/asymmetric_keys/pkcs7_verify.c     |    2
 crypto/asymmetric_keys/public_key.c       |   53 +-
 crypto/asymmetric_keys/public_key.h       |   36 --
 crypto/asymmetric_keys/rsa.c              |  467 -
 crypto/asymmetric_keys/rsa_pkcs1_v1_5.c   |  259
 crypto/asymmetric_keys/x509_cert_parser.c |    2
 crypto/asymmetric_keys/x509_public_key.c  |    4
 include/crypto/public_key.h               |   11 -
 12 files changed, 540 insertions(+), 300 deletions(-)
 delete mode 100644 crypto/asymmetric_keys/public_key.h
 create mode 100644 crypto/asymmetric_keys/rsa_pkcs1_v1_5.c

diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig
index 4870f28..4d27116 100644
--- a/crypto/asymmetric_keys/Kconfig
+++ b/crypto/asymmetric_keys/Kconfig
@@ -23,6 +23,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE
 config PUBLIC_KEY_ALGO_RSA
 	tristate "RSA public-key algorithm"
 	select MPILIB
+	select CRYPTO_AKCIPHER
 	help
 	  This option enables support for the RSA algorithm (PKCS#1, RFC3447).
diff --git a/crypto/asymmetric_keys/Makefile b/crypto/asymmetric_keys/Makefile
index e47fcd9..a9cb1b8 100644
--- a/crypto/asymmetric_keys/Makefile
+++ b/crypto/asymmetric_keys/Makefile
@@ -8,6 +8,7 @@ asymmetric_keys-y := asymmetric_type.o signature.o
 obj-$(CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE) += public_key.o
 obj-$(CONFIG_PUBLIC_KEY_ALGO_RSA) += rsa.o
+obj-$(CONFIG_PUBLIC_KEY_ALGO_RSA) += rsa_pkcs1_v1_5.o
 
 #
 # X.509 Certificate handling

diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 3bd5a1e..054f110 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -15,7 +15,7 @@
 #include <linux/slab.h>
 #include <linux/err.h>
 #include <linux/oid_registry.h>
-#include "public_key.h"
+#include <crypto/public_key.h>
 #include "pkcs7_parser.h"
 #include "pkcs7-asn1.h"

diff --git a/crypto/asymmetric_keys/pkcs7_trust.c b/crypto/asymmetric_keys/pkcs7_trust.c
index 1d29376..68ebae2 100644
--- a/crypto/asymmetric_keys/pkcs7_trust.c
+++ b/crypto/asymmetric_keys/pkcs7_trust.c
@@ -17,7 +17,7 @@
 #include <linux/asn1.h>
 #include <linux/key.h>
 #include <keys/asymmetric-type.h>
-#include "public_key.h"
+#include <crypto/public_key.h>
 #include "pkcs7_parser.h"
 
 /**

diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index cd45545..c32a337 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -16,7 +16,7 @@
 #include <linux/err.h>
 #include <linux/asn1.h>
 #include <crypto/hash.h>
-#include "public_key.h"
+#include <crypto/public_key.h>
 #include "pkcs7_parser.h"
 
 /*

diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 2f6e4fb..4685aed 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -18,30 +18,26 @@
 #include <linux/slab.h>
 #include <linux/seq_file.h>
 #include <keys/asymmetric-subtype.h>
-#include "public_key.h"
+#include <crypto/public_key.h>
+#include <crypto/akcipher.h>
 
 MODULE_LICENSE("GPL");
 const char *const pkey_algo_name[PKEY_ALGO__LAST] = {
-	[PKEY_ALGO_DSA] = "DSA",
-	[PKEY_ALGO_RSA] = "RSA",
+	[PKEY_ALGO_DSA] = "dsa",
+	[PKEY_ALGO_RSA] = "rsa",
 };
 EXPORT_SYMBOL_GPL(pkey_algo_name);
 
-const struct public_key_algorithm *pkey_algo[PKEY_ALGO__LAST] = {
-#if defined(CONFIG_PUBLIC_KEY_ALGO_RSA) || \
-	defined(CONFIG_PUBLIC_KEY_ALGO_RSA_MODULE)
-	[PKEY_ALGO_RSA] = &RSA_public_key_algorithm,
-#endif
-};
-EXPORT_SYMBOL_GPL(pkey_algo);
-
 const char *const pkey_id_type_name[PKEY_ID_TYPE__LAST] = {
 	[PKEY_ID_PGP] = "PGP",
 	[PKEY_ID_X509] = "X509",
 };
 EXPORT_SYMBOL_GPL(pkey_id_type_name);
 
+int rsa_pkcs1_v1_5_verify_signature(const struct public_key *pkey,
+				    const struct public_key_signature *sig);
+
 /*
  * Provide a part of a description of the key for /proc/keys.
  */
@@ -52,7 +48,8 @@ static void public_key_describe(const struct key *asymmetric_key,
 
 	if (key)
 		seq_printf(m, "%s.%s",
-			   pkey_id_type_name[key->id_type], key->algo->name);
+			   pkey_id_type_name[key->id_type],
+			   pkey_algo_name[key->pkey_algo]);
 }
 
 /*
@@ -74,37 +71,20 @@ EXPORT_SYMBOL_GPL(public_key_destroy);
 /*
  * Verify a signature using a public key.
  */
-int public_key_verify_signature(const struct public_key *pk,
+int public_key_verify_signature(const struct public_key *pkey,
[PATCH RFC v3 0/3] crypto: Introduce Public Key Encryption API
This patch set introduces a Public Key Encryption API. What is proposed is a new crypto type called crypto_pkey_type, plus new struct pkey_alg and struct pkey_tfm, together with a number of helper functions to register pkey type algorithms and to allocate tfm instances. This makes it similar to how the existing crypto API works for the ablkcipher, ahash, and aead types.

The operations the new interface will allow implementations to provide are:

	int (*sign)(struct pkey_request *pkeyreq);
	int (*verify)(struct pkey_request *pkeyreq);
	int (*encrypt)(struct pkey_request *pkeyreq);
	int (*decrypt)(struct pkey_request *pkeyreq);

The benefits it gives compared to the struct public_key_algorithm interface are:

- drivers can add many implementations of the RSA or DSA algorithms, and users will allocate instances (tfms) of these, based on algorithm priority, in the same way as with the symmetric ciphers.
- the new interface allows for asynchronous implementations that can offload the calculations to crypto hardware.
- integrating it with the linux crypto api brings all of its benefits, e.g. managing algorithms using NETLINK_CRYPTO, monitoring implementations via /proc/crypto, etc.

New helper functions have been added to allocate pkey_tfm instances and to invoke the operations, to make the API easier to use. For instance, to verify a public_signature against a public_key using the RSA algorithm, a user would do:

	struct crypto_pkey *tfm = crypto_alloc_pkey("rsa", 0, 0);
	struct pkey_request *req = pkey_request_alloc(tfm, GFP_KERNEL);

	pkey_request_set_crypt(req, pub_key, signature);
	int ret = crypto_pkey_verify(req);
	pkey_request_free(req);
	crypto_free_pkey(tfm);
	return ret;

Additionally, the existing public_key and rsa code has been reworked to use the new interface for verifying signed modules. As part of the rework, the struct public_key_algorithm type has been removed.
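The verify operation being wrapped here ultimately reduces to the RSA public-key primitive: checking that sig^e mod n reproduces the encoded message digest. A toy illustration of that math (numbers are purely illustrative and far too small to be secure; they are not from this patch set):

```python
# Toy RSA key (illustrative only).
n = 3233          # modulus, 61 * 53
e = 17            # public exponent
d = 413           # private exponent: e*d == 1 (mod lcm(60, 52))

digest = 65                 # stands in for the encoded message hash
sig = pow(digest, d, n)     # signer: sig = digest^d mod n

# Verifier side: recover the digest from the signature using only
# the public key (n, e) and compare it with the expected value.
recovered = pow(sig, e, n)  # recovered = sig^e mod n
print(recovered == digest)  # -> True
```

A real implementation also applies the PKCS#1 v1.5 encoding to the digest before this comparison, which is exactly what the new rsa_pkcs1_v1_5.c file keeps outside the raw primitive.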
An algorithm instance is allocated using crypto_alloc_pkey(), with a name defined in the pkey_algo_name table, indexed by the pkey_algo enum that comes from the public key. In the future this can be replaced by a string name obtained directly from the public key cert.

Changes in v3:
- changed the input and output parameter types from sgl to void * and added separate src_len and dst_len, as requested by Herbert Xu
- separated the rsa implementation into cryptographic primitives and left the encryption scheme details outside of the algorithm implementation
- added a SW implementation for the RSA encrypt, decrypt and sign operations
- added RSA test vectors

Changes in v2:
- remodeled not to use the obsolete cra_u and crt_u unions
- changed type/function names from pke_* to pkey_*
- retained the enum pkey_algo type, as it is external to the kernel
- added documentation

---

Tadeusz Struk (3):
      crypto: add PKE API
      crypto: RSA: KEYS: convert rsa and public key to new PKE API
      crypto: add tests vectors for RSA

 crypto/Kconfig                            |    6
 crypto/Makefile                           |    1
 crypto/akcipher.c                         |  100 ++
 crypto/asymmetric_keys/Kconfig            |    1
 crypto/asymmetric_keys/Makefile           |    1
 crypto/asymmetric_keys/pkcs7_parser.c     |    2
 crypto/asymmetric_keys/pkcs7_trust.c      |    2
 crypto/asymmetric_keys/pkcs7_verify.c     |    2
 crypto/asymmetric_keys/public_key.c       |   53 +-
 crypto/asymmetric_keys/public_key.h       |   36 --
 crypto/asymmetric_keys/rsa.c              |  467 -
 crypto/asymmetric_keys/rsa_pkcs1_v1_5.c   |  259
 crypto/asymmetric_keys/x509_cert_parser.c |    2
 crypto/asymmetric_keys/x509_public_key.c  |    4
 crypto/crypto_user.c                      |   23 +
 crypto/testmgr.c                          |  151 +
 crypto/testmgr.h                          |   86 +
 include/crypto/akcipher.h                 |  385
 include/crypto/public_key.h               |   11 -
 include/linux/crypto.h                    |    1
 include/linux/cryptouser.h                |    6
 21 files changed, 1299 insertions(+), 300 deletions(-)
 create mode 100644 crypto/akcipher.c
 delete mode 100644 crypto/asymmetric_keys/public_key.h
 create mode 100644 crypto/asymmetric_keys/rsa_pkcs1_v1_5.c
 create mode 100644 include/crypto/akcipher.h
[PATCH RFC v3 1/3] crypto: add PKE API
Add Public Key Encryption API.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/Kconfig             |    6 +
 crypto/Makefile            |    1
 crypto/akcipher.c          |  100 +++
 crypto/crypto_user.c       |   23 +++
 include/crypto/akcipher.h  |  385
 include/linux/crypto.h     |    1
 include/linux/cryptouser.h |    6 +
 7 files changed, 522 insertions(+)
 create mode 100644 crypto/akcipher.c
 create mode 100644 include/crypto/akcipher.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0ff4cd4..917f880 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -87,6 +87,12 @@ config CRYPTO_PCOMP2
 	tristate
 	select CRYPTO_ALGAPI2
 
+config CRYPTO_AKCIPHER
+	tristate "Public Key Algorithms API"
+	select CRYPTO_ALGAPI
+	help
+	  Crypto API interface for public key algorithms.
+
 config CRYPTO_MANAGER
 	tristate "Cryptographic algorithm manager"
 	select CRYPTO_MANAGER2

diff --git a/crypto/Makefile b/crypto/Makefile
index 5db5b95..1ed2929 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -28,6 +28,7 @@ crypto_hash-y += shash.o
 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 
 obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o
+obj-$(CONFIG_CRYPTO_AKCIPHER) += akcipher.o
 
 cryptomgr-y := algboss.o testmgr.o

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
new file mode 100644
index 000..92da8da8
--- /dev/null
+++ b/crypto/akcipher.c
@@ -0,0 +1,100 @@
+/*
+ * Public Key Encryption
+ *
+ * Copyright (c) 2015, Intel Corporation
+ * Authors: Tadeusz Struk tadeusz.st...@intel.com
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
+#include <linux/cryptouser.h>
+#include <net/netlink.h>
+#include <crypto/akcipher.h>
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_akcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct crypto_report_akcipher rakcipher;
+
+	strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
+	strncpy(rakcipher.subtype, alg->cra_name, sizeof(rakcipher.subtype));
+
+	if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER,
+		    sizeof(struct crypto_report_akcipher), &rakcipher))
+		goto nla_put_failure;
+	return 0;
+
+nla_put_failure:
+	return -EMSGSIZE;
+}
+#else
+static int crypto_akcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	return -ENOSYS;
+}
+#endif
+
+static void crypto_akcipher_show(struct seq_file *m, struct crypto_alg *alg)
+	__attribute__ ((unused));
+
+static void crypto_akcipher_show(struct seq_file *m, struct crypto_alg *alg)
+{
+	seq_puts(m, "type : akcipher\n");
+	seq_printf(m, "subtype : %s\n", alg->cra_name);
+}
+
+static int crypto_akcipher_init(struct crypto_tfm *tfm)
+{
+	return 0;
+}
+
+static const struct crypto_type crypto_akcipher_type = {
+	.extsize = crypto_alg_extsize,
+	.init_tfm = crypto_akcipher_init,
+#ifdef CONFIG_PROC_FS
+	.show = crypto_akcipher_show,
+#endif
+	.report = crypto_akcipher_report,
+	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.type = CRYPTO_ALG_TYPE_AKCIPHER,
+	.tfmsize = offsetof(struct crypto_akcipher, base),
+};
+
+struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
+					      u32 mask)
+{
+	return crypto_alloc_tfm(alg_name, &crypto_akcipher_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);
+
+int crypto_register_akcipher(struct akcipher_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+
+	base->cra_type = &crypto_akcipher_type;
+
+	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+	base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
+	return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_akcipher);
+
+void crypto_unregister_akcipher(struct akcipher_alg *alg)
+{
+	crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_akcipher);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Generic public key cipher type");

diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
index 41dfe76..508e71d 100644
--- a/crypto/crypto_user.c
+++ b/crypto/crypto_user.c
@@ -27,6 +27,7 @@
 #include <net/net_namespace.h>
 #include <crypto/internal/aead.h>
 #include <crypto/internal/skcipher.h>
+#include <crypto/akcipher.h>
 #include "internal.h"
 
@@ -110,6 +111,22 @@
 nla_put_failure:
 	return -EMSGSIZE;
 }
 
+static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct
[PATCH RFC v3 3/3] crypto: add tests vectors for RSA
New test vectors for the RSA algorithm.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/testmgr.c |  151 ++
 crypto/testmgr.h |   86 +++
 2 files changed, 237 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 717d6f2..54a5412 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -30,6 +30,8 @@
 #include <linux/string.h>
 #include <crypto/rng.h>
 #include <crypto/drbg.h>
+#include <crypto/public_key.h>
+#include <crypto/akcipher.h>
 #include "internal.h"
 
@@ -116,6 +118,11 @@ struct drbg_test_suite {
 	unsigned int count;
 };
 
+struct akcipher_test_suite {
+	struct akcipher_testvec *vecs;
+	unsigned int count;
+};
+
 struct alg_test_desc {
 	const char *alg;
 	int (*test)(const struct alg_test_desc *desc, const char *driver,
@@ -130,6 +137,7 @@ struct alg_test_desc {
 		struct hash_test_suite hash;
 		struct cprng_test_suite cprng;
 		struct drbg_test_suite drbg;
+		struct akcipher_test_suite akcipher;
 	} suite;
 };
 
@@ -1825,6 +1833,139 @@ static int alg_test_drbg(const struct alg_test_desc *desc, const char *driver,
 }
 
+static int do_test_rsa(struct crypto_akcipher *tfm,
+		       struct akcipher_testvec *vecs)
+{
+	struct akcipher_request *req;
+	struct public_key pkey;
+	void *outbuf_enc = NULL;
+	void *outbuf_dec = NULL;
+	struct tcrypt_result result;
+	unsigned int out_len = vecs->c_size;
+	int err = -ENOMEM;
+
+	req = akcipher_request_alloc(tfm, GFP_KERNEL);
+	if (!req)
+		return err;
+
+	pkey.rsa.n = mpi_read_raw_data(vecs->pub_key_n, vecs->pub_key_n_size);
+	if (!pkey.rsa.n)
+		goto free_req;
+
+	pkey.rsa.e = mpi_read_raw_data(vecs->pub_key_e, vecs->pub_key_e_size);
+	if (!pkey.rsa.e)
+		goto free_n;
+
+	pkey.rsa.d = mpi_read_raw_data(vecs->sec_key_d, vecs->sec_key_d_size);
+	if (!pkey.rsa.d)
+		goto free_e;
+
+	outbuf_enc = kzalloc(vecs->c_size, GFP_KERNEL);
+	if (!outbuf_enc)
+		goto free_d;
+
+	/* Run RSA encrypt - c = m^e mod n; */
+	init_completion(&result.completion);
+	crypto_akcipher_setkey(tfm, &pkey);
+	akcipher_request_set_crypt(req, vecs->m, outbuf_enc, vecs->m_size,
+				   out_len,
+				   out_len);
+	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				      tcrypt_complete, &result);
+	err = wait_async_op(&result, crypto_akcipher_encrypt(req));
+	if (err) {
+		pr_err("alg: rsa: encrypt test failed. err %d\n", err);
+		goto free_all;
+	}
+
+	if (out_len != vecs->c_size) {
+		err = -EINVAL;
+		goto free_all;
+	}
+
+	outbuf_dec = kzalloc(out_len, GFP_KERNEL);
+	if (!outbuf_dec) {
+		err = -ENOMEM;
+		goto free_all;
+	}
+
+	init_completion(&result.completion);
+	akcipher_request_set_crypt(req, outbuf_enc, outbuf_dec, vecs->c_size,
+				   out_len, out_len);
+	/* Run RSA decrypt - m = c^d mod n; */
+	err = wait_async_op(&result, crypto_akcipher_decrypt(req));
+	if (err) {
+		pr_err("alg: rsa: decrypt test failed. err %d\n", err);
+		goto free_all;
+	}
+
+	if (out_len != vecs->m_size) {
+		err = -EINVAL;
+		goto free_all;
+	}
+
+	/* verify that the decrypted message is equal to the original msg */
+	if (memcmp(vecs->m, outbuf_dec, vecs->m_size)) {
+		pr_err("alg: rsa: encrypt test failed. Invalid output\n");
+		err = -EINVAL;
+	}
+free_all:
+	kfree(outbuf_dec);
+	kfree(outbuf_enc);
+free_d:
+	mpi_free(pkey.rsa.d);
+free_e:
+	mpi_free(pkey.rsa.e);
+free_n:
+	mpi_free(pkey.rsa.n);
+free_req:
+	akcipher_request_free(req);
+	return err;
+}
+
+static int test_rsa(struct crypto_akcipher *tfm, struct akcipher_testvec *vecs,
+		    unsigned int tcount)
+{
+	int ret, i;
+
+	for (i = 0; i < tcount; i++) {
+		ret = do_test_rsa(tfm, vecs++);
+		if (ret) {
+			pr_err("alg: rsa: test failed on vector %d\n", i + 1);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int test_akcipher(struct crypto_akcipher *tfm, const char *alg,
+			 struct akcipher_testvec *vecs, unsigned int tcount)
+{
+	if (strncmp(alg, "rsa", 3) == 0)
+		return test_rsa(tfm, vecs, tcount);
+
+	return 0;
+}
+
+static int alg_test_akcipher(const struct alg_test_desc *desc,
+			     const char *driver, u32 type, u32 mask)
+{
+	struct crypto_akcipher *tfm;
+	int err = 0;
+
+	tfm =
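do_test_rsa() above exercises a full round trip: encrypt with the public exponent, check the ciphertext length, decrypt with the private exponent, and memcmp the result against the original message. The same round trip with toy numbers (illustrative only, unrelated to the vectors in testmgr.h):

```python
# Toy RSA parameters (illustrative only, nothing like real key sizes).
n = 143          # modulus, 11 * 13
e = 7            # public exponent
d = 43           # private exponent: e*d == 1 (mod lcm(10, 12))

m = 42                    # "plaintext" message
c = pow(m, e, n)          # encrypt: c = m^e mod n
m2 = pow(c, d, n)         # decrypt: m = c^d mod n

# The kernel test's final check: decrypted output equals the input.
print(m2 == m)            # -> True
```

The kernel test does exactly this, except with multi-precision integers (MPIs) built from the test vector bytes and with the arithmetic performed by the registered akcipher implementation rather than in the test itself.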
Re: [PATCH RFC v3 3/3] crypto: add tests vectors for RSA
On Wednesday, 3 June 2015, 15:44:24, Tadeusz Struk wrote:

Hi Tadeusz,

New test vectors for the RSA algorithm.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/testmgr.c |  151 ++
 crypto/testmgr.h |   86 +++
 2 files changed, 237 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 717d6f2..54a5412 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -30,6 +30,8 @@
 #include <linux/string.h>
 #include <crypto/rng.h>
 #include <crypto/drbg.h>
+#include <crypto/public_key.h>
+#include <crypto/akcipher.h>
 #include "internal.h"
 
@@ -116,6 +118,11 @@ struct drbg_test_suite {
 	unsigned int count;
 };
 
+struct akcipher_test_suite {
+	struct akcipher_testvec *vecs;
+	unsigned int count;
+};
+
 struct alg_test_desc {
 	const char *alg;
 	int (*test)(const struct alg_test_desc *desc, const char *driver,
@@ -130,6 +137,7 @@ struct alg_test_desc {
 		struct hash_test_suite hash;
 		struct cprng_test_suite cprng;
 		struct drbg_test_suite drbg;
+		struct akcipher_test_suite akcipher;
 	} suite;
 };
 
@@ -1825,6 +1833,139 @@ static int alg_test_drbg(const struct alg_test_desc *desc, const char *driver,
 }
 
+static int do_test_rsa(struct crypto_akcipher *tfm,
+		       struct akcipher_testvec *vecs)
+{
+	struct akcipher_request *req;
+	struct public_key pkey;
+	void *outbuf_enc = NULL;
+	void *outbuf_dec = NULL;
+	struct tcrypt_result result;
+	unsigned int out_len = vecs->c_size;
+	int err = -ENOMEM;
+
+	req = akcipher_request_alloc(tfm, GFP_KERNEL);
+	if (!req)
+		return err;
+
+	pkey.rsa.n = mpi_read_raw_data(vecs->pub_key_n, vecs->pub_key_n_size);
+	if (!pkey.rsa.n)
+		goto free_req;
+
+	pkey.rsa.e = mpi_read_raw_data(vecs->pub_key_e, vecs->pub_key_e_size);
+	if (!pkey.rsa.e)
+		goto free_n;
+
+	pkey.rsa.d = mpi_read_raw_data(vecs->sec_key_d, vecs->sec_key_d_size);
+	if (!pkey.rsa.d)
+		goto free_e;
+
+	outbuf_enc = kzalloc(vecs->c_size, GFP_KERNEL);
+	if (!outbuf_enc)
+		goto free_d;
+
+	/* Run RSA encrypt - c = m^e mod n; */
+	init_completion(&result.completion);
+	crypto_akcipher_setkey(tfm, &pkey);
+
+	akcipher_request_set_crypt(req, vecs->m, outbuf_enc, vecs->m_size,
+				   out_len, out_len);
+	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				      tcrypt_complete, &result);
+	err = wait_async_op(&result, crypto_akcipher_encrypt(req));
+	if (err) {
+		pr_err("alg: rsa: encrypt test failed. err %d\n", err);
+		goto free_all;
+	}
+
+	if (out_len != vecs->c_size) {
+		err = -EINVAL;
+		goto free_all;
+	}
+

May I ask that outbuf_enc be memcmp()ed with an expected value? This check is required for FIPS 140-2 compliance. Without that memcmp, FIPS 140-2 validations will not be successful.

+	outbuf_dec = kzalloc(out_len, GFP_KERNEL);
+	if (!outbuf_dec) {
+		err = -ENOMEM;
+		goto free_all;
+	}
+
+	init_completion(&result.completion);
+	akcipher_request_set_crypt(req, outbuf_enc, outbuf_dec, vecs->c_size,
+				   out_len, out_len);
+	/* Run RSA decrypt - m = c^d mod n; */
+	err = wait_async_op(&result, crypto_akcipher_decrypt(req));
+	if (err) {
+		pr_err("alg: rsa: decrypt test failed. err %d\n", err);
+		goto free_all;
+	}
+
+	if (out_len != vecs->m_size) {
+		err = -EINVAL;
+		goto free_all;
+	}
+
+	/* verify that the decrypted message is equal to the original msg */
+	if (memcmp(vecs->m, outbuf_dec, vecs->m_size)) {
+		pr_err("alg: rsa: encrypt test failed. Invalid output\n");
+		err = -EINVAL;
+	}
+free_all:
+	kfree(outbuf_dec);
+	kfree(outbuf_enc);
+free_d:
+	mpi_free(pkey.rsa.d);
+free_e:
+	mpi_free(pkey.rsa.e);
+free_n:
+	mpi_free(pkey.rsa.n);
+free_req:
+	akcipher_request_free(req);
+	return err;
+}
+
+static int test_rsa(struct crypto_akcipher *tfm, struct akcipher_testvec *vecs,
+		    unsigned int tcount)
+{
+	int ret, i;
+
+	for (i = 0; i < tcount; i++) {
+		ret = do_test_rsa(tfm, vecs++);
+		if (ret) {
+			pr_err("alg: rsa: test failed on vector %d\n", i + 1);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int test_akcipher(struct crypto_akcipher *tfm, const char *alg,
+			 struct akcipher_testvec *vecs, unsigned int tcount)
+{
+	if (strncmp(alg, "rsa", 3) == 0)
+		return test_rsa(tfm, vecs, tcount);
+
[PATCH 0/8] crypto: Avoid using RNG in interrupt context
Hi:

Currently we always use the stdrng in interrupt context, which doesn't work very well with DRBG, which cannot be called there. We could change DRBG, but it really does a lot of work (e.g., a reseed) in its generation function, and doing that in interrupt context would be bad.

In fact, the only reason we were doing this in interrupt context is to conserve entropy, which is a non-issue with DRBG.

So this series changes all the RNG users to only use the RNG in process context, and then makes DRBG the default RNG.

Cheers,
--
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
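The pattern this series moves to — draw from the RNG once, in process context (init time), and keep the per-request fast path free of RNG calls — can be illustrated with a userspace analogue (Python; names are illustrative, not the kernel's):

```python
import os

class GenivCtx:
    """Analogue of an IV-generator ctx: the salt is drawn once at
    init time ('process context'), not lazily on the first request."""
    def __init__(self, ivsize):
        # The RNG call happens here, where sleeping or a DRBG reseed
        # would be acceptable.
        self.ivsize = ivsize
        self.salt = os.urandom(ivsize)

def generate_iv(ctx, seqno):
    """Per-request fast path ('interrupt context'): no RNG call,
    just sequence-number XOR salt, roughly what seqiv computes."""
    seq = seqno.to_bytes(ctx.ivsize, "big")
    return bytes(a ^ b for a, b in zip(seq, ctx.salt))

ctx = GenivCtx(8)
iv1 = generate_iv(ctx, 1)
iv2 = generate_iv(ctx, 2)
```

Because consecutive sequence numbers differ, the generated IVs differ too, even though the salt never changes after init; that is why moving the seeding to init costs nothing in IV uniqueness.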
[CFP] Reminder: Linux Security Summit 2015 CFP closes this Friday 5th June
Just a reminder to folks who've done interesting things in Linux security this year: the CFP for LSS 2015 is open until this Friday, 5th June. See the following link for details:

http://kernsec.org/wiki/index.php/Linux_Security_Summit_2015

This is not just for kernel developers, or even developers -- any interesting/novel application of Linux security, or research, is welcome. We're also looking for round-table discussion topics, and people to lead those discussions.

Get your proposals in soon!

- James
--
James Morris jmor...@namei.org
[PATCH 4/8] crypto: seqiv - Move IV seeding into init function
We currently do the IV seeding on the first givencrypt call in order to conserve entropy. However, this does not work with DRBG, which cannot be called from interrupt context. In fact, with DRBG we don't need to conserve entropy anyway. So this patch moves the seeding into the init function.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/seqiv.c |  113 +++--
 1 file changed, 15 insertions(+), 98 deletions(-)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 2333974..42e4ee5 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -474,98 +474,6 @@ static int seqiv_aead_decrypt(struct aead_request *req)
 	return crypto_aead_decrypt(subreq);
 }
 
-static int seqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-{
-	struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-	struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->lock);
-	if (crypto_ablkcipher_crt(geniv)->givencrypt != seqiv_givencrypt_first)
-		goto unlock;
-
-	crypto_ablkcipher_crt(geniv)->givencrypt = seqiv_givencrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_ablkcipher_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->lock);
-
-	if (err)
-		return err;
-
-	return seqiv_givencrypt(req);
-}
-
-static int seqiv_aead_givencrypt_first(struct aead_givcrypt_request *req)
-{
-	struct crypto_aead *geniv = aead_givcrypt_reqtfm(req);
-	struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->lock);
-	if (crypto_aead_crt(geniv)->givencrypt != seqiv_aead_givencrypt_first)
-		goto unlock;
-
-	crypto_aead_crt(geniv)->givencrypt = seqiv_aead_givencrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_aead_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->lock);
-
-	if (err)
-		return err;
-
-	return seqiv_aead_givencrypt(req);
-}
-
-static int seqniv_aead_encrypt_first(struct aead_request *req)
-{
-	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
-	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->geniv.lock);
-	if (geniv->encrypt != seqniv_aead_encrypt_first)
-		goto unlock;
-
-	geniv->encrypt = seqniv_aead_encrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_aead_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->geniv.lock);
-
-	if (err)
-		return err;
-
-	return seqniv_aead_encrypt(req);
-}
-
-static int seqiv_aead_encrypt_first(struct aead_request *req)
-{
-	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
-	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->geniv.lock);
-	if (geniv->encrypt != seqiv_aead_encrypt_first)
-		goto unlock;
-
-	geniv->encrypt = seqiv_aead_encrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_aead_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->geniv.lock);
-
-	if (err)
-		return err;
-
-	return seqiv_aead_encrypt(req);
-}
-
 static int seqiv_init(struct crypto_tfm *tfm)
 {
 	struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
@@ -575,7 +483,9 @@ static int seqiv_init(struct crypto_tfm *tfm)
 
 	tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
 
-	return skcipher_geniv_init(tfm);
+	return crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				    crypto_ablkcipher_ivsize(geniv)) ?:
+	       skcipher_geniv_init(tfm);
 }
 
 static int seqiv_old_aead_init(struct crypto_tfm *tfm)
@@ -588,7 +498,9 @@ static int seqiv_old_aead_init(struct crypto_tfm *tfm)
 	crypto_aead_set_reqsize(__crypto_aead_cast(tfm),
 				sizeof(struct aead_request));
 
-	return aead_geniv_init(tfm);
+	return crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				    crypto_aead_ivsize(geniv)) ?:
+	       aead_geniv_init(tfm);
 }
 
 static int seqiv_aead_init_common(struct crypto_tfm *tfm, unsigned int reqsize)
@@ -601,6 +513,11 @@ static int seqiv_aead_init_common(struct crypto_tfm *tfm, unsigned int reqsize)
 
 	crypto_aead_set_reqsize(geniv, sizeof(struct aead_request));
 
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+	if (err)
+		goto out;
+
 	ctx->null = crypto_get_default_null_skcipher();
 	err = PTR_ERR(ctx->null);
 	if (IS_ERR(ctx->null))
@@ -654,7 +571,7 @@ static int
[PATCH 6/8] crypto: echainiv - Set Kconfig default to m
As this is required by many IPsec algorithms, let's set the default
to m.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index af011a9..c3b6a5b 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -232,6 +232,7 @@ config CRYPTO_ECHAINIV
 	select CRYPTO_AEAD
 	select CRYPTO_NULL
 	select CRYPTO_RNG
+	default m
 	help
 	  This IV generator generates an IV based on the encryption of
 	  a sequence number xored with a salt.  This is the default
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 5/8] crypto: drbg - Add stdrng alias and increase priority
This patch adds the stdrng module alias and increases the priority
to ensure that it is loaded in preference to other RNGs.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/drbg.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 9284348..04836b4 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1876,7 +1876,7 @@ static inline void __init drbg_fill_array(struct rng_alg *alg,
 					  const struct drbg_core *core, int pr)
 {
 	int pos = 0;
-	static int priority = 100;
+	static int priority = 200;
 
 	memcpy(alg->base.cra_name, "stdrng", 6);
 	if (pr) {
@@ -1965,3 +1965,4 @@ MODULE_DESCRIPTION("NIST SP800-90A Deterministic Random Bit Generator (DRBG) "
 		   CRYPTO_DRBG_HASH_STRING
 		   CRYPTO_DRBG_HMAC_STRING
 		   CRYPTO_DRBG_CTR_STRING);
+MODULE_ALIAS_CRYPTO("stdrng");
[PATCH 3/8] crypto: eseqiv - Move IV seeding into init function
We currently do the IV seeding on the first givencrypt call in order
to conserve entropy. However, this does not work with DRBG which
cannot be called from interrupt context. In fact, with DRBG we don't
need to conserve entropy anyway. So this patch moves the seeding into
the init function.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/eseqiv.c | 29 -
 1 file changed, 4 insertions(+), 25 deletions(-)

diff --git a/crypto/eseqiv.c b/crypto/eseqiv.c
index f116fae..78a7264 100644
--- a/crypto/eseqiv.c
+++ b/crypto/eseqiv.c
@@ -146,29 +146,6 @@ out:
 	return err;
 }
 
-static int eseqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-{
-	struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-	struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->lock);
-	if (crypto_ablkcipher_crt(geniv)->givencrypt != eseqiv_givencrypt_first)
-		goto unlock;
-
-	crypto_ablkcipher_crt(geniv)->givencrypt = eseqiv_givencrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_ablkcipher_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->lock);
-
-	if (err)
-		return err;
-
-	return eseqiv_givencrypt(req);
-}
-
 static int eseqiv_init(struct crypto_tfm *tfm)
 {
 	struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
@@ -198,7 +175,9 @@ static int eseqiv_init(struct crypto_tfm *tfm)
 
 	tfm->crt_ablkcipher.reqsize = reqsize + sizeof(struct ablkcipher_request);
 
-	return skcipher_geniv_init(tfm);
+	return crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				    crypto_ablkcipher_ivsize(geniv)) ?:
+	       skcipher_geniv_init(tfm);
 }
 
 static struct crypto_template eseqiv_tmpl;
@@ -220,7 +199,7 @@ static struct crypto_instance *eseqiv_alloc(struct rtattr **tb)
 	if (inst->alg.cra_ablkcipher.ivsize != inst->alg.cra_blocksize)
 		goto free_inst;
 
-	inst->alg.cra_ablkcipher.givencrypt = eseqiv_givencrypt_first;
+	inst->alg.cra_ablkcipher.givencrypt = eseqiv_givencrypt;
 
 	inst->alg.cra_init = eseqiv_init;
 	inst->alg.cra_exit = skcipher_geniv_exit;
[PATCH 1/8] crypto: chainiv - Move IV seeding into init function
We currently do the IV seeding on the first givencrypt call in order
to conserve entropy. However, this does not work with DRBG which
cannot be called from interrupt context. In fact, with DRBG we don't
need to conserve entropy anyway. So this patch moves the seeding into
the init function.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/chainiv.c | 66 +++
 1 file changed, 9 insertions(+), 57 deletions(-)

diff --git a/crypto/chainiv.c b/crypto/chainiv.c
index 63c17d5..be0bd52 100644
--- a/crypto/chainiv.c
+++ b/crypto/chainiv.c
@@ -80,35 +80,15 @@ unlock:
 	return err;
 }
 
-static int chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
+static int chainiv_init_common(struct crypto_tfm *tfm, char iv[])
 {
-	struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-	struct chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->lock);
-	if (crypto_ablkcipher_crt(geniv)->givencrypt !=
-	    chainiv_givencrypt_first)
-		goto unlock;
-
-	crypto_ablkcipher_crt(geniv)->givencrypt = chainiv_givencrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->iv,
-				   crypto_ablkcipher_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->lock);
+	struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
 
-	if (err)
-		return err;
-
-	return chainiv_givencrypt(req);
-}
-
-static int chainiv_init_common(struct crypto_tfm *tfm)
-{
 	tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
 
-	return skcipher_geniv_init(tfm);
+	return crypto_rng_get_bytes(crypto_default_rng, iv,
+				    crypto_ablkcipher_ivsize(geniv)) ?:
+	       skcipher_geniv_init(tfm);
 }
 
 static int chainiv_init(struct crypto_tfm *tfm)
@@ -117,7 +97,7 @@ static int chainiv_init(struct crypto_tfm *tfm)
 
 	spin_lock_init(&ctx->lock);
 
-	return chainiv_init_common(tfm);
+	return chainiv_init_common(tfm, ctx->iv);
 }
 
 static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
@@ -205,33 +185,6 @@ postpone:
 	return async_chainiv_postpone_request(req);
 }
 
-static int async_chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-{
-	struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-	struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-	int err = 0;
-
-	if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
-		goto out;
-
-	if (crypto_ablkcipher_crt(geniv)->givencrypt !=
-	    async_chainiv_givencrypt_first)
-		goto unlock;
-
-	crypto_ablkcipher_crt(geniv)->givencrypt = async_chainiv_givencrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->iv,
-				   crypto_ablkcipher_ivsize(geniv));
-
-unlock:
-	clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
-
-	if (err)
-		return err;
-
-out:
-	return async_chainiv_givencrypt(req);
-}
-
 static void async_chainiv_do_postponed(struct work_struct *work)
 {
 	struct async_chainiv_ctx *ctx = container_of(work,
@@ -270,7 +223,7 @@ static int async_chainiv_init(struct crypto_tfm *tfm)
 	crypto_init_queue(&ctx->queue, 100);
 	INIT_WORK(&ctx->postponed, async_chainiv_do_postponed);
 
-	return chainiv_init_common(tfm);
+	return chainiv_init_common(tfm, ctx->iv);
 }
 
 static void async_chainiv_exit(struct crypto_tfm *tfm)
@@ -302,7 +255,7 @@ static struct crypto_instance *chainiv_alloc(struct rtattr **tb)
 	if (IS_ERR(inst))
 		goto put_rng;
 
-	inst->alg.cra_ablkcipher.givencrypt = chainiv_givencrypt_first;
+	inst->alg.cra_ablkcipher.givencrypt = chainiv_givencrypt;
 	inst->alg.cra_init = chainiv_init;
 	inst->alg.cra_exit = skcipher_geniv_exit;
@@ -312,8 +265,7 @@ static struct crypto_instance *chainiv_alloc(struct rtattr **tb)
 
 	if (!crypto_requires_sync(algt->type, algt->mask)) {
 		inst->alg.cra_flags |= CRYPTO_ALG_ASYNC;
-		inst->alg.cra_ablkcipher.givencrypt =
-			async_chainiv_givencrypt_first;
+		inst->alg.cra_ablkcipher.givencrypt = async_chainiv_givencrypt;
 		inst->alg.cra_init = async_chainiv_init;
 		inst->alg.cra_exit = async_chainiv_exit;
[PATCH 2/8] crypto: echainiv - Move IV seeding into init function
We currently do the IV seeding on the first givencrypt call in order
to conserve entropy. However, this does not work with DRBG which
cannot be called from interrupt context. In fact, with DRBG we don't
need to conserve entropy anyway. So this patch moves the seeding into
the init function.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/echainiv.c | 30 ++
 1 file changed, 6 insertions(+), 24 deletions(-)

diff --git a/crypto/echainiv.c b/crypto/echainiv.c
index 62a817f..08d3336 100644
--- a/crypto/echainiv.c
+++ b/crypto/echainiv.c
@@ -187,29 +187,6 @@ static int echainiv_decrypt(struct aead_request *req)
 	return crypto_aead_decrypt(subreq);
 }
 
-static int echainiv_encrypt_first(struct aead_request *req)
-{
-	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
-	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
-	int err = 0;
-
-	spin_lock_bh(&ctx->geniv.lock);
-	if (geniv->encrypt != echainiv_encrypt_first)
-		goto unlock;
-
-	geniv->encrypt = echainiv_encrypt;
-	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
-				   crypto_aead_ivsize(geniv));
-
-unlock:
-	spin_unlock_bh(&ctx->geniv.lock);
-
-	if (err)
-		return err;
-
-	return echainiv_encrypt(req);
-}
-
 static int echainiv_init(struct crypto_tfm *tfm)
 {
 	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
@@ -220,6 +197,11 @@ static int echainiv_init(struct crypto_tfm *tfm)
 
 	crypto_aead_set_reqsize(geniv, sizeof(struct aead_request));
 
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+	if (err)
+		goto out;
+
 	ctx->null = crypto_get_default_null_skcipher();
 	err = PTR_ERR(ctx->null);
 	if (IS_ERR(ctx->null))
@@ -272,7 +254,7 @@ static int echainiv_aead_create(struct crypto_template *tmpl,
 	    inst->alg.ivsize > MAX_IV_SIZE)
 		goto free_inst;
 
-	inst->alg.encrypt = echainiv_encrypt_first;
+	inst->alg.encrypt = echainiv_encrypt;
 	inst->alg.decrypt = echainiv_decrypt;
 
 	inst->alg.base.cra_init = echainiv_init;
[PATCH 7/8] crypto: rng - Make DRBG the default RNG
This patch creates a new invisible Kconfig option CRYPTO_RNG_DEFAULT
that simply selects the DRBG. This new option is then selected by
the IV generators.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---
 crypto/Kconfig | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index c3b6a5b..19ca651 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -78,6 +78,10 @@ config CRYPTO_RNG2
 	tristate
 	select CRYPTO_ALGAPI2
 
+config CRYPTO_RNG_DEFAULT
+	tristate
+	select CRYPTO_DRBG_MENU
+
 config CRYPTO_PCOMP
 	tristate
 	select CRYPTO_PCOMP2
@@ -222,7 +226,7 @@ config CRYPTO_SEQIV
 	select CRYPTO_AEAD
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_NULL
-	select CRYPTO_RNG
+	select CRYPTO_RNG_DEFAULT
 	help
 	  This IV generator generates an IV based on a sequence number by
 	  xoring it with a salt.  This algorithm is mainly useful for CTR
@@ -231,7 +235,7 @@ config CRYPTO_ECHAINIV
 	tristate "Encrypted Chain IV Generator"
 	select CRYPTO_AEAD
 	select CRYPTO_NULL
-	select CRYPTO_RNG
+	select CRYPTO_RNG_DEFAULT
 	default m
 	help
 	  This IV generator generates an IV based on the encryption of
@@ -1450,7 +1454,6 @@ comment "Random Number Generation"
 
 config CRYPTO_ANSI_CPRNG
 	tristate "Pseudo Random Number Generation for Cryptographic modules"
-	default m
 	select CRYPTO_AES
 	select CRYPTO_RNG
 	help
@@ -1468,11 +1471,9 @@ menuconfig CRYPTO_DRBG_MENU
 if CRYPTO_DRBG_MENU
 
 config CRYPTO_DRBG_HMAC
-	bool "Enable HMAC DRBG"
+	bool
 	default y
 	select CRYPTO_HMAC
-	help
-	  Enable the HMAC DRBG variant as defined in NIST SP800-90A.
 
 config CRYPTO_DRBG_HASH
 	bool "Enable Hash DRBG"
@@ -1488,7 +1489,7 @@ config CRYPTO_DRBG_CTR
 
 config CRYPTO_DRBG
 	tristate
-	default CRYPTO_DRBG_MENU if (CRYPTO_DRBG_HMAC || CRYPTO_DRBG_HASH || CRYPTO_DRBG_CTR)
+	default CRYPTO_DRBG_MENU
 	select CRYPTO_RNG
 	select CRYPTO_JITTERENTROPY
Re: [PATCH 5/8] crypto: drbg - Add stdrng alias and increase priority
On Wed, Jun 03, 2015 at 08:59:13AM +0200, Stephan Mueller wrote:
> Considering patch 8/8, which removes krng, wouldn't it make sense to
> remove the following code from the DRBG:
>
> 	/*
> 	 * If FIPS mode enabled, the selected DRBG shall have the
> 	 * highest cra_priority over other stdrng instances to ensure
> 	 * it is selected.
> 	 */
> 	if (fips_enabled)
> 		alg->base.cra_priority += 200;
>
> That code was added to get a higher prio than the krng in FIPS mode.
> As this is not needed any more (krng is gone), I would say it is safe
> to remove this code too.

You'd have to remove it from ansi_cprng first. Feel free to send
patches to do that.

Thanks,
--
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH 5/8] crypto: drbg - Add stdrng alias and increase priority
On Wednesday, 3 June 2015, 14:49:28, Herbert Xu wrote:

Hi Herbert,

> This patch adds the stdrng module alias and increases the priority
> to ensure that it is loaded in preference to other RNGs.
>
> Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
> ---
>  crypto/drbg.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/crypto/drbg.c b/crypto/drbg.c
> index 9284348..04836b4 100644
> --- a/crypto/drbg.c
> +++ b/crypto/drbg.c
> @@ -1876,7 +1876,7 @@ static inline void __init drbg_fill_array(struct rng_alg *alg,
>  					  const struct drbg_core *core, int pr)
>  {
>  	int pos = 0;
> -	static int priority = 100;
> +	static int priority = 200;

Considering patch 8/8, which removes krng, wouldn't it make sense to
remove the following code from the DRBG:

	/*
	 * If FIPS mode enabled, the selected DRBG shall have the
	 * highest cra_priority over other stdrng instances to ensure
	 * it is selected.
	 */
	if (fips_enabled)
		alg->base.cra_priority += 200;

That code was added to get a higher prio than the krng in FIPS mode.
As this is not needed any more (krng is gone), I would say it is safe
to remove this code too.

>  	memcpy(alg->base.cra_name, "stdrng", 6);
>  	if (pr) {
> @@ -1965,3 +1965,4 @@ MODULE_DESCRIPTION("NIST SP800-90A Deterministic Random Bit Generator (DRBG) "
>  		   CRYPTO_DRBG_HASH_STRING
>  		   CRYPTO_DRBG_HMAC_STRING
>  		   CRYPTO_DRBG_CTR_STRING);
> +MODULE_ALIAS_CRYPTO("stdrng");

Ciao
Stephan
Re: [PATCH 5/8] crypto: drbg - Add stdrng alias and increase priority
On Wednesday, 3 June 2015, 15:01:39, Herbert Xu wrote:

Hi Herbert,

> You'd have to remove it from ansi_cprng first. Feel free to send
> patches to do that.

Absolutely, my bad.
--
Ciao
Stephan
Re: [PATCH 0/9] crypto: Add ChaCha20-Poly1305 AEAD support for IPsec
On Wed, Jun 03, 2015 at 10:44:25AM +0800, Herbert Xu wrote:
> On Mon, Jun 01, 2015 at 01:43:55PM +0200, Martin Willi wrote:
> > This is a first version of a patch series implementing the
> > ChaCha20-Poly1305 AEAD construction defined in RFC7539. It is based
> > on the current cryptodev tree.
>
> The patches look fine to me. Steffen, what do you think?

I'm fine with this. If you want to merge this series through the
cryptodev tree, feel free to add an

Acked-by: Steffen Klassert steffen.klass...@secunet.com
Re: [PATCH] crypto: prevent nx 842 load if no hw driver
On Wed, Jun 3, 2015 at 1:08 AM, Herbert Xu herb...@gondor.apana.org.au wrote:
> On Thu, May 28, 2015 at 04:21:31PM -0400, Dan Streetman wrote:
> > Change the nx-842 common driver to wait for loading of both platform
> > drivers, and fail loading if the platform driver pointer is not set.
> > Add an independent platform driver pointer, that the platform drivers
> > set if they find they are able to load (i.e. if they find their
> > platform devicetree node(s)).
> >
> > The problem is currently, the main nx-842 driver will stay loaded even
> > if there is no platform driver and thus no possible way it can do any
> > compression or decompression. This allows the crypto 842-nx driver to
> > load even if it won't actually work. For crypto compression users
> > (e.g. zswap) that expect an available crypto compression driver to
> > actually work, this is bad. This patch fixes that, so the 842-nx
> > crypto compression driver won't load if it doesn't have the driver
> > and hardware available to perform the compression.
> >
> > Signed-off-by: Dan Streetman ddstr...@ieee.org
>
> Applied. Though I had to do the Makefile bit by hand because it
> contains references to nx-compress-test which doesn't exist in my
> tree.

Oops sorry, I forgot to remove that old test module patch from my
tree. Thanks.

> --
> Email: Herbert Xu herb...@gondor.apana.org.au
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[PATCH 1/2] Doc:crypto: Fix typo in crypto-API.tmpl
This patch fixes some spelling typos found in crypto-API.tmpl.

Signed-off-by: Masanari Iida standby2...@gmail.com
---
 Documentation/DocBook/crypto-API.tmpl | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/DocBook/crypto-API.tmpl b/Documentation/DocBook/crypto-API.tmpl
index 5b05510..9e56773 100644
--- a/Documentation/DocBook/crypto-API.tmpl
+++ b/Documentation/DocBook/crypto-API.tmpl
@@ -119,7 +119,7 @@
    <para>
     Note: The terms transformation and cipher algorithm are used
-    interchangably.
+    interchangeably.
    </para>
 
  </sect1>
@@ -1563,7 +1563,7 @@ struct sockaddr_alg sa = {
  <sect1><title>Zero-Copy Interface</title>
   <para>
-   In addition to the send/write/read/recv system call familty, the AF_ALG
+   In addition to the send/write/read/recv system call family, the AF_ALG
    interface can be accessed with the zero-copy interface of
    splice/vmsplice. As the name indicates, the kernel tries to avoid a copy
    operation into kernel space.
--
2.4.2.339.g77bd3ea
Re: [V5 PATCH 2/5] arm64 : Introduce support for ACPI _CCA object
On Wed, 2015-06-03 at 09:37 -0500, Suravee Suthikulanit wrote:
> On 5/28/2015 9:38 PM, Mark Salter wrote:
> > On Wed, 2015-05-20 at 17:09 -0500, Suravee Suthikulpanit wrote:
> > > From http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf,
> > > section 6.2.17 _CCA states that ARM platforms require the ACPI _CCA
> > > object to be specified for DMA-capable devices. Therefore, this
> > > patch specifies ACPI_CCA_REQUIRED in the arm64 Kconfig.
> > >
> > > In addition, to handle the case when _CCA is missing, arm64 would
> > > assign dummy_dma_ops to disable DMA capability of the device.
> > >
> > > Acked-by: Catalin Marinas catalin.mari...@arm.com
> > > Signed-off-by: Mark Salter msal...@redhat.com
> > > Signed-off-by: Suravee Suthikulpanit suravee.suthikulpa...@amd.com
> > > ---
> > >  arch/arm64/Kconfig | 1 +
> > >  arch/arm64/include/asm/dma-mapping.h | 18 ++-
> > >  arch/arm64/mm/dma-mapping.c | 92
> > >  3 files changed, 109 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index 4269dba..95307b4 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -1,5 +1,6 @@
> > >  config ARM64
> > >  	def_bool y
> > > +	select ACPI_CCA_REQUIRED if ACPI
> > >  	select ACPI_GENERIC_GSI if ACPI
> > >  	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
> > >  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
> > > diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
> > > index 9437e3d..f0d6d0b 100644
> > > --- a/arch/arm64/include/asm/dma-mapping.h
> > > +++ b/arch/arm64/include/asm/dma-mapping.h
> > > @@ -18,6 +18,7 @@
> > >  #ifdef __KERNEL__
> > >
> > > +#include <linux/acpi.h>
> > >  #include <linux/types.h>
> > >  #include <linux/vmalloc.h>
> >       ^^^
> > This hunk causes build issues with a couple of drivers:
> >
> > drivers/scsi/megaraid/megaraid_sas_fp.c:69:0: warning: "FALSE" redefined [enabled by default]
> >  #define FALSE 0
> >  ^
> > In file included from include/acpi/acpi.h:58:0,
> >                  from include/linux/acpi.h:37,
> >                  from ./arch/arm64/include/asm/dma-mapping.h:21,
> >                  from include/linux/dma-mapping.h:86,
> >                  from ./arch/arm64/include/asm/pci.h:7,
> >                  from include/linux/pci.h:1460,
> >                  from drivers/scsi/megaraid/megaraid_sas_fp.c:37:
> > include/acpi/actypes.h:433:0: note: this is the location of the previous definition
> >  #define FALSE (1 == 0)
> >  ^
> > In file included from include/acpi/acpi.h:58:0,
> >                  from include/linux/acpi.h:37,
> >                  from ./arch/arm64/include/asm/dma-mapping.h:21,
> >                  from include/linux/dma-mapping.h:86,
> >                  from include/scsi/scsi_cmnd.h:4,
> >                  from drivers/scsi/ufs/ufshcd.h:60,
> >                  from drivers/scsi/ufs/ufshcd.c:43:
> > include/acpi/actypes.h:433:41: error: expected identifier before ‘(’ token
> >  #define FALSE (1 == 0)
> >                ^
> > drivers/scsi/ufs/unipro.h:203:2: note: in expansion of macro ‘FALSE’
> >   FALSE = 0,
> >   ^
> >
> > This happens because the ACPI definitions of TRUE and FALSE conflict
> > with local definitions in megaraid and an enum declaration in ufs.
>
> Mark,
>
> Thanks for pointing this out. Although, I would think that
> megaraid_sas_fp.c should have had the #ifndef check before defining
> TRUE and FALSE, as follows:
>
> 	#ifndef TRUE
> 	#define TRUE 1
> 	#endif
> 	#ifndef FALSE
> 	#define FALSE 0
> 	#endif
>
> This seems to be what other drivers are also doing. If this is okay,
> I can send out a fix-up patch for the megaraid driver.

Yeah, or #undef them if defined so megaraid defines them as desired.
And #undef if defined would work for unipro.h as well.
Re: Crypto driver -DCP
On Wed, Jun 03, 2015 at 03:02:13PM -0500, Jay Monkman wrote:
> That would be one use, but a more likely use would be to prevent
> access to the keys. A system could write keys to the key slots in the
> bootloader or in a TrustZone secure world. Then those keys could be
> used for crypto operations in Linux without ever exposing them. Key
> slots can be written to, but cannot be read from.
>
> Even with keys stored in key slots, other keys may be used. For
> example, someone could do:
>
> 	operation w/ key in slot 1
> 	operation w/ key provided in descriptor
> 	operation w/ key in slot 1
>
> I don't think an LRU scheme would allow something like that.

In that case I would suggest using setkey with a length other than
that of a valid AES key. For example, you could use a one-byte value
to select the key slot.

Cheers,
--
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt