[PATCH] crypto/testmgr: mark rfc4106(gcm(aes)) as fips_allowed
This gcm variant is popular for ipsec use, and there are folks who would
like to use it while in fips mode. Mark it with fips_allowed=1 to
facilitate that.

CC: LKML linux-ker...@vger.kernel.org
CC: Stephan Mueller smuel...@atsec.com
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/testmgr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 235b1ff..758d028 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3293,6 +3293,7 @@ static const struct alg_test_desc alg_test_descs[] = {
 	}, {
 		.alg = "rfc4106(gcm(aes))",
 		.test = alg_test_aead,
+		.fips_allowed = 1,
 		.suite = {
 			.aead = {
 				.enc = {
-- 
1.8.3.1
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Crypto: Add support for 192 & 256 bit keys to AESNI RFC4106 - resubmission
On Sun, Jan 11, 2015 at 11:48:08PM -0500, Timothy McCaffrey wrote:
> These patches fix the RFC4106 implementation in the aesni-intel module
> so it supports 192 and 256 bit keys.
> 
> Since the AVX support that was added to this module also only supports
> 128 bit keys, and this patch only affects the SSE implementation,
> changes were also made to use the SSE version if key sizes other than
> 128 are specified.
> 
> RFC4106 specifies that 192 and 256 bit keys must be supported (section
> 8.4).
> 
> Also, this should fix Strongswan issue 341 where the aesni module needs
> to be unloaded if 256 bit keys are used:
> 
> http://wiki.strongswan.org/issues/341
> 
> This patch has been tested with Sandy Bridge and Haswell processors.
> With 128 bit keys and input buffers > 512 bytes a slight performance
> degradation was noticed (~1%). For input buffers of less than 512 bytes
> there was no performance impact. Compared to 128 bit keys, 256 bit key
> size performance is approx. .5 cycles per byte slower on Sandy Bridge,
> and .37 cycles per byte slower on Haswell (vs. SSE code).
> 
> This patch has also been tested with StrongSwan IPSec connections where
> it worked correctly.
> 
> I created this diff from a git clone of crypto-2.6.git. Any questions,
> please feel free to contact me.
> 
> Signed off by: timothy.mccaff...@unisys.com

Here's an unborked version that applies cleanly to Linus' master right
now. Somehow, tons of extra spaces made their way into the original
(re-)submission, I just stripped them out (and fixed up a few extra
trailing spaces and spaces before tabs).

Signed-off-by: Jarod Wilson ja...@redhat.com
---
diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 477e9d7..6bd2c6c 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -32,12 +32,23 @@
 #include <linux/linkage.h>
 #include <asm/inst.h>
 
+/*
+ * The following macros are used to move an (un)aligned 16 byte value to/from
+ * an XMM register.  This can done for either FP or integer values, for FP use
+ * movaps (move aligned packed single) or integer use movdqa (move double quad
+ * aligned).  It doesn't make a performance difference which instruction is used
+ * since Nehalem (original Core i7) was released.  However, the movaps is a byte
+ * shorter, so that is the one we'll use for now. (same for unaligned).
+ */
+#define MOVADQ	movaps
+#define MOVUDQ	movups
+
 #ifdef __x86_64__
+
 .data
 .align 16
 .Lgf128mul_x_ble_mask:
 	.octa 0x00010087
-
 POLY:   .octa 0xC201
 TWOONE: .octa 0x00010001
 
@@ -89,6 +100,7 @@ enc:        .octa 0x2
 #define arg8 STACK_OFFSET+16(%r14)
 #define arg9 STACK_OFFSET+24(%r14)
 #define arg10 STACK_OFFSET+32(%r14)
+#define keysize 2*15*16(%arg1)
 #endif
 
@@ -213,10 +225,12 @@ enc:        .octa 0x2
 .macro INITIAL_BLOCKS_DEC num_initial_blocks TMP1 TMP2 TMP3 TMP4 TMP5 XMM0 XMM1 \
 XMM2 XMM3 XMM4 XMMDst TMP6 TMP7 i i_seq operation
+        MOVADQ     SHUF_MASK(%rip), %xmm14
 	mov	   arg7, %r10		# %r10 = AAD
 	mov	   arg8, %r12		# %r12 = aadLen
 	mov	   %r12, %r11
 	pxor	   %xmm\i, %xmm\i
+
 _get_AAD_loop\num_initial_blocks\operation:
 	movd	   (%r10), \TMP1
 	pslldq	   $12, \TMP1
@@ -225,16 +239,18 @@ _get_AAD_loop\num_initial_blocks\operation:
 	add	   $4, %r10
 	sub	   $4, %r12
 	jne	   _get_AAD_loop\num_initial_blocks\operation
+
 	cmp	   $16, %r11
 	je	   _get_AAD_loop2_done\num_initial_blocks\operation
+
 	mov	   $16, %r12
 _get_AAD_loop2\num_initial_blocks\operation:
 	psrldq	   $4, %xmm\i
 	sub	   $4, %r12
 	cmp	   %r11, %r12
 	jne	   _get_AAD_loop2\num_initial_blocks\operation
+
 _get_AAD_loop2_done\num_initial_blocks\operation:
-        movdqa     SHUF_MASK(%rip), %xmm14
 	PSHUFB_XMM %xmm14, %xmm\i		# byte-reflect the AAD data
 
 	xor	   %r11, %r11			# initialise the data pointer offset as zero
@@ -243,59 +259,34 @@ _get_AAD_loop2_done\num_initial_blocks\operation:
 	mov	   %arg5, %rax			# %rax = *Y0
 	movdqu	   (%rax), \XMM0		# XMM0 = Y0
-        movdqa     SHUF_MASK(%rip), %xmm14
 	PSHUFB_XMM %xmm14, \XMM0
 
 .if (\i == 5) || (\i == 6) || (\i == 7)
+	MOVADQ	   ONE(%RIP),\TMP1
+	MOVADQ	   (%arg1),\TMP2
 .irpc index, \i_seq
-	paddd	   ONE(%rip), \XMM0		# INCR Y0
+	paddd	   \TMP1, \XMM0			# INCR Y0
 	movdqa	   \XMM0, %xmm\index
-        movdqa     SHUF_MASK(%rip), %xmm14
 	PSHUFB_XMM %xmm14, %xmm\index		# perform a 16 byte swap
-
-.endr
-.irpc index, \i_seq
-	pxor	   16*0(%arg1), %xmm\index
-.endr
-.irpc index, \i_seq
-	movaps 0x10(%rdi), \TMP1
-	AESENC     \TMP1, %xmm\index		# Round 1
-.endr
-.irpc index, \i_seq
[PATCH] crypto/testmgr: add missing spaces to drbg error strings
There are a few missing spaces in the error text strings for
drbg_cavs_test, trivial fix.

CC: Stephan Mueller smuel...@chronox.de
CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
CC: linux-crypto@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/testmgr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 81818b9..ac2b631 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1778,7 +1778,7 @@ static int drbg_cavs_test(struct drbg_testvec *test, int pr,
 
 	drng = crypto_alloc_rng(driver, type, mask);
 	if (IS_ERR(drng)) {
-		printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for"
+		printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for "
 		       "%s\n", driver);
 		kzfree(buf);
 		return -ENOMEM;
@@ -1803,7 +1803,7 @@ static int drbg_cavs_test(struct drbg_testvec *test, int pr,
 			     buf, test->expectedlen, &addtl);
 	}
 	if (ret <= 0) {
-		printk(KERN_ERR "alg: drbg: could not obtain random data for"
+		printk(KERN_ERR "alg: drbg: could not obtain random data for "
 		       "driver %s\n", driver);
 		goto outbuf;
 	}
@@ -1818,7 +1818,7 @@ static int drbg_cavs_test(struct drbg_testvec *test, int pr,
 			     buf, test->expectedlen, &addtl);
 	}
 	if (ret <= 0) {
-		printk(KERN_ERR "alg: drbg: could not obtain random data for"
+		printk(KERN_ERR "alg: drbg: could not obtain random data for "
 		       "driver %s\n", driver);
 		goto outbuf;
 	}
-- 
1.8.3.1
Re: [PATCH] crypto/fips: only panic on bad/missing crypto mod signatures
On Wed, Jul 02, 2014 at 08:38:50PM +0800, Herbert Xu wrote:
> On Fri, Jun 27, 2014 at 03:12:54PM -0400, Jarod Wilson wrote:
> > Per further discussion with NIST, the requirements for FIPS state that
> > we only need to panic the system on failed kernel module signature
> > checks for crypto subsystem modules. This moves the fips-mode-only
> > module signature check out of the generic module loading code, into
> > the crypto subsystem, at points where we can catch both algorithm
> > module loads and mode module loads. At the same time, make
> > CONFIG_CRYPTO_FIPS dependent on CONFIG_MODULE_SIG, as this is
> > entirely necessary for FIPS mode.
> > 
> > CC: Herbert Xu herb...@gondor.apana.org.au
> > CC: David S. Miller da...@davemloft.net
> > CC: Rusty Russell ru...@rustcorp.com.au
> > CC: Stephan Mueller stephan.muel...@atsec.com
> > CC: linux-crypto@vger.kernel.org
> > Signed-off-by: Jarod Wilson ja...@redhat.com
> > ---
> >  crypto/Kconfig  |  1 +
> >  crypto/algapi.c | 13 +++++++++++++
> >  kernel/module.c |  3 ---
> >  3 files changed, 14 insertions(+), 3 deletions(-)
> > 
> > diff --git a/crypto/Kconfig b/crypto/Kconfig
> > index ce4012a..36402e5 100644
> > --- a/crypto/Kconfig
> > +++ b/crypto/Kconfig
> > @@ -24,6 +24,7 @@ comment "Crypto core or helper"
> >  config CRYPTO_FIPS
> >  	bool "FIPS 200 compliance"
> >  	depends on CRYPTO_ANSI_CPRNG && !CRYPTO_MANAGER_DISABLE_TESTS
> > +	depends on MODULE_SIG
> >  	help
> >  	  This options enables the fips boot option which is
> >  	  required if you want to system to operate in a FIPS 200
> > 
> > diff --git a/crypto/algapi.c b/crypto/algapi.c
> > index 7a1ae87..7d228ff 100644
> > --- a/crypto/algapi.c
> > +++ b/crypto/algapi.c
> > @@ -43,6 +43,12 @@ static inline int crypto_set_driver_name(struct crypto_alg *alg)
> >  
> >  static int crypto_check_alg(struct crypto_alg *alg)
> >  {
> > +#ifdef CONFIG_CRYPTO_FIPS
> > +	if (fips_enabled && alg->cra_module && !alg->cra_module->sig_ok)
> > +		panic("Module %s signature verification failed in FIPS mode\n",
> > +		      alg->cra_module->name);
> > +#endif
> 
> Please hide the ugly ifdef in a helper inline function.

Will do.

> > @@ -437,6 +449,7 @@ int crypto_register_template(struct crypto_template *tmpl)
> >  
> >  	list_add(&tmpl->list, &crypto_template_list);
> >  	crypto_notify(CRYPTO_MSG_TMPL_REGISTER, tmpl);
> > +	err = 0;
> >  out:
> >  	up_write(&crypto_alg_sem);
> 
> While I have no problems with you adding a blank line please
> don't mix such additions with changes of substance.

Accidental, sorry, was sloppy when moving the check from one place in the
function to another. Will remedy that too. v2 coming after some quick
testing.

-- 
Jarod Wilson
ja...@redhat.com
[PATCH v2] crypto/fips: only panic on bad/missing crypto mod signatures
Per further discussion with NIST, the requirements for FIPS state that
we only need to panic the system on failed kernel module signature
checks for crypto subsystem modules. This moves the fips-mode-only
module signature check out of the generic module loading code, into the
crypto subsystem, at points where we can catch both algorithm module
loads and mode module loads. At the same time, make CONFIG_CRYPTO_FIPS
dependent on CONFIG_MODULE_SIG, as this is entirely necessary for FIPS
mode.

v2: remove extraneous blank line, perform checks in static inline
function, drop no longer necessary fips.h include.

CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
CC: Rusty Russell ru...@rustcorp.com.au
CC: Stephan Mueller stephan.muel...@atsec.com
CC: linux-crypto@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/Kconfig  |  1 +
 crypto/algapi.c | 14 ++++++++++++++
 kernel/module.c |  4 ----
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ce4012a..36402e5 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -24,6 +24,7 @@ comment "Crypto core or helper"
 config CRYPTO_FIPS
 	bool "FIPS 200 compliance"
 	depends on CRYPTO_ANSI_CPRNG && !CRYPTO_MANAGER_DISABLE_TESTS
+	depends on MODULE_SIG
 	help
 	  This options enables the fips boot option which is
 	  required if you want to system to operate in a FIPS 200

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 7a1ae87..e8d3a7d 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -41,8 +41,20 @@ static inline int crypto_set_driver_name(struct crypto_alg *alg)
 	return 0;
 }
 
+static inline void crypto_check_module_sig(struct module *mod)
+{
+#ifdef CONFIG_CRYPTO_FIPS
+	if (fips_enabled && mod && !mod->sig_ok)
+		panic("Module %s signature verification failed in FIPS mode\n",
+		      mod->name);
+#endif
+	return;
+}
+
 static int crypto_check_alg(struct crypto_alg *alg)
 {
+	crypto_check_module_sig(alg->cra_module);
+
 	if (alg->cra_alignmask & (alg->cra_alignmask + 1))
 		return -EINVAL;
 
@@ -430,6 +442,8 @@ int crypto_register_template(struct crypto_template *tmpl)
 
 	down_write(&crypto_alg_sem);
 
+	crypto_check_module_sig(tmpl->module);
+
 	list_for_each_entry(q, &crypto_template_list, list) {
 		if (q == tmpl)
 			goto out;

diff --git a/kernel/module.c b/kernel/module.c
index 81e727c..ae79ce6 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -60,7 +60,6 @@
 #include <linux/jump_label.h>
 #include <linux/pfn.h>
 #include <linux/bsearch.h>
-#include <linux/fips.h>
 #include <uapi/linux/module.h>
 #include "module-internal.h"
 
@@ -2448,9 +2447,6 @@ static int module_sig_check(struct load_info *info)
 	}
 
 	/* Not having a signature is only an error if we're strict. */
-	if (err < 0 && fips_enabled)
-		panic("Module verification failed with error %d in FIPS mode\n",
-		      err);
 	if (err == -ENOKEY && !sig_enforce)
 		err = 0;
 
-- 
1.8.3.1
[PATCH] crypto/fips: only panic on bad/missing crypto mod signatures
Per further discussion with NIST, the requirements for FIPS state that
we only need to panic the system on failed kernel module signature
checks for crypto subsystem modules. This moves the fips-mode-only
module signature check out of the generic module loading code, into the
crypto subsystem, at points where we can catch both algorithm module
loads and mode module loads. At the same time, make CONFIG_CRYPTO_FIPS
dependent on CONFIG_MODULE_SIG, as this is entirely necessary for FIPS
mode.

CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
CC: Rusty Russell ru...@rustcorp.com.au
CC: Stephan Mueller stephan.muel...@atsec.com
CC: linux-crypto@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/Kconfig  |  1 +
 crypto/algapi.c | 13 +++++++++++++
 kernel/module.c |  3 ---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ce4012a..36402e5 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -24,6 +24,7 @@ comment "Crypto core or helper"
 config CRYPTO_FIPS
 	bool "FIPS 200 compliance"
 	depends on CRYPTO_ANSI_CPRNG && !CRYPTO_MANAGER_DISABLE_TESTS
+	depends on MODULE_SIG
 	help
 	  This options enables the fips boot option which is
 	  required if you want to system to operate in a FIPS 200

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 7a1ae87..7d228ff 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -43,6 +43,12 @@ static inline int crypto_set_driver_name(struct crypto_alg *alg)
 
 static int crypto_check_alg(struct crypto_alg *alg)
 {
+#ifdef CONFIG_CRYPTO_FIPS
+	if (fips_enabled && alg->cra_module && !alg->cra_module->sig_ok)
+		panic("Module %s signature verification failed in FIPS mode\n",
+		      alg->cra_module->name);
+#endif
+
 	if (alg->cra_alignmask & (alg->cra_alignmask + 1))
 		return -EINVAL;
 
@@ -430,6 +436,12 @@ int crypto_register_template(struct crypto_template *tmpl)
 
 	down_write(&crypto_alg_sem);
 
+#ifdef CONFIG_CRYPTO_FIPS
+	if (fips_enabled && tmpl->module && !tmpl->module->sig_ok)
+		panic("Module %s signature verification failed in FIPS mode\n",
+		      tmpl->module->name);
+#endif
+
 	list_for_each_entry(q, &crypto_template_list, list) {
 		if (q == tmpl)
 			goto out;
@@ -437,6 +449,7 @@ int crypto_register_template(struct crypto_template *tmpl)
 
 	list_add(&tmpl->list, &crypto_template_list);
 	crypto_notify(CRYPTO_MSG_TMPL_REGISTER, tmpl);
+	err = 0;
 out:
 	up_write(&crypto_alg_sem);

diff --git a/kernel/module.c b/kernel/module.c
index 81e727c..ec801d5 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2448,9 +2448,6 @@ static int module_sig_check(struct load_info *info)
 	}
 
 	/* Not having a signature is only an error if we're strict. */
-	if (err < 0 && fips_enabled)
-		panic("Module verification failed with error %d in FIPS mode\n",
-		      err);
 	if (err == -ENOKEY && !sig_enforce)
 		err = 0;
 
-- 
1.8.3.1
Re: [PATCH] crypto/fips: only panic on bad/missing crypto mod signatures
On Fri, Jun 27, 2014 at 03:12:54PM -0400, Jarod Wilson wrote:
> Per further discussion with NIST, the requirements for FIPS state that
> we only need to panic the system on failed kernel module signature
> checks for crypto subsystem modules. This moves the fips-mode-only
> module signature check out of the generic module loading code, into
> the crypto subsystem, at points where we can catch both algorithm
> module loads and mode module loads. At the same time, make
> CONFIG_CRYPTO_FIPS dependent on CONFIG_MODULE_SIG, as this is entirely
> necessary for FIPS mode.
...
> --- a/crypto/algapi.c
> +++ b/crypto/algapi.c
...
> @@ -430,6 +436,12 @@ int crypto_register_template(struct crypto_template *tmpl)
> 
>  	down_write(&crypto_alg_sem);
> 
> +#ifdef CONFIG_CRYPTO_FIPS
> +	if (fips_enabled && tmpl->module && !tmpl->module->sig_ok)
> +		panic("Module %s signature verification failed in FIPS mode\n",
> +		      tmpl->module->name);
> +#endif
> +

Forgot to mention: the panic locations within the functions don't really
matter a whole lot right this moment, but Stephan pointed out the
possibility of a future FIPS standard that might not require a panic,
thus the crypto_register_template check being done after the
down_write(), so that you could do a "goto out;" instead of a panic here
and have things more or less behave.

-- 
Jarod Wilson
ja...@redhat.com
Re: [PATCH] [char] random: fix priming of last_data
On Tue, Mar 19, 2013 at 12:18:09PM -0400, Jarod Wilson wrote:
> Commit ec8f02da9e added priming of last_data per fips requirements.
> Unfortunately, it did so in a way that can lead to multiple threads all
> incrementing nbytes, but only one actually doing anything with the
> extra data, which leads to some fun random corruption and panics.
> 
> The fix is to simply do everything needed to prime last_data in a
> single shot, so there's no window for multiple cpus to increment
> nbytes -- in fact, we won't even increment or decrement nbytes anymore,
> we'll just extract the needed EXTRACT_BYTES one time per pool and then
                            ^ EXTRACT_SIZE

Error in the description, if anyone actually cares. Oops.

> carry on with the normal routine.
> 
> All these changes have been tested across multiple hosts and
> architectures where panics were previously encountered. The code
> changes are strictly limited to areas only touched when booted in fips
> mode.

This change should also go into 3.8-stable, to make the myriads of fips
users on 3.8.x happy.

-- 
Jarod Wilson
ja...@redhat.com
[PATCH v3] random: prime last_data value per fips requirements
The value stored in last_data must be primed for FIPS 140-2 purposes.
Upon first use, either on system startup or after an RNDCLEARPOOL ioctl,
we need to take an initial random sample, store it internally in
last_data, then pass along the value after that to the requester, so
that consistency checks aren't being run against stale and possibly
known data.

v2: streamline code flow a bit, eliminating extra loop and spinlock in
the case where we need to prime, and account for the extra primer bits.

v3: extract_buf() can't be called with spinlock already held, so bring
back some extra lock/unlock calls.

CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
CC: Neil Horman nhor...@tuxdriver.com
CC: Matt Mackall m...@selenic.com
CC: linux-crypto@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 drivers/char/random.c | 17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index b86eae9..d0139df 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -437,6 +437,7 @@ struct entropy_store {
 	int entropy_count;
 	int entropy_total;
 	unsigned int initialized:1;
+	bool last_data_init;
 	__u8 last_data[EXTRACT_SIZE];
 };
 
@@ -957,6 +958,10 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
 	ssize_t ret = 0, i;
 	__u8 tmp[EXTRACT_SIZE];
 
+	/* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */
+	if (fips_enabled && !r->last_data_init)
+		nbytes += EXTRACT_SIZE;
+
 	trace_extract_entropy(r->name, nbytes, r->entropy_count, _RET_IP_);
 	xfer_secondary_pool(r, nbytes);
 	nbytes = account(r, nbytes, min, reserved);
 
@@ -967,6 +972,17 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
 
 		if (fips_enabled) {
 			unsigned long flags;
+
+			/* prime last_data value if need be, per fips 140-2 */
+			if (!r->last_data_init) {
+				spin_lock_irqsave(&r->lock, flags);
+				memcpy(r->last_data, tmp, EXTRACT_SIZE);
+				r->last_data_init = true;
+				nbytes -= EXTRACT_SIZE;
+				spin_unlock_irqrestore(&r->lock, flags);
+				extract_buf(r, tmp);
+			}
+
 			spin_lock_irqsave(&r->lock, flags);
 			if (!memcmp(tmp, r->last_data, EXTRACT_SIZE))
 				panic("Hardware RNG duplicated output!\n");
@@ -1086,6 +1102,7 @@ static void init_std_data(struct entropy_store *r)
 
 	r->entropy_count = 0;
 	r->entropy_total = 0;
+	r->last_data_init = false;
 	mix_pool_bytes(r, &now, sizeof(now), NULL);
 	for (i = r->poolinfo->POOLBYTES; i > 0; i -= sizeof(rv)) {
 		if (!arch_get_random_long(&rv))
-- 
1.7.1
Re: [PATCH] xfrm: fix hmac(sha256) truncation length
Herbert Xu wrote:
> On Wed, Mar 07, 2012 at 03:12:37PM -0500, Jarod Wilson wrote:
> > Commit bc74b0c8af17458ecae77f725e507ab5fd100105 added proper hmac
> > sha384 and sha512 variants with truncation lengths of 192 and 256
> > respectively, per RFC4868:
> 
> No, it was done deliberately to maintain backwards compatibility.
> Userspace should set the truncbits explicitly from now on.

Okay, I suspected that might be the case. No plans to ever invert that,
so that userspace has to explicitly set the shorter truncbits for
backwards compat?

-- 
Jarod Wilson
ja...@redhat.com
[PATCH] xfrm: fix hmac(sha256) truncation length
Commit bc74b0c8af17458ecae77f725e507ab5fd100105 added proper hmac sha384
and sha512 variants with truncation lengths of 192 and 256 respectively,
per RFC4868:

http://www.ietf.org/rfc/rfc4868.txt

The already-present hmac sha256 variant was left as-is, with a
truncation length of 96 bits, per an earlier draft of the RFC, as I
understand it (it predates 2.6.12-rc2, didn't look further back). This
doesn't play well out of the box with various other ipsec devices that
properly implement the current RFC value of 128-bit truncation for hmac
sha256. Easy fix, assuming there's not some reason I'm not clued into
about why this might have intentionally been left as-is.

CC: Paul Wouters pwout...@redhat.com
CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
CC: Martin Willi mar...@strongswan.org
CC: net...@vger.kernel.org
CC: linux-crypto@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 net/xfrm/xfrm_algo.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
index 791ab2e..f2b3ce2 100644
--- a/net/xfrm/xfrm_algo.c
+++ b/net/xfrm/xfrm_algo.c
@@ -203,7 +203,7 @@ static struct xfrm_algo_desc aalg_list[] = {
 
 	.uinfo = {
 		.auth = {
-			.icv_truncbits = 96,
+			.icv_truncbits = 128,
 			.icv_fullbits = 256,
 		}
 	},
-- 
1.7.1
[PATCH v2] ansi_cprng: enforce key != seed in fips mode
Apparently, NIST is tightening up its requirements for FIPS validation
with respect to RNGs. It's always been required that in fips mode, the
ansi cprng not be fed key and seed material that was identical, but
they're now interpreting FIPS 140-2, section AS07.09 as requiring that
the implementation itself must enforce the requirement. Easy fix, we
just do a memcmp of key and seed in fips_cprng_reset and call it a day.

v2: Per Neil's advice, ensure slen is sufficiently long before we
compare key and seed to avoid looking at potentially unallocated mem.

CC: Neil Horman nhor...@tuxdriver.com
CC: Stephan Mueller smuel...@atsec.com
CC: Steve Grubb sgr...@redhat.com
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/ansi_cprng.c |  8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
index ffa0245..6ddd99e 100644
--- a/crypto/ansi_cprng.c
+++ b/crypto/ansi_cprng.c
@@ -414,10 +414,18 @@ static int fips_cprng_get_random(struct crypto_rng *tfm, u8 *rdata,
 static int fips_cprng_reset(struct crypto_rng *tfm, u8 *seed, unsigned int slen)
 {
 	u8 rdata[DEFAULT_BLK_SZ];
+	u8 *key = seed + DEFAULT_BLK_SZ;
 	int rc;
 
 	struct prng_context *prng = crypto_rng_ctx(tfm);
 
+	if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
+		return -EINVAL;
+
+	/* fips strictly requires seed != key */
+	if (!memcmp(seed, key, DEFAULT_PRNG_KSZ))
+		return -EINVAL;
+
 	rc = cprng_reset(tfm, seed, slen);
 
 	if (!rc)
-- 
1.7.1
[PATCH] ansi_cprng: enforce key != seed in fips mode
Apparently, NIST is tightening up its requirements for FIPS validation
with respect to RNGs. It's always been required that in fips mode, the
ansi cprng not be fed key and seed material that was identical, but
they're now interpreting FIPS 140-2, section AS07.09 as requiring that
the implementation itself must enforce the requirement. Easy fix, we
just do a memcmp of key and seed in fips_cprng_reset and call it a day.

CC: Neil Horman nhor...@tuxdriver.com
CC: Stephan Mueller smuel...@atsec.com
CC: Steve Grubb sgr...@redhat.com
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/ansi_cprng.c |  5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
index ffa0245..a7fdcb4 100644
--- a/crypto/ansi_cprng.c
+++ b/crypto/ansi_cprng.c
@@ -414,10 +414,15 @@ static int fips_cprng_get_random(struct crypto_rng *tfm, u8 *rdata,
 static int fips_cprng_reset(struct crypto_rng *tfm, u8 *seed, unsigned int slen)
 {
 	u8 rdata[DEFAULT_BLK_SZ];
+	u8 *key = seed + DEFAULT_BLK_SZ;
 	int rc;
 
 	struct prng_context *prng = crypto_rng_ctx(tfm);
 
+	/* fips strictly requires seed != key */
+	if (!memcmp(seed, key, DEFAULT_PRNG_KSZ))
+		return -EINVAL;
+
 	rc = cprng_reset(tfm, seed, slen);
 
 	if (!rc)
-- 
1.7.1
Re: [PATCH] random: add blocking facility to urandom
Peter Zijlstra wrote:
> On Mon, 2011-09-12 at 09:56 -0400, Jarod Wilson wrote:
> > Thomas Gleixner wrote:
> > > Well, there is enough proof out there that the hardware you're
> > > using is a perfect random number generator by itself. So stop
> > > complaining about not having access to TPM chips if you can create
> > > an entropy source just by (ab)using the inherent randomness of
> > > modern CPU architectures to refill your entropy pool on the fly
> > > when the need arises w/o imposing completely unintuitive thresholds
> > > and user visible API changes.
> > 
> > We started out going down that path:
> > 
> > http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05778.html
> > 
> > We hit a bit of a roadblock with it though.
> 
> Have you guys seen this work:
> 
> http://lwn.net/images/conf/rtlws11/random-hardware.pdf

Yeah, that was part of the initial inspiration for the prior approach.
There were still concerns that clock entropy didn't meet the random
entropy pool's "perfect security" design goal. Without a rewrite of the
entropy accounting system, clock entropy isn't going in, so I think
looking into said rewrite is up next on my list.

-- 
Jarod Wilson
ja...@redhat.com
Re: [PATCH] random: add blocking facility to urandom
valdis.kletni...@vt.edu wrote:
> On Fri, 09 Sep 2011 10:21:13 +0800, Sandy Harris said:
> > Barring a complete failure of SHA-1, an enemy who wants to infer the
> > state from outputs needs astronomically large amounts of both data
> > and effort.
> 
> So let me get this straight - the movie-plot attack we're defending
> against is somebody reading literally gigabytes to terabytes (though I
> suspect realistic attacks will require peta/exabytes) of data from
> /dev/urandom, then performing all the data reduction needed to infer
> the state of enough of the entropy pool to infer all 160 bits of SHA-1
> when only 80 bits are output... *and* doing it all without taking
> *any* action that adds any entropy to the pool, and *also* ensuring
> that no other programs add any entropy via their actions before the
> reading and data reduction completes.
> 
> (Hint - if the attacker can do this, you're already pwned and have
> bigger problems)
> 
> /me thinks RedHat needs to start insisting on random drug testing for
> their security experts at BSI. Either that, or force BSI to share the
> really good stuff they've been smoking, or they need to learn how huge
> a number 2^160 *really* is

Well, previously, we were looking at simply improving random entropy
contributions, but quoting Matt Mackall from here:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05799.html

'I recommend you do some Google searches for "ssl timing attack" and
"aes timing attack" to get a feel for the kind of seemingly impossible
things that can be done and thereby recalibrate your scale of the
impossible.'

:)

Note: I'm not a crypto person. At all. I'm just the lucky guy who got
tagged to work on trying to implement various suggestions to satisfy
various government agencies.

-- 
Jarod Wilson
ja...@redhat.com
Re: [PATCH] random: add blocking facility to urandom
Thomas Gleixner wrote:
> On Fri, 9 Sep 2011, Steve Grubb wrote:
> > But what I was trying to say is that we can't depend on these
> > supplemental hardware devices like TPM because we don't have access
> > to the proprietary technical details that would be necessary to
> > supplement the analysis. And when it comes to TPM chips, I bet each
> > chip has different details and entropy sources and entropy
> > estimations and rates. Those details we can't get at, so we can't
> > solve the problem by including that hardware. That is the point I was
> > trying to make. :)
> 
> Well, there is enough proof out there that the hardware you're using is
> a perfect random number generator by itself. So stop complaining about
> not having access to TPM chips if you can create an entropy source just
> by (ab)using the inherent randomness of modern CPU architectures to
> refill your entropy pool on the fly when the need arises w/o imposing
> completely unintuitive thresholds and user visible API changes.

We started out going down that path:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05778.html

We hit a bit of a roadblock with it though.

-- 
Jarod Wilson
ja...@redhat.com
Re: [PATCH] random: add blocking facility to urandom
valdis.kletni...@vt.edu wrote: On Mon, 12 Sep 2011 09:55:15 EDT, Jarod Wilson said: Well, previously, we were looking at simply improving random entropy contributions, but quoting Matt Mackall from here: http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05799.html 'I recommend you do some Google searches for ssl timing attack and aes timing attack to get a feel for the kind of seemingly impossible things that can be done and thereby recalibrate your scale of the impossible.' If you're referring to Dan Bernstein's 2005 paper on AES timing attacks (http://cr.yp.to/antiforgery/cachetiming-20050414.pdf), note that it took him on the order of 2*25 packets per byte of AES key - targeting a dummy server intentionally designed to minimize noise. Although he correctly notes: Of course, I wrote this server to minimize the amount of noise in the timings available to the client. However, adding noise does not stop the attack: the client simply averages over a larger number of samples, as in [7]. In particular, reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples. one has to remember that he's measuring average differences in processing time on the order of single-digits of cycles - if any *real* processing was happening it would only take a few cache line misses or an 'if' statement branching the other way to almost totally drown out the AES computation. (Personally, I'm amazed that FreeBSD 4.8's kernel is predictable enough to do these measurements - probably helps a *lot* that the server was otherwise idle - if somebody else was getting a timeslice in between it would totally swamp the numbers). 
Dan's reference [7] mentions specifically that RSA blinding (first implemented by default all the way back in OpenSSL 0.9.7b) defeats that paper's timing attack. If anything, those attacks are the best proof possible that the suggested fix for /dev/urandom is a fool's errand - why would anybody bother trying to figure out what the next data out of /dev/urandom is, when they can simply wait for a few milliseconds and extract it out of whatever program read it? :)

I'm not referring to anything in particular, I'm mostly referring to the irony that one approach was shot down because, while not exactly practical, it's theoretically not impossible to figure out clock sample entropy contributions, which might weaken the strength of the entropy pool. Your argument is more or less directly opposed to the reasoning by which the clock entropy patches were deemed unacceptable. :)

Something to keep in mind: the whole impetus behind all this is *government* crypto certification requirements. They're paranoid. And something impractical at the individual level is less so at the determined, and willing to spend buckets of cash on resources, hostile foreign government level. At least in the minds of some governments. Note also: I don't necessarily share said governments' sentiments, I'm just tasked with trying to satisfy the requirements, and this was looked at as a potential minimally-invasive solution.

I still think paranoid government-types would be fine with applications falling down if urandom blocked, because that should *only* happen if the system is being abused, but I understand the objections, so that idea is off the table. I'm likely going to look into Sasha's suggestion to do something via CUSE next, followed by taking a long hard look at what's involved in rewriting the entropy estimation logic such that clock-based entropy would be acceptable.
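The "averages over a larger number of samples" point quoted from Bernstein above is just the law of large numbers at work. A toy simulation (mine, not from the thread; the 2-cycle difference and 100-cycle noise figures are illustrative assumptions) shows how a tiny timing difference buried in noise becomes visible with enough samples:

```python
import random
import statistics

random.seed(0)  # fixed seed so the example is deterministic

TRUE_DIFF = 2.0   # hypothetical per-request timing difference, in cycles
NOISE_SD = 100.0  # per-sample measurement noise, in cycles

def measure(extra, n):
    """Simulate n round-trip timings: base cost + optional extra + noise."""
    return [1000.0 + extra + random.gauss(0.0, NOISE_SD) for _ in range(n)]

# Standard error of each mean ~ 100/sqrt(100000) ~ 0.32 cycles,
# so a 2-cycle difference stands well clear of the noise floor.
n = 100_000
slow = measure(TRUE_DIFF, n)
fast = measure(0.0, n)
estimate = statistics.fmean(slow) - statistics.fmean(fast)
```

With only a handful of samples the same estimate would be meaningless; the attacker's cost is simply more queries, which is Valdis's point about how hard such attacks are against a noisy real server.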
-- Jarod Wilson ja...@redhat.com
[PATCH] random: add blocking facility to urandom
Certain security-related certifications and their respective review bodies have said that they find use of /dev/urandom for certain functions, such as setting up ssh connections, acceptable, but if and only if /dev/urandom can block after a certain threshold of bytes have been read from it with the entropy pool exhausted. Initially, we were investigating increasing entropy pool contributions, so that we could simply use /dev/random, but since that hasn't (yet) panned out, and upwards of five minutes to establish an ssh connection using an entropy-starved /dev/random is unacceptable, we started looking at the blocking urandom approach.

At present, urandom never blocks, even after all entropy has been exhausted from the entropy input pool. random immediately blocks when the input pool is exhausted. Some use cases want behavior somewhere in between these two, where blocking only occurs after some number of bytes have been read following input pool entropy exhaustion. It's possible to accomplish this and make it fully user-tunable, by adding a sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In the out-of-the-box configuration, urandom behaves as it always has, but with a threshold value set, we'll block when it's been exceeded.

Tested by dd'ing from /dev/urandom in one window, and starting/stopping a cat of /dev/random in the other, with some debug spew added to the urandom read function to verify functionality.

CC: Matt Mackall m...@selenic.com
CC: Neil Horman nhor...@redhat.com
CC: Herbert Xu herbert...@redhat.com
CC: Steve Grubb sgr...@redhat.com
CC: Stephan Mueller stephan.muel...@atsec.com
CC: lkml linux-ker...@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---
Resending, neglected to cc lkml the first time, and this change could have implications outside just the crypto layer...
 drivers/char/random.c |   82 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 81 insertions(+), 1 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c35a785..cf48b0f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -289,6 +289,13 @@ static int trickle_thresh __read_mostly = INPUT_POOL_WORDS * 28;
 static DEFINE_PER_CPU(int, trickle_count);

 /*
+ * In normal operation, urandom never blocks, but optionally, you can
+ * set urandom to block after urandom_block_thresh bytes are read with
+ * the entropy pool exhausted.
+ */
+static int urandom_block_thresh = 0;
+
+/*
  * A pool of size .poolwords is stirred with a primitive polynomial
  * of degree .poolwords over GF(2).  The taps for various sizes are
  * defined below.  They are chosen to be evenly spaced (minimum RMS
@@ -383,6 +390,7 @@ static struct poolinfo {
  * Static global variables
  */
 static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
+static DECLARE_WAIT_QUEUE_HEAD(urandom_read_wait);
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 static struct fasync_struct *fasync;
@@ -554,6 +562,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	/* should we wake readers? */
 	if (r == &input_pool && entropy_count >= random_read_wakeup_thresh) {
 		wake_up_interruptible(&random_read_wait);
+		wake_up_interruptible(&urandom_read_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 	}
 	spin_unlock_irqrestore(&r->lock, flags);
@@ -1060,7 +1069,55 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 static ssize_t
 urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 {
-	return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	ssize_t n;
+	static int excess_bytes_read;
+
+	/* this is the default case with no urandom blocking threshold set */
+	if (!urandom_block_thresh)
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+
+	if (nbytes == 0)
+		return 0;
+
+	DEBUG_ENT("reading %d bits\n", nbytes*8);
+
+	/* urandom blocking threshold set, but we have sufficient entropy */
+	if (input_pool.entropy_count >= random_read_wakeup_thresh) {
+		excess_bytes_read = 0;
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	}
+
+	/* low on entropy, start counting bytes read */
+	if (excess_bytes_read + nbytes < urandom_block_thresh) {
+		n = extract_entropy_user(&nonblocking_pool, buf, nbytes);
+		excess_bytes_read += n;
+		return n;
+	}
+
+	/* low entropy read threshold exceeded, now we have to block */
+	n = nbytes;
+	if (n > SEC_XFER_SIZE)
+		n = SEC_XFER_SIZE;
+
+	n = extract_entropy_user(&nonblocking_pool, buf, n);
+	excess_bytes_read += n;
+
+	if (file->f_flags & O_NONBLOCK)
+		return -EAGAIN;
+
+	DEBUG_ENT("sleeping?\n");
+
+	wait_event_interruptible(urandom_read_wait,
+		input_pool.entropy_count >= random_read_wakeup_thresh);
+
+	DEBUG_ENT("awake\n");
+
+	if (signal_pending(current))
+		return -ERESTARTSYS;
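The accounting in the patched urandom_read() can be modeled in a few lines of userspace code. This is a simplification I wrote to illustrate the state machine; real entropy extraction, wakeups and locking are omitted, and the 64-bit wakeup threshold is an assumed default:

```python
SEC_XFER_SIZE = 512  # per-read transfer cap, matching the kernel constant

class UrandomModel:
    """Toy model of the patch's blocking-threshold accounting."""
    def __init__(self, block_thresh=0, wakeup_thresh=64):
        self.block_thresh = block_thresh    # urandom_block_thresh sysctl
        self.wakeup_thresh = wakeup_thresh  # random_read_wakeup_thresh
        self.entropy_count = 0              # input pool entropy estimate
        self.excess_bytes_read = 0

    def read(self, nbytes):
        """Return (bytes_delivered, would_block)."""
        if not self.block_thresh:             # default: never block
            return nbytes, False
        if self.entropy_count >= self.wakeup_thresh:
            self.excess_bytes_read = 0        # entropy OK: reset counter
            return nbytes, False
        if self.excess_bytes_read + nbytes < self.block_thresh:
            self.excess_bytes_read += nbytes  # low entropy: count bytes
            return nbytes, False
        n = min(nbytes, SEC_XFER_SIZE)        # threshold hit: partial read,
        self.excess_bytes_read += n           # then the caller would sleep
        return n, True
```

Note how the counter resets any time the input pool climbs back above the wakeup threshold, so blocking only ever engages while the pool stays drained.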
Re: [PATCH] random: add blocking facility to urandom
Sasha Levin wrote:
On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:
Certain security-related certifications and their respective review bodies have said that they find use of /dev/urandom for certain functions, such as setting up ssh connections, acceptable, but if and only if /dev/urandom can block after a certain threshold of bytes have been read from it with the entropy pool exhausted. Initially, we were investigating increasing entropy pool contributions, so that we could simply use /dev/random, but since that hasn't (yet) panned out, and upwards of five minutes to establish an ssh connection using an entropy-starved /dev/random is unacceptable, we started looking at the blocking urandom approach.

Can't you accomplish this in userspace by trying to read as much as you can out of /dev/random without blocking, then reading out of /dev/urandom the minimum between the allowed threshold and remaining bytes, and then blocking on /dev/random? For example, let's say you need 100 bytes of randomness, and your threshold is 30 bytes. You try reading out of /dev/random and get 50 bytes, at that point you'll read another 30 (=threshold) bytes out of /dev/urandom and then you'll need to block on /dev/random until you get the remaining 20 bytes.

We're looking for a generic solution here that doesn't require re-educating every single piece of userspace. And anything done in userspace is going to be full of possible holes -- there needs to be something in place that actually *enforces* the policy, and centralized accounting/tracking, lest you wind up with multiple processes racing to grab the entropy.

-- Jarod Wilson ja...@redhat.com
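Sasha's userspace scheme amounts to a simple byte-budget split. Sketched out (my code, illustrating his example of 100 bytes needed with a 30-byte threshold):

```python
def plan_reads(needed, threshold, available_from_random):
    """Split a request between /dev/random and /dev/urandom.

    Take whatever strong bytes are immediately available, spend at
    most `threshold` weak bytes from urandom, and block on
    /dev/random for whatever remains.
    Returns (from_random_now, from_urandom, block_on_random_for).
    """
    from_random = min(needed, available_from_random)
    from_urandom = min(threshold, needed - from_random)
    block_for = needed - from_random - from_urandom
    return from_random, from_urandom, block_for
```

As the reply notes, the catch isn't the arithmetic: it's that every consumer would have to adopt this policy voluntarily, with nothing centrally enforcing it or accounting across processes.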
Re: [PATCH] random: add blocking facility to urandom
Sasha Levin wrote:
On Wed, 2011-09-07 at 14:26 -0400, Jarod Wilson wrote:
Sasha Levin wrote:
On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:
Certain security-related certifications and their respective review bodies have said that they find use of /dev/urandom for certain functions, such as setting up ssh connections, acceptable, but if and only if /dev/urandom can block after a certain threshold of bytes have been read from it with the entropy pool exhausted. Initially, we were investigating increasing entropy pool contributions, so that we could simply use /dev/random, but since that hasn't (yet) panned out, and upwards of five minutes to establish an ssh connection using an entropy-starved /dev/random is unacceptable, we started looking at the blocking urandom approach.

Can't you accomplish this in userspace by trying to read as much as you can out of /dev/random without blocking, then reading out of /dev/urandom the minimum between the allowed threshold and remaining bytes, and then blocking on /dev/random? For example, let's say you need 100 bytes of randomness, and your threshold is 30 bytes. You try reading out of /dev/random and get 50 bytes, at that point you'll read another 30 (=threshold) bytes out of /dev/urandom and then you'll need to block on /dev/random until you get the remaining 20 bytes.

We're looking for a generic solution here that doesn't require re-educating every single piece of userspace. [...]

A flip-side here is that you're going to break every piece of userspace which assumed (correctly) that /dev/urandom never blocks.

Out of the box, that continues to be the case. This just adds a knob so that it *can* block at a desired threshold.

Since this is a sysctl you can't fine-tune which processes/threads/file-handles will block on /dev/urandom and which ones won't.

The security requirement is that everything blocks. [..]

And anything done in userspace is going to be full of possible holes [..]

Such as?
Is there an example of a case which can't be handled in userspace?

How do you mandate preventing reads from urandom when there isn't sufficient entropy? You likely wind up needing to restrict access to the actual urandom via permissions and selinux policy or similar, and then run a daemon or something that provides a pseudo-urandom that brokers access to the real urandom. Get the permissions or policy wrong, and havoc ensues. An issue with the initscript or udev rule to hide the real urandom, and things can fall down. It's a whole lot more fragile than this approach, and a lot more involved in setting it up. [..]

there needs to be something in place that actually *enforces* the policy, and centralized accounting/tracking, lest you wind up with multiple processes racing to grab the entropy.

Does the weak entropy you get out of /dev/urandom get weaker the more you pull out of it? I assumed that this change is done because you want to limit the amount of weak entropy mixed in with strong entropy.

The argument is that once there's no entropy left, an attacker only needs X number of samples before they can start accurately determining what the next random number will be.

BTW, is the threshold based on research done on the Linux RNG? Or is it an arbitrary number that would be set by your local sysadmin?

Stephan (cc'd on the thread) is attempting to get some feedback from BSI as to what they have in the way of an actual number. The implementation has a goal of being flexible enough for whatever a given certification or security requirement says that number is.

-- Jarod Wilson ja...@redhat.com
Re: [PATCH] random: add blocking facility to urandom
Ted Ts'o wrote:
On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
We're looking for a generic solution here that doesn't require re-educating every single piece of userspace. And anything done in userspace is going to be full of possible holes -- there needs to be something in place that actually *enforces* the policy, and centralized accounting/tracking, lest you wind up with multiple processes racing to grab the entropy.

Yeah, but there are userspace programs that depend on urandom not blocking... so your proposed change would break them.

But only if you've set the sysctl to a non-zero value, and even then, only if someone is actively draining entropy from /dev/random. Otherwise, in practice, it behaves the same as always. Granted, I haven't tested with all possible userspace to see how it might fall down, but suggestions for progs to try would be welcomed. But again, I want to stress that out of the box, there's absolutely no change to the way urandom behaves, no blocking, this *only* kicks in if you twiddle the sysctl because you have some sort of security requirement that mandates it.

-- Jarod Wilson ja...@redhat.com
Re: [PATCH] random: add blocking facility to urandom
Sasha Levin wrote:
On Wed, 2011-09-07 at 16:56 -0400, Steve Grubb wrote:
On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:
On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
We're looking for a generic solution here that doesn't require re-educating every single piece of userspace. And anything done in userspace is going to be full of possible holes -- there needs to be something in place that actually *enforces* the policy, and centralized accounting/tracking, lest you wind up with multiple processes racing to grab the entropy.

Yeah, but there are userspace programs that depend on urandom not blocking... so your proposed change would break them.

The only time this kicks in is when a system is under attack. If you have set this and the system is running as normal, you will never notice it even there. Almost all uses of urandom grab 4 bytes and seed openssl or libgcrypt or nss. It then uses those libraries. There are the odd cases where something uses urandom to generate a key or otherwise grab a chunk of bytes, but these are still small reads in the scheme of things. Can you think of any legitimate use of urandom that grabs 100K or 1M from urandom? Even those numbers still won't hit the sysctl on a normally functioning system.

As far as I remember, several wipe utilities are using /dev/urandom to overwrite disks (possibly several times).

Which should generate disk activity and feed entropy to urandom.

I thought you need to feed random, not urandom.

I think they draw from the same pool. There is a blocking and a non blocking pool.

There's a single shared input pool that both the blocking and non-blocking pools pull from.
New entropy data is added to the input pool, then transferred to the interface-specific pools as needed.

Anyway, it won't happen fast enough to actually not block. Writing 1TB of urandom into a disk won't generate 1TB (or anything close to that) of randomness to cover for itself.

We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on the scale of 8,000,000:1 or higher.

I'm just saying that writing 1TB into a disk using urandom will start to block, it won't generate enough randomness by itself.

Writing 1TB of data to a disk using urandom won't block at all if nobody is using /dev/random. We seed /dev/urandom with entropy, then it just behaves as a cryptographic RNG; it's not pulling out any further entropy data until it needs to reseed, and thus the entropy count isn't dropping to 0, so we're not blocking. Someone has to actually drain the entropy, typically by pulling a fair bit of data from /dev/random, for the blocking to actually come into play.

Why not implement it as a user mode CUSE driver that would wrap /dev/urandom and make it behave any way you want to? Why push it into the kernel?

Hadn't considered CUSE. But it does have the issues Steve mentioned in his earlier reply. Another proposal that has been kicked around: a 3rd random chardev, which implements this functionality, leaving urandom unscathed. Some udev magic or a driver param could move/disable/whatever urandom and put this alternate device in its place. Ultimately, identical behavior, but the true urandom doesn't get altered at all.

-- Jarod Wilson ja...@redhat.com
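The pool arrangement described above - one shared input pool feeding a blocking and a non-blocking output pool - can be captured in a toy model (my simplification; the real driver transfers and hashes data, not just counters):

```python
class InputPool:
    """Single shared pool; both output pools draw credited entropy from it."""
    def __init__(self):
        self.entropy_count = 0

    def credit(self, bits):
        self.entropy_count += bits

    def transfer(self, bits):
        """Move up to `bits` of credited entropy to an output pool."""
        moved = min(bits, self.entropy_count)
        self.entropy_count -= moved
        return moved

class OutputPool:
    """Stands in for the blocking (/dev/random) or non-blocking
    (/dev/urandom) pool: the blocking pool only produces while
    entropy lasts; the non-blocking one keeps producing CSPRNG
    output, reseeding when it can."""
    def __init__(self, inp, blocking):
        self.inp = inp
        self.blocking = blocking
        self.entropy_count = 0

    def read_bits(self, bits):
        self.entropy_count += self.inp.transfer(bits)
        delivered = min(bits, self.entropy_count) if self.blocking else bits
        self.entropy_count = max(0, self.entropy_count - delivered)
        return delivered
```

This also illustrates the point about wipe utilities: huge urandom reads drain the shared input pool's credit, but urandom itself keeps delivering bytes regardless.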
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
Matt Mackall wrote:
On Wed, 2011-06-15 at 10:49 -0400, Jarod Wilson wrote:
Matt Mackall wrote:
On Tue, 2011-06-14 at 18:51 -0400, Jarod Wilson wrote:
Matt Mackall wrote: ...

But that's not even the point. Entropy accounting here is about providing a theoretical level of security above cryptographically strong. As the source says: Even if it is possible to analyze SHA in some clever way, as long as the amount of data returned from the generator is less than the inherent entropy in the pool, the output data is totally unpredictable. This is the goal of the code as it exists. And that goal depends on consistent _underestimates_ and accurate accounting.

Okay, so as you noted, I was only crediting one bit of entropy per byte mixed in. Would there be some higher mixed-to-credited ratio that might be sufficient to meet the goal?

As I've mentioned elsewhere, I think something around .08 bits per timestamp is probably a good target. That's the entropy content of a coin-flip that is biased to flip heads 99 times out of 100. But even that isn't good enough in the face of a 100Hz clock source. And obviously the current system doesn't handle fractional bits at all.

What if only one bit every n samples were credited? So 1/n bits per timestamp, effectively, and for an n of 100, that would yield .01 bits per timestamp. Something like this:

Something like that would work, sure. But it's a hack/abuse -relative to the current framework-. I'm reluctant to just pile on the hacks on the current system, as that just means getting it coherent is that much further away.

Something I should have probably made clearer was that we were looking for a semi-quick improvement for a particular use case that involves a certain 2.6.32-based kernel... For the short term, we're looking at some userspace alternatives, such as a Linux implementation of an entropy seed approach BSI approves of.
Reference doc here, but in German: https://www.bsi.bund.de/ContentBSI/Publikationen/TechnischeRichtlinien/tr02102/index_htm.html

Longer-term, we're having some discussions related to the revamped framework you've laid out. I'll table this clocksource-based entropy contribution code for now. Also, thank you for putting up with me. ;)

-- Jarod Wilson ja...@redhat.com
Re: [PATCH 2/5] clocksource: add support for entropy-generation function
Thomas Gleixner wrote:
On Mon, 13 Jun 2011, Jarod Wilson wrote:
Add a new function pointer to struct clocksource that can optionally be filled in by clocksources deemed to be high enough resolution to feed the random number generator entropy pool.

Uurrg.

+ * @entropy:		random entropy pool addition function (optional, and
+ *			requires a fairly high-resolution clocksource)

Why do you want to do that? Looking at the implementations of TSC and HPET it's the same code. We really do not want to add that all over the place. We can make that a property flag and the entropy function common to the core code.

+/**
+ * Do we have at least one clocksource that can generate entropy?
+ */
+bool clocksource_entropy_available(void)
+{
+	struct clocksource *src;
+	bool entropy_possible = false;
+
+	mutex_lock(&clocksource_mutex);
+	list_for_each_entry(src, &clocksource_list, list) {
+		if (src->entropy) {
+			entropy_possible = true;
+			break;
+		}
+	}
+	mutex_unlock(&clocksource_mutex);
+
+	return entropy_possible;
+}
+EXPORT_SYMBOL(clocksource_entropy_available);

That should be evaluated when clocksources are registered, not at some random point in time, which might return total nonsense as it's not guaranteed that the clocksource which has the entropy property is going to end up as the current clocksource.

+/**
+ * Call the clocksource's entropy-generation function, if set
+ */
+void clocksource_add_entropy(void)
+{
+	if (!curr_clocksource->entropy)
+		return;

Why restricting it to the current clocksource? We can use the best registered one for this which has the entropy property set.

Yeah, John had some similar suggestions, and locally, I've got a modified version that instead of adding entropy functions for each clocksource, adds an entropy rating, then the highest rated entropy clocksource gets used, but a common clocksource entropy function that calls the chosen clocksource's read function.
The entropy clocksource function gets picked by way of another function similar to clocksource_select, called in all the same places. However, since none of this is going to be viable until the random code is significantly restructured, I haven't bothered with posting any of the updates I've made. I can toss the delta out there if anyone really wants to take a look at it, but otherwise, I'll just sit on it until said restructuring happens.

-- Jarod Wilson ja...@redhat.com
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
Matt Mackall wrote:
On Tue, 2011-06-14 at 18:51 -0400, Jarod Wilson wrote:
Matt Mackall wrote: ...

But that's not even the point. Entropy accounting here is about providing a theoretical level of security above cryptographically strong. As the source says: Even if it is possible to analyze SHA in some clever way, as long as the amount of data returned from the generator is less than the inherent entropy in the pool, the output data is totally unpredictable. This is the goal of the code as it exists. And that goal depends on consistent _underestimates_ and accurate accounting.

Okay, so as you noted, I was only crediting one bit of entropy per byte mixed in. Would there be some higher mixed-to-credited ratio that might be sufficient to meet the goal?

As I've mentioned elsewhere, I think something around .08 bits per timestamp is probably a good target. That's the entropy content of a coin-flip that is biased to flip heads 99 times out of 100. But even that isn't good enough in the face of a 100Hz clock source. And obviously the current system doesn't handle fractional bits at all.

What if only one bit every n samples were credited? So 1/n bits per timestamp, effectively, and for an n of 100, that would yield .01 bits per timestamp. Something like this:

void add_clocksource_randomness(int clock_delta)
{
	static int samples;
	/* only mix in the low byte */
	u8 mix = clock_delta & 0xff;

	DEBUG_ENT("clock event %u\n", mix);

	preempt_disable();
	if (input_pool.entropy_count > trickle_thresh &&
	    (__get_cpu_var(trickle_count)++ & 0xfff))
		goto out;

	mix_pool_bytes(&input_pool, &mix, sizeof(mix));

	samples++;
	/* Only credit one bit per 100 samples to be conservative */
	if (samples == 100) {
		credit_entropy_bits(&input_pool, sizeof(mix));
		samples = 0;
	}
out:
	preempt_enable();
}

Additionally, this function would NOT be exported, it would only be utilized by a new clocksource entropy contribution function in kernel/time/clocksource.c.
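Matt's .08-bits-per-timestamp figure is just the Shannon entropy of a 99:1 biased coin, which is easy to check:

```python
from math import log2

def shannon_entropy(p):
    """Entropy in bits of a binary event that occurs with probability p."""
    q = 1.0 - p
    return -(p * log2(p) + q * log2(q))

# A coin that flips heads 99 times out of 100:
h = shannon_entropy(0.01)  # ~0.0808 bits per flip
```

Crediting one bit per 100 samples, as in the snippet above, corresponds to assuming .01 bits per timestamp, comfortably below that estimate - conservative, but as Matt notes, still built on top of a framework that cannot account for fractional bits.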
Locally, I've made most of the changes as discussed with John, so clocksources now have an entropy rating rather than an entropy function, and if the rating is not high enough, the clocksource won't be able to add entropy -- all clocksources default to a rating of 0, only hpet and tsc have been marked otherwise. Additionally, hpet has a higher rating than tsc, so it'll be preferred over tsc, even if tsc is the system timer clocksource. This code will effectively do absolutely nothing if not running on x86 PC hardware with an hpet or tsc (and it seems maybe tsc shouldn't even be considered, so perhaps this should be hpet-only).

One further thought: what if reads of both the hpet and tsc were mixed together to form the sample value actually fed into the entropy pool? This of course assumes the system has both available, and that the tsc is actually fine-grained enough to be usable, but maybe it strengthens the randomness of the sample value at least somewhat? This could also be marked as experimental or dangerous or what have you, so that it's a kernel builder's conscious decision to enable clock-based entropy contributions. (If I appear to be grasping at straws here, well, I probably am.) ;)

Look, I understand what I'm trying to say here is very confusing, so please make an effort to understand all the pieces together:
- the driver is designed for -perfect- security as described above
- the usual assumptions about observability of network samples and other timestamps ARE FALSE on COMMON NON-PC HARDWARE
- thus network sampling is incompatible with the CURRENT design
- nonetheless, the current design of entropy accounting is not actually meeting its goals in practice

Heh, I guess that answers my question already...

- thus we need an alternative to entropy accounting
- that alternative WILL be compatible with sampling insecure sources

Okay. So I admit to really only considering and/or caring about x86 hardware, which doesn't seem to have helped my cause.
But you do seem to be saying that clocksource-based sampling *will* be compatible with the new alternative, correct? And is said alternative something on the relatively near-term radar?

Various people have offered to spend some time fixing this; I haven't had time to look at it for a while.

Okay, I know how that goes. So not likely to come to fruition in the immediate near-term. I'd offer to spend some time working on it, but I don't think I'm qualified. :)

-- Jarod Wilson ja...@redhat.com
Re: [PATCH 4/5] tsc: wire up entropy generation function
H. Peter Anvin wrote: On 06/13/2011 05:39 PM, Kent Borg wrote: I was assuming that drivers, responding to an interrupt from some external event, would want to make this call. Such as a network driver. Those already are doing this. Two points: 1. Why look at the high-order bits? How are they going to have values that cannot be inferred? If you are looking for entropy, the low-order bits are where they're going to keep it. (Think Willie Sutton.) If looking at the low byte is cheaper, then let's be cheap. If, however, looking at more bytes *is* as cheap, then there is no harm. But in any case let's keep the code lean enough that it can be called from an interrupt service routine. Consider the case where the TSC is running in steps of 64 and you're using the low 6 bits. What if we mix the low byte into the pool, but only increase the entropy estimate if a comparison with the prior read actually shows a change? Another thought is to perhaps mix the counter value with a value read from ansi_cprng, which is similar to a tactic referred to in NIST's 140sp750.pdf. Peter, as long as I've got your attention here... Patch 3 in the set adds essentially the same sort of support using the hpet instead of the tsc. Is that less contentious, as far as precision in the low byte(s) to provide entropy data? My initial local implementation was actually hpet-only, before reworking things to try to be more generic. -- Jarod Wilson ja...@redhat.com
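The mix-but-only-credit-on-change idea above can be sketched in a few lines of userspace C. This is purely an illustration of the bookkeeping, not the code from this series; the names and the toy pool are made up:

```c
#include <stdint.h>
#include <assert.h>

/* Toy illustration: always mix the low byte of a counter read into the
 * pool, but only credit the entropy estimate when the value actually
 * differs from the previous read, so a repeating sample earns nothing. */
struct toy_pool {
    uint8_t  mix;       /* stand-in for the real mixing function */
    unsigned credited;  /* bits of entropy credited so far */
};

static uint8_t ce_prev_sample;

static void sample_counter(struct toy_pool *pool, uint64_t counter)
{
    uint8_t low = counter & 0xff;

    pool->mix ^= low;             /* always mix the sample in ... */
    if (low != ce_prev_sample)    /* ... but only credit on change */
        pool->credited += 1;      /* conservative: 1 bit per byte */
    ce_prev_sample = low;
}
```

A real implementation would mix into the input pool rather than a single byte, but the credit-gating logic would look much the same.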
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
john stultz wrote: On Mon, 2011-06-13 at 18:06 -0400, Jarod Wilson wrote: Many server systems are seriously lacking in sources of entropy, as we typically only feed the entropy pool by way of input layer events, a few NIC driver interrupts and disk activity. A non-busy server can easily become entropy-starved. We can mitigate this somewhat by periodically mixing in entropy data based on the delta between multiple high-resolution clocksource reads, per: https://www.osadl.org/Analysis-of-inherent-randomness-of-the-L.rtlws11-developers-okech.0.html Additionally, NIST already approves of similar implementations, so this should be usable in high-security deployments requiring a fair chunk of available entropy data for frequent use of /dev/random. http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp750.pdf (section 6.1 mentions a clock-based seed source). Yes, thus far, I've only bothered with x86-based clocksources, since that is what I can test most easily. If this patch series isn't deemed totally insane, adding support for other clocksources should be trivial. Also note that unless you explicitly build and load the clock-entropy driver, there should be little or no change whatsoever to the way anything behaves right now, it's purely opt-in. There's probably room for some improvement here, and I'm kind of outside my comfort area, but hey, it seems to work pretty well here in my own testing, so here it is... Jarod Wilson (5): random: add new clocksource entropy interface clocksource: add support for entropy-generation function hpet: wire up entropy generation function tsc: wire up entropy generation function misc: add clocksource-based entropy generation driver So this is interesting work, but I have a few questions. 1) hpet_add_entropy() and tsc_add_entropy() are basically identical. It seems there should be a generic bit of code that uses the clocksource's read() function and does the same calculation.
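A userspace sketch of the generic helper being suggested here, under the assumption that a clocksource can be reduced to a read() callback plus a suitability rating -- the struct layout and all names are invented for illustration and are not the real kernel clocksource API:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical reduced clocksource: just a read() callback plus a
 * rating saying whether it is suitable for entropy generation. */
struct toy_clocksource {
    const char *name;
    uint64_t (*read)(void);
    int entropy_rating;     /* 0 = not suitable for entropy */
};

/* One generic helper replaces per-clocksource hpet_add_entropy() and
 * tsc_add_entropy(): take two successive reads and return the delta,
 * the quantity that would be mixed into the pool. */
static uint64_t clocksource_entropy_delta(const struct toy_clocksource *cs)
{
    uint64_t a, b;

    if (cs == NULL || cs->entropy_rating <= 0)
        return 0;
    a = cs->read();
    b = cs->read();
    return b - a;
}

/* toy counter advancing by 3 per read, standing in for hpet/tsc */
static uint64_t fake_counter;
static uint64_t fake_read(void) { return fake_counter += 3; }
static struct toy_clocksource fake_cs    = { "fake",    fake_read, 1 };
static struct toy_clocksource unrated_cs = { "unrated", fake_read, 0 };
```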
2) If the .entropy() functions really aren't clocksource specific, it doesn't really seem like this functionality belongs in the clocksource layer. Instead it seems it should be a layer above, that makes use of the clocksource code in a generic fashion. Wasn't sure at first if there might be a need for more differentiation in the entropy-gathering functions, but yeah, at least with these two, just using the clocksource read function from a common entropy-gathering function does look to make more sense. 3) Why are you making use of the curr_clocksource? The timekeeping code has very strict rules about what makes a clocksource usable for timekeeping. I suspect entropy generation has different requirements, and thus shouldn't lean on the curr_clocksource value (ie: TSC may be unsynced or halts in idle, and thus unusable for time, but likely still ok for entropy). 4) Further on that point, there's a likely bug: If an entropy supporting clocksource is available, clocksource_entropy_available() will return true, but if the one curr_clocksource is not the one that supports entropy, clocksource_add_entropy() won't do anything. I suspect the add_entropy function needs to scan for a entropy flagged clocksource and use it instead. Yeah, there are definitely different requirements. Using curr_clocksource was sort of the easy way out, I guess. :) So yeah, its entirely possible for someone to set a non-entropy-generating clocksource as their active one, and still have an entropy-generating one that is available -- I was actually running tsc as my primary clocksource and using the hpet for entropy in my initial internal implementation here. Each clocksource currently has a rating for how good a timekeeping clocksource it is, would it make sense to have a second rating that indicates how good an entropy source it is, and use that to determine which clocksource would best serve for entropy generation? 5) Does the entropy calculation need to be flagged clocksource by clocksource? 
Or could a generic metric be made (ie: frequency?) in order to enable such functionality on all capable clocksources automatically? I honestly haven't a clue here. I'm not sure if there are possibly clocksources that have a relatively high frequency, but lack precision in their lowest bits, nor am I sure (yet?) how to figure out what the frequency and/or precision of a given clocksource actually is. I guess I assume its non-trivial, since we have to have the 'rating' value to choose primary timekeeping clocksource. My initial thinking is that it would be best to have this purely opt-in, for clocksources that have been evaluated and deemed acceptable for this use. So I'll see what I can come up with in the way of removing the per-clocksource entropy functions. Doing so adds a new complication, by way of removing what I was keying off of to decide if the clocksource should be providing entropy, but perhaps I can try working in an entropy quality rating at the same
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
Jarod Wilson wrote: Matt Mackall wrote: On Mon, 2011-06-13 at 18:06 -0400, Jarod Wilson wrote: Many server systems are seriously lacking in sources of entropy, as we typically only feed the entropy pool by way of input layer events, a few NIC driver interrupts and disk activity. A non-busy server can easily become entropy-starved. We can mitigate this somewhat by periodically mixing in entropy data based on the delta between multiple high-resolution clocksource reads, per: https://www.osadl.org/Analysis-of-inherent-randomness-of-the-L.rtlws11-developers-okech.0.html Additionally, NIST already approves of similar implementations, so this should be usable in high-security deployments requiring a fair chunk of available entropy data for frequent use of /dev/random. So, mixed feelings here: Yes: it's a great idea to regularly mix other data into the pool. More samples are always better for RNG quality. Maybe: the current RNG is not really designed with high-bandwidth entropy sources in mind, so this might introduce non-negligible overhead in systems with, for instance, huge numbers of CPUs. The current implementation is opt-in, and single-threaded, so at least currently, I don't think there should be any significant issues. I stand corrected. Hadn't considered the possible issues with doing a regular preempt_disable() and __get_cpu_var() on a system with tons of cpus. (I'm still not sure exactly what the issues would be, but I think I see the potential for issues of some sort.) -- Jarod Wilson ja...@redhat.com
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
Matt Mackall wrote: On Tue, 2011-06-14 at 11:18 -0400, Jarod Wilson wrote: Matt Mackall wrote: ... No: it's not a great idea to _credit_ the entropy count with this data. Someone watching the TSC or HPET from userspace can guess when samples are added by watching for drop-outs in their sampling (ie classic timing attack). I'm admittedly a bit of a novice in this area... Why does it matter if someone watching knows more or less when a sample is added? It doesn't really reveal anything about the sample itself, if we're using a high-granularity counter value's low bits -- round-trip to userspace has all sorts of inherent timing jitter, so determining the low-order bits the kernel got by monitoring from userspace should be more or less impossible. And the pool is constantly changing, making it a less static target on an otherwise mostly idle system. I recommend you do some Google searches for ssl timing attack and aes timing attack to get a feel for the kind of seemingly impossible things that can be done and thereby recalibrate your scale of the impossible. Hm. These are attempting to reveal a static key though. We're talking about trying to reveal the exact value of the counter when it was read by the kernel. And trying to do so repeatedly, several times per second. And this can't be done without getting some form of local system access, so far as I know. And the act of trying to monitor and calculate deltas should serve to introduce even more potential randomness into the actual clock read deltas. This code is largely spurred on by someone here at Red Hat who I probably should have had in the cc list to begin with, Steve Grubb, who pointed to slides 23-25 and the chart in slide 30 of this doc... https://www.osadl.org/fileadmin/dam/presentations/RTLWS11/okech-inherent-randomness.pdf ...as the primary arguments for why this is a good source of entropy.
(I see you do credit only 1 bit per byte: that's fairly conservative, true, but it must be _perfectly conservative_ for the theoretical requirements of /dev/random to be met. These requirements are in fact known to be unfulfillable in practice(!), but that doesn't mean we should introduce more users of entropy accounting. Instead, it means that entropy accounting is broken and needs to be removed.) Hrm. The government seems to have a different opinion. Various certs have requirements for some sort of entropy accounting and minimum estimated entropy guarantees. We can certainly be even more conservative than 1 bit per byte, but yeah, I don't really have a good answer for perfectly conservative, and I don't know what might result (on the government cert front) from removing entropy accounting altogether... Well, the deal with accounting is this: if you earn $.90 and spend $1.00 every day, you'll eventually go broke, even if your rounded-to-the-nearest-dollar accounting tells you you're solidly in the black. The only distinction between /dev/random and urandom is that we claim that /dev/random is always solidly in the black. But as we don't have a firm theoretical basis for making our accounting estimates on the input side, the whole accounting thing kind of breaks down into a kind of busted rate-limiter. Well, they *are* understood to be estimates, and /dev/random does block when we've spent everything we've (estimated we've) got, and at least circa 2.6.18 in RHEL5.4, NIST was satisfied that /dev/random's estimation was good enough by way of some statistical analysis done on data dumped out of it. What if we could show through statistical analysis that our entropy estimation is still good enough even with clock data mixed in? (Ignoring the potential of timing attacks for the moment). We'd do better counting a raw number of samples per source, and then claiming that we've reached a 'full' state when we reach a certain 'diversity x depth' score. 
And then assuring we have a lot of diversity and depth going into the pool. Hrm. I presume NIST and friends would still need some way to translate that into estimated bits of entropy for the purposes of having a common metric with other systems, but I guess we could feel better about the entropy if we had some sort of guarantee that no more than x% came from a single entropy source -- such as, say, no more than 25% of your entropy bits are from a clocksource. But then we may end up right back where we are right now -- a blocking entropy-starved /dev/random on a server system that has no other significant sources generating entropy. -- Jarod Wilson ja...@redhat.com
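The 'diversity x depth' scoring with a per-source cap can be sketched as a toy sample counter. All thresholds, the source list, and the names here are invented purely for illustration of the idea being discussed:

```c
#include <assert.h>

/* Count raw samples per source and call the pool "full" only once the
 * total is reached, with no single source allowed to supply more than
 * 25% of it -- so a clocksource alone can never fill the pool. */
#define TOY_NSOURCES   4                      /* e.g. input, net, disk, clock */
#define TOY_FULL_TOTAL 64                     /* samples needed for "full" */
#define TOY_SOURCE_CAP (TOY_FULL_TOTAL / 4)   /* max 25% from any one source */

static unsigned toy_samples[TOY_NSOURCES];

static void toy_account_sample(unsigned source)
{
    /* samples beyond a source's cap are still mixed, just not counted */
    if (source < TOY_NSOURCES && toy_samples[source] < TOY_SOURCE_CAP)
        toy_samples[source]++;
}

static int toy_pool_full(void)
{
    unsigned total = 0;
    for (unsigned i = 0; i < TOY_NSOURCES; i++)
        total += toy_samples[i];
    return total >= TOY_FULL_TOTAL;
}
```

Note this sketch also reproduces the downside raised above: on a server where only the clocksource is active, the pool never reports full.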
Re: [PATCH 0/5] Feed entropy pool via high-resolution clocksources
Matt Mackall wrote: On Tue, 2011-06-14 at 16:17 -0400, Jarod Wilson wrote: Matt Mackall wrote: On Tue, 2011-06-14 at 11:18 -0400, Jarod Wilson wrote: Matt Mackall wrote: ... No: it's not a great idea to _credit_ the entropy count with this data. Someone watching the TSC or HPET from userspace can guess when samples are added by watching for drop-outs in their sampling (ie classic timing attack). I'm admittedly a bit of a novice in this area... Why does it matter if someone watching knows more or less when a sample is added? It doesn't really reveal anything about the sample itself, if we're using a high-granularity counter value's low bits -- round-trip to userspace has all sorts of inherent timing jitter, so determining the low-order bits the kernel got by monitoring from userspace should be more or less impossible. And the pool is constantly changing, making it a less static target on an otherwise mostly idle system. I recommend you do some Google searches for ssl timing attack and aes timing attack to get a feel for the kind of seemingly impossible things that can be done and thereby recalibrate your scale of the impossible. Hm. These are attempting to reveal a static key though. We're talking about trying to reveal the exact value of the counter when it was read by the kernel. And trying to do so repeatedly, several times per second. I read this as I am not yet properly recalibrated. Probably not. :) Yes, it's hard. Hard != impractical. And this can't be done without getting some form of local system access, Ok, now Google remote timing attack. The stuff I'm reading seems to require that the data you're trying to discern is somehow exposed over the network, which so far as I know, the entropy input pool isn't, but you obviously know this stuff WAY better than I do, so I'll stop trying. 
;) This code is largely spurred on by someone here at Red Hat who I probably should have had in the cc list to begin with, Steve Grubb, who pointed to slides 23-25 and the chart in slide 30 of this doc... https://www.osadl.org/fileadmin/dam/presentations/RTLWS11/okech-inherent-randomness.pdf ...as the primary arguments for why this is a good source of entropy. ..on a sixth-generation desktop CPU with a cycle-accurate counter. Welcome to the real world, where that's now a tiny minority of deployed systems. Sure, but that's part of why only the hpet and tsc clocksources were wired up in this patchset. But that's not even the point. Entropy accounting here is about providing a theoretical level of security above cryptographically strong. As the source says: Even if it is possible to analyze SHA in some clever way, as long as the amount of data returned from the generator is less than the inherent entropy in the pool, the output data is totally unpredictable. This is the goal of the code as it exists. And that goal depends on consistent _underestimates_ and accurate accounting. Okay, so as you noted, I was only crediting one bit of entropy per byte mixed in. Would there be some higher mixed-to-credited ratio that might be sufficient to meet the goal? Look, I understand what I'm trying to say here is very confusing, so please make an effort to understand all the pieces together:

- the driver is designed for -perfect- security as described above
- the usual assumptions about observability of network samples and other timestamps ARE FALSE on COMMON NON-PC HARDWARE
- thus network sampling is incompatible with the CURRENT design
- nonetheless, the current design of entropy accounting is not actually meeting its goals in practice

Heh, I guess that answers my question already...

- thus we need an alternative to entropy accounting
- that alternative WILL be compatible with sampling insecure sources

Okay.
So I admit to really only considering and/or caring about x86 hardware, which doesn't seem to have helped my cause. But you do seem to be saying that clocksource-based sampling *will* be compatible with the new alternative, correct? And is said alternative something on the relatively near-term radar? -- Jarod Wilson ja...@redhat.com
[PATCH 0/5] Feed entropy pool via high-resolution clocksources
Many server systems are seriously lacking in sources of entropy, as we typically only feed the entropy pool by way of input layer events, a few NIC driver interrupts and disk activity. A non-busy server can easily become entropy-starved. We can mitigate this somewhat by periodically mixing in entropy data based on the delta between multiple high-resolution clocksource reads, per: https://www.osadl.org/Analysis-of-inherent-randomness-of-the-L.rtlws11-developers-okech.0.html Additionally, NIST already approves of similar implementations, so this should be usable in high-security deployments requiring a fair chunk of available entropy data for frequent use of /dev/random. http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp750.pdf (section 6.1 mentions a clock-based seed source). Yes, thus far, I've only bothered with x86-based clocksources, since that is what I can test most easily. If this patch series isn't deemed totally insane, adding support for other clocksources should be trivial. Also note that unless you explicitly build and load the clock-entropy driver, there should be little or no change whatsoever to the way anything behaves right now, it's purely opt-in. There's probably room for some improvement here, and I'm kind of outside my comfort area, but hey, it seems to work pretty well here in my own testing, so here it is...
Jarod Wilson (5):
  random: add new clocksource entropy interface
  clocksource: add support for entropy-generation function
  hpet: wire up entropy generation function
  tsc: wire up entropy generation function
  misc: add clocksource-based entropy generation driver

 arch/x86/kernel/hpet.c       |   18 ++
 arch/x86/kernel/tsc.c        |   18 ++
 drivers/char/random.c        |   28 ++
 drivers/misc/Kconfig         |   14 +
 drivers/misc/Makefile        |    1 +
 drivers/misc/clock-entropy.c |  122 ++
 include/linux/clocksource.h  |    6 ++
 include/linux/random.h       |    1 +
 kernel/time/clocksource.c    |   33 +++
 9 files changed, 241 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/clock-entropy.c

CC: Matt Mackall m...@selenic.com
CC: Venkatesh Pallipadi (Venki) ve...@google.com
CC: Thomas Gleixner t...@linutronix.de
CC: Ingo Molnar mi...@elte.hu
CC: John Stultz johns...@us.ibm.com
CC: Herbert Xu herb...@gondor.apana.org.au
CC: David S. Miller da...@davemloft.net
[PATCH 5/5] misc: add clocksource-based entropy generation driver
This is a fairly simple driver that just starts up a kernel thread that periodically calls the active clocksource's entropy-gathering function, if it has one. The default interval of 100us between polls doesn't show any measurable impact to cpu usage on a core 2 duo test rig here, and ensures the entropy pool is constantly being fed, even on a system with no input device, no network traffic and no disk activity. CC: Matt Mackall m...@selenic.com CC: Venkatesh Pallipadi (Venki) ve...@google.com CC: Thomas Gleixner t...@linutronix.de CC: Ingo Molnar mi...@elte.hu CC: John Stultz johns...@us.ibm.com CC: Herbert Xu herb...@gondor.apana.org.au CC: David S. Miller da...@davemloft.net Signed-off-by: Jarod Wilson ja...@redhat.com --- drivers/misc/Kconfig | 14 + drivers/misc/Makefile|1 + drivers/misc/clock-entropy.c | 122 ++ 3 files changed, 137 insertions(+), 0 deletions(-) create mode 100644 drivers/misc/clock-entropy.c diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index 4e349cd..b997f6a 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -490,6 +490,20 @@ config PCH_PHUB To compile this driver as a module, choose M here: the module will be called pch_phub. +config CLOCK_ENTROPY + tristate Clocksource-based Entropy Generator + help + This is a driver that simply starts up a kernel thread that calls + clocksource helper functions to periodically poll the clocksource, + calculate a delta from the prior poll, and then feed the delta + value along to the random number generator entropy pool. This can + be useful for server systems that are otherwise starved for entropy, + as there are very few sources of entropy generation, particularly + on a non-busy server system. + + To compile this driver as a module, choose M here. The module will + be called clock-entropy. 
+ source drivers/misc/c2port/Kconfig source drivers/misc/eeprom/Kconfig source drivers/misc/cb710/Kconfig diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile index 5f03172..a659ad2 100644 --- a/drivers/misc/Makefile +++ b/drivers/misc/Makefile @@ -46,3 +46,4 @@ obj-y += ti-st/ obj-$(CONFIG_AB8500_PWM) += ab8500-pwm.o obj-y += lis3lv02d/ obj-y += carma/ +obj-$(CONFIG_CLOCK_ENTROPY)+= clock-entropy.o diff --git a/drivers/misc/clock-entropy.c b/drivers/misc/clock-entropy.c new file mode 100644 index 000..56d65ac --- /dev/null +++ b/drivers/misc/clock-entropy.c @@ -0,0 +1,122 @@ +/* + * Clocksource-based entropy generator + * Copyright (c) 2011 Jarod Wilson ja...@redhat.com + * + * Many server systems are seriously lacking in sources of entropy, + * as we typically only feed the entropy pool by way of input layer + * events, a few NIC driver interrupts and disk activity. A non-busy + * server can easily become entropy-starved. We can mitigate this + * somewhat by periodically mixing in entropy data based on the + * delta between multiple high-resolution clocksource reads, per: + * + * https://www.osadl.org/Analysis-of-inherent-randomness-of-the-L.rtlws11-developers-okech.0.html + * + * Additionally, NIST already approves of similar implementations, so + * this should be usable in high-security deployments requiring a + * fair chunk of available entropy data for frequent use of /dev/random. + * --- + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/sched.h> +#include <linux/errno.h> +#include <linux/kthread.h> +#include <linux/clocksource.h> + +/* module parameters */ +static int ce_debug; +static int interval = 100; /* microseconds */ + +#define ce_dbg(fmt, args...) \ + do {\ + if (ce_debug) \ + printk(KERN_DEBUG KBUILD_MODNAME ": " fmt, \ +## args); \ + } while (0) + +/* Periodic clocksource polling thread data */ +static struct task_struct *ceg_task
[PATCH v2] [crypto] testmgr: add xts-aes-256 self-test
FIPS compliance requires a known-answer self-test for all approved cipher and mode combinations, for all valid key sizes. Presently, there are only self-tests for xts-aes-128. This adds a 256-bit one, pulled from the same reference document, which should satisfy the requirement. v2: properly increment vector count definitions CC: CC: Herbert Xu herb...@gondor.apana.org.au CC: David S. Miller da...@davemloft.net Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.h | 293 +- 1 files changed, 291 insertions(+), 2 deletions(-) diff --git a/crypto/testmgr.h b/crypto/testmgr.h index aa6dac0..204c001 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2976,8 +2976,8 @@ static struct cipher_testvec cast6_dec_tv_template[] = { #define AES_CBC_DEC_TEST_VECTORS 4 #define AES_LRW_ENC_TEST_VECTORS 8 #define AES_LRW_DEC_TEST_VECTORS 8 -#define AES_XTS_ENC_TEST_VECTORS 4 -#define AES_XTS_DEC_TEST_VECTORS 4 +#define AES_XTS_ENC_TEST_VECTORS 5 +#define AES_XTS_DEC_TEST_VECTORS 5 #define AES_CTR_ENC_TEST_VECTORS 3 #define AES_CTR_DEC_TEST_VECTORS 3 #define AES_CTR_3686_ENC_TEST_VECTORS 7 @@ -3924,6 +3924,150 @@ static struct cipher_testvec aes_xts_enc_tv_template[] = { \x0a\x28\x2d\xf9\x20\x14\x7b\xea \xbe\x42\x1e\xe5\x31\x9d\x05\x68, .rlen = 512, + }, { /* XTS-AES 10, XTS-AES-256, data unit 512 bytes */ + .key= \x27\x18\x28\x18\x28\x45\x90\x45 + \x23\x53\x60\x28\x74\x71\x35\x26 + \x62\x49\x77\x57\x24\x70\x93\x69 + \x99\x59\x57\x49\x66\x96\x76\x27 + \x31\x41\x59\x26\x53\x58\x97\x93 + \x23\x84\x62\x64\x33\x83\x27\x95 + \x02\x88\x41\x97\x16\x93\x99\x37 + \x51\x05\x82\x09\x74\x94\x45\x92, + .klen = 64, + .iv = \xff\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00\x00, + \x00\x00\x00\x00\x00\x00\x00\x00, + \x00\x00\x00\x00\x00\x00\x00\x00, + .input = \x00\x01\x02\x03\x04\x05\x06\x07 + \x08\x09\x0a\x0b\x0c\x0d\x0e\x0f + \x10\x11\x12\x13\x14\x15\x16\x17 + \x18\x19\x1a\x1b\x1c\x1d\x1e\x1f + \x20\x21\x22\x23\x24\x25\x26\x27 + \x28\x29\x2a\x2b\x2c\x2d\x2e\x2f + 
\x30\x31\x32\x33\x34\x35\x36\x37 + \x38\x39\x3a\x3b\x3c\x3d\x3e\x3f + \x40\x41\x42\x43\x44\x45\x46\x47 + \x48\x49\x4a\x4b\x4c\x4d\x4e\x4f + \x50\x51\x52\x53\x54\x55\x56\x57 + \x58\x59\x5a\x5b\x5c\x5d\x5e\x5f + \x60\x61\x62\x63\x64\x65\x66\x67 + \x68\x69\x6a\x6b\x6c\x6d\x6e\x6f + \x70\x71\x72\x73\x74\x75\x76\x77 + \x78\x79\x7a\x7b\x7c\x7d\x7e\x7f + \x80\x81\x82\x83\x84\x85\x86\x87 + \x88\x89\x8a\x8b\x8c\x8d\x8e\x8f + \x90\x91\x92\x93\x94\x95\x96\x97 + \x98\x99\x9a\x9b\x9c\x9d\x9e\x9f + \xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7 + \xa8\xa9\xaa\xab\xac\xad\xae\xaf + \xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7 + \xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf + \xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7 + \xc8\xc9\xca\xcb\xcc\xcd\xce\xcf + \xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7 + \xd8\xd9\xda\xdb\xdc\xdd\xde\xdf + \xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7 + \xe8\xe9\xea\xeb\xec\xed\xee\xef + \xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7 + \xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff + \x00\x01\x02\x03\x04\x05\x06\x07 + \x08\x09\x0a\x0b\x0c\x0d\x0e\x0f + \x10\x11\x12\x13\x14\x15\x16\x17 + \x18\x19\x1a\x1b\x1c\x1d\x1e\x1f + \x20\x21\x22\x23\x24\x25\x26\x27 + \x28\x29\x2a\x2b\x2c\x2d\x2e\x2f + \x30\x31\x32\x33\x34\x35\x36\x37 + \x38\x39\x3a\x3b\x3c\x3d\x3e\x3f + \x40\x41\x42\x43\x44\x45\x46\x47 + \x48\x49\x4a\x4b\x4c\x4d\x4e\x4f + \x50\x51\x52\x53\x54\x55\x56\x57 + \x58\x59\x5a\x5b\x5c\x5d\x5e\x5f + \x60\x61\x62\x63\x64\x65\x66\x67 + \x68\x69\x6a\x6b\x6c\x6d\x6e\x6f + \x70\x71\x72\x73\x74\x75\x76\x77
[PATCH] crypto/testmgr: add xts-aes-256 self-test
FIPS compliance requires a known-answer self-test for all approved cipher and mode combinations, for all valid key sizes. Presently, there are only self-tests for xts-aes-128. This adds a 256-bit one, pulled from the same reference document, which should satisfy the requirement. CC: Herbert Xu herb...@gondor.apana.org.au CC: David S. Miller da...@davemloft.net Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.h | 289 ++ 1 files changed, 289 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.h b/crypto/testmgr.h index aa6dac0..37438e7 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -3924,6 +3924,150 @@ static struct cipher_testvec aes_xts_enc_tv_template[] = { \x0a\x28\x2d\xf9\x20\x14\x7b\xea \xbe\x42\x1e\xe5\x31\x9d\x05\x68, .rlen = 512, + }, { /* XTS-AES 10, XTS-AES-256, data unit 512 bytes */ + .key= \x27\x18\x28\x18\x28\x45\x90\x45 + \x23\x53\x60\x28\x74\x71\x35\x26 + \x62\x49\x77\x57\x24\x70\x93\x69 + \x99\x59\x57\x49\x66\x96\x76\x27 + \x31\x41\x59\x26\x53\x58\x97\x93 + \x23\x84\x62\x64\x33\x83\x27\x95 + \x02\x88\x41\x97\x16\x93\x99\x37 + \x51\x05\x82\x09\x74\x94\x45\x92, + .klen = 64, + .iv = \xff\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00\x00, + \x00\x00\x00\x00\x00\x00\x00\x00, + \x00\x00\x00\x00\x00\x00\x00\x00, + .input = \x00\x01\x02\x03\x04\x05\x06\x07 + \x08\x09\x0a\x0b\x0c\x0d\x0e\x0f + \x10\x11\x12\x13\x14\x15\x16\x17 + \x18\x19\x1a\x1b\x1c\x1d\x1e\x1f + \x20\x21\x22\x23\x24\x25\x26\x27 + \x28\x29\x2a\x2b\x2c\x2d\x2e\x2f + \x30\x31\x32\x33\x34\x35\x36\x37 + \x38\x39\x3a\x3b\x3c\x3d\x3e\x3f + \x40\x41\x42\x43\x44\x45\x46\x47 + \x48\x49\x4a\x4b\x4c\x4d\x4e\x4f + \x50\x51\x52\x53\x54\x55\x56\x57 + \x58\x59\x5a\x5b\x5c\x5d\x5e\x5f + \x60\x61\x62\x63\x64\x65\x66\x67 + \x68\x69\x6a\x6b\x6c\x6d\x6e\x6f + \x70\x71\x72\x73\x74\x75\x76\x77 + \x78\x79\x7a\x7b\x7c\x7d\x7e\x7f + \x80\x81\x82\x83\x84\x85\x86\x87 + \x88\x89\x8a\x8b\x8c\x8d\x8e\x8f + \x90\x91\x92\x93\x94\x95\x96\x97 + \x98\x99\x9a\x9b\x9c\x9d\x9e\x9f + 
\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7 + \xa8\xa9\xaa\xab\xac\xad\xae\xaf + \xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7 + \xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf + \xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7 + \xc8\xc9\xca\xcb\xcc\xcd\xce\xcf + \xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7 + \xd8\xd9\xda\xdb\xdc\xdd\xde\xdf + \xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7 + \xe8\xe9\xea\xeb\xec\xed\xee\xef + \xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7 + \xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff + \x00\x01\x02\x03\x04\x05\x06\x07 + \x08\x09\x0a\x0b\x0c\x0d\x0e\x0f + \x10\x11\x12\x13\x14\x15\x16\x17 + \x18\x19\x1a\x1b\x1c\x1d\x1e\x1f + \x20\x21\x22\x23\x24\x25\x26\x27 + \x28\x29\x2a\x2b\x2c\x2d\x2e\x2f + \x30\x31\x32\x33\x34\x35\x36\x37 + \x38\x39\x3a\x3b\x3c\x3d\x3e\x3f + \x40\x41\x42\x43\x44\x45\x46\x47 + \x48\x49\x4a\x4b\x4c\x4d\x4e\x4f + \x50\x51\x52\x53\x54\x55\x56\x57 + \x58\x59\x5a\x5b\x5c\x5d\x5e\x5f + \x60\x61\x62\x63\x64\x65\x66\x67 + \x68\x69\x6a\x6b\x6c\x6d\x6e\x6f + \x70\x71\x72\x73\x74\x75\x76\x77 + \x78\x79\x7a\x7b\x7c\x7d\x7e\x7f + \x80\x81\x82\x83\x84\x85\x86\x87 + \x88\x89\x8a\x8b\x8c\x8d\x8e\x8f + \x90\x91\x92\x93\x94\x95\x96\x97 + \x98\x99\x9a\x9b\x9c\x9d\x9e\x9f + \xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7 + \xa8\xa9\xaa\xab\xac\xad\xae\xaf + \xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7 + \xb8\xb9\xba\xbb
Re: [PATCH] crypto/testmgr: add xts-aes-256 self-test
Jarod Wilson wrote: FIPS compliance requires a known-answer self-test for all approved cipher and mode combinations, for all valid key sizes. Presently, there are only self-tests for xts-aes-128. This adds a 256-bit one, pulled from the same reference document, which should satisfy the requirement. Oh hell. v2 coming in a bit after I verify it's actually really still functional with the new vector added. I forgot to update AES_XTS_ENC_TEST_VECTORS and AES_XTS_DEC_TEST_VECTORS to match, so the new vector wasn't being run. -- Jarod Wilson ja...@redhat.com
Re: [PATCH 3/7] crypto: testmgr: fix warning
On 10/19/09 9:52 AM, Jiri Kosina wrote: On Mon, 19 Oct 2009, Felipe Contreras wrote: crypto/testmgr.c: In function 'test_cprng': crypto/testmgr.c:1204: warning: 'err' may be used uninitialized in this function Signed-off-by: Felipe Contreras felipe.contre...@gmail.com --- crypto/testmgr.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 6d5b746..1f2357b 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1201,7 +1201,7 @@ static int test_cprng(struct crypto_rng *tfm, struct cprng_testvec *template, unsigned int tcount) { const char *algo = crypto_tfm_alg_driver_name(crypto_rng_tfm(tfm)); - int err, i, j, seedsize; + int err = 0, i, j, seedsize; u8 *seed; char result[32]; As it is not obvious to me immediately why/whether tcount couldn't be zero (which would cause uninitialized use of 'err'), I am not merging this through trivial tree. Herbert? I believe I'm the guilty party who wrote the code in question. Initializing err to 0 isn't correct. tcount should always be at least 1; if it's 0, test_cprng has been called with invalid parameters. I believe err would best be initialized to -EINVAL, lest the caller think they were successful. -- Jarod Wilson ja...@redhat.com
Re: [PATCH 3/7] crypto: testmgr: fix warning
On 10/19/09 9:58 AM, Jarod Wilson wrote: On 10/19/09 9:52 AM, Jiri Kosina wrote: On Mon, 19 Oct 2009, Felipe Contreras wrote: crypto/testmgr.c: In function 'test_cprng': crypto/testmgr.c:1204: warning: 'err' may be used uninitialized in this function Signed-off-by: Felipe Contreras felipe.contre...@gmail.com --- crypto/testmgr.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 6d5b746..1f2357b 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1201,7 +1201,7 @@ static int test_cprng(struct crypto_rng *tfm, struct cprng_testvec *template, unsigned int tcount) { const char *algo = crypto_tfm_alg_driver_name(crypto_rng_tfm(tfm)); - int err, i, j, seedsize; + int err = 0, i, j, seedsize; u8 *seed; char result[32]; As it is not obvious to me immediately why/whether tcount couldn't be zero (which would cause uninitialized use of 'err'), I am not merging this through trivial tree. Herbert? I believe I'm the guilty party who wrote the code in question. Initializing err to 0 isn't correct. tcount should always be at least 1; if it's 0, test_cprng has been called with invalid parameters. I believe err would best be initialized to -EINVAL, lest the caller think they were successful. ...and I need to re-read said code more carefully. tcount is desc->suite.cprng.count, which is ANSI_CPRNG_AES_TEST_VECTORS, which is #define'd to 6, and is the count of how many cprng test vectors there are. If someone changes that to 0, then I guess a retval of 0 would actually be correct, since no tests failed... So yeah, I rescind my claim that initializing err to 0 is incorrect, I think that's just fine. -- Jarod Wilson ja...@redhat.com
Re: [PATCH 1/1] add fips(ansi_cprng) (v2)
On 09/18/2009 02:34 PM, Neil Horman wrote: Patch to add fips(ansi_cprng) alg, which is ansi_cprng plus a continuous test Signed-off-by: Neil Horman nhor...@tuxdriver.com That turned out to be a lot less messy than my puny mind was thinking it might be. The solution actually looks quite elegant, especially like how much cleaner the priming of the continuity test in fips mode is. Acked-by: Jarod Wilson ja...@redhat.com -- Jarod Wilson ja...@redhat.com
Re: [PATCH 0/3] enhance RNG api with flags to allow for different operational modes
On 09/16/2009 11:37 PM, Herbert Xu wrote: On Wed, Sep 16, 2009 at 12:04:56PM -0400, Neil Horman wrote: So the question is, how do I make this RNG fips compliant without breaking some subset of users out there that rely on the predictability of the CPRNG? The solution I've come up with is a dynamic flag. This patch series What user apart from the test vector relies on the predictability? As far as I know, only the internal self-tests and fips testing rely on the predictability without the first value consumed internally. However, in theory, being able to disable the continuity check could also be of benefit to some throughput-intensive operation, particularly on an embedded system. The monte carlo self-tests could obviously compensate for the internally consumed value without altering the end result, but the single-iteration tests could not. We're using self-test vectors that were published by NIST right now, so I'd rather avoid having to alter them. -- Jarod Wilson ja...@redhat.com
Re: [PATCH 0/3] enhance RNG api with flags to allow for different operational modes
On 09/17/2009 04:16 PM, Herbert Xu wrote: On Thu, Sep 17, 2009 at 01:08:24PM -0400, Neil Horman wrote: Just so that I'm clear on what you're suggesting, your approach would be to register two algs in ansi_cprng, a 'raw' cprng, and a 'fips compliant cprng' underneath that used the raw cprng as a base, but implemented the continuity test underneath it? If so, yeah, I can get behind that idea. I'll spin a new set of patches shortly. Yes, exactly like how we structure the raw CTR and RFC3686, which is CTR tailored for IPsec. Yeah, I like that solution as well, does feel less dirty. So essentially, in fips mode, we'd wind up using fips(ansi_cprng) or similar, while the self-tests are done against raw ansi_cprng, correct? -- Jarod Wilson ja...@redhat.com
Re: [PATCH 1/3] add RNG api calls to set common flags
On 09/16/2009 12:11 PM, Neil Horman wrote: patch 1/3: Add flags infrastructure to rng api This patch adds api calls for get/set flags calls to the crypto rng api. This api allows algorithm implementations to register calls to respond to flag settings that are global and common to all rng's. If a given algorithm has no external flags that it needs to respond to, it can opt to not register any calls, and as a result a default handler will be registered for each which universally returns EOPNOTSUPPORT. Signed-off-by: Neil Horman nhor...@tuxdriver.com Acked-by: Jarod Wilson ja...@redhat.com -- Jarod Wilson ja...@redhat.com
Re: [PATCH 0/3] enhance RNG api with flags to allow for different operational modes
On 09/16/2009 12:04 PM, Neil Horman wrote: Hey all- Ok, so I've got a story behind this one. It was recently called to my attention that the ansi cprng is missing an aspect of its compliance requirements for FIPS-140. Specifically, it's missing a behavior in its continuous test. When the CPRNG produces random blocks, the first block that it produces must never be returned to the user. Instead it must be saved, and a second block must be generated so that it can be compared to the first block before being returned to the user. I recently posted a patch to do this for the hardware RNG. It's fine to do this there, since there are no expectations of a predictable result in that RNG. The CPRNG, however, provides a predictable random sequence for a given input seed key and iteration. The above requirement messes with that predictability because it changes which block is returned on the zeroth iteration to the user. Some test vectors expect this, some do not. So the question is, how do I make this RNG fips compliant without breaking some subset of users out there that rely on the predictability of the CPRNG? The solution I've come up with is a dynamic flag. This patch series adds two api calls to the crypto RNG api, rng_set_flags and rng_get_flags, which set flags with global meaning on instances of an rng. A given RNG can opt to set the registered algorithm methods for these api calls or not. In the event they don't, a default handler is set for each that returns EOPNOTSUPPORT. Using this new mechanism I've implemented these calls in ansi_cprng so that setting the TEST_MODE flag disables the continuous check, allowing for the zeroth block to get returned to the user, which lets us pass most of the supplied test vectors (most notably the NIST provided vectors). Neil and I discussed this whole mess off-list, and I'm in agreement that this is the cleanest solution to the problem, despite the relative complexity it adds to the base rng code. 
Will reply to each part individually for tracking purposes, but ACK for all three parts. -- Jarod Wilson ja...@redhat.com
Re: [PATCH 2/3] augment the testmgr code to set TEST_MODE flag on all rng instances
On 09/16/2009 12:13 PM, Neil Horman wrote: patch 2/3: Update testmgr code to place any rng it tests in TEST_MODE This patch instructs the testmgr code to place all rng allocations that it makes into test mode, so that in the event that it has internal mechanisms that may affect the testing of the RNG, they won't affect the outcome of the test they are performing. Signed-off-by: Neil Horman nhor...@tuxdriver.com With these same additions, my fips crypto test code arrives at expected results still as well. Acked-by: Jarod Wilson ja...@redhat.com -- Jarod Wilson ja...@redhat.com
Re: [PATCH 3/3] augment CPRNG to correctly implement continuous test for FIPS, and support TEST_MODE flags
On 09/16/2009 12:25 PM, Neil Horman wrote: patch 3/3: modify cprng to make continuity check fips compliant and allow for a disabling of the continuity test when the RNG is placed in FIPS mode Signed-off-by: Neil Horman nhor...@txudriver.com Acked-by: Jarod Wilson ja...@redhat.com -- Jarod Wilson ja...@redhat.com
Re: sha384 self-test failure oddity
On Tuesday 16 June 2009 07:22:50 Herbert Xu wrote: On Tue, Jun 02, 2009 at 03:41:34PM -0400, Jarod Wilson wrote: On Tuesday 02 June 2009 09:20:38 Herbert Xu wrote: On Tue, Jun 02, 2009 at 09:01:42AM -0400, Jarod Wilson wrote: I can't reproduce this with 2.6.30-rc7. I'll rebase my cryptodev tree to 2.6.30-rcX and see if I can still reproduce the problem. I just rebased cryptodev against 2.6.30-rc7 today so please let me know whether you can still reproduce it with cryptodev. Still happens after the rebase. Kernel config here, if it helps: http://jwilson.fedorapeople.org/misc/cryptodev-2.6-config Sorry, I still have no luck in reproducing this. I used the current linux-2.6 which has the cryptodev tree merged, and the crypto bits from your config file. Any chance you can let me log onto your system to have a look? Sure, I'll drop you the access info off-list in a sec. -- Jarod Wilson ja...@redhat.com
[PATCH] crypto: s390 des3 - permit weak keys unless REQ_WEAK_KEY set
Just started running fips cavs test vectors through an s390x system for giggles, and discovered that I missed patching s390's arch-specific des3 implementation w/an earlier des3 patch to permit weak keys. This change adds the same flag tweaks as ad79cdd77fc1466e45cf923890f66bcfe7c43f12 for s390's des3 implementation, yields expected test results now. Signed-off-by: Jarod Wilson ja...@redhat.com --- arch/s390/crypto/des_s390.c | 11 ++- 1 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c index 4aba83b..2bc479a 100644 --- a/arch/s390/crypto/des_s390.c +++ b/arch/s390/crypto/des_s390.c @@ -250,8 +250,9 @@ static int des3_128_setkey(struct crypto_tfm *tfm, const u8 *key, const u8 *temp_key = key; u32 *flags = tfm->crt_flags; - if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE))) { - *flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED; + if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE)) && + (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { + *flags |= CRYPTO_TFM_RES_WEAK_KEY; return -EINVAL; } for (i = 0; i < 2; i++, temp_key += DES_KEY_SIZE) { @@ -411,9 +412,9 @@ static int des3_192_setkey(struct crypto_tfm *tfm, const u8 *key, if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) && memcmp(&key[DES_KEY_SIZE], &key[DES_KEY_SIZE * 2], - DES_KEY_SIZE))) { - - *flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED; + DES_KEY_SIZE)) && + (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { + *flags |= CRYPTO_TFM_RES_WEAK_KEY; return -EINVAL; } for (i = 0; i < 3; i++, temp_key += DES_KEY_SIZE) { -- Jarod Wilson ja...@redhat.com
Re: [RFC PATCH] crypto: add buffer overflow checks to testmgr
On Friday 29 May 2009 18:10:55 Herbert Xu wrote: On Fri, May 29, 2009 at 11:32:54AM -0400, Jarod Wilson wrote: At present, it's entirely possible to add a test vector to testmgr with an input longer than a page in length w/o specifying a .np option, and overflow the page of memory allocated to {a,}xbuf[0], silently corrupting memory. I know, because I've accidentally done it. :) While this doesn't currently happen in practice w/the existing code, due to all !np vectors being less than a 4k page in length (and the page allocation loop often returns contiguous pages anyway), explicit checks or a way to remove the 4k limit would be a good idea. A few ways to fix and/or work around this: 1) allocate some larger guaranteed contiguous buffers using __get_free_pages() or kmalloc and use them in the !np case 2) catch the PAGE_SIZE !np case and then do things similar to how they are done in the np case 3) catch the PAGE_SIZE !np case and simply exit with an error Since there currently aren't any test vectors that are actually larger than a page and not tagged np, option 1 seems like a waste of memory and option 2 sounds like unnecessary complexity, so I'd offer up option 3 as the most viable alternative right now. Signed-off-by: Jarod Wilson ja...@redhat.com I just posted exactly the same thing yesterday :) One note... This is actually causing some new compile warnings to be spit out, varies from arch to arch, dependent on page size. 
ppc64 with 64k pages is the worst offender: crypto/testmgr.c: In function 'test_nhash': crypto/testmgr.c:194: warning: comparison is always false due to limited range of data type crypto/testmgr.c: In function 'test_aead': crypto/testmgr.c:374: warning: comparison is always false due to limited range of data type crypto/testmgr.c:375: warning: comparison is always false due to limited range of data type crypto/testmgr.c: In function 'test_cipher': crypto/testmgr.c:676: warning: comparison is always false due to limited range of data type crypto/testmgr.c: In function 'test_skcipher': crypto/testmgr.c:771: warning: comparison is always false due to limited range of data type -- Jarod Wilson ja...@redhat.com
Re: sha384 self-test failure oddity
On Tuesday 02 June 2009 01:10:27 Herbert Xu wrote: Jarod Wilson ja...@redhat.com wrote: While doing a bit of testing of some other crypto code, I've repeatedly noticed a sha384 self-test failure. If you 'modprobe tcrypt', the sha384 self-test fails, then immediately after it, sha384-generic self-tests succeed. Something is awry w/sha384 initialization, as can be more plainly seen by the following after a reboot: # modprobe tcrypt mode=11 (run sha384 self-test) dmesg - alg: hash: Failed to load transform for sha384: -2 What kernel version Straight up clone of cryptodev-2.6 -- i.e., not linus' tree + cryptodev tacked on top. uname -r just says 2.6.29. and config? Pretty much a dupe of the latest Fedora 11 kernel configs, run through make oldconfig. I can't reproduce this with 2.6.30-rc7. I'll rebase my cryptodev tree to 2.6.30-rcX and see if I can still reproduce the problem. What does modinfo sha512_generic say? # modinfo sha512_generic filename: /lib/modules/2.6.29/kernel/crypto/sha512_generic.ko alias: sha512 alias: sha384 description: SHA-512 and SHA-384 Secure Hash Algorithms license: GPL srcversion: 8709387FD8370A6569E17F3 depends: vermagic: 2.6.29 SMP mod_unload -- Jarod Wilson ja...@redhat.com
Re: sha384 self-test failure oddity
On Tuesday 02 June 2009 09:20:38 Herbert Xu wrote: On Tue, Jun 02, 2009 at 09:01:42AM -0400, Jarod Wilson wrote: I can't reproduce this with 2.6.30-rc7. I'll rebase my cryptodev tree to 2.6.30-rcX and see if I can still reproduce the problem. I just rebased cryptodev against 2.6.30-rc7 today so please let me know whether you can still reproduce it with cryptodev. Still happens after the rebase. Kernel config here, if it helps: http://jwilson.fedorapeople.org/misc/cryptodev-2.6-config -- Jarod Wilson ja...@redhat.com
sha384 self-test failure oddity
While doing a bit of testing of some other crypto code, I've repeatedly noticed a sha384 self-test failure. If you 'modprobe tcrypt', the sha384 self-test fails, then immediately after it, sha384-generic self-tests succeed. Something is awry w/sha384 initialization, as can be more plainly seen by the following after a reboot: # modprobe tcrypt mode=11 (run sha384 self-test) dmesg - alg: hash: Failed to load transform for sha384: -2 # modprobe tcrypt mode=12 (run sha512 self-test) dmesg - alg: self-tests for sha384-generic (sha384) passed alg: self-tests for sha512-generic (sha512) passed alg: self-tests for sha512 (sha512) passed # modprobe tcrypt mode=11 (run sha384 self-test again) dmesg - alg: self-tests for sha384 (sha384) passed Not sure offhand what's going awry, and don't have time to look deeper right now, but will do so if nobody else gets around to it first... -- Jarod Wilson ja...@redhat.com
Re: [RFC PATCH] crypto: add buffer overflow checks to testmgr
On 05/29/2009 06:10 PM, Herbert Xu wrote: On Fri, May 29, 2009 at 11:32:54AM -0400, Jarod Wilson wrote: At present, it's entirely possible to add a test vector to testmgr with an input longer than a page in length w/o specifying a .np option, and overflow the page of memory allocated to {a,}xbuf[0], silently corrupting memory. I know, because I've accidentally done it. :) While this doesn't currently happen in practice w/the existing code, due to all !np vectors being less than a 4k page in length (and the page allocation loop often returns contiguous pages anyway), explicit checks or a way to remove the 4k limit would be a good idea. A few ways to fix and/or work around this: 1) allocate some larger guaranteed contiguous buffers using __get_free_pages() or kmalloc and use them in the !np case 2) catch the PAGE_SIZE !np case and then do things similar to how they are done in the np case 3) catch the PAGE_SIZE !np case and simply exit with an error Since there currently aren't any test vectors that are actually larger than a page and not tagged np, option 1 seems like a waste of memory and option 2 sounds like unnecessary complexity, so I'd offer up option 3 as the most viable alternative right now. Signed-off-by: Jarod Wilson ja...@redhat.com I just posted exactly the same thing yesterday :) Oh, haha, serves me right for not looking first... Your variant seems to be a bit more complete too, as I didn't look at any of the possible cases where there might be overflows when using scatterlists. Cool, worksforme! Thanks much, -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: tcrypt: add option to not exit on success
On Wednesday 13 May 2009 09:27:52 Herbert Xu wrote: On Wed, May 13, 2009 at 09:12:46AM -0400, Jarod Wilson wrote: Hm... FIPS has the requirement that we test all algs before we use any algs, self-tests on demand before first use for each alg is insufficient. At first blush, I'm not seeing how we ensure this happens. How can we trigger a cbc(des3_ede) self-test from userspace? I see that modprobe'ing des.ko runs the base des and des3_ede self-tests, but modprobe'ing cbc.ko doesn't lead to any self-tests being run. Once we have a user-space interface crypto API you will be able to instantiate any given algorithm. For now I suggest that you create your own module to instantiate these FIPS algorithms. Or just load tcrypt and ignore the exit status, or make tcrypt return 0 if we're in FIPS mode. The latter option is more or less what the patch at the start of this thread did, although via a param to tcrypt, not keying off the fips flag. If I were to modify the patch to drop the mod param usage, and instead key off the fips flag to not exit, would that be acceptable for committing until such time as the userspace interface is ready? -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: tcrypt: add option to not exit on success
On Wednesday 13 May 2009 10:02:25 Neil Horman wrote: On Wed, May 13, 2009 at 11:27:52PM +1000, Herbert Xu wrote: On Wed, May 13, 2009 at 09:12:46AM -0400, Jarod Wilson wrote: Hm... FIPS has the requirement that we test all algs before we use any algs, self-tests on demand before first use for each alg is insufficient. At first blush, I'm not seeing how we ensure this happens. How can we trigger a cbc(des3_ede) self-test from userspace? I see that modprobe'ing des.ko runs the base des and des3_ede self-tests, but modprobe'ing cbc.ko doesn't lead to any self-tests being run. Once we have a user-space interface crypto API you will be able to instantiate any given algorithm. That's a good idea. Jarod, didn't you create a generic netlink socket family module that created just such an interface for testing purposes? Indeed. Works quite well for my somewhat specific needs... That might be worth polishing and submitting to provide that user space crypto api It would likely need a LOT of polish, and I'm not sure if it's at all close to what we have (Herbert has?) in mind At the moment, it consists of: 1) a kernel module, which duplicates many of the functions in testmgr, more or less, but gets its input over a generic netlink socket, rather than from a static array, and sends the result back out over the same socket. 2) a userspace binary that feeds very specifically formatted vectors to the kernel via the netlink socket, listens for the result and spits it out -- it doesn't actually verify the correctness of the result, but adding support for passing in an expected result and checking against it would be trivial. I guess the place to start would be to ask exactly what sort of user-space interface to the crypto API did we have in mind here? i.e., what sort of user-space-kernel-space communication is envisioned, and what sort of functionality should be present? 
I could see the desire for a simpler interface that doesn't rely upon a specific userspace binary -- something sysfs or proc-based -- but netlink is reasonably flexible... -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: tcrypt: add option to not exit on success
On Monday 11 May 2009 10:06:32 Jarod Wilson wrote: At present, the tcrypt module always exits with an -EAGAIN upon successfully completing all the tests it's been asked to run. There are cases where it would be much simpler to verify all tests passed if tcrypt simply stayed loaded (i.e. returned 0). Specifically, in fips mode, all self-tests need to be run from the initrd, and it's much simpler to check the ret from modprobe for success than to scrape dmesg. To make this doable, I've simply added a module param to allow this behavior, leaving the default behavior more or less the same as before, although now we're tracking all success/failure rets as well. I've been reminded that a self-test failure in fips mode means an immediate panic, so modprobe never sees the ret in that case, but if the module load failed for other reasons, a non-zero return value from modprobe is possible w/o traversing the code paths that trigger a self-test failure panic. For one, if the tcrypt module were to go missing for some reason, modprobe would have a non-zero ret, and the initrd would need to handle panicking the system. Would there be any objections to dropping the noexit parameter entirely and just making its behavior the default? It would make all users regardless of fips mode notice failures more readily. -- Jarod Wilson ja...@redhat.com
[PATCH 2/2 v2] crypto: skip algs not flagged fips_allowed in fips mode
Because all fips-allowed algorithms must be self-tested before they can be used, they will all have entries in testmgr.c's alg_test_descs[]. Skip self-tests for any algs not flagged as fips_approved and return -EINVAL when in fips mode. Resending with properly updated patch v2 tag. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c |8 1 files changed, 8 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 51bae62..f93b26d 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2308,6 +2308,9 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) if (i < 0) goto notest; + if (fips_enabled && !alg_test_descs[i].fips_allowed) + goto non_fips_alg; + rc = alg_test_cipher(alg_test_descs + i, driver, type, mask); goto test_done; } @@ -2316,6 +2319,9 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) if (i < 0) goto notest; + if (fips_enabled && !alg_test_descs[i].fips_allowed) + goto non_fips_alg; + rc = alg_test_descs[i].test(alg_test_descs + i, driver, type, mask); test_done: @@ -2331,5 +2337,7 @@ test_done: notest: printk(KERN_INFO alg: No test for %s (%s)\n, alg, driver); return 0; +non_fips_alg: + return -EINVAL; } EXPORT_SYMBOL_GPL(alg_test); -- Jarod Wilson ja...@redhat.com
[PATCH] crypto: tcrypt: add option to not exit on success
At present, the tcrypt module always exits with an -EAGAIN upon successfully completing all the tests it's been asked to run. There are cases where it would be much simpler to verify all tests passed if tcrypt simply stayed loaded (i.e. returned 0). Specifically, in fips mode, all self-tests need to be run from the initrd, and it's much simpler to check the ret from modprobe for success than to scrape dmesg. To make this doable, I've simply added a module param to allow this behavior, leaving the default behavior more or less the same as before, although now we're tracking all success/failure rets as well. The tcrypt_test() portion of the patch is dependent on my earlier pair of patches that skip non-fips algs in fips mode, at least to achieve the fully intended behavior. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/tcrypt.c | 168 +++ 1 files changed, 94 insertions(+), 74 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 9e4974e..dee729c 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -27,6 +27,7 @@ #include linux/timex.h #include linux/interrupt.h #include tcrypt.h +#include internal.h /* * Need slab memory for testing (size in number of pages). 
@@ -45,6 +46,7 @@ static unsigned int sec; static int mode; +static int noexit; static char *tvmem[TVMEMSIZE]; static char *check[] = { @@ -468,248 +470,255 @@ static void test_available(void) static inline int tcrypt_test(const char *alg) { - return alg_test(alg, alg, 0, 0); + int ret; + + ret = alg_test(alg, alg, 0, 0); + /* non-fips algs return -EINVAL in fips mode */ + if (fips_enabled && ret == -EINVAL) + ret = 0; + return ret; } -static void do_test(int m) +static int do_test(int m) { int i; + int ret = 0; switch (m) { case 0: for (i = 1; i < 200; i++) - do_test(i); + ret += do_test(i); break; case 1: - tcrypt_test(md5); + ret += tcrypt_test(md5); break; case 2: - tcrypt_test(sha1); + ret += tcrypt_test(sha1); break; case 3: - tcrypt_test(ecb(des)); - tcrypt_test(cbc(des)); + ret += tcrypt_test(ecb(des)); + ret += tcrypt_test(cbc(des)); break; case 4: - tcrypt_test(ecb(des3_ede)); - tcrypt_test(cbc(des3_ede)); + ret += tcrypt_test(ecb(des3_ede)); + ret += tcrypt_test(cbc(des3_ede)); break; case 5: - tcrypt_test(md4); + ret += tcrypt_test(md4); break; case 6: - tcrypt_test(sha256); + ret += tcrypt_test(sha256); break; case 7: - tcrypt_test(ecb(blowfish)); - tcrypt_test(cbc(blowfish)); + ret += tcrypt_test(ecb(blowfish)); + ret += tcrypt_test(cbc(blowfish)); break; case 8: - tcrypt_test(ecb(twofish)); - tcrypt_test(cbc(twofish)); + ret += tcrypt_test(ecb(twofish)); + ret += tcrypt_test(cbc(twofish)); break; case 9: - tcrypt_test(ecb(serpent)); + ret += tcrypt_test(ecb(serpent)); break; case 10: - tcrypt_test(ecb(aes)); - tcrypt_test(cbc(aes)); - tcrypt_test(lrw(aes)); - tcrypt_test(xts(aes)); - tcrypt_test(ctr(aes)); - tcrypt_test(rfc3686(ctr(aes))); + ret += tcrypt_test(ecb(aes)); + ret += tcrypt_test(cbc(aes)); + ret += tcrypt_test(lrw(aes)); + ret += tcrypt_test(xts(aes)); + ret += tcrypt_test(ctr(aes)); + ret += tcrypt_test(rfc3686(ctr(aes))); break; case 11: - tcrypt_test(sha384); + ret += tcrypt_test(sha384); break; case 12: - tcrypt_test(sha512); + ret += 
tcrypt_test(sha512); break; case 13: - tcrypt_test(deflate); + ret += tcrypt_test(deflate); break; case 14: - tcrypt_test(ecb(cast5)); + ret += tcrypt_test(ecb(cast5)); break; case 15: - tcrypt_test(ecb(cast6)); + ret += tcrypt_test(ecb(cast6)); break; case 16: - tcrypt_test(ecb(arc4)); + ret += tcrypt_test(ecb(arc4)); break; case 17: - tcrypt_test(michael_mic); + ret += tcrypt_test(michael_mic
Re: [PATCH 0/2] crypto: disallow non-approved algs in fips mode
On Thursday 07 May 2009 14:41:26 Jarod Wilson wrote: At present, nothing is preventing the use of non-approved algorithms in fips mode. I was initially working on a patch to make it easier for all fips-approved algs to be tested using tcrypt, and realized the changes I was making could also be used to prevent non-approved algs in fips mode. Any approved alg *must* have self-tests, and thus have an entry in testmgr.c's alg_test_descs[]. By adding a fips flag to these entries, we can simply reject all algs that don't have this flag when in fips mode by skipping their self-tests and returning an -EINVAL to prevent them from being loaded. So with this change, I can 1) 'modprobe tcrypt' and have all fips approved algs self-tested, and *only* fips approved algs tested 2) 'modprobe md4' for example, and in fips mode, have the module load rejected as invalid Hrm. Minor correction... Only seeing module loads rejected as invalid when patching this into a Red Hat Enterprise Linux 5.x kernel. With the cryptodev tree, we do skip non-allowed algs as intended, but loading modules for non-allowed algs still works... -- Jarod Wilson ja...@redhat.com -- To unsubscribe from this list: send the line unsubscribe linux-crypto in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 2/2] crypto: mark algs allowed in fips mode
Set the fips_allowed flag in testmgr.c's alg_test_descs[] for algs that are allowed to be used when in fips mode. One caveat: des isn't actually allowed anymore, but des (and thus also ecb(des)) has to be permitted, because disallowing them results in des3_ede being unable to properly register (see des module init func). Also, crc32 isn't technically on the fips approved list, but I think it gets used in various places that necessitate it being allowed. This list is based on http://csrc.nist.gov/groups/STM/cavp/index.html Important note: allowed/approved here does NOT mean validated, just that it's an alg that *could* be validated. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 22 ++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 232f043..f93b26d 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1433,6 +1433,7 @@ static const struct alg_test_desc alg_test_descs[] = { { .alg = "ansi_cprng", .test = alg_test_cprng, + .fips_allowed = 1, .suite = { .cprng = { .vecs = ansi_cprng_aes_tv_template, @@ -1442,6 +1443,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "cbc(aes)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@ -1517,6 +1519,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "cbc(des3_ede)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@ -1547,6 +1550,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "ccm(aes)", .test = alg_test_aead, + .fips_allowed = 1, .suite = { .aead = { .enc = { @@ -1562,6 +1566,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "crc32c", .test = alg_test_crc32c, + .fips_allowed = 1, .suite = { .hash = { .vecs = crc32c_tv_template, @@ -1571,6 +1576,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "ctr(aes)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@
-1616,6 +1622,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "ecb(aes)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@ -1721,6 +1728,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "ecb(des)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@ -1736,6 +1744,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "ecb(des3_ede)", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = { .enc = { @@ -1871,6 +1880,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "gcm(aes)", .test = alg_test_aead, + .fips_allowed = 1, .suite = { .aead = { .enc = { @@ -1913,6 +1923,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "hmac(sha1)", .test = alg_test_hash, + .fips_allowed = 1, .suite = { .hash = { .vecs = hmac_sha1_tv_template, @@ -1922,6 +1933,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "hmac(sha224)", .test = alg_test_hash, + .fips_allowed = 1, .suite = { .hash = { .vecs = hmac_sha224_tv_template, @@ -1931,6 +1943,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "hmac(sha256)", .test = alg_test_hash, + .fips_allowed = 1, .suite = { .hash = { .vecs = hmac_sha256_tv_template, @@ -1940,6 +1953,7
Re: [PATCH v2] crypto: add ctr(aes) test vectors
On Wednesday 06 May 2009 05:30:21 Herbert Xu wrote: On Tue, May 05, 2009 at 10:42:58AM -0400, Jarod Wilson wrote: Now with multi-block test vectors, all from SP800-38A, Appendix F.5. Also added ctr(aes) to case 10 in tcrypt. Quickly smoke-tested in fips mode, got back alg_test: alg ctr(aes-x86_64) (ctr(aes)) self-test passed. Signed-off-by: Jarod Wilson ja...@redhat.com It didn't build because you forgot to change the RFC3686 dec vector name. I've fixed it for you. Please test your patches in future. Gah, apologies. Tested it in that other kernel tree I needed to get this done for, then neglected to do so for cryptodev. Thanks for the fixage. -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: don't raise alarm for no ctr(aes*) tests in fips mode
On Tuesday 05 May 2009 01:29:05 Herbert Xu wrote: On Mon, May 04, 2009 at 11:45:08PM -0400, Jarod Wilson wrote: Can't keep all the RFCs and SPs and whatnot straight in my head, and they aren't in front of me, but I thought I read that the basic counter increment routine wasn't mandated to be any specific way, the only mandate was to ensure unique values. Suggestions for how to do so were made though. It doesn't matter what is or isn't specified for CTR, the thing that we call "ctr" is the one that's used for RFC 3686, CCM, and GCM. It is completely pinned down and can be tested. There are two different "can be tested" contexts here. I completely agree that ctr(aes) is testable within the tcrypt/testmgr context, and sent a patch for such in this thread yesterday. The other context is FIPS CAVS testing, which NIST is saying can't be done, and I was attempting to understand why, which probably only served to muddy the waters. We can definitely do self-tests for ctr(aes). -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: add ctr(aes) test vectors
On Tuesday 05 May 2009 09:18:35 Herbert Xu wrote: On Mon, May 04, 2009 at 04:24:44PM -0400, Jarod Wilson wrote: Indeed, the first enc/dec operation after we set the counter *is* completely deterministic across all implementations, the AESAVS is referring to tests with multiple operations, which aren't possible, due to varying implementations of counter increment routines. This patch adds test vectors for ctr(aes), using the first block input values from Appendix F.5 of NIST Special Pub 800-38A. Well, our ctr(aes) must be completely deterministic as it is used as the base for CCM and GCM. In fact, if it weren't so then you can't use it for anything since two implementations may produce different outputs. Yeah, that makes sense, I believe I finally see the light. So if you could resend some vectors that test multiple blocks then I'll happily add them. Multi-block test vectors coming shortly; passing in all the input blocks from F.5 of 800-38A is spitting back the expected answers for every block. -- Jarod Wilson ja...@redhat.com
[PATCH v2] crypto: add ctr(aes) test vectors
On Tuesday 05 May 2009 09:55:24 Jarod Wilson wrote: On Tuesday 05 May 2009 09:18:35 Herbert Xu wrote: On Mon, May 04, 2009 at 04:24:44PM -0400, Jarod Wilson wrote: Indeed, the first enc/dec operation after we set the counter *is* completely deterministic across all implementations, the AESAVS is referring to tests with multiple operations, which aren't possible, due to varying implementations of counter increment routines. This patch adds test vectors for ctr(aes), using the first block input values from Appendix F.5 of NIST Special Pub 800-38A. Well, our ctr(aes) must be completely deterministic as it is used as the base for CCM and GCM. In fact, if it weren't so then you can't use it for anything since two implementations may produce different outputs. Yeah, that makes sense, I believe I finally see the light. So if you could resend some vectors that test multiple blocks then I'll happily add them. Multi-block test vectors coming shortly; passing in all the input blocks from F.5 of 800-38A is spitting back the expected answers for every block. Now with multi-block test vectors, all from SP800-38A, Appendix F.5. Also added ctr(aes) to case 10 in tcrypt. Quickly smoke-tested in fips mode, got back alg_test: alg ctr(aes-x86_64) (ctr(aes)) self-test passed.
Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/tcrypt.c |1 + crypto/testmgr.c | 23 ++- crypto/testmgr.h | 164 +- 3 files changed, 182 insertions(+), 6 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index ea3b8a8..9e4974e 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -526,6 +526,7 @@ static void do_test(int m) tcrypt_test("cbc(aes)"); tcrypt_test("lrw(aes)"); tcrypt_test("xts(aes)"); + tcrypt_test("ctr(aes)"); tcrypt_test("rfc3686(ctr(aes))"); break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index ffe7963..0efdda7 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1518,6 +1518,21 @@ static const struct alg_test_desc alg_test_descs[] = { } } }, { + .alg = "ctr(aes)", + .test = alg_test_skcipher, + .suite = { + .cipher = { + .enc = { + .vecs = aes_ctr_enc_tv_template, + .count = AES_CTR_ENC_TEST_VECTORS + }, + .dec = { + .vecs = aes_ctr_dec_tv_template, + .count = AES_CTR_DEC_TEST_VECTORS + } + } + } + }, { .alg = "cts(cbc(aes))", .test = alg_test_skcipher, .suite = { @@ -1967,12 +1982,12 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .cipher = { .enc = { - .vecs = aes_ctr_enc_tv_template, - .count = AES_CTR_ENC_TEST_VECTORS + .vecs = aes_ctr_rfc3686_enc_tv_template, + .count = AES_CTR_3686_ENC_TEST_VECTORS }, .dec = { - .vecs = aes_ctr_dec_tv_template, - .count = AES_CTR_DEC_TEST_VECTORS + .vecs = aes_ctr_rfc3686_dec_tv_template, + .count = AES_CTR_3686_DEC_TEST_VECTORS } } } diff --git a/crypto/testmgr.h b/crypto/testmgr.h index c1c709b..6883fd7 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2854,8 +2854,10 @@ static struct cipher_testvec cast6_dec_tv_template[] = { #define AES_LRW_DEC_TEST_VECTORS 8 #define AES_XTS_ENC_TEST_VECTORS 4 #define AES_XTS_DEC_TEST_VECTORS 4 -#define AES_CTR_ENC_TEST_VECTORS 7 -#define AES_CTR_DEC_TEST_VECTORS 6 +#define AES_CTR_ENC_TEST_VECTORS 3 +#define AES_CTR_DEC_TEST_VECTORS 3 +#define AES_CTR_3686_ENC_TEST_VECTORS 7 +#define AES_CTR_3686_DEC_TEST_VECTORS 6 #define
AES_GCM_ENC_TEST_VECTORS 9 #define AES_GCM_DEC_TEST_VECTORS 8 #define AES_CCM_ENC_TEST_VECTORS 7 @@ -3998,6 +4000,164 @@ static struct cipher_testvec aes_xts_dec_tv_template[] = { static struct cipher_testvec aes_ctr_enc_tv_template[] = { + { /* From NIST Special Publication 800-38A, Appendix F.5 */ + .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" + "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", + .klen = 16, + .iv = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6
Re: [PATCH] crypto: don't raise alarm for no ctr(aes*) tests in fips mode
On Monday 04 May 2009 07:10:10 Herbert Xu wrote: On Tue, Apr 28, 2009 at 09:18:22PM -0400, Jarod Wilson wrote: Per the NIST AESAVS document, Appendix A[1], it isn't possible to have automated self-tests for counter-mode AES, but people are misled to believe something is wrong by the message that says there is no test for ctr(aes). Simply suppress all 'no test for ctr(aes*' messages if fips_enabled is set to avoid confusion. This is not true at all. In our implementation the counter is set through the IV so it definitely is possible to test counter mode algorithms in Linux. Ah... Now I think I see... We can provide an initial counter w/o a problem, but counter incrementation is implementation-specific, so we can't have automated tests that cover multiple enc/dec ops, but if we limit ourselves to just one op, self-tests should be perfectly doable, and NIST SP 800-38A, Appendix F.5 has vectors we could make use of (using just the block #1 values). At least, spot-checking the vectors, I'm getting the expected results for the 1st block. Okay, I'll whip something up in a sec. -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: don't raise alarm for no ctr(aes*) tests in fips mode
On 05/04/2009 09:08 PM, Herbert Xu wrote: On Mon, May 04, 2009 at 02:56:58PM -0400, Jarod Wilson wrote: Ah... Now I think I see... We can provide an initial counter w/o a problem, but counter incrementation is implementation-specific, so Not in Linux. If you're going to provide ctr you'd better increment in the way the current implementation does it. Otherwise anything that wraps around it, such as RFC3686, will fail. Another way to put it, only the counter mode as used in RFC 3686, CCM and GCM is what we call "ctr". Yeah, no, I didn't mean within Linux we'd have different implementations, I meant e.g. Linux vs. Windows vs. a Cisco router or what have you as far as the base counter increment routine being implementation-specific. Can't keep all the RFCs and SPs and whatnot straight in my head, and they aren't in front of me, but I thought I read that the basic counter increment routine wasn't mandated to be any specific way, the only mandate was to ensure unique values. Suggestions for how to do so were made though. That all seems to coincide with the AESAVS's assertion that automated testing of ctr(aes) isn't possible, if one considers that Monte Carlo tests are typically a standard part of all the other ciphers'/modes' full validation test suites. I just initially read that to mean that self-tests weren't possible, while I now believe it's only referring to exhaustive CAVS testing (i.e. w/MCT) not being possible, due to potential differences from one counter inc routine to another. It's also possible I'm losing my mind though. -- Jarod Wilson ja...@redhat.com
[PATCH v2] crypto: don't raise alarm for no ctr(aes) tests
On Wednesday 29 April 2009 08:46:47 Jarod Wilson wrote: On Wednesday 29 April 2009 06:50:35 Neil Horman wrote: On Tue, Apr 28, 2009 at 09:18:22PM -0400, Jarod Wilson wrote: Per the NIST AESAVS document, Appendix A[1], it isn't possible to have automated self-tests for counter-mode AES, but people are misled to believe something is wrong by the message that says there is no test for ctr(aes). Simply suppress all 'no test for ctr(aes*' messages if fips_enabled is set to avoid confusion. Dependent on earlier patch 'crypto: catch base cipher self-test failures in fips mode', which adds the test_done label. [1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf [...] From the way I read the document, anything operating in a counter mode will have an unpredictable output (given the counter operation isn't specified). While the above works, I'm not sure that it fully covers the various ccm modes available (ccm_base and rfc4309). I believe Appendix A only applies for straight up counter-mode aes; ccm_base and rfc4309 actually have well-defined counter operations. We've already got self-tests for ccm(aes) and a pending patch for rfc4309(ccm(aes)), and since they don't start w/'ctr(aes', they wouldn't be caught by that (admittedly hacky) check even if we didn't have test vectors for them. Perhaps instead it would be better to add a TFM mask flag indicating that the selected transform included an unpredictable component or counter input (marking the alg as being unsuitable for automatic testing without knowledge of the inner workings of that counter). Then you could just test for that flag? Yeah, I thought about a flag too, but it seemed potentially a lot of overhead for what might well be restricted to ctr(aes*). It might've been relevant for ctr(des3_ede) or ctr(des), but they're not on the fips approved algo/mode list, so I took the easy way out. I'm game to go the flag route if need be though.
Neil and I talked a bit more off-list about the best approach to take, and after a bit of trial and error, we came up with the idea to simply add an 'untestable' flag to the alg_test_desc struct, a corresponding entry for ctr(aes) in alg_test_descs, and a hook to check for untestable algs in alg_find_test(). Works well enough in local testing, eliminates the use of strncmp, and results in far more granular control over marking which algs are explicitly untestable and shouldn't have warnings printed for. At the moment, I've gone for suppressing the warnings regardless of whether we're in fips mode or not, but adding a different warning (er, info) message in the untestable case works too, if that's preferred. The errnos used seem appropriate, but I might have missed more appropriate ones somewhere. nb: this patch applies atop my earlier '[PATCH v2] crypto: catch base cipher self-test failures in fips mode'. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 21 +++-- 1 files changed, 19 insertions(+), 2 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index f39c148..b78278c 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -94,6 +94,7 @@ struct alg_test_desc { const char *alg; int (*test)(const struct alg_test_desc *desc, const char *driver, u32 type, u32 mask); + int untestable; union { struct aead_test_suite aead; @@ -1518,6 +1519,13 @@ static const struct alg_test_desc alg_test_descs[] = { } } }, { + /* +* Automated ctr(aes) tests aren't possible per Appendix A of +* http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf +*/ + .alg = "ctr(aes)", + .untestable = 1, + }, { .alg = "cts(cbc(aes))", .test = alg_test_skcipher, .suite = { @@ -2198,10 +2206,13 @@ static int alg_find_test(const char *alg) continue; } + if (alg_test_descs[i].untestable) + return -ENODATA; + return i; } - return -1; + return -ENOSYS; } int alg_test(const char *driver, const char *alg, u32 type, u32 mask) @@ -2237,7 +2248,13 @@ test_done: return rc;
notest: - printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); + switch (i) { + case -ENODATA: + break; + case -ENOSYS: + default: + printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); + } return 0; } EXPORT_SYMBOL_GPL(alg_test); -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: catch base cipher self-test failures in fips mode
On Wednesday 29 April 2009 06:36:23 Neil Horman wrote: On Tue, Apr 28, 2009 at 09:11:51PM -0400, Jarod Wilson wrote: I think this might have already been posted by Neil Horman, and we already have it in the Red Hat Enterprise Linux 5.x kernels, but in fips mode, we need to panic on the base cipher self-tests failing as well as the later tests. Signed-off-by: Jarod Wilson ja...@redhat.com I did post it: http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg02307.html looks like it somehow just never made it to Linus. Thanks for noticing, Jarod. That part got committed; this is an additional piece, as I believe that wasn't quite complete. This patch adds another check for the rc of alg_test_cipher() (vs. only the check for alg_test_descs[i].test()). -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: don't raise alarm for no ctr(aes*) tests in fips mode
On Wednesday 29 April 2009 06:50:35 Neil Horman wrote: On Tue, Apr 28, 2009 at 09:18:22PM -0400, Jarod Wilson wrote: Per the NIST AESAVS document, Appendix A[1], it isn't possible to have automated self-tests for counter-mode AES, but people are misled to believe something is wrong by the message that says there is no test for ctr(aes). Simply suppress all 'no test for ctr(aes*' messages if fips_enabled is set to avoid confusion. Dependent on earlier patch 'crypto: catch base cipher self-test failures in fips mode', which adds the test_done label. [1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 11 +++ 1 files changed, 11 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 5a50416..39ffa69 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2134,6 +2134,17 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) type, mask); goto test_done; notest: + /* +* Per NIST AESAVS[1], it isn't possible to have automated self-tests +* for counter mode aes vectors, they have to be covered by ecb mode +* and code inspection. The ecb mode tests are triggered above in the +* CRYPTO_ALG_TYPE_CIPHER section. Suppress warnings about missing +* ctr tests if we're in fips mode to avoid confusion. +* +* [1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf +*/ + if (fips_enabled && !strncmp(alg, "ctr(aes", 7)) + goto test_done; printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); test_done: if (fips_enabled && rc) From the way I read the document, anything operating in a counter mode will have an unpredictable output (given the counter operation isn't specified). While the above works, I'm not sure that it fully covers the various ccm modes available (ccm_base and rfc4309). I believe Appendix A only applies for straight up counter-mode aes; ccm_base and rfc4309 actually have well-defined counter operations.
We've already got self-tests for ccm(aes) and a pending patch for rfc4309(ccm(aes)), and since they don't start w/'ctr(aes', they wouldn't be caught by that (admittedly hacky) check even if we didn't have test vectors for them. Perhaps instead it would be better to add a TFM mask flag indicating that the selected transform included an unpredictable component or counter input (marking the alg as being unsuitable for automatic testing without knowledge of the inner workings of that counter). Then you could just test for that flag? Yeah, I thought about a flag too, but it seemed potentially a lot of overhead for what might well be restricted to ctr(aes*). It might've been relevant for ctr(des3_ede) or ctr(des), but they're not on the fips approved algo/mode list, so I took the easy way out. I'm game to go the flag route if need be though. -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: catch base cipher self-test failures in fips mode
On Wednesday 29 April 2009 09:15:07 Herbert Xu wrote: On Tue, Apr 28, 2009 at 09:11:51PM -0400, Jarod Wilson wrote: +notest: + printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); Can notest ever get here with rc != 0? Nope. -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: print self-test pass notices in fips mode
On Wednesday 29 April 2009 09:18:17 Herbert Xu wrote: On Tue, Apr 28, 2009 at 09:21:35PM -0400, Jarod Wilson wrote: diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 39ffa69..d0cc85c 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2149,6 +2149,10 @@ notest: test_done: if (fips_enabled && rc) panic("%s: %s alg self test failed in fips mode!\n", driver, alg); + /* fips mode requires we print out self-test success notices */ + if (fips_enabled && !rc && strncmp(alg, "ctr(aes", 7)) + printk(KERN_INFO "alg: self-tests for %s (%s) passed\n", + driver, alg); What is this strncmp crap for? To avoid claiming we successfully self-tested ctr(aes) when it's not actually directly testable. Was intended to go sort of hand in hand with the other patch to suppress 'no self test' messages for ctr(aes) when in fips mode. Of course, since at this point, we've run ecb(aes), and that's what's suggested as the way to test ctr(aes)[*], perhaps we don't need to suppress it. [*] well, along with the sign-off from the lab that the counter code is acceptable -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: print self-test pass notices in fips mode
On Wednesday 29 April 2009 09:21:53 Jarod Wilson wrote: On Wednesday 29 April 2009 09:18:17 Herbert Xu wrote: On Tue, Apr 28, 2009 at 09:21:35PM -0400, Jarod Wilson wrote: diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 39ffa69..d0cc85c 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2149,6 +2149,10 @@ notest: test_done: if (fips_enabled && rc) panic("%s: %s alg self test failed in fips mode!\n", driver, alg); + /* fips mode requires we print out self-test success notices */ + if (fips_enabled && !rc && strncmp(alg, "ctr(aes", 7)) + printk(KERN_INFO "alg: self-tests for %s (%s) passed\n", + driver, alg); What is this strncmp crap for? To avoid claiming we successfully self-tested ctr(aes) when it's not actually directly testable. Was intended to go sort of hand in hand with the other patch to suppress 'no self test' messages for ctr(aes) when in fips mode. Of course, since at this point, we've run ecb(aes), and that's what's suggested as the way to test ctr(aes)[*], perhaps we don't need to suppress it. [*] well, along with the sign-off from the lab that the counter code is acceptable So this might actually be another argument in favor of adding a "this algo isn't really testable" flag as Neil suggested... -- Jarod Wilson ja...@redhat.com
Re: [PATCH] crypto: catch base cipher self-test failures in fips mode
On Wednesday 29 April 2009 09:26:46 Herbert Xu wrote: On Wed, Apr 29, 2009 at 09:18:37AM -0400, Jarod Wilson wrote: On Wednesday 29 April 2009 09:15:07 Herbert Xu wrote: On Tue, Apr 28, 2009 at 09:11:51PM -0400, Jarod Wilson wrote: +notest: + printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); Can notest ever get here with rc != 0? Nope. So why do we need to move it? Oh. Hrm. Upon looking a bit harder at it, I think the only reason would be if we wanted to print out a message claiming success in testing ctr(aes). There's also a devious ulterior motive for this patch, which is to make the cryptodev tree look identical(er) to the Red Hat Enterprise Linux 5.x kernel tree w/in alg_test()... -- Jarod Wilson ja...@redhat.com
[PATCH] crypto: catch base cipher self-test failures in fips mode
I think this might have already been posted by Neil Horman, and we already have it in the Red Hat Enterprise Linux 5.x kernels, but in fips mode, we need to panic on the base cipher self-tests failing as well as the later tests. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 14 +++--- 1 files changed, 7 insertions(+), 7 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 40c1078..5a50416 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2109,7 +2109,7 @@ static int alg_find_test(const char *alg) int alg_test(const char *driver, const char *alg, u32 type, u32 mask) { int i; - int rc; + int rc = 0; if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) { char nalg[CRYPTO_MAX_ALG_NAME]; @@ -2122,7 +2122,8 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) if (i < 0) goto notest; - return alg_test_cipher(alg_test_descs + i, driver, type, mask); + rc = alg_test_cipher(alg_test_descs + i, driver, type, mask); + goto test_done; } i = alg_find_test(alg); @@ -2131,14 +2132,13 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) rc = alg_test_descs[i].test(alg_test_descs + i, driver, type, mask); + goto test_done; +notest: + printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); +test_done: if (fips_enabled && rc) panic("%s: %s alg self test failed in fips mode!\n", driver, alg); - return rc; - -notest: - printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); - return 0; } EXPORT_SYMBOL_GPL(alg_test); -- Jarod Wilson ja...@redhat.com
[PATCH] crypto: don't raise alarm for no ctr(aes*) tests in fips mode
Per the NIST AESAVS document, Appendix A[1], it isn't possible to have automated self-tests for counter-mode AES, but people are misled to believe something is wrong by the message that says there is no test for ctr(aes). Simply suppress all 'no test for ctr(aes*' messages if fips_enabled is set to avoid confusion. Dependent on earlier patch 'crypto: catch base cipher self-test failures in fips mode', which adds the test_done label. [1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 11 +++ 1 files changed, 11 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 5a50416..39ffa69 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2134,6 +2134,17 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) type, mask); goto test_done; notest: + /* +* Per NIST AESAVS[1], it isn't possible to have automated self-tests +* for counter mode aes vectors, they have to be covered by ecb mode +* and code inspection. The ecb mode tests are triggered above in the +* CRYPTO_ALG_TYPE_CIPHER section. Suppress warnings about missing +* ctr tests if we're in fips mode to avoid confusion. +* +* [1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf +*/ + if (fips_enabled && !strncmp(alg, "ctr(aes", 7)) + goto test_done; printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); test_done: if (fips_enabled && rc) -- Jarod Wilson ja...@redhat.com
Re: [PATCH 2/3] crypto: handle ccm dec test vectors expected to fail verification
On 04/20/2009 02:25 AM, Herbert Xu wrote: On Wed, Apr 15, 2009 at 09:36:43AM -0400, Jarod Wilson wrote: Add infrastructure to tcrypt to support handling ccm decryption test vectors that are expected to fail verification. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/testmgr.c | 32 crypto/testmgr.h |1 + 2 files changed, 33 insertions(+), 0 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index a8bdcb3..92f4df0 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -373,6 +373,16 @@ static int test_aead(struct crypto_aead *tfm, int enc, switch (ret) { case 0: + if (template[i].novrfy) { + /* verification was supposed to fail */ + printk(KERN_ERR "alg: aead: %s failed " + "on test %d for %s: ret was 0, " + "expected -EBADMSG\n", + e, j, algo); + /* so really, we got a bad message */ + ret = -EBADMSG; + goto out; + } break; case -EINPROGRESS: case -EBUSY: @@ -382,6 +392,10 @@ static int test_aead(struct crypto_aead *tfm, int enc, INIT_COMPLETION(result.completion); break; } + case -EBADMSG: + if (template[i].novrfy) + /* verification failure was expected */ + goto next_aead_vector; /* fall through */ How about just doing continue instead of having this goto? Gah. Yeah, that makes sense. Relic of having some additional debug goo in there, where I wanted feedback from every loop of the test... -- Jarod Wilson ja...@redhat.com
[PATCH v4 0/2] crypto: add testmgr support and self-tests for rfc4309
These patches add necessary infrastructure additions to testmgr/tcrypt to support the inclusion of rfc4309(ccm(aes)) self-tests, and the rfc4309 test vectors themselves.

[PATCH 1/2] crypto: handle ccm dec test vectors expected to fail verification
[PATCH 2/2] crypto: add self-tests for rfc4309(ccm(aes))

Differences from the prior submission:
- Patch 1 from the prior series dropped, as it turns out to be completely unnecessary.
- Now using 'continue' directly when expected ccm dec verify failures are encountered, rather than wedging in an unneeded goto.

The self-tests patch itself remains unchanged.

-- 
Jarod Wilson
ja...@redhat.com
[PATCH 2/2] crypto: add self-tests for rfc4309(ccm(aes))
Add an array of encryption and decryption + verification self-tests for rfc4309(ccm(aes)). Test vectors all come from sample FIPS CAVS files provided to Red Hat by a testing lab. Unfortunately, all the published sample vectors in RFC 3610 and NIST Special Publication 800-38C contain nonce lengths that the kernel's rfc4309 implementation doesn't support, so while using some public domain vectors would have been preferred, its not possible at this time. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/tcrypt.c |4 + crypto/testmgr.c | 15 +++ crypto/testmgr.h | 370 ++ 3 files changed, 389 insertions(+), 0 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 50d1e35..0452036 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -667,6 +667,10 @@ static void do_test(int m) tcrypt_test(zlib); break; + case 45: + tcrypt_test(rfc4309(ccm(aes))); + break; + case 100: tcrypt_test(hmac(md5)); break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 92f4df0..c808501 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1893,6 +1893,21 @@ static const struct alg_test_desc alg_test_descs[] = { } } }, { + .alg = rfc4309(ccm(aes)), + .test = alg_test_aead, + .suite = { + .aead = { + .enc = { + .vecs = aes_ccm_rfc4309_enc_tv_template, + .count = AES_CCM_4309_ENC_TEST_VECTORS + }, + .dec = { + .vecs = aes_ccm_rfc4309_dec_tv_template, + .count = AES_CCM_4309_DEC_TEST_VECTORS + } + } + } + }, { .alg = rmd128, .test = alg_test_hash, .suite = { diff --git a/crypto/testmgr.h b/crypto/testmgr.h index b77b61d..5add651 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2848,6 +2848,8 @@ static struct cipher_testvec cast6_dec_tv_template[] = { #define AES_GCM_DEC_TEST_VECTORS 8 #define AES_CCM_ENC_TEST_VECTORS 7 #define AES_CCM_DEC_TEST_VECTORS 7 +#define AES_CCM_4309_ENC_TEST_VECTORS 7 +#define AES_CCM_4309_DEC_TEST_VECTORS 10 static struct cipher_testvec aes_enc_tv_template[] = { { /* From FIPS-197 */ @@ -5826,6 +5828,374 @@ static struct aead_testvec 
aes_ccm_dec_tv_template[] = { }, }; +/* + * rfc4309 refers to section 8 of rfc3610 for test vectors, but they all + * use a 13-byte nonce, we only support an 11-byte nonce. Similarly, all of + * Special Publication 800-38C's test vectors also use nonce lengths our + * implementation doesn't support. The following are taken from fips cavs + * fax files on hand at Red Hat. + * + * nb: actual key lengths are (klen - 3), the last 3 bytes are actually + * part of the nonce which combine w/the iv, but need to be input this way. + */ +static struct aead_testvec aes_ccm_rfc4309_enc_tv_template[] = { + { + .key= \x83\xac\x54\x66\xc2\xeb\xe5\x05 + \x2e\x01\xd1\xfc\x5d\x82\x66\x2e + \x96\xac\x59, + .klen = 19, + .iv = \x30\x07\xa1\xe2\xa2\xc7\x55\x24, + .alen = 0, + .input = \x19\xc8\x81\xf6\xe9\x86\xff\x93 + \x0b\x78\x67\xe5\xbb\xb7\xfc\x6e + \x83\x77\xb3\xa6\x0c\x8c\x9f\x9c + \x35\x2e\xad\xe0\x62\xf9\x91\xa1, + .ilen = 32, + .result = \xab\x6f\xe1\x69\x1d\x19\x99\xa8 + \x92\xa0\xc4\x6f\x7e\xe2\x8b\xb1 + \x70\xbb\x8c\xa6\x4c\x6e\x97\x8a + \x57\x2b\xbe\x5d\x98\xa6\xb1\x32 + \xda\x24\xea\xd9\xa1\x39\x98\xfd + \xa4\xbe\xd9\xf2\x1a\x6d\x22\xa8, + .rlen = 48, + }, { + .key= \x1e\x2c\x7e\x01\x41\x9a\xef\xc0 + \x0d\x58\x96\x6e\x5c\xa2\x4b\xd3 + \x4f\xa3\x19, + .klen = 19, + .iv = \xd3\x01\x5a\xd8\x30\x60\x15\x56, + .assoc = \xda\xe6\x28\x9c\x45\x2d\xfd\x63 + \x5e\xda\x4c\xb6\xe6\xfc\xf9\xb7 + \x0c\x56\xcb\xe4\xe0\x05\x7a\xe1 + \x0a\x63\x09\x78\xbc\x2c\x55\xde, + .alen = 32, + .input = \x87\xa3\x36\xfd\x96\xb3\x93\x78 + \xa9\x28\x63\xba\x12\xa3\x14\x85 + \x57\x1e\x06\xc9\x7b\x21\xef\x76 + \x7f\x38\x7e\x8e\x29\xa4\x3e\x7e, + .ilen
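The "actual key lengths are (klen - 3)" note in the vector comment above can be made concrete: in RFC 4309, the last 3 bytes of the setkey material are a nonce salt, which is concatenated with the 8-byte per-packet IV to form the 11-byte CCM nonce. A hedged sketch of that split (the struct and helper names are illustrative, not the kernel's actual layout):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define RFC4309_SALT_LEN 3   /* trailing bytes of the setkey material */
#define RFC4309_IV_LEN   8   /* per-packet IV */

struct rfc4309_key {                 /* illustrative container only */
    const unsigned char *cipher_key; /* the real AES key */
    size_t cipher_klen;              /* klen - 3 */
    unsigned char nonce[RFC4309_SALT_LEN + RFC4309_IV_LEN]; /* 11 bytes */
};

/* split setkey material and build the CCM nonce: salt || iv */
static int rfc4309_split(struct rfc4309_key *out,
                         const unsigned char *key, size_t klen,
                         const unsigned char *iv)
{
    if (klen <= RFC4309_SALT_LEN)
        return -1;
    out->cipher_key = key;
    out->cipher_klen = klen - RFC4309_SALT_LEN;
    memcpy(out->nonce, key + out->cipher_klen, RFC4309_SALT_LEN);
    memcpy(out->nonce + RFC4309_SALT_LEN, iv, RFC4309_IV_LEN);
    return 0;
}
```

For the first vector above, klen = 19 thus yields a 16-byte AES-128 key plus the salt \x96\xac\x59, which is why the vectors must be input with the salt appended to the key.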
Re: [PATCH v2] crypto: add self-tests for rfc4309(ccm(aes))
On Wednesday 15 April 2009 07:20:53 Herbert Xu wrote: On Mon, Apr 13, 2009 at 07:11:00PM -0400, Jarod Wilson wrote:

+		case -EBADMSG:
+			if (template[i].novrfy)
+				/* verification failure was expected */
+				goto next_aead_vector;
 			/* fall through */

We should also fail if novrfy is true and we get a 0 return value.

Oh, yeah, whoops, missed that case. Will add that.

I'd also prefer to have the novrfy infrastructure changes go into a separate patch.

Yeah, wasn't sure if I should do infra separate or not. I'll split things up a bit for the next submission that includes the catch for novrfy == true, ret == 0.

-- 
Jarod Wilson
ja...@redhat.com
[PATCH 1/3] crypto: properly handle null input and assoc data aead test vectors
Currently, if either input or associated data are null in an aead test vector, we'll have random contents in the input and assoc arrays. Similar to the iv, play it safe and zero out the contents.

Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/testmgr.c | 14 ++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index bfee6e9..a8bdcb3 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -314,8 +314,18 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 		input = xbuf[0];
 		assoc = axbuf[0];
 
-		memcpy(input, template[i].input, template[i].ilen);
-		memcpy(assoc, template[i].assoc, template[i].alen);
+		if (template[i].input)
+			memcpy(input, template[i].input,
+			       template[i].ilen);
+		else
+			memset(input, 0, MAX_IVLEN);
+
+		if (template[i].assoc)
+			memcpy(assoc, template[i].assoc,
+			       template[i].alen);
+		else
+			memset(assoc, 0, MAX_IVLEN);
+
 		if (template[i].iv)
 			memcpy(iv, template[i].iv, MAX_IVLEN);
 		else
-- 
1.6.2.2

-- 
Jarod Wilson
ja...@redhat.com
[PATCH 2/3] crypto: handle ccm dec test vectors expected to fail verification
Add infrastructure to tcrypt to support handling ccm decryption test vectors that are expected to fail verification.

Signed-off-by: Jarod Wilson ja...@redhat.com
---
 crypto/testmgr.c | 32 ++++++++++++++++++++++++++++++++
 crypto/testmgr.h |  1 +
 2 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index a8bdcb3..92f4df0 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -373,6 +373,16 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 		switch (ret) {
 		case 0:
+			if (template[i].novrfy) {
+				/* verification was supposed to fail */
+				printk(KERN_ERR "alg: aead: %s failed "
+				       "on test %d for %s: ret was 0, "
+				       "expected -EBADMSG\n", e, j, algo);
+				/* so really, we got a bad message */
+				ret = -EBADMSG;
+				goto out;
+			}
 			break;
 		case -EINPROGRESS:
 		case -EBUSY:
@@ -382,6 +392,10 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 			INIT_COMPLETION(result.completion);
 			break;
 		}
+		case -EBADMSG:
+			if (template[i].novrfy)
+				/* verification failure was expected */
+				goto next_aead_vector;
 			/* fall through */
 		default:
 			printk(KERN_ERR "alg: aead: %s failed on test "
@@ -398,6 +412,8 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 				goto out;
 			}
 		}
+next_aead_vector:
+		continue;
 	}
 
 	for (i = 0, j = 0; i < tcount; i++) {
@@ -491,6 +507,16 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 		switch (ret) {
 		case 0:
+			if (template[i].novrfy) {
+				/* verification was supposed to fail */
+				printk(KERN_ERR "alg: aead: %s failed "
+				       "on chunk test %d for %s: ret "
+				       "was 0, expected -EBADMSG\n",
+				       e, j, algo);
+				/* so really, we got a bad message */
+				ret = -EBADMSG;
+				goto out;
+			}
 			break;
 		case -EINPROGRESS:
 		case -EBUSY:
@@ -500,6 +526,10 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 			INIT_COMPLETION(result.completion);
 			break;
 		}
+		case -EBADMSG:
+			if (template[i].novrfy)
+				/* verification failure was expected */
+				goto next_aead_chunk_vector;
 			/* fall through */
 		default:
 			printk(KERN_ERR "alg: aead: %s failed on "
@@ -550,6 +580,8 @@ static int test_aead(struct crypto_aead *tfm, int enc,
 				temp += template[i].tap[k];
 			}
 		}
+next_aead_chunk_vector:
+		continue;
 	}
 
 	ret = 0;

diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 526f00a..b77b61d 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -62,6 +62,7 @@ struct aead_testvec {
 	int np;
 	int anp;
 	unsigned char fail;
+	unsigned char novrfy;	/* ccm dec verification failure expected */
 	unsigned char wk;	/* weak key flag */
 	unsigned char klen;
 	unsigned short ilen;
-- 
1.6.2.2

-- 
Jarod Wilson
ja...@redhat.com
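The control flow this patch adds amounts to a small decision table: a 0 return is only a pass when verification was expected to succeed, and -EBADMSG is only a pass when the vector is flagged novrfy. A user-space sketch of that logic (the enum and function names are illustrative, not testmgr symbols):

```c
#include <assert.h>
#include <errno.h>

enum vec_result { VEC_PASS, VEC_FAIL };

/* classify one aead decryption test vector outcome, mirroring the switch */
static enum vec_result classify(int ret, int novrfy)
{
    if (ret == 0)
        return novrfy ? VEC_FAIL : VEC_PASS; /* should have been -EBADMSG */
    if (ret == -EBADMSG)
        return novrfy ? VEC_PASS : VEC_FAIL; /* expected verify failure */
    return VEC_FAIL;                         /* any other error fails */
}
```

This also captures Herbert's earlier review point: the ret == 0 with novrfy set case must be treated as a failure, not silently passed.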
[PATCH 3/3] crypto: add self-tests for rfc4309(ccm(aes))
Add an array of encryption and decryption + verification self-tests for rfc4309(ccm(aes)). Test vectors all come from sample FIPS CAVS files provided to Red Hat by a testing lab. Unfortunately, all the published sample vectors in RFC 3610 and NIST Special Publication 800-38C contain nonce lengths that the kernel's rfc4309 implementation doesn't support, so while using some public domain vectors would have been preferred, its not possible at this time. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/tcrypt.c |4 + crypto/testmgr.c | 15 +++ crypto/testmgr.h | 370 ++ 3 files changed, 389 insertions(+), 0 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 50d1e35..0452036 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -667,6 +667,10 @@ static void do_test(int m) tcrypt_test(zlib); break; + case 45: + tcrypt_test(rfc4309(ccm(aes))); + break; + case 100: tcrypt_test(hmac(md5)); break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 92f4df0..c808501 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1893,6 +1893,21 @@ static const struct alg_test_desc alg_test_descs[] = { } } }, { + .alg = rfc4309(ccm(aes)), + .test = alg_test_aead, + .suite = { + .aead = { + .enc = { + .vecs = aes_ccm_rfc4309_enc_tv_template, + .count = AES_CCM_4309_ENC_TEST_VECTORS + }, + .dec = { + .vecs = aes_ccm_rfc4309_dec_tv_template, + .count = AES_CCM_4309_DEC_TEST_VECTORS + } + } + } + }, { .alg = rmd128, .test = alg_test_hash, .suite = { diff --git a/crypto/testmgr.h b/crypto/testmgr.h index b77b61d..5add651 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2848,6 +2848,8 @@ static struct cipher_testvec cast6_dec_tv_template[] = { #define AES_GCM_DEC_TEST_VECTORS 8 #define AES_CCM_ENC_TEST_VECTORS 7 #define AES_CCM_DEC_TEST_VECTORS 7 +#define AES_CCM_4309_ENC_TEST_VECTORS 7 +#define AES_CCM_4309_DEC_TEST_VECTORS 10 static struct cipher_testvec aes_enc_tv_template[] = { { /* From FIPS-197 */ @@ -5826,6 +5828,374 @@ static struct aead_testvec 
aes_ccm_dec_tv_template[] = { }, }; +/* + * rfc4309 refers to section 8 of rfc3610 for test vectors, but they all + * use a 13-byte nonce, we only support an 11-byte nonce. Similarly, all of + * Special Publication 800-38C's test vectors also use nonce lengths our + * implementation doesn't support. The following are taken from fips cavs + * fax files on hand at Red Hat. + * + * nb: actual key lengths are (klen - 3), the last 3 bytes are actually + * part of the nonce which combine w/the iv, but need to be input this way. + */ +static struct aead_testvec aes_ccm_rfc4309_enc_tv_template[] = { + { + .key= \x83\xac\x54\x66\xc2\xeb\xe5\x05 + \x2e\x01\xd1\xfc\x5d\x82\x66\x2e + \x96\xac\x59, + .klen = 19, + .iv = \x30\x07\xa1\xe2\xa2\xc7\x55\x24, + .alen = 0, + .input = \x19\xc8\x81\xf6\xe9\x86\xff\x93 + \x0b\x78\x67\xe5\xbb\xb7\xfc\x6e + \x83\x77\xb3\xa6\x0c\x8c\x9f\x9c + \x35\x2e\xad\xe0\x62\xf9\x91\xa1, + .ilen = 32, + .result = \xab\x6f\xe1\x69\x1d\x19\x99\xa8 + \x92\xa0\xc4\x6f\x7e\xe2\x8b\xb1 + \x70\xbb\x8c\xa6\x4c\x6e\x97\x8a + \x57\x2b\xbe\x5d\x98\xa6\xb1\x32 + \xda\x24\xea\xd9\xa1\x39\x98\xfd + \xa4\xbe\xd9\xf2\x1a\x6d\x22\xa8, + .rlen = 48, + }, { + .key= \x1e\x2c\x7e\x01\x41\x9a\xef\xc0 + \x0d\x58\x96\x6e\x5c\xa2\x4b\xd3 + \x4f\xa3\x19, + .klen = 19, + .iv = \xd3\x01\x5a\xd8\x30\x60\x15\x56, + .assoc = \xda\xe6\x28\x9c\x45\x2d\xfd\x63 + \x5e\xda\x4c\xb6\xe6\xfc\xf9\xb7 + \x0c\x56\xcb\xe4\xe0\x05\x7a\xe1 + \x0a\x63\x09\x78\xbc\x2c\x55\xde, + .alen = 32, + .input = \x87\xa3\x36\xfd\x96\xb3\x93\x78 + \xa9\x28\x63\xba\x12\xa3\x14\x85 + \x57\x1e\x06\xc9\x7b\x21\xef\x76 + \x7f\x38\x7e\x8e\x29\xa4\x3e\x7e, + .ilen
[PATCH v2] crypto: add self-tests for rfc4309(ccm(aes))
One more time, with feeling. Investigated using test vectors from rfc3610, but they all use a 13-byte nonce, we only support 11-byte, so I'm sticking with using the samples from a fips cavs example file I have on hand. I swear I tested the first version of the patch before submitting last time, but upon further testing, don't know how it actually managed to run correctly w/o some of the additional changes in this version. This version contains the same vectors, but some of them had incorrect klen's in them last time. Additionally, null input and associated data should be handled more appropriately, as well as the expected decryption verification failure vectors. This version of the patch has been tested on both 2.6.30-rc1-git5 + cryptodev-2.6 and a Red Hat Enterprise Linux 5.x kernel, with some extra debugging spew added to verify it really *is* working. Signed-off-by: Jarod Wilson ja...@redhat.com --- crypto/tcrypt.c |4 + crypto/testmgr.c | 41 ++- crypto/testmgr.h | 371 ++ 3 files changed, 414 insertions(+), 2 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 50d1e35..0452036 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -667,6 +667,10 @@ static void do_test(int m) tcrypt_test(zlib); break; + case 45: + tcrypt_test(rfc4309(ccm(aes))); + break; + case 100: tcrypt_test(hmac(md5)); break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index bfee6e9..8e5cc4b 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -314,8 +314,18 @@ static int test_aead(struct crypto_aead *tfm, int enc, input = xbuf[0]; assoc = axbuf[0]; - memcpy(input, template[i].input, template[i].ilen); - memcpy(assoc, template[i].assoc, template[i].alen); + if (template[i].input) + memcpy(input, template[i].input, + template[i].ilen); + else + memset(input, 0, MAX_IVLEN); + + if (template[i].assoc) + memcpy(assoc, template[i].assoc, + template[i].alen); + else + memset(assoc, 0, MAX_IVLEN); + if (template[i].iv) memcpy(iv, template[i].iv, MAX_IVLEN); else @@ -372,6 +382,10 
@@ static int test_aead(struct crypto_aead *tfm, int enc, INIT_COMPLETION(result.completion); break; } + case -EBADMSG: + if (template[i].novrfy) + /* verification failure was expected */ + goto next_aead_vector; /* fall through */ default: printk(KERN_ERR alg: aead: %s failed on test @@ -388,6 +402,8 @@ static int test_aead(struct crypto_aead *tfm, int enc, goto out; } } +next_aead_vector: + continue; } for (i = 0, j = 0; i tcount; i++) { @@ -490,6 +506,10 @@ static int test_aead(struct crypto_aead *tfm, int enc, INIT_COMPLETION(result.completion); break; } + case -EBADMSG: + if (template[i].novrfy) + /* verification failure was expected */ + goto next_aead_chunk_vector; /* fall through */ default: printk(KERN_ERR alg: aead: %s failed on @@ -540,6 +560,8 @@ static int test_aead(struct crypto_aead *tfm, int enc, temp += template[i].tap[k]; } } +next_aead_chunk_vector: + continue; } ret = 0; @@ -1851,6 +1873,21 @@ static const struct alg_test_desc alg_test_descs[] = { } } }, { + .alg = rfc4309(ccm(aes)), + .test = alg_test_aead, + .suite = { + .aead = { + .enc = { + .vecs = aes_ccm_rfc4309_enc_tv_template, + .count = AES_CCM_4309_ENC_TEST_VECTORS + }, + .dec
Re: [PATCH] add self-tests for rfc4309(ccm(aes))
On Thursday 09 April 2009 14:52:04 Neil Horman wrote: On Thu, Apr 09, 2009 at 02:34:59PM -0400, Jarod Wilson wrote:

Patch is against the current cryptodev-2.6 tree, successfully tested via 'modprobe tcrypt type=45'. The number of test vectors might be a bit excessive, but I tried to hit a wide range of combinations of varying key sizes, associated data lengths, input lengths and pass/fail.

Signed-off-by: Jarod Wilson ja...@redhat.com

<snip>

+/*
+ * rfc4309 says section 8 contains test vectors, only, it doesn't, and NIST
+ * Special Publication 800-38C's test vectors use nonce lengths our rfc4309
+ * implementation doesn't support. The following are taken from fips cavs
+ * fax files on hand at Red Hat.
+ *
+ * nb: actual key lengths are (klen - 3), the last 3 bytes are actually
+ * part of the nonce which combine w/the iv, but need to be input this way.
+ */

RFC4309 section 8 actually says the test vectors you can use are here: http://www.ietf.org/rfc/rfc3610.txt in RFC3610 section 8.

Oh, I'm dense, didn't correctly parse that 4309 was referring back to 3610 for the actual test vectors. I'll see what I can do with those...

I don't think there's anything wrong with the vectors you're using below, but you may want to add the vectors from 3610 just to improve the testing.

I think I'd drop some of the ones in the initial patch in favor of adding some from 3610, rather than simply adding more. The coverage is already pretty good, increasing the number of vectors shouldn't really be necessary, but it would definitely be nice to have vectors that are already publicly published and verified.

-- 
Jarod Wilson
ja...@redhat.com
Re: [PATCH v4] crypto: des3_ede: permit weak keys unless REQ_WEAK_KEY set
While it's slightly insane to bypass the key1 == key2 || key2 == key3 check in triple-des, since it reduces it to the same strength as des, some folks do need to do this from time to time for backwards compatibility with des. My own case is FIPS CAVS test vectors. Many triple-des test vectors use a single key, replicated 3x. In order to get the expected results, des3_ede_setkey() needs to only reject weak keys if the CRYPTO_TFM_REQ_WEAK_KEY flag is set. Also sets a more appropriate RES flag when a weak key is found.

This time, hopefully without unintended line wrapping...

Signed-off-by: Jarod Wilson [EMAIL PROTECTED]
---
 crypto/des_generic.c | 5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/crypto/des_generic.c b/crypto/des_generic.c
index 5d0e458..5bd3ee3 100644
--- a/crypto/des_generic.c
+++ b/crypto/des_generic.c
@@ -868,9 +868,10 @@ static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
 	u32 *flags = &tfm->crt_flags;
 
 	if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
-		     !((K[2] ^ K[4]) | (K[3] ^ K[5])))) {
+		     !((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+		     (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
-		*flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
+		*flags |= CRYPTO_TFM_RES_WEAK_KEY;
 		return -EINVAL;
 	}

-- 
Jarod Wilson
[EMAIL PROTECTED]
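The unlikely() condition in the hunk above uses an XOR-and-OR trick: with each 64-bit DES key stored as two 32-bit words, (K[0] ^ K[2]) | (K[1] ^ K[3]) is zero exactly when key1 == key2. A small user-space sketch of the same predicate (toy key values; this is just the comparison, not DES key scheduling):

```c
#include <assert.h>
#include <stdint.h>

/* K holds three 64-bit DES keys as six 32-bit words: key1 = K[0..1],
 * key2 = K[2..3], key3 = K[4..5] */
static int des3_key_is_weak(const uint32_t K[6])
{
    /* an XOR of equal words is zero; OR-ing both word-pairs tests
     * whole-key equality without any branching per word */
    return !((K[0] ^ K[2]) | (K[1] ^ K[3])) ||  /* key1 == key2 */
           !((K[2] ^ K[4]) | (K[3] ^ K[5]));    /* key2 == key3 */
}
```

With the patch applied, this predicate only causes setkey() to fail when the caller has also set CRYPTO_TFM_REQ_WEAK_KEY; otherwise the degenerate (des-strength) key is accepted, which is what the CAVS single-key-replicated-3x vectors require.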
Re: [PATCH] crypto: des: fix inverted weak key flag check
Jarod Wilson wrote:

Based on the fact the testmgr code sets the weak key flag for vectors with wk = 1, the weak key flag check in des_setkey() is backwards. Also, add a warning when a weak key is rejected, otherwise you silently get a bogus result back.

Signed-off-by: Jarod Wilson [EMAIL PROTECTED]

Self-NAK. Due to mail server issues, this might show up before the original... My understanding of the flag's meaning appears to be backwards; Herbert pointed out that the use of this flag is self-consistent. Also, the printk probably shouldn't be there, the caller should check the return value and complain if need be.

-- 
Jarod Wilson
[EMAIL PROTECTED]
Re: [PATCH v2] crypto: des3_ede: permit weak keys unless REQ_WEAK_KEY set
Jarod Wilson wrote:

While it's slightly insane to bypass the key1 == key2 || key2 == key3 check in triple-des, since it reduces it to the same strength as des, some folks do need to do this from time to time for backwards compatibility with des. My own case is FIPS CAVS test vectors. Many triple-des test vectors use a single key, replicated 3x. In order to get the expected results, des3_ede_setkey() needs to honor the weak key flag. Also adds a warning when a weak key is rejected, otherwise you silently get back a bogus result.

Signed-off-by: Jarod Wilson [EMAIL PROTECTED]

v2: make CRYPTO_TFM_REQ_WEAK_KEY flag usage consistent w/rest of crypto subsystem, per comments from Herbert in Red Hat bugzilla #474394.
---
 crypto/des_generic.c | 3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/crypto/des_generic.c b/crypto/des_generic.c
index 5d0e458..9002073 100644
--- a/crypto/des_generic.c
+++ b/crypto/des_generic.c
@@ -868,7 +868,8 @@ static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
 	u32 *flags = &tfm->crt_flags;
 
 	if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
-		     !((K[2] ^ K[4]) | (K[3] ^ K[5])))) {
+		     !((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+		     (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
 		*flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
 		return -EINVAL;

-- 
Jarod Wilson
[EMAIL PROTECTED]
Re: [PATCH v3] crypto: des3_ede: permit weak keys unless REQ_WEAK_KEY set
Jarod Wilson wrote:

While it's slightly insane to bypass the key1 == key2 || key2 == key3 check in triple-des, since it reduces it to the same strength as des, some folks do need to do this from time to time for backwards compatibility with des. My own case is FIPS CAVS test vectors. Many triple-des test vectors use a single key, replicated 3x. In order to get the expected results, des3_ede_setkey() needs to honor the weak key flag.

v2: make CRYPTO_TFM_REQ_WEAK_KEY flag usage consistent w/rest of crypto subsystem, per comments from Herbert in Red Hat bugzilla #474394.
v3: set more appropriate RES flag, also per Herbert.

Signed-off-by: Jarod Wilson [EMAIL PROTECTED]
---
 crypto/des_generic.c | 5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/crypto/des_generic.c b/crypto/des_generic.c
index 5d0e458..5bd3ee3 100644
--- a/crypto/des_generic.c
+++ b/crypto/des_generic.c
@@ -868,9 +868,10 @@ static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
 	u32 *flags = &tfm->crt_flags;
 
 	if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
-		     !((K[2] ^ K[4]) | (K[3] ^ K[5])))) {
+		     !((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+		     (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
-		*flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
+		*flags |= CRYPTO_TFM_RES_WEAK_KEY;
 		return -EINVAL;
 	}

-- 
Jarod Wilson
[EMAIL PROTECTED]
[PATCH] crypto: ansi_cprng: avoid incorrect extra call to _get_more_prng_bytes
While working with some FIPS RNGVS test vectors yesterday, I discovered a little bug in the way the ansi_cprng code works right now. For example, the following test vector (complete with expected result) from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ... Key = f3b1666d13607242ed061cabb8d46202 DT = e6b3be782a23fa62d71d4afbb0e922fc V = f000 R = 88dda456302423e5f69da57e7b95c73a ...when run through ansi_cprng, yields an incorrect R value of e2afe0d794120103d6e86a2b503bdfaa. If I load up ansi_cprng w/dbg=1 though, it was fairly obvious what was going wrong: 8 getting 16 random bytes for context 810033fb2b10 Calling _get_more_prng_bytes for context 810033fb2b10 Input DT: : e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc Input I: : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Input V: : f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 tmp stage 0: : e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc tmp stage 1: : f4 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84 tmp stage 2: : 8c 53 6f 73 a4 1a af d4 20 89 68 f4 58 64 f8 be Returning new block for context 810033fb2b10 Output DT: : e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc Output I: : 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84 Output V: : 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70 New Random Data: : 88 dd a4 56 30 24 23 e5 f6 9d a5 7e 7b 95 c7 3a Calling _get_more_prng_bytes for context 810033fb2b10 Input DT: : e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc Input I: : 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84 Input V: : 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70 tmp stage 0: : e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc tmp stage 1: : 80 6b 3a 8c 23 ae 8f 53 be 71 4c 16 fc 13 b2 ea tmp stage 2: : 2a 4d e1 2a 0b 58 8e e6 36 b8 9c 0a 26 22 b8 30 Returning new block for context 810033fb2b10 Output DT: : e8 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc Output I: : c8 e2 01 fd 9f 4a 8f e5 e0 50 f6 21 76 19 67 9a Output V: : ba 98 e3 75 c0 1b 81 8d 03 d6 f8 e2 0c c6 54 4b New 
Random Data: : e2 af e0 d7 94 12 01 03 d6 e8 6a 2b 50 3b df aa
returning 16 from get_prng_bytes in context 810033fb2b10

The expected result is there, in the first New Random Data, but we're incorrectly making a second call to _get_more_prng_bytes, due to some checks that are slightly off, which resulted in our original bytes never being returned anywhere.

One approach to fixing this would be to alter some byte_count checks in get_prng_bytes, but it would mean the last DEFAULT_BLK_SZ bytes would be copied a byte at a time, rather than in a single memcpy, so a slightly more involved, equally functional, and ultimately more efficient way of fixing this was suggested to me by Neil, which I'm submitting here. All of the RNGVS ANSI X9.31 AES128 VST test vectors I've passed through ansi_cprng are now returning the expected results with this change.

---
 crypto/ansi_cprng.c | 17 +++++++++++------
 1 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
index 486aa93..1b3b1da 100644
--- a/crypto/ansi_cprng.c
+++ b/crypto/ansi_cprng.c
@@ -223,9 +223,10 @@ remainder:
 	}
 
 	/*
-	 * Copy up to the next whole block size
+	 * Copy any data less than an entire block
 	 */
 	if (byte_count < DEFAULT_BLK_SZ) {
+empty_rbuf:
 		for (; ctx->rand_data_valid < DEFAULT_BLK_SZ;
 			ctx->rand_data_valid++) {
 			*ptr = ctx->rand_data[ctx->rand_data_valid];
@@ -240,18 +241,22 @@ remainder:
 	 * Now copy whole blocks
 	 */
 	for (; byte_count >= DEFAULT_BLK_SZ; byte_count -= DEFAULT_BLK_SZ) {
-		if (_get_more_prng_bytes(ctx) < 0) {
-			memset(buf, 0, nbytes);
-			err = -EINVAL;
-			goto done;
+		if (ctx->rand_data_valid == DEFAULT_BLK_SZ) {
+			if (_get_more_prng_bytes(ctx) < 0) {
+				memset(buf, 0, nbytes);
+				err = -EINVAL;
+				goto done;
+			}
 		}
+		if (ctx->rand_data_valid > 0)
+			goto empty_rbuf;
 		memcpy(ptr, ctx->rand_data, DEFAULT_BLK_SZ);
 		ctx->rand_data_valid += DEFAULT_BLK_SZ;
 		ptr += DEFAULT_BLK_SZ;
 	}
 
 	/*
-	 * Now copy any extra partial data
+	 * Now go back and get any remaining partial block
 	 */
 	if (byte_count)
 		goto remainder;

-- 
Jarod Wilson
[EMAIL PROTECTED]
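The essence of the fix is bookkeeping on rand_data_valid: only call for more PRNG output when the 16-byte buffer is fully consumed, and drain leftover bytes before copying whole blocks, so the block that produced the expected R value is never thrown away by a premature second call. A simplified user-space model of that loop (refill() just stamps placeholder bytes; it is not the X9.31 update, and the loop is restructured rather than a copy of the kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLK_SZ 16

struct cprng_model {
    uint8_t rand_data[BLK_SZ];
    int rand_data_valid;  /* bytes already consumed from rand_data */
    int refills;          /* counts _get_more_prng_bytes-equivalent calls */
};

/* stand-in for _get_more_prng_bytes(): new block, cursor back to zero */
static void refill(struct cprng_model *c)
{
    memset(c->rand_data, 0xab, BLK_SZ); /* placeholder, not X9.31 output */
    c->rand_data_valid = 0;
    c->refills++;
}

/* dispense nbytes, refilling only when the buffer is truly empty and
 * draining any leftover partial block before generating a new one */
static int get_bytes(struct cprng_model *c, uint8_t *buf, int nbytes)
{
    int byte_count = nbytes;
    uint8_t *ptr = buf;

    while (byte_count > 0) {
        if (c->rand_data_valid == BLK_SZ)
            refill(c);
        int avail = BLK_SZ - c->rand_data_valid;
        int n = byte_count < avail ? byte_count : avail;
        memcpy(ptr, c->rand_data + c->rand_data_valid, n);
        c->rand_data_valid += n;
        ptr += n;
        byte_count -= n;
    }
    return nbytes;
}
```

The buggy behavior corresponds to refilling once per whole-block iteration unconditionally: the leftover bytes from the block holding the expected result were discarded before ever being copied out.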
Re: [PATCH] crypto: extend ansi_cprng to allow resetting of DT value
On Monday 03 November 2008 16:22:40 Neil Horman wrote:

Hey all-
This is a patch that was sent to me by Jarod Wilson, marking off my outstanding todo to allow the ansi cprng to set/reset the DT counter value in a cprng instance. Currently crypto_rng_reset accepts a seed byte array which is interpreted by the ansi_cprng as a {V, key} tuple. This patch extends that tuple to now be {V, key, DT}, with DT an optional value during reset. This patch also fixes a bug we noticed in which the offset of the key area of the seed is started at DEFAULT_PRNG_KSZ rather than DEFAULT_BLK_SZ as it should be.
Regards
Neil

Signed-off-by: Neil Horman [EMAIL PROTECTED]

Even better than my original version, since it lets providing a DT value be optional. Go ahead and slap this on there too:

Signed-off-by: Jarod Wilson [EMAIL PROTECTED]

 ansi_cprng.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
index 72db0fd..486aa93 100644
--- a/crypto/ansi_cprng.c
+++ b/crypto/ansi_cprng.c
@@ -349,15 +349,25 @@ static int cprng_get_random(struct crypto_rng *tfm, u8 *rdata,
 	return get_prng_bytes(rdata, dlen, prng);
 }
 
+/*
+ * This is the cprng_registered reset method; the seed value is
+ * interpreted as the tuple {V KEY DT}.
+ * V and KEY are required during reset, and DT is optional, detected
+ * as being present by testing the length of the seed.
+ */
 static int cprng_reset(struct crypto_rng *tfm, u8 *seed, unsigned int slen)
 {
 	struct prng_context *prng = crypto_rng_ctx(tfm);
-	u8 *key = seed + DEFAULT_PRNG_KSZ;
+	u8 *key = seed + DEFAULT_BLK_SZ;
+	u8 *dt = NULL;
 
 	if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
 		return -EINVAL;
 
-	reset_prng_context(prng, key, DEFAULT_PRNG_KSZ, seed, NULL);
+	if (slen >= (2 * DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ))
+		dt = key + DEFAULT_PRNG_KSZ;
+
+	reset_prng_context(prng, key, DEFAULT_PRNG_KSZ, seed, dt);
 
 	if (prng->flags & PRNG_NEED_RESET)
 		return -EINVAL;
@@ -379,7 +389,7 @@ static struct crypto_alg rng_alg = {
 	.rng = {
 		.rng_make_random	= cprng_get_random,
 		.rng_reset		= cprng_reset,
-		.seedsize = DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ,
+		.seedsize = DEFAULT_PRNG_KSZ + 2 * DEFAULT_BLK_SZ,
 	}
 }
 };

-- 
Jarod Wilson
[EMAIL PROTECTED]
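The seed layout the patch above settles on is V (one block), then KEY, then an optional trailing DT block, with DT's presence detected purely from the seed length. A sketch of that offset arithmetic (the constants mirror the patch; parse_seed() and the struct are illustrative helpers, not kernel functions):

```c
#include <assert.h>
#include <stddef.h>

#define DEFAULT_BLK_SZ   16  /* V and DT are one AES block each */
#define DEFAULT_PRNG_KSZ 16  /* AES-128 key */

struct seed_parts {
    const unsigned char *v;   /* required, first block */
    const unsigned char *key; /* required, follows V */
    const unsigned char *dt;  /* optional; NULL when absent */
};

static int parse_seed(struct seed_parts *out,
                      const unsigned char *seed, size_t slen)
{
    if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
        return -1;                 /* too short even for {V, KEY} */
    out->v = seed;
    out->key = seed + DEFAULT_BLK_SZ;
    out->dt = NULL;
    if (slen >= 2 * DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ)
        out->dt = out->key + DEFAULT_PRNG_KSZ; /* trailing DT present */
    return 0;
}
```

This also makes the bug fix in the patch visible: the key must start at seed + DEFAULT_BLK_SZ (after V), not at seed + DEFAULT_PRNG_KSZ, even though the two constants happen to be equal for AES-128.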