From: Eric Biggers
The 2018-11-28 revision of the Adiantum paper has revised some notation:
- 'M' was replaced with 'L' (meaning "Left", for the left-hand part of
the message) in the definition of Adiantum hashing, to avoid confusion
with the full message
- ε-almost-∆-universal
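For reference, the standard definition (my paraphrase, not the paper's
exact wording): a keyed hash family H over an additive group is
ε-almost-∆-universal if, for all distinct messages x ≠ y and every
difference δ,

    \Pr_K\left[ H_K(x) - H_K(y) = \delta \right] \le \epsilon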
From: Eric Biggers
The kernel's ChaCha20 uses the RFC7539 convention of the nonce being 12
bytes rather than 8, so actually I only appended 12 random bytes (not
16) to its test vectors to form 24-byte nonces for the XChaCha20 test
vectors. The other 4 bytes were just from zero-padding
From: Eric Biggers
There is a draft specification for XChaCha20 being worked on. Add the
XChaCha20 test vector from the appendix so that we can be extra sure the
kernel's implementation is compatible.
I also recomputed the ciphertext with XChaCha12 and added it there too,
to keep the tests
From: Eric Biggers
If the stream cipher implementation is asynchronous, then the Adiantum
instance must be flagged as asynchronous as well. Otherwise someone
asking for a synchronous algorithm can get an asynchronous algorithm.
There are no asynchronous xchacha12 or xchacha20 implementations
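The fix is essentially one line; a sketch of the pattern (my
reconstruction using the usual template-instantiation naming, not
necessarily the literal patch):

    /* Propagate the underlying stream cipher's ASYNC flag into the
     * instance, so a request for a synchronous algorithm (mask =
     * CRYPTO_ALG_ASYNC) correctly filters out async-backed instances. */
    inst->alg.base.cra_flags = streamcipher_alg->base.cra_flags &
                               CRYPTO_ALG_ASYNC;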
On Thu, Sep 06, 2018 at 12:43:41PM +0200, Ard Biesheuvel wrote:
> On 5 September 2018 at 21:24, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > fscrypt doesn't use the CTR mode of operation for anything, so there's
> > no need to select CRYPTO_CTR. It was added by commit 71dea01ea2ed
From: Eric Biggers
'shash' algorithms are always synchronous, so passing CRYPTO_ALG_ASYNC
in the mask to crypto_alloc_shash() has no effect. Many users therefore
already don't pass it, but some still do. This inconsistency can cause
confusion, especially since the way the 'mask' argument works is
somewhat counterintuitive.
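A minimal sketch of the cleanup (my own example; "sha256" is just a
stand-in algorithm name):

    #include <crypto/hash.h>

    static struct crypto_shash *alloc_sha256(void)
    {
            /* Before: crypto_alloc_shash("sha256", 0, CRYPTO_ALG_ASYNC);
             * the mask bit is a no-op since no shash is ever ASYNC. */
            return crypto_alloc_shash("sha256", 0, 0);
    }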
From: Eric Biggers
'cipher' algorithms (single block ciphers) are always synchronous, so
passing CRYPTO_ALG_ASYNC in the mask to crypto_alloc_cipher() has no
effect. Many users therefore already don't pass it, but some still do.
This inconsistency can cause confusion, especially since the way the
'mask' argument works is somewhat counterintuitive.
From: Eric Biggers
Some algorithms initialize their .cra_list prior to registration.
But this is unnecessary since crypto_register_alg() will overwrite
.cra_list when adding the algorithm to the 'crypto_alg_list'.
Apparently the useless assignment has just been copy+pasted around.
So, remove
From: Eric Biggers
Remove the unnecessary setting of CRYPTO_ALG_TYPE_SKCIPHER.
Commit 2c95e6d97892 ("crypto: skcipher - remove useless setting of type
flags") took care of this everywhere else, but a few more instances made
it into the tree at about the same time. Squash them before
On Fri, Oct 19, 2018 at 05:54:12PM +0800, Ard Biesheuvel wrote:
> On 19 October 2018 at 13:41, Ard Biesheuvel wrote:
> > On 18 October 2018 at 12:37, Eric Biggers wrote:
> >> From: Eric Biggers
> >>
> >> Make the ARM scalar AES implementation closer to constant-time
On Fri, Oct 19, 2018 at 01:41:35PM +0800, Ard Biesheuvel wrote:
> On 18 October 2018 at 12:37, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > Make the ARM scalar AES implementation closer to constant-time by
> > disabling interrupts and prefetching
From: Eric Biggers
Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache. This is
feasible because due to ARM's "free" rotations, the main tables are only
1024 bytes instead of the usual 4096 used by most AES implementations.
From: Eric Biggers
In the "aes-fixed-time" AES implementation, disable interrupts while
accessing the S-box, in order to make cache-timing attacks more
difficult. Previously it was possible for the CPU to be interrupted
while the S-box was loaded into L1 cache, potentially
Thanks to Ard Biesheuvel for the suggestions.
Eric Biggers (2):
crypto: aes_ti - disable interrupts while accessing S-box
crypto: arm/aes - add some hardening against cache-timing attacks
arch/arm/crypto/Kconfig | 9 +
arch/arm/crypto/aes-cipher-core.S | 62 +++
From: Eric Biggers
Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache. This is
feasible because due to ARM's "free" rotations, the main tables are only
1024 bytes instead of the usual 4096 used by most AES implementations.
See https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.
Eric Biggers (2):
crypto: aes_ti - disable interrupts while accessing S-box
crypto: arm/aes - add some hardening against cache-timing attacks
From: Eric Biggers
In the "aes-fixed-time" AES implementation, disable interrupts while
accessing the S-box, in order to make cache-timing attacks more
difficult. Previously it was possible for the CPU to be interrupted
while the S-box was loaded into L1 cache, potentially
Hi Ard,
On Thu, Oct 04, 2018 at 08:55:14AM +0200, Ard Biesheuvel wrote:
> Hi Eric,
>
> On 4 October 2018 at 06:07, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > The generic constant-time AES implementation is supposed to preload the
> > AES S-box into the CPU's L1 data cache.
Hi Ard,
On Mon, Oct 08, 2018 at 11:15:53PM +0200, Ard Biesheuvel wrote:
> On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> because the ordinary load/store instructions (ldr, ldrh, ldrb) can
> tolerate any misalignment of the memory address. However, load/store
> double and
Hi Ard,
On Mon, Oct 08, 2018 at 11:15:52PM +0200, Ard Biesheuvel wrote:
> On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> because the ordinary load/store instructions (ldr, ldrh, ldrb) can
> tolerate any misalignment of the memory address. However, load/store
> double and
On Fri, Oct 05, 2018 at 07:16:13PM +0200, Ard Biesheuvel wrote:
> On 5 October 2018 at 19:13, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > aesni-intel_glue.c still calls crypto_fpu_init() and crypto_fpu_exit()
> > to register/unregister the "fpu" template. But these functions don't
From: Eric Biggers
aesni-intel_glue.c still calls crypto_fpu_init() and crypto_fpu_exit()
to register/unregister the "fpu" template. But these functions don't
exist anymore, causing a build error. Remove the calls to them.
Fixes: 944585a64f5e ("crypto: x86/aes-ni - remove special handling of
AES in PCBC mode")
From: Eric Biggers
The generic constant-time AES implementation is supposed to preload the
AES S-box into the CPU's L1 data cache. But, an interrupt handler can
run on the CPU and muck with the cache. Worse, on preemptible kernels
the process can even be preempted and moved to a different CPU
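A sketch of the hardening this implies (illustrative only; struct
aes_ctx, prefetch_sbox(), and aes_encrypt_block() are hypothetical
stand-ins, not actual kernel symbols):

    #include <linux/types.h>
    #include <linux/irqflags.h>

    static void encrypt_hardened(const struct aes_ctx *ctx,
                                 u8 *dst, const u8 *src)
    {
            unsigned long flags;

            /* With interrupts off, nothing can run on this CPU and
             * evict the S-box between the preload and the lookups. */
            local_irq_save(flags);
            prefetch_sbox();
            aes_encrypt_block(ctx, dst, src);
            local_irq_restore(flags);
    }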
From: Eric Biggers
In the new arm64 CTS-CBC implementation, return an error code rather
than crashing on inputs shorter than AES_BLOCK_SIZE bytes. Also set
cra_blocksize to AES_BLOCK_SIZE (like is done in the cts template) to
indicate the minimum input size.
Fixes: dd597fb33ff0 ("crypto:
Hi Yann,
On Wed, Sep 12, 2018 at 11:50:00AM +0200, Yann Droneaud wrote:
> Hi,
>
> Le mardi 11 septembre 2018 à 20:05 -0700, Eric Biggers a écrit :
> > From: Eric Biggers
> >
> > In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for
>
From: Eric Biggers
In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment. So, while my commit didn't break anything,
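A sketch of the approach (my reconstruction, not the literal patch):
emit the keystream through the unaligned-access helpers so the output
pointer, e.g. the buffer passed to get_random_bytes(), needs no
particular alignment:

    #include <linux/types.h>
    #include <asm/unaligned.h>

    static void chacha20_block_out(const u32 x[16], const u32 state[16],
                                   u8 *stream)
    {
            int i;

            /* Works for any alignment of 'stream'. */
            for (i = 0; i < 16; i++)
                    put_unaligned_le32(x[i] + state[i], stream + i * 4);
    }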
To revive this...
On Fri, Aug 10, 2018 at 08:27:58AM +0200, Stephan Mueller wrote:
> Am Donnerstag, 9. August 2018, 21:40:12 CEST schrieb Eric Biggers:
>
> Hi Eric,
>
> > while (bytes >= CHACHA20_BLOCK_SIZE) {
> > chacha20_block(state, stream);
>
From: Eric Biggers
fscrypt doesn't use the CTR mode of operation for anything, so there's
no need to select CRYPTO_CTR. It was added by commit 71dea01ea2ed
("ext4 crypto: require CONFIG_CRYPTO_CTR if ext4 encryption is
enabled"). But, I've been unable to identify the arm64
From: Eric Biggers
Optimize ChaCha20 NEON performance by:
- Implementing the 8-bit rotations using the 'vtbl.8' instruction.
- Streamlining the part that adds the original state and XORs the data.
- Making some other small tweaks.
On ARM Cortex-A7, these optimizations improve ChaCha20
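In intrinsics form, the vtbl.8 rotation trick looks roughly like this
(my illustration; the actual patch is hand-written NEON assembly):

    #include <arm_neon.h>

    /* rotl32(x, 8) permutes bytes within each 32-bit lane, so it can
     * be done with one byte-shuffle table lookup instead of the usual
     * vshl + vsri pair. */
    static uint32x4_t rotl32_by_8(uint32x4_t x)
    {
            static const uint8_t rol8_table[16] = {
                    3, 0, 1, 2,   7, 4, 5, 6,
                    11, 8, 9, 10, 15, 12, 13, 14,
            };
            uint8x16_t b = vreinterpretq_u8_u32(x);
            uint8x8x2_t halves = { { vget_low_u8(b), vget_high_u8(b) } };
            uint8x8_t lo = vtbl2_u8(halves, vld1_u8(rol8_table));
            uint8x8_t hi = vtbl2_u8(halves, vld1_u8(rol8_table + 8));

            return vreinterpretq_u32_u8(vcombine_u8(lo, hi));
    }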
On Fri, Aug 31, 2018 at 06:51:34PM +0200, Ard Biesheuvel wrote:
> >>
> >> + adr ip, .Lrol8_table
> >> mov r3, #10
> >>
> >> .Ldoubleround4:
> >> @@ -238,24 +268,25 @@ ENTRY(chacha20_4block_xor_neon)
> >> // x1 += x5, x13 = rotl32(x13 ^ x1, 8)
> >>
Hi Ard,
On Fri, Aug 31, 2018 at 05:56:24PM +0200, Ard Biesheuvel wrote:
> Hi Eric,
>
> On 31 August 2018 at 10:01, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > Optimize ChaCha20 NEON performance by:
> >
> > - Implementing the 8-bit rotations using the 'vtbl.8' instruction.
From: Eric Biggers
Optimize ChaCha20 NEON performance by:
- Implementing the 8-bit rotations using the 'vtbl.8' instruction.
- Streamlining the part that adds the original state and XORs the data.
- Making some other small tweaks.
On ARM Cortex-A7, these optimizations improve ChaCha20
On Thu, Aug 09, 2018 at 12:07:18PM -0700, Eric Biggers wrote:
> On Thu, Aug 09, 2018 at 08:38:56PM +0200, Stephan Müller wrote:
> > The function extract_crng invokes the ChaCha20 block operation directly
> > on the user-provided buffer. The block operation operates on u32
On Thu, Aug 09, 2018 at 08:38:56PM +0200, Stephan Müller wrote:
> The function extract_crng invokes the ChaCha20 block operation directly
> on the user-provided buffer. The block operation operates on u32 words.
> Thus the extract_crng function expects the buffer to be aligned to u32
> as it is
From: Eric Biggers
Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than
overflowing the buffer.
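The shape of the fix (a sketch of the approach; 'secret' is the decoded
kpp_secret header and 'params' the struct dh being encoded/decoded):

    /* Reject a header whose claimed length doesn't match the actual
     * encoded size, instead of trusting it and overrunning the buffer. */
    if (secret.len != crypto_dh_key_len(params))
            return -EINVAL;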
Signed-off-by: Eric Biggers
---
crypto/dh_helper.c | 30 --
1 file changed, 16 insertions(+), 14 deletions(-)
diff --git a/crypto/dh_helper.c b/crypto
From: Eric Biggers
It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size',
causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and
an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and
fix the lengths of the test vectors to match
Hi GaoKui,
On Thu, Jul 26, 2018 at 02:44:30AM +0000, gaokui (A) wrote:
> Hi, Eric,
> Thanks for your reply.
>
> I have run your program on an original kernel and it reproduced the
> crash. And I also run the program on a kernel with our patch, but there was
> no crash.
>
>
From: Eric Biggers
The 4-way ChaCha20 NEON code implements 16-bit rotates with vrev32.16,
but the one-way code (used on remainder blocks) implements it with
vshl + vsri, which is slower. Switch the one-way code to vrev32.16 too.
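For reference, the same trick in intrinsics form (my illustration; the
patch itself is assembly): a 16-bit rotation of 32-bit lanes is just a
halfword swap, which NEON expresses as a single vrev32.16:

    #include <arm_neon.h>

    static uint32x4_t rotl32_by_16(uint32x4_t x)
    {
            /* Swap the two 16-bit halves of every 32-bit lane. */
            return vreinterpretq_u32_u16(
                    vrev32q_u16(vreinterpretq_u16_u32(x)));
    }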
Signed-off-by: Eric Biggers
---
arch/arm/crypto/chacha20-neon
From: Eric Biggers
Like the skcipher_walk and blkcipher_walk cases:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
ablkcipher_walk_done(), e.g.
From: Eric Biggers
Like the skcipher_walk case:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
blkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks
From: Eric Biggers
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
skcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks
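A sketch of the guard this needs (my reconstruction, not the literal
patch):

    #include <crypto/scatterwalk.h>
    #include <crypto/internal/skcipher.h>

    /* Only flush via scatterwalk_done() when a nonzero number of bytes
     * was processed; the error path processes zero bytes and would hand
     * scatterwalk_pagedone() an invalid page when walk->offset == 0. */
    static void walk_out_done(struct skcipher_walk *walk,
                              unsigned int nbytes, int more)
    {
            if (nbytes)
                    scatterwalk_done(&walk->out, 1, more);
    }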
From: Eric Biggers
This series fixes the bug reported by Liu Chao (found using syzkaller)
where a crash occurs in scatterwalk_pagedone() on architectures such as
arm and arm64 that implement flush_dcache_page(), due to an invalid page
pointer when walk->offset == 0. This series attempts
From: Eric Biggers
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
make sense because actually walk->nbytes needs to be set to the length
of the first step in the walk, which may be less than walk->total. This
is done by skcipher_walk_next() which is called
From: Eric Biggers
scatterwalk_samebuf() is never used. Remove it.
Signed-off-by: Eric Biggers
---
include/crypto/scatterwalk.h | 7 ---
1 file changed, 7 deletions(-)
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index eac72840a7d2..a66c127a20ed 100644
From: Eric Biggers
All callers pass chain=0 to scatterwalk_crypto_chain().
Remove this unneeded parameter.
Signed-off-by: Eric Biggers
---
crypto/lrw.c | 4 ++--
crypto/scatterwalk.c | 2 +-
crypto/xts.c | 4 ++--
include/crypto/scatterwalk.h | 8
From: Eric Biggers
The ALIGN() macro needs to be passed the alignment, not the alignmask
(which is the alignment minus 1).
Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: stable@vger.kernel.org # v4.10+
Signed-off-by: Eric Biggers
---
crypto/skcipher.c | 2 +-
1 file changed, 1
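The bug class in miniature (illustrative, not the literal patch):

    #include <linux/kernel.h>
    #include <linux/types.h>

    static u8 *align_walk_buffer(void *buf, unsigned long alignmask)
    {
            /* Wrong: ALIGN((unsigned long)buf, alignmask)
             * ALIGN() takes the alignment itself; the mask is
             * alignment - 1, so add the 1 back: */
            return (u8 *)ALIGN((unsigned long)buf, alignmask + 1);
    }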
From: Eric Biggers
Commit b73b7ac0a774 ("crypto: sha256_generic - add cra_priority") gave
sha256-generic and sha224-generic a cra_priority of 100, to match the
convention for generic implementations. But sha256-arm64 and
sha224-arm64 also have priority 100, so their order
Hi Liu,
On Mon, Jul 09, 2018 at 05:10:19PM +0800, Liu Chao wrote:
> From: Luo Xinqiang
>
> In function scatterwalk_pagedone(), a kernel panic of invalid
> page will occur if walk->offset equals 0. This patch fixes the
> problem by setting the page address with sg_page(walk->sg)
> directly if
On Wed, Jul 11, 2018 at 03:26:56PM +0800, Herbert Xu wrote:
> On Tue, Jul 10, 2018 at 08:59:05PM -0700, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size',
> > causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and
Hi Stephan,
On Wed, Jun 27, 2018 at 08:15:31AM +0200, Stephan Müller wrote:
> Hi,
>
> Changes v2:
> * addition of a check that mpi_alloc succeeds.
>
> ---8<---
>
> According to SP800-56A section 5.6.2.1, the public key to be processed
> for the DH operation shall be checked for
From: Eric Biggers
Some crypto API users allocating a tfm with crypto_alloc_$FOO() are also
specifying the type flags for $FOO, e.g. crypto_alloc_shash() with
CRYPTO_ALG_TYPE_SHASH. But, that's redundant since the crypto API will
override any specified type flag/mask with the correct ones.
So
From: Eric Biggers
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this
is redundant with the C structure type ('struct shash_alg'), and
crypto_register_shash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
From: Eric Biggers
Many ahash algorithms set .cra_flags = CRYPTO_ALG_TYPE_AHASH. But this
is redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
From: Eric Biggers
Some ahash algorithms set .cra_type = _ahash_type. But this is
redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the .cra_type automatically.
Apparently the useless assignment has just been copy+pasted around.
So, remove
I didn't bother with 'blkcipher' and 'ablkcipher' algorithms,
since those should eventually be migrated to 'skcipher' anyway.
Eric Biggers (6):
crypto: shash - remove useless setting of type flags
crypto: ahash - remove useless setting of type flags
crypto: ahash - remove useless setting of cra_type
From: Eric Biggers
Some skcipher algorithms set .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER. But
this is redundant with the C structure type ('struct skcipher_alg'), and
crypto_register_skcipher() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
From: Eric Biggers
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
From: Eric Biggers
sha1-generic had a cra_priority of 0, so it wasn't possible to have a
lower priority SHA-1 implementation, as is desired for sha1_mb which is
only useful under certain workloads and is otherwise extremely slow.
Change it to priority 100, which is the priority used for many
From: Eric Biggers
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-256 or SHA-224 implementation, as
is desired for sha256_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100
From: Eric Biggers
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-512 or SHA-384 implementation, as
is desired for sha512_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100
From: Eric Biggers
With all the crypto modules enabled on x86, and with a CPU that supports
AVX-2 but not SHA-NI instructions (e.g. Haswell, Broadwell, Skylake),
the "multibuffer" implementations of SHA-1, SHA-256, and SHA-512 are the
highest priority. However, these implementations
From: Eric Biggers
I found that not only was sha256_mb sometimes computing the wrong digest
(fixed by a separately sent patch), but under normal workloads it's
hundreds of times slower than sha256-avx2, due to the flush delay. The
same applies to sha1_mb and sha512_mb. Yet, currently these can
From: Eric Biggers
"arch/x86/crypto/sha*-mb" needs a trailing slash, since it refers to
directories. Otherwise get_maintainer.pl doesn't find the entry.
Signed-off-by: Eric Biggers
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/M
From: Eric Biggers
There is a copy-paste error where sha256_mb_mgr_get_comp_job_avx2()
copies the SHA-256 digest state from sha256_mb_mgr::args::digest to
job_sha256::result_digest. Consequently, the sha256_mb algorithm
sometimes calculates the wrong digest. Fix it.
Reproducer using AF_ALG
Hi Jan,
On Fri, Jun 22, 2018 at 04:37:20PM +0200, Jan Glauber wrote:
> While commit 336073840a87 ("crypto: testmgr - Allow different compression
> results")
> allowed to test non-generic compression algorithms there are some corner
> cases that would not be detected in test_comp().
>
> For
Hi Jan,
On Fri, Jun 22, 2018 at 04:37:21PM +0200, Jan Glauber wrote:
> The test vectors were generated using the ThunderX ZIP coprocessor.
>
> Signed-off-by: Jan Glauber
> ---
> crypto/testmgr.c | 9 ++
> crypto/testmgr.h | 77
> 2 files
implementation of Speck-XTS")
Signed-off-by: Eric Biggers
---
arch/arm/crypto/speck-neon-core.S | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm/crypto/speck-neon-core.S
b/arch/arm/crypto/speck-neon-core.S
index 3c1e203e53b9..57caa742016e 100644
--- a/arch/arm/crypto/
On Sun, Jun 17, 2018 at 01:10:41PM +0200, Ard Biesheuvel wrote:
> > +
> > + // One-time XTS preparation
> > +
> > + /*
> > + * Allocate stack space to store 128 bytes worth of tweaks. For
> > + * performance, this space is aligned to a 16-byte boundary so
From: Eric Biggers
Remove the original version of the VMAC template that had the nonce
hardcoded to 0 and produced a digest with the wrong endianness. I'm
unsure whether this had users or not (there are no explicit in-kernel
references to it), but given that the hardcoded nonce made it wildly
From: Eric Biggers
Hi, this series fixes various bugs in the VMAC template (crypto/vmac.c).
First, the per-request context was being stored in the transform
context, which made VMAC not thread-safe, and the kernel could be
crashed by using the same VMAC transform in multiple threads using
AF_ALG
From: Eric Biggers
The VMAC template assumes the block cipher has a 128-bit block size, but
it failed to check for that. Thus it was possible to instantiate it
using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
uninitialized memory to be used.
Add the needed check when instantiating the template.
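The shape of the check (my reconstruction; 'alg' is the underlying
cipher's crypto_alg, examined at template instantiation time):

    /* VMAC is defined only over 128-bit block ciphers. */
    if (alg->cra_blocksize != 16)
            return -EINVAL;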
From: Eric Biggers
Currently the VMAC template uses a "nonce" hardcoded to 0, which makes
it insecure unless a unique key is set for every message. Also, the
endianness of the final digest is wrong: the implementation uses little
endian, but the VMAC specification has it as big endian.
From: Eric Biggers
syzbot reported a crash in vmac_final() when multiple threads
concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
is pretty fundamental: the VMAC template doesn't separate per-request
state from per-tfm (per-key) state like the other hash algorithms do.
On Sat, May 12, 2018 at 10:43:08AM +0200, Dmitry Vyukov wrote:
> On Fri, Feb 2, 2018 at 11:18 PM, Eric Biggers <ebigge...@gmail.com> wrote:
> > On Fri, Feb 02, 2018 at 02:57:32PM +0100, Dmitry Vyukov wrote:
> >> On Fri, Feb 2, 2018 at 2:48 PM, syzbot
From: Eric Biggers <ebigg...@google.com>
The x86 assembly implementations of Salsa20 use the frame base pointer
register (%ebp or %rbp), which breaks frame pointer convention and
breaks stack traces when unwinding from an interrupt in the crypto code.
Recent (v4.10+) kernels will warn
From: Eric Biggers <ebigg...@google.com>
This reverts commit eb772f37ae8163a89e28a435f6a18742ae06653b, as now the
x86 Salsa20 implementation has been removed and the generic helpers are
no longer needed outside of salsa20_generic.c.
We could keep this just in case someone else wants to add salsa20-asm
implementations, which as far as I can tell are basically useless these
days; the x86_64 asm version in particular isn't actually any faster
than the C version anymore. (And possibly no one even uses these
anyway.) See the patch for the full explanation.
Eric Biggers (2):
crypto: x
Hi Denis,
On Fri, May 25, 2018 at 09:48:36AM -0500, Denis Kenzior wrote:
> Hi Eric,
>
> > The solution to the "too many system calls" problem is trivial: just do
> > SHA-512
> > in userspace. It's just math; you don't need a system call, any more than
> > you
> > would call sys_add(1, 1) to
Hi Denis,
On Thu, May 24, 2018 at 07:56:50PM -0500, Denis Kenzior wrote:
> Hi Ted,
>
> > > I'm not really here to criticize or judge the past. AF_ALG exists now. It
> > > is being used. Can we just make it better? Or are we going to whinge at
> > > every user that tries to use (and improve)
On Thu, May 24, 2018 at 09:36:15AM -0500, Denis Kenzior wrote:
> Hi Stephan,
>
> On 05/24/2018 12:57 AM, Stephan Mueller wrote:
> > Am Donnerstag, 24. Mai 2018, 04:45:00 CEST schrieb Eric Biggers:
> >
> > Hi Eric,
> >
> > >
> > > "Not hav
Hi Yu,
On Thu, May 24, 2018 at 10:26:12AM +0800, Yu Chen wrote:
> Hi Stephan,
> thanks for your reply,
> On Wed, May 23, 2018 at 1:43 AM Stephan Mueller wrote:
>
> > Am Dienstag, 22. Mai 2018, 05:00:40 CEST schrieb Yu Chen:
>
> > Hi Yu,
>
> > > Hi all,
> > > The request
From: Eric Biggers <ebigg...@google.com>
One "kw(aes)" decryption test vector doesn't exactly match an encryption
test vector with input and result swapped. In preparation for removing
the decryption test vectors, add this test vector to the encryption test
vectors, so we don't lose any test coverage.
From: Eric Biggers <ebigg...@google.com>
None of the four "ecb(tnepres)" decryption test vectors exactly match an
encryption test vector with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.
From: Eric Biggers <ebigg...@google.com>
Two "ecb(des)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.
From: Eric Biggers <ebigg...@google.com>
One "cbc(des)" decryption test vector doesn't exactly match an
encryption test vector with input and result swapped. It's *almost* the
same as one, but the decryption version is "chunked" while the
encryption version is "unchunked".
This includes my manual changes on top of the scripted changes.
Eric Biggers (5):
crypto: testmgr - add extra ecb(des) encryption test vectors
crypto: testmgr - make an cbc(des) encryption test vector chunked
crypto: testmgr - add extra ecb(tnepres) encryption test vectors
crypto: testmgr - add extra
From: Eric Biggers <ebigg...@google.com>
The __crc32_le() wrapper function is pointless. Just call crc32_le()
directly instead.
Signed-off-by: Eric Biggers <ebigg...@google.com>
---
crypto/crc32_generic.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --
From: Eric Biggers <ebigg...@google.com>
crc32c-generic sets an alignmask, but actually its ->update() works with
any alignment; only its ->setkey() and outputting the final digest
assume an alignment. To prevent the buffer from having to be aligned by
the crypto API for just these cases, use the unaligned access helpers.
From: Eric Biggers <ebigg...@google.com>
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one
(which uses an empty input) to crc32 too, and also add a new one to both
that uses a nonempty input. These test vectors verify that crc32 and
crc32c implementations use the c
From: Eric Biggers <ebigg...@google.com>
crc32-generic doesn't have a cra_alignmask set, which is desired as its
->update() works with any alignment. However, it incorrectly assumes
4-byte alignment in ->setkey() and when outputting the final digest.
Fix this by using the unaligned access helpers.
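A sketch mirroring the described fix (illustrative helper names, not
the actual crc32-generic functions):

    #include <linux/types.h>
    #include <asm/unaligned.h>

    static void crc32_load_key(u32 *crc, const u8 *key)
    {
            *crc = get_unaligned_le32(key);   /* was: a direct u32 load */
    }

    static void crc32_write_digest(u32 crc, u8 *out)
    {
            put_unaligned_le32(crc, out);     /* was: a direct u32 store */
    }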
hash algorithm to have both unkeyed and keyed tests,
without relying on having it work by accident.
The new test vectors pass with the generic and x86 CRC implementations.
I haven't tested others yet; if any happen to be broken, they'll need to
be fixed.
Eric Biggers (6):
crypto: crc32-generic
From: Eric Biggers <ebigg...@google.com>
The Blackfin CRC driver was removed by commit 9678a8dc53c1 ("crypto:
bfin_crc - remove blackfin CRC driver"), but it was forgotten to remove
the corresponding "hmac(crc32)" test vectors. I see no point in keeping
them.
From: Eric Biggers <ebigg...@google.com>
Since testmgr uses a single tfm for all tests of each hash algorithm,
once a key is set the tfm won't be unkeyed anymore. But with crc32 and
crc32c, the key is really the "default initial state" and is optional;
those algorithms should
Hi Ondrej,
On Fri, May 11, 2018 at 02:12:51PM +0200, Ondrej Mosnáček wrote:
> From: Ondrej Mosnacek
>
> This patch adds optimized implementations of AEGIS-128, AEGIS-128L,
> and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions.
>
> Signed-off-by: Ondrej Mosnacek
Note: algorithms can be dynamically added to the crypto API, which can
result in different implementations being used at different times. But
this is rare; for most users, showing the first will be good enough.
Signed-off-by: Eric Biggers <ebigg...@google.com>
---
Changed since v1:
Note: algorithms can be dynamically added to the crypto API, which can
result in different implementations being used at different times. But
this is rare; for most users, showing the first will be good enough.
Signed-off-by: Eric Biggers <ebigg...@google.com>
---
Note: this patch is o
2^253 time complexity and 2^125 chosen
plaintexts, i.e. only marginally faster than brute force. There is no
known attack on the full 34 rounds.
Signed-off-by: Eric Biggers <ebigg...@google.com>
---
Changed since v1:
- Improved commit message and documentation.
Documentation/files
Hi Samuel,
On Thu, Apr 26, 2018 at 03:05:44AM +0100, Samuel Neves wrote:
> On Wed, Apr 25, 2018 at 8:49 PM, Eric Biggers <ebigg...@google.com> wrote:
> > I agree that my explanation should have been better, and should have
> > considered
> > more crypto algorithms
Hi Samuel,
On Wed, Apr 25, 2018 at 03:33:16PM +0100, Samuel Neves wrote:
> Let's put the provenance of Speck aside for a moment, and suppose that
> it is an ideal block cipher. There are still some issues with this
> patch as it stands.
>
> - The rationale seems off. Consider this bit from the
Hi Jason,
On Tue, Apr 24, 2018 at 10:58:35PM +0200, Jason A. Donenfeld wrote:
> Hi Eric,
>
> On Tue, Apr 24, 2018 at 8:16 PM, Eric Biggers <ebigg...@google.com> wrote:
> > So, what do you propose replacing it with?
>
> Something more cryptographically justif