Re: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path

2018-10-09 Thread Ard Biesheuvel
if (likely(c)) >> return; >> } >> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h >> index 4a5ad10e75f0..86267c232f34 100644 >> --- a/include/crypto/algapi.h >> +++ b/include/c

Re: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path

2018-10-08 Thread Eric Biggers
.h > +++ b/include/crypto/algapi.h > @@ -17,6 +17,8 @@ > #include > #include > > +#include > + > /* > * Maximum values for blocksize and alignmask, used to allocate > * static buffers that are big enough for any combination of > @@ -212,7 +214,9 @@ static in

[PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path

2018-10-08 Thread Ard Biesheuvel
+ b/include/crypto/algapi.h @@ -17,6 +17,8 @@ #include #include +#include + /* * Maximum values for blocksize and alignmask, used to allocate * static buffers that are big enough for any combination of @@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned
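The preview above is truncated diff context; as a rough illustration of the idea in the subject line — not the verbatim patch — the inline fast path can go through the unaligned accessors so the word-at-a-time loop no longer cares how the caller aligned its buffers. crypto_xor_fallback() is a hypothetical stand-in for the out-of-line helper:

#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/unaligned.h>

/* Hypothetical stand-in for the out-of-line byte/word implementation. */
void crypto_xor_fallback(u8 *dst, const u8 *src, unsigned int size);

/*
 * Sketch: when the length is a compile-time constant multiple of the word
 * size and the architecture handles unaligned accesses efficiently, XOR
 * word by word via get_unaligned()/put_unaligned() instead of dereferencing
 * the pointers directly, so the fast path works for any caller alignment.
 */
static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
{
        if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
            __builtin_constant_p(size) &&
            (size % sizeof(unsigned long)) == 0) {
                unsigned long *d = (unsigned long *)dst;
                const unsigned long *s = (const unsigned long *)src;

                while (size > 0) {
                        put_unaligned(get_unaligned(d) ^ get_unaligned(s), d);
                        d++;
                        s++;
                        size -= sizeof(unsigned long);
                }
        } else {
                crypto_xor_fallback(dst, src, size);
        }
}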

[PATCH resend 02/18] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-24 Thread Ard Biesheuvel
There are quite a number of occurrences in the kernel of the pattern if (dst != src) memcpy(dst, src, walk.total % AES_BLOCK_SIZE); crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE); or crypto_xor(keystream, src, nbytes); memcpy(dst, keystream, nbytes); where crypto_xor
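A minimal sketch of the shape being proposed (the helper name and the byte-wise body are illustrative; the in-tree code is word-optimized):

#include <linux/types.h>

/*
 * Illustrative helper with separate destination and source operands, so
 * callers no longer need a preparatory memcpy(). Byte-wise for clarity.
 */
static inline void crypto_xor_sep(u8 *dst, const u8 *src1, const u8 *src2,
                                  unsigned int size)
{
        while (size--)
                *dst++ = *src1++ ^ *src2++;
}

/*
 * The quoted call patterns then collapse from
 *      if (dst != src)
 *              memcpy(dst, src, n);
 *      crypto_xor(dst, final, n);
 * and
 *      crypto_xor(keystream, src, n);
 *      memcpy(dst, keystream, n);
 * into single calls such as
 *      crypto_xor_sep(dst, src, final, n);
 *      crypto_xor_sep(dst, keystream, src, n);
 */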

Re: [PATCH v2 2/2] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-22 Thread kbuild test robot
In function 'p8_aes_ctr_final': drivers/crypto/vmx/aes_ctr.c:107:29: warning: passing argument 3 of 'crypto_xor' makes integer from pointer without a cast [-Wint-conversion] crypto_xor(dst, keystream, src, nbytes); ^~~ In file included from include

[PATCH v2 2/2] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-18 Thread Ard Biesheuvel
There are quite a number of occurrences in the kernel of the pattern if (dst != src) memcpy(dst, src, walk.total % AES_BLOCK_SIZE); crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE); or crypto_xor(keystream, src, nbytes); memcpy(dst, keystream, nbytes); where crypto_xor() is

[PATCH v2 0/2] crypto/algapi - refactor crypto_xor() to avoid memcpy()s

2017-07-18 Thread Ard Biesheuvel
From 2/2: """ There are quite a number of occurrences in the kernel of the pattern if (dst != src) memcpy(dst, src, walk.total % AES_BLOCK_SIZE); crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE); or crypto_xor(keystream, src, nbytes); mem

Re: [PATCH 2/2] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-18 Thread Ard Biesheuvel
On 18 July 2017 at 09:39, Herbert Xu wrote: > On Mon, Jul 10, 2017 at 02:45:48PM +0100, Ard Biesheuvel wrote: >> There are quite a number of occurrences in the kernel of the pattern >> >> if (dst != src) >> memcpy(dst, src, walk.total % AES_BLOCK_S

Re: [PATCH 2/2] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-18 Thread Herbert Xu
On Mon, Jul 10, 2017 at 02:45:48PM +0100, Ard Biesheuvel wrote: > There are quite a number of occurrences in the kernel of the pattern > > if (dst != src) > memcpy(dst, src, walk.total % AES_BLOCK_SIZE); > crypto_xor(dst, final, walk.total % AES_BLOCK

[PATCH 2/2] crypto/algapi - make crypto_xor() take separate dst and src arguments

2017-07-10 Thread Ard Biesheuvel
There are quite a number of occurrences in the kernel of the pattern if (dst != src) memcpy(dst, src, walk.total % AES_BLOCK_SIZE); crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE); or crypto_xor(keystream, src, nbytes); memcpy(dst, keystream, nbytes); where

[PATCH 0/2] crypto/algapi - refactor crypto_xor() to avoid memcpy()s

2017-07-10 Thread Ard Biesheuvel
From 2/2: """ There are quite a number of occurrences in the kernel of the pattern if (dst != src) memcpy(dst, src, walk.total % AES_BLOCK_SIZE); crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE); or crypto_xor(keystream, src, nbytes); mem

Re: [PATCH v3] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-14 Thread Ard Biesheuvel
than that, I see little reason to introduce complicated logic here. >> + while (((unsigned long)dst & (relalign - 1)) && len > 0) { > > IS_ALIGNED > >> +static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size) >> +{ >> + if (IS_E

Re: [PATCH v3] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-13 Thread Jason A. Donenfeld
& (relalign - 1)) && len > 0) { IS_ALIGNED > +static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size) > +{ > + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && > + __builtin_constant_p(size) && > + (size

Re: [PATCH v3] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-11 Thread Herbert Xu
On Sun, Feb 05, 2017 at 10:06:12AM +, Ard Biesheuvel wrote: > Instead of unconditionally forcing 4 byte alignment for all generic > chaining modes that rely on crypto_xor() or crypto_inc() (which may > result in unnecessary copying of data when the underlying hardware > can perfo

[PATCH v3] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-05 Thread Ard Biesheuvel
Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input
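A hedged sketch of what "deal with unaligned input" means in practice — simplified, not the committed implementation, but mirroring the relalign/IS_ALIGNED fragments quoted in the replies above:

#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/unaligned.h>

/*
 * Alignment-agnostic XOR: callers may pass buffers of any alignment and the
 * function picks a safe stride itself, instead of forcing a cra_alignmask
 * (and the resulting copies) on every user of the generic chaining modes.
 */
static void xor_unaligned_sketch(u8 *dst, const u8 *src, unsigned int len)
{
        const unsigned long mask = sizeof(unsigned long) - 1;
        bool same_offset = !(((unsigned long)dst ^ (unsigned long)src) & mask);

        if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || same_offset) {
                /*
                 * On strict-alignment machines the word loop is only usable
                 * when both pointers share the same offset within a word:
                 * peel leading bytes until they are word aligned.
                 */
                if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                        while (((unsigned long)dst & mask) && len) {
                                *dst++ ^= *src++;
                                len--;
                        }
                }
                /* Word-at-a-time middle; unaligned accessors keep it safe. */
                while (len >= sizeof(unsigned long)) {
                        put_unaligned(get_unaligned((const unsigned long *)dst) ^
                                      get_unaligned((const unsigned long *)src),
                                      (unsigned long *)dst);
                        dst += sizeof(unsigned long);
                        src += sizeof(unsigned long);
                        len -= sizeof(unsigned long);
                }
        }
        /* Byte-wise tail, or the whole buffer when strides cannot line up. */
        while (len--)
                *dst++ ^= *src++;
}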

Re: [RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Eric Biggers
when passing buffers that have > __aligned(BLOCKSIZE). It proves to be a very useful optimization on > some platforms. Yes, this is a good idea. Though it seems that usually at least one of the two pointers passed to crypto_xor() will have alignment unknown to the compiler, sometimes the len

Re: [RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Jason A. Donenfeld
Another thing that might be helpful is that you can let gcc decide on the alignment, and then optimize appropriately. Check out what we do with siphash: https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/tree/include/linux/siphash.h#n76 static inline u64 siphash(const void *data, siz
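A sketch of the siphash-style dispatch being suggested, applied to an XOR helper; the function names are hypothetical, and only the aligned/unaligned split behind an inline wrapper is the point:

#include <linux/kernel.h>
#include <linux/types.h>

/* Hypothetical out-of-line variants. */
void xor_words_aligned(u8 *dst, const u8 *src, unsigned int len);
void xor_bytes_unaligned(u8 *dst, const u8 *src, unsigned int len);

/*
 * The static inline wrapper picks the implementation from the pointer
 * values. When the compiler can prove the callers' alignment (e.g. buffers
 * declared __aligned(8)), the branch can fold away and only the fast
 * variant is emitted, which is the effect described for siphash above.
 */
static inline void xor_dispatch(u8 *dst, const u8 *src, unsigned int len)
{
        if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
            (IS_ALIGNED((unsigned long)dst, sizeof(unsigned long)) &&
             IS_ALIGNED((unsigned long)src, sizeof(unsigned long))))
                xor_words_aligned(dst, src, len);
        else
                xor_bytes_unaligned(dst, src, len);
}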

Re: [RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Jason A. Donenfeld
Hey, On Thu, Feb 2, 2017 at 7:47 AM, Eric Biggers wrote: > I'm wondering whether it has to be that way, especially since it seems to most > commonly be used on very small input buffers, e.g. 8 or 16-byte blocks. Note that popular stream ciphers like chacha or salsa wind up XORing much longer blo

Re: [PATCH v2] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Ard Biesheuvel
don't like the _unaligned() accessors is because they take the performance hit regardless of whether the pointer is aligned or not. > But > if the various cases with !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS are going to > be handled/optimized I think they will just need to be se

Re: [PATCH v2] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Eric Biggers
On Sat, Feb 04, 2017 at 01:20:38PM -0800, Eric Biggers wrote: > Unfortunately this is still broken, for two different reasons. First, if the > pointers have the same relative misalignment, then 'delta' and 'misalign' will > be set to 0 and long accesses will be used, even though the pointers may >
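To spell out the distinction being drawn (an illustration, not code from the thread): equal relative misalignment only means the two pointers share the same offset within a word, not that either is actually aligned, so strict-alignment configurations still need an absolute check or a byte-peeling preamble.

#include <linux/types.h>

/*
 * Example: dst = 0x...1003, src = 0x...2003. Their relative misalignment is
 * 0, but neither pointer is word aligned, so direct long accesses would
 * fault on strict-alignment CPUs.
 */
static inline unsigned long rel_misalign(const u8 *dst, const u8 *src)
{
        return ((unsigned long)dst ^ (unsigned long)src) &
               (sizeof(unsigned long) - 1);
}

static inline unsigned long abs_misalign(const u8 *dst, const u8 *src)
{
        return ((unsigned long)dst | (unsigned long)src) &
               (sizeof(unsigned long) - 1);
}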

Re: [PATCH v2] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-04 Thread Eric Biggers
the version with put_unaligned/get_unaligned (and it seems to perform okay on MIPS, though not on ARM which is probably more important). But if the various cases with !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS are going to be handled/optimized I think they will just need to be separated out, maybe

[PATCH v2] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-02 Thread Ard Biesheuvel
Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input

Re: [RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-01 Thread Ard Biesheuvel
On 2 February 2017 at 06:47, Eric Biggers wrote: > On Mon, Jan 30, 2017 at 02:11:29PM +, Ard Biesheuvel wrote: >> Instead of unconditionally forcing 4 byte alignment for all generic >> chaining modes that rely on crypto_xor() or crypto_inc() (which may >> result in unnece

Re: [RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-02-01 Thread Eric Biggers
On Mon, Jan 30, 2017 at 02:11:29PM +, Ard Biesheuvel wrote: > Instead of unconditionally forcing 4 byte alignment for all generic > chaining modes that rely on crypto_xor() or crypto_inc() (which may > result in unnecessary copying of data when the underlying hardware > can perfo

[RFC PATCH] crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic

2017-01-30 Thread Ard Biesheuvel
Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input

Re: crypto_xor

2010-09-06 Thread Herbert Xu
Nikos Mavrogiannopoulos wrote: > Hello, > I was checking the crypto_xor() function and it is for some reason > limited to 32 bit integers. Why not make it depend on the architecture > by replacing the u32 with unsigned long? That way 64bit machines should > perform xor with le

crypto_xor

2010-09-01 Thread Nikos Mavrogiannopoulos
Hello, I was checking the crypto_xor() function and it is for some reason limited to 32 bit integers. Why not make it depend on the architecture by replacing the u32 with unsigned long? That way 64bit machines should perform xor with fewer instructions... Something like: void crypto_xor(u8 *dst
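The message is cut off above; the following is a reconstruction for illustration of the kind of change being suggested — the word type follows the architecture instead of a hard-coded u32 — and is not the truncated original:

#include <linux/types.h>

/*
 * Word-size-agnostic XOR: 64-bit machines process 8 bytes per iteration.
 * Assumes suitably aligned buffers, as the crypto_xor() of that era did.
 */
void crypto_xor_long(u8 *dst, const u8 *src, unsigned int size)
{
        unsigned long *a = (unsigned long *)dst;
        const unsigned long *b = (const unsigned long *)src;

        for (; size >= sizeof(unsigned long); size -= sizeof(unsigned long))
                *a++ ^= *b++;

        /* Handle any trailing bytes one at a time. */
        dst = (u8 *)a;
        src = (const u8 *)b;
        while (size--)
                *dst++ ^= *src++;
}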

[PATCH 1/6] crypto: xcbc - Use crypto_xor

2009-07-21 Thread Herbert Xu
crypto: xcbc - Use crypto_xor This patch replaces the local xor function with the generic crypto_xor function. Signed-off-by: Herbert Xu --- crypto/xcbc.c | 22 ++ 1 file changed, 6 insertions(+), 16 deletions(-) diff --git a/crypto/xcbc.c b/crypto/xcbc.c index 3b991bf

[PATCH 1/5] [CRYPTO] api: Add crypto_inc and crypto_xor

2007-11-22 Thread Herbert Xu
[CRYPTO] api: Add crypto_inc and crypto_xor With the addition of more stream ciphers we need to curb the proliferation of ad-hoc xor functions. This patch creates a generic pair of functions, crypto_inc and crypto_xor which does big-endian increment and exclusive or, respectively. For optimum
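For context on what the pair of helpers does, here are byte-wise sketches (the in-tree versions are word-optimized):

#include <linux/types.h>

/* Big-endian increment: bump the last byte and propagate the carry. */
static void inc_be_sketch(u8 *ctr, unsigned int size)
{
        while (size--) {
                if (++ctr[size])        /* no wrap: the carry stops here */
                        break;
        }
}

/* In-place XOR of src into dst. */
static void xor_sketch(u8 *dst, const u8 *src, unsigned int size)
{
        while (size--)
                *dst++ ^= *src++;
}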

[PATCH 2/5] [CRYPTO] cbc: Use crypto_xor

2007-11-22 Thread Herbert Xu
[CRYPTO] cbc: Use crypto_xor This patch replaces the custom xor in CBC with the generic crypto_xor. Signed-off-by: Herbert Xu <[EMAIL PROTECTED]> --- crypto/cbc.c | 93 ++- 1 files changed, 16 insertions(+), 77 deletions(-) diff

[0/5] Add generic crypto_xor/crypto_inc functions

2007-11-22 Thread Herbert Xu
Hi: This series of patches introduces the functions crypto_xor and crypto_inc. It then uses them in the spots where duplicates have been created for the same operations. Cheers, -- Visit Openswan at http://www.openswan.org/ Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]> Home P

[PATCH 4/5] [CRYPTO] pcbc: Use crypto_xor

2007-11-22 Thread Herbert Xu
[CRYPTO] pcbc: Use crypto_xor This patch replaces the custom xor in CBC with the generic crypto_xor. It changes the operations for in-place encryption slightly to avoid calling crypto_xor with tmpbuf since it is not necessarily aligned. Signed-off-by: Herbert Xu <[EMAIL PROTECTED]> ---

[PATCH 5/5] [CRYPTO] ctr: Use crypto_inc and crypto_xor

2007-11-22 Thread Herbert Xu
[CRYPTO] ctr: Use crypto_inc and crypto_xor This patch replaces the custom inc/xor in CTR with the generic functions. Signed-off-by: Herbert Xu <[EMAIL PROTECTED]> --- crypto/ctr.c | 71 +-- 1 files changed, 16 insertions(
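As a closing illustration of how the two generic helpers fit together in a CTR-style loop: a simplified sketch, not crypto/ctr.c itself — the single-block cipher hook is hypothetical, and partial final blocks and scatterlist walking are omitted:

#include <linux/types.h>
#include <linux/string.h>
#include <crypto/algapi.h>      /* crypto_inc(), crypto_xor() */

#define BLOCK_SIZE 16

/* Hypothetical single-block cipher hook, standing in for the real tfm. */
void encrypt_block(void *ctx, u8 *out, const u8 *in);

/*
 * Encrypt the counter into a keystream block, XOR it into the data in
 * place, then big-endian-increment the counter for the next block.
 */
static void ctr_sketch(void *ctx, u8 *data, unsigned int nblocks,
                       u8 ctr[BLOCK_SIZE])
{
        u8 keystream[BLOCK_SIZE];

        while (nblocks--) {
                encrypt_block(ctx, keystream, ctr);
                crypto_xor(data, keystream, BLOCK_SIZE);
                crypto_inc(ctr, BLOCK_SIZE);
                data += BLOCK_SIZE;
        }
        memzero_explicit(keystream, sizeof(keystream));
}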