>> 		if (likely(c))
>> 			return;
>> }
>>
>> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
>> index 4a5ad10e75f0..86267c232f34 100644
>> --- a/include/crypto/algapi.h
>> +++ b/include/crypto/algapi.h
>> @@ -17,6 +17,8 @@
>>  #include
>>  #include
>>
>> +#include
>> +
>>  /*
>>   * Maximum values for blocksize and alignmask, used to allocate
>>   * static buffers that are big enough for any combination of
>> @@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src,
>> 						    unsigned int size)
There are quite a number of occurrences in the kernel of the pattern

	if (dst != src)
		memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
	crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

or

	crypto_xor(keystream, src, nbytes);
	memcpy(dst, keystream, nbytes);

where crypto_xor() is preceded or followed by a memcpy() invocation
that is only there because crypto_xor() uses its output parameter as
one of the inputs. [...]
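For illustration, a minimal sketch of a separate-destination XOR helper in
the spirit of what this series adds (crypto_xor_cpy()); byte-at-a-time for
clarity, whereas the kernel version is word-optimized:

  static void xor_cpy(u8 *dst, const u8 *src1, const u8 *src2,
                      unsigned int size)
  {
          /* dst = src1 ^ src2; dst no longer doubles as an input */
          while (size--)
                  *dst++ = *src1++ ^ *src2++;
  }

With such a helper the second pattern above collapses into a single call:
xor_cpy(dst, keystream, src, nbytes).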
drivers/crypto/vmx/aes_ctr.c: In function 'p8_aes_ctr_final':
drivers/crypto/vmx/aes_ctr.c:107:29: warning: passing argument 3 of
'crypto_xor' makes integer from pointer without a cast [-Wint-conversion]
   crypto_xor(dst, keystream, src, nbytes);
                              ^~~
In file included from include [...]
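The warning is gcc pointing out that this call site passes four arguments to
the three-argument crypto_xor(), so 'src' (a pointer) lands in the
'unsigned int size' slot. Presumably the call was meant to go through the new
separate-destination helper; a hedged sketch of the fix (assuming the vmx
driver is converted together with the core patch):

  /* before: pointer 'src' silently converted to the integer 'size' arg */
  crypto_xor(dst, keystream, src, nbytes);

  /* presumed intent: dst = keystream ^ src in one call */
  crypto_xor_cpy(dst, keystream, src, nbytes);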
On 18 July 2017 at 09:39, Herbert Xu wrote:
> On Mon, Jul 10, 2017 at 02:45:48PM +0100, Ard Biesheuvel wrote:
>> There are quite a number of occurrences in the kernel of the pattern
>>
>> 	if (dst != src)
>> 		memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
>> 	crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);
>> [...]

On Mon, Jul 10, 2017 at 02:45:48PM +0100, Ard Biesheuvel wrote:
> There are quite a number of occurrences in the kernel of the pattern
> [...]
Other than that, I see little reason to introduce complicated logic here.

>> +	while (((unsigned long)dst & (relalign - 1)) && len > 0) {
>
> IS_ALIGNED
>
>> +static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
>> +{
>> +	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
>> +	    __builtin_constant_p(size) &&
>> +	    (size % sizeof(unsigned long)) == 0) {
>> +		[...]
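The one-word comment is shorthand for using the IS_ALIGNED() helper instead
of open-coding the mask test. A sketch of the substitution being asked for
(relalign is a power of two, so the two forms are equivalent):

  /* open-coded, as in the patch: */
  while (((unsigned long)dst & (relalign - 1)) && len > 0) {
          /* xor byte-at-a-time until dst is aligned */
  }

  /* via the helper, as the review comment suggests: */
  while (!IS_ALIGNED((unsigned long)dst, relalign) && len > 0) {
          /* xor byte-at-a-time until dst is aligned */
  }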
On Sun, Feb 05, 2017 at 10:06:12AM +0000, Ard Biesheuvel wrote:
> Instead of unconditionally forcing 4 byte alignment for all generic
> chaining modes that rely on crypto_xor() or crypto_inc() (which may
> result in unnecessary copying of data when the underlying hardware
> can perform unaligned accesses efficiently), make those functions
> [...]
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. [...]
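A minimal sketch of that approach, using the kernel's get_unaligned()/
put_unaligned() helpers so the word-at-a-time loop no longer cares how the
buffers are aligned (illustrative only; the real patch additionally gates the
fast path on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS):

  #include <asm/unaligned.h>

  static void xor_unaligned(u8 *dst, const u8 *src, unsigned int len)
  {
          /* word-at-a-time regardless of pointer alignment */
          while (len >= sizeof(unsigned long)) {
                  unsigned long l;

                  l = get_unaligned((unsigned long *)dst) ^
                      get_unaligned((const unsigned long *)src);
                  put_unaligned(l, (unsigned long *)dst);
                  dst += sizeof(unsigned long);
                  src += sizeof(unsigned long);
                  len -= sizeof(unsigned long);
          }
          /* byte-at-a-time tail */
          while (len--)
                  *dst++ ^= *src++;
  }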
> [...] when passing buffers that have
> __aligned(BLOCKSIZE). It proves to be a very useful optimization on
> some platforms.

Yes, this is a good idea. Though it seems that usually at least one of the two
pointers passed to crypto_xor() will have alignment unknown to the compiler,
sometimes the len [...]
Another thing that might be helpful is that you can let gcc decide on
the alignment, and then optimize appropriately. Check out what we do
with siphash:

https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/tree/include/linux/siphash.h#n76

static inline u64 siphash(const void *data, size_t len, const siphash_key_t *key)
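For reference, the shape of the wrapper at that URL (quoted from memory of
include/linux/siphash.h, so treat as approximate): the unaligned fallback is
only compiled in, and only taken, when the architecture lacks efficient
unaligned accesses and the pointer fails a runtime alignment check:

  static inline u64 siphash(const void *data, size_t len,
                            const siphash_key_t *key)
  {
  #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
          if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
                  return __siphash_unaligned(data, len, key);
  #endif
          return ___siphash_aligned(data, len, key);
  }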
Hey,

On Thu, Feb 2, 2017 at 7:47 AM, Eric Biggers wrote:
> I'm wondering whether it has to be that way, especially since it seems to most
> commonly be used on very small input buffers, e.g. 8 or 16-byte blocks.

Note that popular stream ciphers like chacha or salsa wind up XORing
much longer blocks [...]
The reason I don't like the _unaligned() accessors is because they
take the performance hit regardless of whether the pointer is aligned
or not.

> But
> if the various cases with !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS are going to
> be handled/optimized I think they will just need to be separated out, maybe [...]
On Sat, Feb 04, 2017 at 01:20:38PM -0800, Eric Biggers wrote:
> Unfortunately this is still broken, for two different reasons. First, if the
> pointers have the same relative misalignment, then 'delta' and 'misalign' will
> be set to 0 and long accesses will be used, even though the pointers may
> [...]
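Concretely, the first failure mode: two pointers can be misaligned by the
same amount, making their relative misalignment zero even though neither is
word-aligned. A hypothetical illustration (the names mirror the patch under
review, but the snippet itself is not from it):

  u8 buf[32];
  u8 *dst = buf + 1;              /* misaligned by 1 */
  const u8 *src = buf + 17;       /* also misaligned by 1 */

  /* relative misalignment is zero ... */
  unsigned long delta = ((unsigned long)dst ^ (unsigned long)src) &
                        (__alignof__(unsigned long) - 1);

  /* ... so a delta-only check would take the word-sized fast path,
   * although dst and src are both actually misaligned. */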
[...] the version with put_unaligned/get_unaligned (and it seems to
perform okay on MIPS, though not on ARM which is probably more important). But
if the various cases with !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS are going to
be handled/optimized I think they will just need to be separated out, maybe [...]
On 2 February 2017 at 06:47, Eric Biggers wrote:
> On Mon, Jan 30, 2017 at 02:11:29PM +0000, Ard Biesheuvel wrote:
>> Instead of unconditionally forcing 4 byte alignment for all generic
>> chaining modes that rely on crypto_xor() or crypto_inc() (which may
>> [...]

On Mon, Jan 30, 2017 at 02:11:29PM +0000, Ard Biesheuvel wrote:
> Instead of unconditionally forcing 4 byte alignment for all generic
> chaining modes that rely on crypto_xor() or crypto_inc() (which may
> [...]
Nikos Mavrogiannopoulos wrote:

Hello,
I was checking the crypto_xor() function and it is for some reason
limited to 32 bit integers. Why not make it depend on the architecture
by replacing the u32 with unsigned long? That way 64bit machines should
perform xor with fewer instructions...
Something like:
void crypto_xor(u8 *dst [...]
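A sketch of what is being proposed (illustrative; a real version must also
handle any alignment constraints):

  void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
  {
          /* native word size: one iteration covers 8 bytes on 64-bit */
          unsigned long *d = (unsigned long *)dst;
          const unsigned long *s = (const unsigned long *)src;

          while (size >= sizeof(unsigned long)) {
                  *d++ ^= *s++;
                  size -= sizeof(unsigned long);
          }

          /* byte-at-a-time for whatever is left */
          dst = (u8 *)d;
          src = (const u8 *)s;
          while (size--)
                  *dst++ ^= *src++;
  }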
crypto: xcbc - Use crypto_xor

This patch replaces the local xor function with the generic
crypto_xor function.

Signed-off-by: Herbert Xu
---
 crypto/xcbc.c | 22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 3b991bf [...]
[CRYPTO] api: Add crypto_inc and crypto_xor

With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions. This patch creates a generic pair of functions,
crypto_inc and crypto_xor, which do big-endian increment and exclusive or,
respectively.

For optimum performance [...]
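A sketch of a big-endian increment of the kind crypto_inc() performs
(illustrative, not the kernel's exact code): bump the last byte and let the
carry ripple toward the front.

  static void be_inc(u8 *buf, unsigned int len)
  {
          /* buf is a len-byte big-endian integer; add one to it */
          while (len--)
                  if (++buf[len] != 0)    /* no wrap, no carry: done */
                          break;
  }

In CTR-style modes this pairs with crypto_xor(): encrypt the counter block,
XOR the keystream into the data, then increment the counter.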
[CRYPTO] cbc: Use crypto_xor

This patch replaces the custom xor in CBC with the generic crypto_xor.

Signed-off-by: Herbert Xu <[EMAIL PROTECTED]>
---
 crypto/cbc.c | 93 ++++++----------------------------------------------------
 1 files changed, 16 insertions(+), 77 deletions(-)

diff [...]
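For context, the shape CBC encryption takes once it is built on crypto_xor()
(a sketch of the converted loop, with fn() standing in for the block cipher
call; not the verbatim kernel code):

  /* C_i = E_k(P_i ^ C_{i-1}), with iv holding C_{i-1} */
  do {
          crypto_xor(iv, src, bsize);     /* iv ^= P_i         */
          fn(tfm, dst, iv);               /* C_i = E_k(iv)     */
          memcpy(iv, dst, bsize);         /* chain C_i forward */

          src += bsize;
          dst += bsize;
  } while ((nbytes -= bsize) >= bsize);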
Hi:

This series of patches introduces the functions crypto_xor and
crypto_inc. It then uses them in the spots where duplicates
have been created for the same operations.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/ [...]
[CRYPTO] pcbc: Use crypto_xor

This patch replaces the custom xor in PCBC with the generic crypto_xor.
It changes the operations for in-place encryption slightly to avoid
calling crypto_xor with tmpbuf since it is not necessarily aligned.

Signed-off-by: Herbert Xu <[EMAIL PROTECTED]>
---
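The in-place wrinkle: with dst == src the cipher call destroys the plaintext
block, yet PCBC needs it again for the next IV (IV_{i+1} = P_i ^ C_i). A
hedged sketch of the reordering described above, in which the possibly
unaligned stack tmpbuf is only ever touched by memcpy(), never passed to
crypto_xor():

  memcpy(tmpbuf, src, bsize);     /* save P_i                  */
  crypto_xor(iv, src, bsize);     /* iv = IV_i ^ P_i           */
  fn(tfm, src, iv);               /* C_i = E_k(iv), in place   */
  memcpy(iv, tmpbuf, bsize);      /* iv = P_i                  */
  crypto_xor(iv, src, bsize);     /* iv = P_i ^ C_i = IV_{i+1} */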
[CRYPTO] ctr: Use crypto_inc and crypto_xor

This patch replaces the custom inc/xor in CTR with the generic functions.

Signed-off-by: Herbert Xu <[EMAIL PROTECTED]>
---
 crypto/ctr.c | 71 +++++++---------------------------------------------------
 1 files changed, 16 insertions(+), 55 deletions(-)
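And the shape CTR takes on the two generic helpers (again a sketch, with
fn() standing in for the block cipher call; names illustrative):

  do {
          fn(tfm, dst, ctrblk);           /* keystream = E_k(counter) */
          crypto_xor(dst, src, bsize);    /* C_i = keystream ^ P_i    */
          crypto_inc(ctrblk, bsize);      /* counter++, big-endian    */

          src += bsize;
          dst += bsize;
  } while ((nbytes -= bsize) >= bsize);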