On Mon, Aug 06, 2018 at 03:29:39PM +0300, Horia Geantă wrote:
> xts setkey callback returns 0 on some error paths.
> Fix this by returning -EINVAL.
>
> Cc: # 4.12+
> Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc
> algorithms")
> Signed-off-by: Horia Geantă
Patch applied.
On Mon, Aug 06, 2018 at 03:29:55PM +0300, Horia Geantă wrote:
> Crypto engine needs some temporary locations in external memory for
> running RSA decrypt forms 2 and 3 (CRT).
> These are named "tmp1" and "tmp2" in the PDB.
>
> Update DMA mapping direction of tmp1 and tmp2 from TO_DEVICE to
>
On Mon, Aug 06, 2018 at 03:29:09PM +0300, Horia Geantă wrote:
> Descriptor address needs to be swapped to CPU endianness before being
> DMA unmapped.
>
> Cc: # 4.8+
> Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam
> endianness")
> Reported-by: Laurentiu Tudor
>
Jiecheng Wu wrote:
> Function chtls_close_conn() defined in
> drivers/crypto/chelsio/chtls/chtls_cm.c calls alloc_skb() to allocate memory
> for struct sk_buff which is dereferenced immediately. As alloc_skb() may
> return NULL on failure, this code may cause a NULL pointer dereference
>
On Tue, Aug 07, 2018 at 12:34:57PM +, Horia Geanta wrote:
> On 8/7/2018 11:00 AM, Marcin Niestroj wrote:
> > It is possible that caam_jr_alloc() is called before the JR devices are
> > probed. Return -EPROBE_DEFER in drivers that rely on JR devices, so
> > they are probed at a later stage.
> >
>
Hi Herbert,
> On Thu, Aug 23, 2018 at 05:31:01PM +0200, Petr Vorel wrote:
> > Hi,
> > I wonder if it makes sense to backport commit
> > e666d4e9ceec crypto: vmx - Use skcipher for ctr fallback
> > to the v4.14 stable kernel.
> > I'm using it in 4.12+.
> > These (somewhat similar) commits have been
On Thu, Aug 23, 2018 at 05:31:01PM +0200, Petr Vorel wrote:
> Hi,
>
> I wonder if it makes sense to backport commit
> e666d4e9ceec crypto: vmx - Use skcipher for ctr fallback
> to the v4.14 stable kernel.
> I'm using it in 4.12+.
>
> These (somewhat similar) commits have been backported to 4.10.2:
>
On 23 August 2018 at 21:04, Nick Desaulniers wrote:
> On Thu, Aug 23, 2018 at 9:48 AM Ard Biesheuvel
> wrote:
>>
>> Replace the literal load of the addend vector with a sequence that
>> performs each add individually. This sequence is only 2 instructions
>> longer than the original, and 2%
On Thu, Aug 23, 2018 at 9:48 AM Ard Biesheuvel
wrote:
>
> Replace the literal load of the addend vector with a sequence that
> performs each add individually. This sequence is only 2 instructions
> longer than the original, and 2% faster on Cortex-A53.
>
> This is an improvement by itself, but
Replace the literal load of the addend vector with a sequence that
performs each add individually. This sequence is only 2 instructions
longer than the original, and 2% faster on Cortex-A53.
This is an improvement by itself, but also works around a Clang issue,
whose integrated assembler does not
Hi,
I wonder if it makes sense to backport commit
e666d4e9ceec crypto: vmx - Use skcipher for ctr fallback
to the v4.14 stable kernel.
I'm using it in 4.12+.
These (somewhat similar) commits have been backported to 4.10.2:
5839f555fa57 crypto: vmx - Use skcipher for xts fallback
c96d0a1c47ab crypto:
Speed up the GHASH algorithm based on 64-bit polynomial multiplication
by adding support for 4-way aggregation. This improves throughput by
~85% on Cortex-A53, from 1.7 cycles per byte to 0.9 cycles per byte.
When combined with AES into GCM, throughput improves by ~25%, from
3.8 cycles per byte
Speed up the GHASH algorithm based on 64-bit polynomial multiplication
by adding support for 4-way aggregation. This improves throughput by
~60% on Cortex-A53, from 1.70 cycles per byte to 1.05 cycles per byte.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/Kconfig | 1 +
On 21 August 2018 at 20:34, Nick Desaulniers wrote:
> On Tue, Aug 21, 2018 at 11:19 AM Ard Biesheuvel
> wrote:
>>
>> On 21 August 2018 at 20:04, Nick Desaulniers wrote:
>> > On Tue, Aug 21, 2018 at 9:46 AM Ard Biesheuvel
>> > wrote:
>> >>
>> >> Replace the literal load of the addend vector
On Tue, Aug 21, 2018 at 11:19 AM Ard Biesheuvel
wrote:
>
> On 21 August 2018 at 20:04, Nick Desaulniers wrote:
> > On Tue, Aug 21, 2018 at 9:46 AM Ard Biesheuvel
> > wrote:
> >>
> >> Replace the literal load of the addend vector with a sequence that
> >> composes it using immediates. While at
On 21 August 2018 at 20:04, Nick Desaulniers wrote:
> On Tue, Aug 21, 2018 at 9:46 AM Ard Biesheuvel
> wrote:
>>
>> Replace the literal load of the addend vector with a sequence that
>> composes it using immediates. While at it, tweak the code that refers
>> to it so it does not clobber the
On Tue, Aug 21, 2018 at 9:46 AM Ard Biesheuvel
wrote:
>
> Replace the literal load of the addend vector with a sequence that
> composes it using immediates. While at it, tweak the code that refers
> to it so it does not clobber the register, so we can take the load
> out of the loop as well.
>
>
Replace the literal load of the addend vector with a sequence that
composes it using immediates. While at it, tweak the code that refers
to it so it does not clobber the register, so we can take the load
out of the loop as well.
This results in generally better code, but also works around a Clang
This patch fixes sleep-in-atomic bugs in AES-CBC and AES-XTS VMX
implementations. The problem is that the blkcipher_* functions should
not be called in atomic context.
The bugs can be reproduced via the AF_ALG interface by trying to
encrypt/decrypt sufficiently large buffers (at least 64 KiB)
On Tue, 21. 8. 2018 at 16:18, Stephan Mueller wrote:
> On Tuesday, 21 August 2018, 14:48:11 CEST, Ondrej Mosnáček wrote:
>
> Hi Ondrej, Marcelo,
>
> (+Marcelo)
>
> > Looking at crypto/algif_skcipher.c, I can see that skcipher_recvmsg()
> > holds the socket lock the whole time and yet passes
> >
On Tuesday, 21 August 2018, 14:48:11 CEST, Ondrej Mosnáček wrote:
Hi Ondrej, Marcelo,
(+Marcelo)
> Looking at crypto/algif_skcipher.c, I can see that skcipher_recvmsg()
> holds the socket lock the whole time and yet passes
> CRYPTO_TFM_REQ_MAY_SLEEP to the cipher implementation. Isn't that
Adopt the SPDX license identifiers to ease license compliance
management.
Signed-off-by: Tudor Ambarus
---
drivers/crypto/atmel-aes.c | 5 +
drivers/crypto/atmel-authenc.h | 13 +
drivers/crypto/atmel-ecc.c | 11 +--
drivers/crypto/atmel-ecc.h | 14
Hi,
I hit the following BUG when running the kcapi-enc-test.sh test from
libkcapi [1] on ppc64/ppc64le with recent kernels:
[ 891.863680] BUG: sleeping function called from invalid context at
include/crypto/algapi.h:424
[ 891.864622] in_atomic(): 1, irqs_disabled(): 0, pid: 12347, name:
> -Original Message-
> From: Ard Biesheuvel
> Sent: Monday, August 20, 2018 8:29 PM
> To: linux-crypto@vger.kernel.org
> Cc: herb...@gondor.apana.org.au; Vakul Garg ;
> davejwat...@fb.com; Peter Doliwa ; Ard
> Biesheuvel
> Subject: [PATCH] crypto: arm64/aes-gcm-ce - fix scatterwalk
Commit 71e52c278c54 ("crypto: arm64/aes-ce-gcm - operate on
two input blocks at a time") modified the granularity at which
the AES/GCM code processes its input to allow subsequent changes
to be applied that improve performance by using aggregation to
process multiple input blocks at once.
For
Hi Ard
The kernel TLS selftest 'msg_more' is broken with the latest GCM changes
optimizing it for Cortex-A53.
(I am using David Miller's net-next branch.)
Reverting the following commits fixes the problem.
1. crypto: arm64/ghash-ce - implement 4-way aggregation
2. crypto: arm64/ghash-ce - replace
Function chtls_close_conn() defined in drivers/crypto/chelsio/chtls/chtls_cm.c
calls alloc_skb() to allocate memory for a struct sk_buff which is dereferenced
immediately. As alloc_skb() may return NULL on failure, this code may cause a
NULL pointer dereference bug.
---
On Thursday, 16 August 2018, 09:14:59 CEST, Jitendra Lulla wrote:
Hi Jitendra,
> Hi Stephen,
>
> I could not spot in the kernel where we are computing GHASH when the
> IV is bigger than 12 Bytes for GCM encryption.
>
> libkcapi and the kernel appear to ignore the bytes beyond the 12th byte in the
Hi Stephen,
I could not spot in the kernel where we are computing GHASH when the
IV is bigger than 12 Bytes for GCM encryption.
libkcapi and the kernel appear to ignore the bytes beyond the 12th byte in the IV.
So the output is the same with iv=12 bytes or iv=128 bytes, as can be seen below:
Hi Herbert,
Folks have reported boot hangs on Ryzen systems. A firmware bug can hang
SEV commands, resulting in a boot hang. To protect against firmware
bugs, this series adds a timeout to the SEV commands.
I would like to get this logic added to the stable trees as well, but this patch
will
On Mon, 6 Aug 2018 23:12:44 +0300
Dan Carpenter wrote:
> Hello Jonathan Cameron,
>
> The patch 915e4e8413da: "crypto: hisilicon - SEC security accelerator
> driver" from Jul 23, 2018, leads to the following static checker
> warning:
>
> drivers/crypto/hisilicon/sec/sec_algs.c:865
On Fri, Aug 10, 2018 at 08:20:51AM +0200, Stephan Mueller wrote:
> > while (nbytes >= CHACHA20_BLOCK_SIZE) {
> > int adjust = (unsigned long)buf & (sizeof(tmp[0]) - 1);
> >
> > extract_crng(buf);
>
> Why this line?
>
> > buf += CHACHA20_BLOCK_SIZE;
On Thursday, 9 August 2018, 21:40:12 CEST, Eric Biggers wrote:
Hi Eric,
> while (bytes >= CHACHA20_BLOCK_SIZE) {
> chacha20_block(state, stream);
> - crypto_xor(dst, (const u8 *)stream, CHACHA20_BLOCK_SIZE);
> + crypto_xor(dst, stream,
On Thursday, 9 August 2018, 21:21:32 CEST, Theodore Y. Ts'o wrote:
Hi Theodore,
> I'm wondering whether we have kernel code that actually tries to
> extract more than 64 bytes, so I'm not sure how often we enter the
> while loop at all. Out of curiosity, did you find this from code
>
On Thursday, 9 August 2018, 21:07:18 CEST, Eric Biggers wrote:
Hi Eric,
> This patch is backwards: the temporary buffer is used when the buffer is
> *aligned*, not misaligned. And more problematically, 'buf' is never
> incremented in one of the cases...
Of course, it needs to be reversed.
Hi,
On Thursday, 9 August 2018 at 12:40 -0700, Eric Biggers wrote:
> From: Eric Biggers
> Subject: [PATCH] crypto: chacha20 - Fix keystream alignment for
> chacha20_block() (again)
>
> In commit 9f480faec58cd6 ("crypto: chacha20 - Fix keystream alignment
> for chacha20_block()") I had missed that
On Thu, Aug 09, 2018 at 12:07:18PM -0700, Eric Biggers wrote:
> On Thu, Aug 09, 2018 at 08:38:56PM +0200, Stephan Müller wrote:
> > The function extract_crng invokes the ChaCha20 block operation directly
> > on the user-provided buffer. The block operation operates on u32 words.
> > Thus the
On Thu, Aug 09, 2018 at 08:38:56PM +0200, Stephan Müller wrote:
> The function extract_crng invokes the ChaCha20 block operation directly
> on the user-provided buffer. The block operation operates on u32 words.
> Thus the extract_crng function expects the buffer to be aligned to u32
> as it is
On Thu, Aug 09, 2018 at 08:38:56PM +0200, Stephan Müller wrote:
> The function extract_crng invokes the ChaCha20 block operation directly
> on the user-provided buffer. The block operation operates on u32 words.
> Thus the extract_crng function expects the buffer to be aligned to u32
> as it is
The function extract_crng invokes the ChaCha20 block operation directly
on the user-provided buffer. The block operation operates on u32 words.
Thus the extract_crng function expects the buffer to be aligned to u32
as it is visible with the parameter type of extract_crng. However,
get_random_bytes
On Wed, Aug 08, 2018 at 04:30:09AM +, Wei Yongjun wrote:
> Fixes the following sparse warning:
>
> drivers/crypto/hisilicon/sec/sec_algs.c:396:5: warning:
> symbol 'sec_send_request' was not declared. Should it be static?
>
> Fixes: 915e4e8413da ("crypto: hisilicon - SEC security
Fixes the following sparse warning:
drivers/crypto/hisilicon/sec/sec_algs.c:396:5: warning:
symbol 'sec_send_request' was not declared. Should it be static?
Fixes: 915e4e8413da ("crypto: hisilicon - SEC security accelerator driver")
Signed-off-by: Wei Yongjun
---
ARMv8.2 specifies special instructions for the SM3 cryptographic hash
and the SM4 symmetric cipher. While it is unlikely that a core would
implement one and not the other, we should only use SM4 instructions
if the SM4 CPU feature bit is set, and we currently check the SM3
feature bit instead. So
On 8/7/2018 11:00 AM, Marcin Niestroj wrote:
> It is possible that caam_jr_alloc() is called before the JR devices are
> probed. Return -EPROBE_DEFER in drivers that rely on JR devices, so
> they are probed at a later stage.
>
These drivers don't have a probe() callback.
Returning -EPROBE_DEFER in
On Sat, Aug 04, 2018 at 06:21:01AM +0800, kbuild test robot wrote:
>
> Fixes: 915e4e8413da ("crypto: hisilicon - SEC security accelerator driver")
> Signed-off-by: kbuild test robot
Patch applied. Thanks.
--
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key:
On Sat, Aug 04, 2018 at 08:46:23PM +0200, Ard Biesheuvel wrote:
> Another bit of performance work on the GHASH driver: this time it is not
> the combined AES/GCM algorithm but the bare GHASH driver that gets updated.
>
> Even though ARM cores that implement the polynomial multiplication
>
On Fri, Aug 03, 2018 at 01:37:50PM +0200, Ondrej Mosnacek wrote:
> It turns out I had misunderstood how the x86_match_cpu() function works.
> It evaluates a logical OR of the matching conditions, not logical AND.
> This caused the CPU feature checks for AEGIS to pass even if only SSE2
> (but not
Hi,
I'm resending the patch series, as I forgot to CC the linux-crypto list.
This patch series fixes/improves the initialization of CAAM JR-dependent
drivers. So far, successful initialization depended on link and device-tree
node order. These changes make sure all drivers that use JRs (i.e. call
There is a race condition when the driver is not yet initialized
(jr_driver_init() has not been called yet), but other kernel
code calls caam_jr_alloc(). This results in warnings about an
uninitialized lock and a NULL pointer dereference error.
Fix that by statically initializing the global driver data, so
It is possible that caam_jr_alloc() is called before the JR devices are
probed. Return -EPROBE_DEFER in drivers that rely on JR devices, so
they are probed at a later stage.
Signed-off-by: Marcin Niestroj
---
drivers/crypto/caam/caamalg.c| 3 +++
drivers/crypto/caam/caamalg_qi.c | 3 +++
Hello Jonathan Cameron,
The patch 915e4e8413da: "crypto: hisilicon - SEC security accelerator
driver" from Jul 23, 2018, leads to the following static checker
warning:
drivers/crypto/hisilicon/sec/sec_algs.c:865 sec_alg_skcipher_crypto()
error: double free of 'split_sizes'
--
IV generation is done only at AEAD level.
Support in ablkcipher is not needed, thus remove the dead code.
Link:
https://www.mail-archive.com/search?l=mid=20160901101257.ga3...@gondor.apana.org.au
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c | 275
Convert driver from deprecated ablkcipher API to skcipher.
Link:
https://www.mail-archive.com/search?l=mid=20170728085622.gc19...@gondor.apana.org.au
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c | 12 +-
drivers/crypto/caam/caamalg_desc.c | 61 +++---
Convert driver from deprecated ablkcipher API to skcipher.
Link:
https://www.mail-archive.com/search?l=mid=20170728085622.gc19...@gondor.apana.org.au
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c | 448 +++---
drivers/crypto/caam/compat.h |
IV generation is done only at AEAD level.
Support in ablkcipher is not needed, thus remove the dead code.
Link:
https://www.mail-archive.com/search?l=mid=20160901101257.ga3...@gondor.apana.org.a
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg_desc.c | 81
This patch set converts caam/jr and caam/qi top level drivers
from ablkcipher API to skcipher.
First two patches remove the unused ablkcipher algorithms with
support for IV generation.
The following two patches deal with the conversion.
Note: There is a dependency for the patch set - a fix sent
Crypto engine needs some temporary locations in external memory for
running RSA decrypt forms 2 and 3 (CRT).
These are named "tmp1" and "tmp2" in the PDB.
Update DMA mapping direction of tmp1 and tmp2 from TO_DEVICE to
BIDIRECTIONAL, since engine needs r/w access.
Cc: # 4.13+
Fixes:
xts setkey callback returns 0 on some error paths.
Fix this by returning -EINVAL.
Cc: # 4.12+
Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc algorithms")
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg_qi.c | 6 ++
1 file changed, 2 insertions(+), 4
Descriptor address needs to be swapped to CPU endianness before being
DMA unmapped.
Cc: # 4.8+
Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam endianness")
Reported-by: Laurentiu Tudor
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/jr.c | 3 ++-
1 file changed, 2
Signed-off-by: Robert P. J. Day
---
diff --git a/Documentation/devicetree/bindings/crypto/rockchip-crypto.txt
b/Documentation/devicetree/bindings/crypto/rockchip-crypto.txt
index 5e2ba385b8c9..53e39d5f94e7 100644
--- a/Documentation/devicetree/bindings/crypto/rockchip-crypto.txt
+++
On 03/08/18 13:37, Ondrej Mosnacek wrote:
> It turns out I had misunderstood how the x86_match_cpu() function works.
> It evaluates a logical OR of the matching conditions, not logical AND.
> This caused the CPU feature checks for AEGIS to pass even if only SSE2
> (but not AES-NI) was supported
Another bit of performance work on the GHASH driver: this time it is not
the combined AES/GCM algorithm but the bare GHASH driver that gets updated.
Even though ARM cores that implement the polynomial multiplication
instructions that these routines depend on are guaranteed to also support
the
Enhance the GHASH implementation that uses 64-bit polynomial
multiplication by adding support for 4-way aggregation. This
more than doubles the performance, from 2.4 cycles per byte
to 1.1 cpb on Cortex-A53.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/ghash-ce-core.S | 122
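The 4-way aggregation amortizes the GF(2^128) reduction. Schematically (this
is standard GHASH algebra, not the patch's code):

```latex
% GHASH folds input blocks B_i into the digest X with multiplications by H
% in GF(2^128), one block at a time:
%   X_{i+1} = (X_i \oplus B_{i+1}) \cdot H
% Unrolling four steps and distributing the multiplications gives
\[
X_{i+4} = (X_i \oplus B_{i+1})\,H^4 \;\oplus\; B_{i+2}\,H^3
          \;\oplus\; B_{i+3}\,H^2 \;\oplus\; B_{i+4}\,H
\]
% so with H, H^2, H^3 and H^4 precomputed, the expensive reduction modulo
% the characteristic polynomial is performed once per four blocks.
```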
Checking the TIF_NEED_RESCHED flag is disproportionately costly on cores
with fast crypto instructions and comparatively slow memory accesses.
On algorithms such as GHASH, which executes at ~1 cycle per byte on
cores that implement support for 64 bit polynomial multiplication,
there is really no
Fixes: 915e4e8413da ("crypto: hisilicon - SEC security accelerator driver")
Signed-off-by: kbuild test robot
---
sec_algs.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c
b/drivers/crypto/hisilicon/sec/sec_algs.c
index
tree:
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head: a94bfd5f50ddf114594f5c7f99686f1cfa2b7a76
commit: 915e4e8413dacc086efcef4de04fdfdca57e8b1c [99/119] crypto: hisilicon -
SEC security accelerator driver
reproduce:
# apt-get install sparse
On 3 August 2018 at 17:47, Herbert Xu wrote:
> On Mon, Jul 30, 2018 at 11:06:39PM +0200, Ard Biesheuvel wrote:
>> Update the combined AES-GCM AEAD implementation to process two blocks
>> at a time, allowing us to switch to a faster version of the GHASH
>> implementation.
>>
>> Note that this does
On Sun, Jul 29, 2018 at 04:52:30PM +0200, Ard Biesheuvel wrote:
> As it turns out, checking the TIF_NEED_RESCHED flag after each
> iteration results in a significant performance regression (~10%)
> when running fast algorithms (i.e., ones that use special instructions
> and operate in the < 4
On Mon, Jul 30, 2018 at 11:06:39PM +0200, Ard Biesheuvel wrote:
> Update the combined AES-GCM AEAD implementation to process two blocks
> at a time, allowing us to switch to a faster version of the GHASH
> implementation.
>
> Note that this does not update the core GHASH transform, only the
>
On Fri, Jul 27, 2018 at 03:36:10PM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size',
> causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and
> an out-of-bounds read of 4 bytes in crypto_dh_decode_key().
Tom Lendacky wrote:
> Should the PSP initialization fail, the PSP data structure will be
> freed and the value contained in the sp_device struct set to NULL.
> At module unload, psp_dev_destroy() does not check if the pointer
> value is NULL and will end up dereferencing a NULL pointer.
>
> Add
On Fri, Jul 27, 2018 at 03:36:11PM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than
> overflowing the buffer.
>
> Signed-off-by: Eric Biggers
Patch applied. Thanks.
--
Email: Herbert Xu
Home Page:
On Tue, Jul 24, 2018 at 06:29:07PM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> The 4-way ChaCha20 NEON code implements 16-bit rotates with vrev32.16,
> but the one-way code (used on remainder blocks) implements it with
> vshl + vsri, which is slower. Switch the one-way code to vrev32.16
On Mon, Jul 23, 2018 at 10:54:55AM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> This series fixes the bug reported by Liu Chao (found using syzkaller)
> where a crash occurs in scatterwalk_pagedone() on architectures such as
> arm and arm64 that implement flush_dcache_page(), due to an
On Mon, Jul 23, 2018 at 10:21:29AM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
> make sense because actually walk->nbytes needs to be set to the length
> of the first step in the walk, which may be less than
On Mon, Jul 23, 2018 at 10:04:28AM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> scatterwalk_samebuf() is never used. Remove it.
>
> Signed-off-by: Eric Biggers
Patch applied. Thanks.
--
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key:
On Mon, Jul 23, 2018 at 09:57:50AM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> The ALIGN() macro needs to be passed the alignment, not the alignmask
> (which is the alignment minus 1).
>
> Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
> Cc: # v4.10+
>
On Mon, Jul 23, 2018 at 10:01:33AM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> All callers pass chain=0 to scatterwalk_crypto_chain().
>
> Remove this unneeded parameter.
>
> Signed-off-by: Eric Biggers
Patch applied. Thanks.
--
Email: Herbert Xu
Home Page:
On Mon, Jul 23, 2018 at 05:18:48PM +0300, Horia Geantă wrote:
> Avoid RCU stalls in the case of non-preemptible kernel and lengthy
> speed tests by rescheduling when advancing from one block size
> to another.
>
> Signed-off-by: Horia Geantă
Patch applied. Thanks.
--
Email: Herbert Xu
Home
On Mon, Jul 23, 2018 at 04:49:52PM +0100, Jonathan Cameron wrote:
> The driver provides in kernel support for the Hisilicon SEC accelerator
> found in the hip06 and hip07 SoCs. There are 4 such units on the D05
> board for which an appropriate DT binding has been provided. ACPI also
> works with
On Fri, Jul 20, 2018 at 07:42:01PM +0200, Stephan Müller wrote:
> The cipher implementations of the kernel crypto API favor in-place
> cipher operations. Thus, switch the CTR cipher operation in the DRBG to
> perform in-place operations. This is implemented by using the output
> buffer as input
It turns out I had misunderstood how the x86_match_cpu() function works.
It evaluates a logical OR of the matching conditions, not logical AND.
This caused the CPU feature checks for AEGIS to pass even if only SSE2
(but not AES-NI) was supported (or vice versa), leading to potential
crashes if
On Thu, Aug 2, 2018 at 11:17 AM Ondrej Mosnacek wrote:
> It turns out I had misunderstood how the x86_match_cpu() function works.
> It evaluates a logical OR of the matching conditions, not logical AND.
> This caused the CPU feature checks to pass even if only SSE2 (but not
> AES-NI) was
On 3 August 2018 at 10:17, Herbert Xu wrote:
> On Fri, Aug 03, 2018 at 09:10:08AM +0200, Ard Biesheuvel wrote:
>> But I think it's too late now to take this into v4.18. Could you
>> please queue this (and my other two pending arm64/aes-gcm patches, if
>> possible) for v4.19 instead?
>
> OK I'll
On Fri, Aug 03, 2018 at 09:10:08AM +0200, Ard Biesheuvel wrote:
> But I think it's too late now to take this into v4.18. Could you
> please queue this (and my other two pending arm64/aes-gcm patches, if
> possible) for v4.19 instead?
OK I'll do that.
Cheers,
--
Email: Herbert Xu
Home Page:
On 3 August 2018 at 08:14, Herbert Xu wrote:
> On Sun, Jul 29, 2018 at 04:52:30PM +0200, Ard Biesheuvel wrote:
>> As it turns out, checking the TIF_NEED_RESCHED flag after each
>> iteration results in a significant performance regression (~10%)
>> when running fast algorithms (i.e., ones that use
On Sun, Jul 29, 2018 at 04:52:30PM +0200, Ard Biesheuvel wrote:
> As it turns out, checking the TIF_NEED_RESCHED flag after each
> iteration results in a significant performance regression (~10%)
> when running fast algorithms (i.e., ones that use special instructions
> and operate in the < 4
It turns out I had misunderstood how the x86_match_cpu() function works.
It evaluates a logical OR of the matching conditions, not logical AND.
This caused the CPU feature checks to pass even if only SSE2 (but not
AES-NI) was supported (or vice versa), leading to potential crashes if
something
On 7/11/2018 3:16 AM, Dan Carpenter wrote:
Hello Janakarajan Natarajan,
The patch edd303ff0e9e: "crypto: ccp - Add DOWNLOAD_FIRMWARE SEV
command" from May 25, 2018, leads to the following static checker
warning:
drivers/crypto/ccp/psp-dev.c:397 sev_get_api_version()
error:
On Mon, Jul 23, 2018 at 04:49:53PM +0100, Jonathan Cameron wrote:
> The hip06 and hip07 SoCs contain a number of these crypto units which
> accelerate AES and DES operations.
>
You forgot the 'v2' on the patches. Only matters because I sort my
reviews by version and then date. But don't tell
On Tue, Jul 31, 2018 at 09:47:28AM +0100, Will Deacon wrote:
> On Tue, Jul 31, 2018 at 09:22:52AM +0200, Ard Biesheuvel wrote:
> > (+ Catalin, Will)
> >
> > On 27 July 2018 at 14:59, Ard Biesheuvel wrote:
> > > Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
> > >
On Tue, Jul 31, 2018 at 09:22:52AM +0200, Ard Biesheuvel wrote:
> (+ Catalin, Will)
>
> On 27 July 2018 at 14:59, Ard Biesheuvel wrote:
> > Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
> > kernel_neon_end() to be used since the routine touches the NEON
> > register file.
(+ Catalin, Will)
On 27 July 2018 at 14:59, Ard Biesheuvel wrote:
> Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
> kernel_neon_end() to be used since the routine touches the NEON
> register file. Add the missing calls.
>
> Also, since NEON register contents are not
Update the core AES/GCM transform and the associated plumbing to operate
on 2 AES/GHASH blocks at a time. By itself, this is not expected to
result in a noticeable speedup, but it paves the way for reimplementing
the GHASH component using 2-way aggregation.
Signed-off-by: Ard Biesheuvel
---
Squeeze out another 5% of performance by minimizing the number
of invocations of kernel_neon_begin()/kernel_neon_end() on the
common path, which also allows some reloads of the key schedule
to be optimized away.
The resulting code runs at 2.3 cycles per byte on a Cortex-A53.
Signed-off-by: Ard
Implement a faster version of the GHASH transform which amortizes
the reduction modulo the characteristic polynomial across two
input blocks at a time.
On a Cortex-A53, the gcm(aes) performance increases 24%, from
3.0 cycles per byte to 2.4 cpb for large input sizes.
Signed-off-by: Ard
Update the combined AES-GCM AEAD implementation to process two blocks
at a time, allowing us to switch to a faster version of the GHASH
implementation.
Note that this does not update the core GHASH transform, only the
combined AES-GCM AEAD mode. GHASH is mostly used with AES anyway, and
the ARMv8
As it turns out, checking the TIF_NEED_RESCHED flag after each
iteration results in a significant performance regression (~10%)
when running fast algorithms (i.e., ones that use special instructions
and operate in the < 4 cycles per byte range) on in-order cores with
comparatively slow memory