On Thursday, March 20, 2014 at 03:48:06 PM, Ard Biesheuvel wrote:
> This patch adds support for the SHA-224 and SHA-256 hash algorithms using
> the NEON based SHA-256 instructions that were introduced in ARM v8.
>
> Signed-off-by: Ard Biesheuvel
> ---
[...]
> + * Copyright (c) Alan Smithee.
On Monday, March 24, 2014 at 05:10:36 PM, Mathias Krause wrote:
> The recent addition of the AVX2 variant of the SHA1 hash function wrongly
> disabled the AVX variant by introducing a flaw in the feature test. Fixed
> in patch 1.
>
> The alignment calculations of the AVX2 assembler implementation
The recent addition of the AVX2 variant of the SHA1 hash function wrongly
disabled the AVX variant by introducing a flaw in the feature test. Fixed
in patch 1.
The alignment calculations of the AVX2 assembler implementation are
questionable, too, especially the page alignment of the stack pointer.

There is really no need to page align sha1_transform_avx2. The default
alignment is just fine. This is not the hot code but only the entry
point, after all.
Cc: Chandramouli Narayanan
Cc: H. Peter Anvin
Cc: Marek Vasut
Signed-off-by: Mathias Krause
---
arch/x86/crypto/sha1_avx2_x86_64_asm.S |
The AVX2 implementation might waste up to a page of stack memory because
of a wrong alignment calculation. This will, in the worst case, increase
the stack usage of sha1_transform_avx2() alone to 5.4 kB -- way too big
for a kernel function. Even worse, it might also allocate *fewer* bytes
than needed
Commit 7c1da8d0d0 "crypto: sha - SHA1 transform x86_64 AVX2"
accidentally disabled the AVX variant by making the avx_usable() test
not only fail in case the CPU doesn't support AVX or OSXSAVE but also
if it doesn't support AVX2.
Fix that regression by splitting up the AVX/AVX2 test into two
functions.
On 3/22/2014 6:24 PM, Ben Hutchings wrote:
On Fri, 2014-03-21 at 00:35 +0545, Yashpal Dutta wrote:
Job ring is suspended gracefully and resumed afresh.
Both Sleep (where the device will remain powered-on) and Deep-sleep (where
the device will be powered-down) are handled gracefully. Persistence sessions
On 24/03/14 12:16, Neil Horman wrote:
On Mon, Mar 24, 2014 at 01:01:04AM +0530, Monam Agarwal wrote:
This patch replaces rcu_assign_pointer(x, NULL) with RCU_INIT_POINTER(x, NULL)
The rcu_assign_pointer() ensures that the initialization of a structure
is carried out before storing a pointer to that structure.