Hi Herbert,
Ping on this series
Regards,
-George Cherian
On Thu, May 4, 2017 at 5:04 PM, George Cherian
wrote:
> This series adds more algorithm support for CPT.
> Add support for
> -ecb(aes)
> -cfb(aes)
> -ecb(des3_ede)
>
> Some cleanups
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operations are now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the
Hi Herbert,
Changes v9:
- remove ctx->inflight (implies remove of associated finish wait queue)
With these changes, you will see a lot of code duplication now,
as I deliberately tried to use the same struct and variable names,
the same function names, and even the same order of functions.
If you
To accommodate systems that may disallow use of the NEON in kernel mode
in some circumstances, introduce a C fallback for synchronous AES in CTR
mode, and use it if may_use_simd() returns false.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/Kconfig|
To accommodate systems that disallow the use of kernel mode NEON in
some circumstances, take the return value of may_use_simd into
account when deciding whether to invoke the C fallback routine.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/chacha20-neon-glue.c
Of the various chaining modes implemented by the bit sliced AES driver,
only CTR is exposed as a synchronous cipher, and requires a fallback in
order to remain usable once we update the kernel mode NEON handling logic
to disallow nested use. So wire up the existing CTR fallback C code.
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/Kconfig | 3 ++-
arch/arm64/crypto/aes-ce-cipher.c | 20 +---
In order to be able to reuse the generic AES code as a fallback for
situations where the NEON may not be used, update the key handling
to match the byte order of the generic code: it stores round keys
as sequences of 32-bit quantities rather than streams of bytes, and
so our code needs to be
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/crct10dif-ce-glue.c | 13 +
1 file changed, 9 insertions(+), 4
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/Kconfig | 3 +-
arch/arm64/crypto/sha2-ce-glue.c | 30 +---
The arm64 kernel will shortly disallow nested kernel mode NEON.
So honour this in the ARMv8 Crypto Extensions implementation of CCM-AES,
and fall back to a dynamically instantiated ccm(aes) implementation if
necessary (which will in all likelihood be produced by the generic CCM,
CTR and AES
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/Kconfig | 3 +-
arch/arm64/crypto/ghash-ce-glue.c | 49
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/crc32-ce-glue.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
If 'kzalloc' fails, we return 0 which means success.
return -ENOMEM instead as already done a few lines above.
Signed-off-by: Christophe JAILLET
---
drivers/crypto/amcc/crypto4xx_core.c | 1 +
1 file changed, 1 insertion(+)
diff --git
On Saturday, 10 June 2017 at 10:59:40 CEST, Herbert Xu wrote:
Hi Herbert,
>
> Indeed, we should just kill the wait or perhaps convert it to
> a WARN_ON.
As the inflight variable is only used for refcounting and the related wait, I
would propose to remove that variable entirely.
Ciao
Stephan
On Sat, Jun 10, 2017 at 11:05:39AM +0300, Gilad Ben-Yossef wrote:
>
> I guess there is a question if it really is important to know that
> your request ended up
> on the backlog, rather than being handled. I can imagine it can be used
> as back pressure
> indication but I wonder if someone is using
On Sat, Jun 10, 2017 at 10:33:28AM +0200, Stephan Müller wrote:
>
> Right. Shouldn't we drop the ctx->inflight completely?
>
> The code in the current patch set contains:
>
> when an async operation is queued:
>
> sock_hold(sk);
> ctx->inflight++;
>
> upon
On Saturday, 10 June 2017 at 05:13:16 CEST, Herbert Xu wrote:
Hi Herbert,
> On Tue, May 23, 2017 at 04:31:59PM +0200, Stephan Müller wrote:
> > static void skcipher_sock_destruct(struct sock *sk)
> > {
> >
> > struct alg_sock *ask = alg_sk(sk);
> > struct skcipher_ctx *ctx =
On Sat, Jun 10, 2017 at 6:43 AM, Herbert Xu wrote:
> On Mon, May 29, 2017 at 11:22:48AM +0300, Gilad Ben-Yossef wrote:
>>
>> +static inline int crypto_wait_req(int err, struct crypto_wait *wait)
>> +{
>> + switch (err) {
>> + case -EINPROGRESS:
>> + case
Herbert Xu wrote:
> Patch applied. Thanks.
Note that I've passed this on to James to pass on to Linus along with a bunch
of other patches.
David