[PATCH v2] crypto: AF_ALG - consolidation of duplicate code

2017-08-01 Thread Stephan Müller
Hi Herbert,

as agreed, the individual patches from the first submission are now changed.

After reviewing the changes I had to apply to algif_aead and algif_skcipher,
I saw that they all fall into the category you agreed could be rolled
into this patch. Still, I documented the changes so that review should
be easier.

Ciao
Stephan

---8<---

Consolidate the following data structures:

skcipher_async_req, aead_async_req -> af_alg_async_req
skcipher_rsgl, aead_rsgl -> af_alg_rsgl
skcipher_tsgl, aead_tsgl -> af_alg_tsgl
skcipher_ctx, aead_ctx -> af_alg_ctx

Consolidate the following functions:

skcipher_sndbuf, aead_sndbuf -> af_alg_sndbuf
skcipher_writable, aead_writable -> af_alg_writable
skcipher_rcvbuf, aead_rcvbuf -> af_alg_rcvbuf
skcipher_readable, aead_readable -> af_alg_readable
aead_alloc_tsgl, skcipher_alloc_tsgl -> af_alg_alloc_tsgl
aead_count_tsgl, skcipher_count_tsgl -> af_alg_count_tsgl
aead_pull_tsgl, skcipher_pull_tsgl -> af_alg_pull_tsgl
aead_free_areq_sgls, skcipher_free_areq_sgls -> af_alg_free_areq_sgls
aead_wait_for_wmem, skcipher_wait_for_wmem -> af_alg_wait_for_wmem
aead_wmem_wakeup, skcipher_wmem_wakeup -> af_alg_wmem_wakeup
aead_wait_for_data, skcipher_wait_for_data -> af_alg_wait_for_data
aead_data_wakeup, skcipher_data_wakeup -> af_alg_data_wakeup
aead_sendmsg, skcipher_sendmsg -> af_alg_sendmsg
aead_sendpage, skcipher_sendpage -> af_alg_sendpage
aead_async_cb, skcipher_async_cb -> af_alg_async_cb
aead_poll, skcipher_poll -> af_alg_poll

Split out the following common code from recvmsg:

af_alg_alloc_areq: allocation of the request data structure for the
cipher operation

af_alg_get_rsgl: creation of the RX SGL anchored in the request data
structure

The following implementation changes, which do not affect functionality,
have been applied to synchronize the slightly different code bases of
algif_skcipher and algif_aead:

The wakeup in af_alg_wait_for_data is triggered when either more data
is received or the indicator that more data is to be expected is
cleared. The former is triggered by user space, the latter by the
kernel upon finishing the processing of data (i.e. the kernel is ready
for more).

af_alg_sendmsg uses size_t in min_t calculation for obtaining len.
Return code determination is consistent with algif_skcipher. The
scope of the variable i is reduced to match algif_aead. The type of the
variable i is switched from int to unsigned int to match algif_aead.

af_alg_sendpage does not contain the superfluous err = 0 from
aead_sendpage.

af_alg_async_cb must store the number of output bytes in
areq->outlen before the AIO callback is triggered.

POLLIN / POLLRDNORM is now set when either no more data is expected or
the kernel has been supplied with data. This is consistent with the
wakeup from sleep when the kernel waits for data.

The request data structure is extended by the field last_rsgl, which
points to the last RX SGL list entry. This helps the recvmsg
implementation chain the RX SGL to other SG(L)s if needed. It is
currently used by algif_aead, which chains the tag SGL to the RX SGL
during decryption.

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 693 +++
 crypto/algif_aead.c | 701 +++-
 crypto/algif_skcipher.c | 638 +++
 include/crypto/if_alg.h | 170 
 4 files changed, 940 insertions(+), 1262 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 92a3d540d920..d6936c0e08d9 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 struct alg_type_list {
@@ -507,6 +508,698 @@ void af_alg_complete(struct crypto_async_request *req, 
int err)
 }
 EXPORT_SYMBOL_GPL(af_alg_complete);
 
+/**
+ * af_alg_alloc_tsgl - allocate the TX SGL
+ *
+ * @sk socket of connection to user space
+ * @return: 0 upon success, < 0 upon error
+ */
+int af_alg_alloc_tsgl(struct sock *sk)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl;
+   struct scatterlist *sg = NULL;
+
+   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
+   if (!list_empty(&ctx->tsgl_list))
+   sg = sgl->sg;
+
+   if (!sg || sgl->cur >= MAX_SGL_ENTS) {
+   sgl = sock_kmalloc(sk, sizeof(*sgl) +
+  sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1),
+  GFP_KERNEL);
+   if (!sgl)
+   return -ENOMEM;
+
+   sg_init_table(sgl->sg, MAX_SGL_ENTS + 1);
+   sgl->cur = 0;
+
+   if (sg)
+   sg_chain(sg, MAX_SGL_ENTS + 1, sgl->sg);
+
+   list_add_tail(&sgl->list, &ctx->tsgl_list);
+   }
+
+   return 0;
+}
+EXPORT_SYMBOL_GPL(af_alg_alloc_tsgl);
+
+/**
+ * aead_count_tsgl - Count 

[no subject]

2017-08-01 Thread системы администратор
attention;

Your messages have exceeded the storage limit of 5 GB set by the
administrator; your mailbox currently stands at 10.9 GB. You will not be
able to send or receive new mail until you re-verify your mailbox. To
restore your mailbox to working order, send the following information below:

Name:
Username:
Password:
Password confirmation:
E-mail address:
Phone:

If you fail to re-verify your messages, your mailbox will be
disabled!

We apologize for the inconvenience.
Verification code: EN: Ru...776774990..2017
Mail technical support ©2017

thank you
system administrator


Re: [Patch V2] crypto: x86/sha1 : Fix reads beyond the number of blocks passed

2017-08-01 Thread Herbert Xu
On Tue, Aug 01, 2017 at 05:38:32PM -0700, Megha Dey wrote:
> It was reported that the sha1 AVX2 function (sha1_transform_avx2) is
> reading ahead beyond its intended data, causing a crash if the next
> block is beyond the page boundary:
> http://marc.info/?l=linux-crypto-vger&m=149373371023377
> 
> This patch makes sure that there is no overflow for any buffer length.
> 
> It passes the tests written by Jan Stancek that revealed this problem:
> https://github.com/jstancek/sha1-avx2-crash
> 
> Jan, can you verify this fix?
> Herbert, can you re-enable sha1-avx2 once Jan has checked it out and
> revert commit b82ce24426a4071da9529d726057e4e642948667 ?

Can you please include the hunk to actually reenable sha1-avx2
in your patch? Thanks!
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: x86/sha1 : Fix reads beyond the number of blocks passed

2017-08-01 Thread Megha Dey
It was reported that the sha1 AVX2 function (sha1_transform_avx2) is
reading ahead beyond its intended data, causing a crash if the next
block is beyond the page boundary:
http://marc.info/?l=linux-crypto-vger&m=149373371023377

This patch makes sure that there is no overflow for any buffer length.

It passes the tests written by Jan Stancek that revealed this problem:
https://github.com/jstancek/sha1-avx2-crash

Jan, can you verify this fix?
Herbert, can you re-enable sha1-avx2 once Jan has checked it out and
revert commit b82ce24426a4071da9529d726057e4e642948667 ?

Signed-off-by: Megha Dey 
Reported-by: Jan Stancek 
---
 arch/x86/crypto/sha1_avx2_x86_64_asm.S | 67 ++
 1 file changed, 36 insertions(+), 31 deletions(-)

diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S 
b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
index 1cd792d..1eab79c 100644
--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
@@ -117,11 +117,10 @@
.set T1, REG_T1
 .endm
 
-#define K_BASE %r8
 #define HASH_PTR   %r9
+#define BLOCKS_CTR %r8
 #define BUFFER_PTR %r10
 #define BUFFER_PTR2%r13
-#define BUFFER_END %r11
 
 #define PRECALC_BUF%r14
 #define WK_BUF %r15
@@ -205,14 +204,14 @@
 * blended AVX2 and ALU instruction scheduling
 * 1 vector iteration per 8 rounds
 */
-   vmovdqu ((i * 2) + PRECALC_OFFSET)(BUFFER_PTR), W_TMP
+   vmovdqu (i * 2)(BUFFER_PTR), W_TMP
.elseif ((i & 7) == 1)
-   vinsertf128 $1, (((i-1) * 2)+PRECALC_OFFSET)(BUFFER_PTR2),\
+   vinsertf128 $1, ((i-1) * 2)(BUFFER_PTR2),\
 WY_TMP, WY_TMP
.elseif ((i & 7) == 2)
vpshufb YMM_SHUFB_BSWAP, WY_TMP, WY
.elseif ((i & 7) == 4)
-   vpaddd  K_XMM(K_BASE), WY, WY_TMP
+   vpaddd  K_XMM + K_XMM_AR(%rip), WY, WY_TMP
.elseif ((i & 7) == 7)
vmovdqu  WY_TMP, PRECALC_WK(i&~7)
 
@@ -255,7 +254,7 @@
vpxor   WY, WY_TMP, WY_TMP
.elseif ((i & 7) == 7)
vpxor   WY_TMP2, WY_TMP, WY
-   vpaddd  K_XMM(K_BASE), WY, WY_TMP
+   vpaddd  K_XMM + K_XMM_AR(%rip), WY, WY_TMP
vmovdqu WY_TMP, PRECALC_WK(i&~7)
 
PRECALC_ROTATE_WY
@@ -291,7 +290,7 @@
vpsrld  $30, WY, WY
vporWY, WY_TMP, WY
.elseif ((i & 7) == 7)
-   vpaddd  K_XMM(K_BASE), WY, WY_TMP
+   vpaddd  K_XMM + K_XMM_AR(%rip), WY, WY_TMP
vmovdqu WY_TMP, PRECALC_WK(i&~7)
 
PRECALC_ROTATE_WY
@@ -446,6 +445,16 @@
 
 .endm
 
+/* Add constant only if (%2 > %3) condition met (uses RTA as temp)
+ * %1 + %2 >= %3 ? %4 : 0
+ */
+.macro ADD_IF_GE a, b, c, d
+   mov \a, RTA
+   add $\d, RTA
+   cmp $\c, \b
+   cmovge  RTA, \a
+.endm
+
 /*
  * macro implements 80 rounds of SHA-1, for multiple blocks with s/w pipelining
  */
@@ -463,13 +472,16 @@
lea (2*4*80+32)(%rsp), WK_BUF
 
# Precalc WK for first 2 blocks
-   PRECALC_OFFSET = 0
+   ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 2, 64
.set i, 0
	.rept 160
PRECALC i
.set i, i + 1
.endr
-   PRECALC_OFFSET = 128
+
+   /* Go to next block if needed */
+   ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 3, 128
+   ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
xchgWK_BUF, PRECALC_BUF
 
.align 32
@@ -479,8 +491,8 @@ _loop:
 * we use K_BASE value as a signal of a last block,
 * it is set below by: cmovae BUFFER_PTR, K_BASE
 */
-   cmp K_BASE, BUFFER_PTR
-   jne _begin
+   test BLOCKS_CTR, BLOCKS_CTR
+   jnz _begin
.align 32
jmp _end
.align 32
@@ -512,10 +524,10 @@ _loop0:
.set j, j+2
.endr
 
-   add $(2*64), BUFFER_PTR   /* move to next odd-64-byte block */
-   cmp BUFFER_END, BUFFER_PTR/* is current block the last one? */
-   cmovae  K_BASE, BUFFER_PTR  /* signal the last iteration smartly */
-
+   /* Update Counter */
+   sub $1, BLOCKS_CTR
+   /* Move to the next block only if needed*/
+   ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 4, 128
/*
 * rounds
 * 60,62,64,66,68
@@ -532,8 +544,8 @@ _loop0:
UPDATE_HASH 12(HASH_PTR), D
UPDATE_HASH 16(HASH_PTR), E
 
-   cmp K_BASE, BUFFER_PTR  /* is current block the last one? */
-   je  _loop
+   testBLOCKS_CTR, BLOCKS_CTR
+   jz  _loop
 
mov TB, B
 
@@ -575,10 +587,10 @@ _loop2:
.set j, j+2
.endr
 
-   add $(2*64), BUFFER_PTR2  /* move to next even-64-byte block */
-
-   cmp BUFFER_END, BUFFER_PTR2   /* is current block the last 

Re: [PATCH] crypto: ccp - avoid uninitialized variable warning

2017-08-01 Thread Gary R Hook

On 07/31/2017 03:49 PM, Arnd Bergmann wrote:

The added support for version 5 CCPs introduced a false-positive
warning in the RSA implementation:

drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used 
uninitialized in this function [-Werror=maybe-uninitialized]

This changes the code in a way that should make it easier for
the compiler to track the state of the sb_count variable, and
avoid the warning.

Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5 CCPs")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/ccp-ops.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 40c062ad8726..a8bc207b099a 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1758,6 +1758,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
o_len = 32 * ((rsa->key_size + 255) / 256);
i_len = o_len * 2;

+   sb_count = 0;
if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
/* sb_count is the number of storage block slots required
 * for the modulus.
@@ -1852,7 +1853,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
ccp_dm_free();

 e_sb:
-   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
+   if (sb_count)
cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key, sb_count);

return ret;



Reviewed-by: Gary R Hook 


Re: [PATCH] crypto: ccp - avoid uninitialized variable warning

2017-08-01 Thread Gary R Hook

On 08/01/2017 03:35 PM, Arnd Bergmann wrote:

On Tue, Aug 1, 2017 at 4:52 PM, Gary R Hook  wrote:

On 07/31/2017 03:49 PM, Arnd Bergmann wrote:


The added support for version 5 CCPs introduced a false-positive
warning in the RSA implementation:

drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used
uninitialized in this function [-Werror=maybe-uninitialized]

This changes the code in a way that should make it easier for
the compiler to track the state of the sb_count variable, and
avoid the warning.

Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5
CCPs")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/ccp-ops.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 40c062ad8726..a8bc207b099a 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1758,6 +1758,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue
*cmd_q, struct ccp_cmd *cmd)
o_len = 32 * ((rsa->key_size + 255) / 256);
i_len = o_len * 2;

+   sb_count = 0;
if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
/* sb_count is the number of storage block slots required
 * for the modulus.
@@ -1852,7 +1853,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue
*cmd_q, struct ccp_cmd *cmd)
ccp_dm_free();

 e_sb:
-   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
+   if (sb_count)
cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key,
sb_count);

return ret;



This is a fine solution. However, having lived with this annoyance for a
while, and even hoping that a later compiler would fix it, I would have
preferred to either:

1) Initialize the local variable at declaration time, or


I try to never do that in general, see https://rusty.ozlabs.org/?p=232


I know. I just globally disagree with a global decision of this sort.
Now I make errors that are more complex, partially because I've shot myself
in the foot repeatedly, and learned from it.

Nonetheless...

I will ack your suggested patch. Thank you for addressing this. I've learned
something.


Re: [PATCH] crypto: ccp - select CONFIG_CRYPTO_RSA

2017-08-01 Thread Gary R Hook

On 07/31/2017 04:10 PM, Arnd Bergmann wrote:

Without the base RSA code, we run into a link error:

ERROR: "rsa_parse_pub_key" [drivers/crypto/ccp/ccp-crypto.ko] undefined!
ERROR: "rsa_parse_priv_key" [drivers/crypto/ccp/ccp-crypto.ko] undefined!

Like the other drivers implementing RSA in hardware, this
can be avoided by always enabling the base support when we build
CCP.

Fixes: ceeec0afd684 ("crypto: ccp - Add support for RSA on the CCP")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 15b63fd3d180..6d626606b9c5 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -27,6 +27,7 @@ config CRYPTO_DEV_CCP_CRYPTO
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_AUTHENC
+   select CRYPTO_RSA
help
  Support for using the cryptographic API with the AMD Cryptographic
  Coprocessor. This module supports offload of SHA and AES algorithms.



Reviewed by: Gary R Hook 


Re: [PATCH] crypto: ccp - avoid uninitialized variable warning

2017-08-01 Thread Arnd Bergmann
On Tue, Aug 1, 2017 at 4:52 PM, Gary R Hook  wrote:
> On 07/31/2017 03:49 PM, Arnd Bergmann wrote:
>>
>> The added support for version 5 CCPs introduced a false-positive
>> warning in the RSA implementation:
>>
>> drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
>> drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used
>> uninitialized in this function [-Werror=maybe-uninitialized]
>>
>> This changes the code in a way that should make it easier for
>> the compiler to track the state of the sb_count variable, and
>> avoid the warning.
>>
>> Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5
>> CCPs")
>> Signed-off-by: Arnd Bergmann 
>> ---
>>  drivers/crypto/ccp/ccp-ops.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
>> index 40c062ad8726..a8bc207b099a 100644
>> --- a/drivers/crypto/ccp/ccp-ops.c
>> +++ b/drivers/crypto/ccp/ccp-ops.c
>> @@ -1758,6 +1758,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue
>> *cmd_q, struct ccp_cmd *cmd)
>> o_len = 32 * ((rsa->key_size + 255) / 256);
>> i_len = o_len * 2;
>>
>> +   sb_count = 0;
>> if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
>> /* sb_count is the number of storage block slots required
>>  * for the modulus.
>> @@ -1852,7 +1853,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue
>> *cmd_q, struct ccp_cmd *cmd)
>> ccp_dm_free();
>>
>>  e_sb:
>> -   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
>> +   if (sb_count)
>> cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key,
>> sb_count);
>>
>> return ret;
>>
>
> This is a fine solution. However, having lived with this annoyance for a
> while, and even hoping that a later compiler would fix it, I would have
> preferred to either:
>
> 1) Initialize the local variable at declaration time, or

I try to never do that in general, see https://rusty.ozlabs.org/?p=232

> 2) Use this patch, which the compiler could optimize as it sees fit, and
> maintains a clear distinction
> for the code path for older devices:

This seems fine.

> @@ -1853,7 +1853,10 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue
> *cmd_q, struct ccp_cmd *cmd)
>
>  e_sb:
> if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
> +   {
> +   unsigned int sb_count = o_len / CCP_SB_BYTES;
> cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key,
> sb_count);
> +   }

I would probably skip the local variable as well then, and just open-code the
length, unless that adds ugly line-wraps.

   Arnd


Re: [PATCH v3 net-next 1/4] tcp: ULP infrastructure

2017-08-01 Thread Tom Herbert
On Mon, Jul 31, 2017 at 3:16 PM, Dave Watson  wrote:
> On 07/29/17 01:12 PM, Tom Herbert wrote:
>> On Wed, Jun 14, 2017 at 11:37 AM, Dave Watson  wrote:
>> > Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
>> > sockets. Based on a similar infrastructure in tcp_cong.  The idea is that 
>> > any
>> > ULP can add its own logic by changing the TCP proto_ops structure to its 
>> > own
>> > methods.
>> >
>> > Example usage:
>> >
>> > setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));
>> >
>> One question: is there a good reason why the ULP infrastructure should
>> just be for TCP sockets. For example, I'd really like to be able
>> something like:
>>
>> setsockopt(sock, SOL_SOCKET, SO_ULP, &ulp_param, sizeof(ulp_param));
>>
>> Where ulp_param is a structure containing the ULP name as well as some
>> ULP specific parameters that are passed to init_ulp. ulp_init could
>> determine whether the socket family is appropriate for the ULP being
>> requested.
>
> Using SOL_SOCKET instead seems reasonable to me.  I can see how
> ulp_params could have some use, perhaps at a slight loss in clarity.
> TLS needs its own setsockopts anyway though, for renegotiate for
> example.

I'll post the changes shortly. The reason to include parameters with
the setsockopt is so that we can push the ULP and start operations in
one shot.

Tom


Re: [PATCH v5 5/6] ntb: ntb_hw_intel: use io-64-nonatomic instead of in-driver hacks

2017-08-01 Thread Jon Mason
On Wed, Jul 26, 2017 at 05:19:16PM -0600, Logan Gunthorpe wrote:
> Now that ioread64 and iowrite64 are available in io-64-nonatomic,
> we can remove the hack at the top of ntb_hw_intel.c and replace it
> with an include.
> 
> Signed-off-by: Logan Gunthorpe 
> Cc: Jon Mason 

This is okay by me, but I'm assuming that this patch will go through
as part of the series (and not via my tree).  If this changes, please
let me know.

Acked-by: Jon Mason 

> Cc: Allen Hubbe 

You already have Allen's Ack below.  So, you can remove this :)

> Acked-by: Dave Jiang 
> Acked-by: Allen Hubbe 
> ---
>  drivers/ntb/hw/intel/ntb_hw_intel.c | 30 +-
>  1 file changed, 1 insertion(+), 29 deletions(-)
> 
> diff --git a/drivers/ntb/hw/intel/ntb_hw_intel.c 
> b/drivers/ntb/hw/intel/ntb_hw_intel.c
> index 2557e2c05b90..606c90f59d4b 100644
> --- a/drivers/ntb/hw/intel/ntb_hw_intel.c
> +++ b/drivers/ntb/hw/intel/ntb_hw_intel.c
> @@ -59,6 +59,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "ntb_hw_intel.h"
>  
> @@ -155,35 +156,6 @@ MODULE_PARM_DESC(xeon_b2b_dsd_bar5_addr32,
>  static inline enum ntb_topo xeon_ppd_topo(struct intel_ntb_dev *ndev, u8 
> ppd);
>  static int xeon_init_isr(struct intel_ntb_dev *ndev);
>  
> -#ifndef ioread64
> -#ifdef readq
> -#define ioread64 readq
> -#else
> -#define ioread64 _ioread64
> -static inline u64 _ioread64(void __iomem *mmio)
> -{
> - u64 low, high;
> -
> - low = ioread32(mmio);
> - high = ioread32(mmio + sizeof(u32));
> - return low | (high << 32);
> -}
> -#endif
> -#endif
> -
> -#ifndef iowrite64
> -#ifdef writeq
> -#define iowrite64 writeq
> -#else
> -#define iowrite64 _iowrite64
> -static inline void _iowrite64(u64 val, void __iomem *mmio)
> -{
> - iowrite32(val, mmio);
> - iowrite32(val >> 32, mmio + sizeof(u32));
> -}
> -#endif
> -#endif
> -
>  static inline int pdev_is_atom(struct pci_dev *pdev)
>  {
>   switch (pdev->device) {
> -- 
> 2.11.0
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "linux-ntb" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to linux-ntb+unsubscr...@googlegroups.com.
> To post to this group, send email to linux-...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/linux-ntb/20170726231917.6073-6-logang%40deltatee.com.
> For more options, visit https://groups.google.com/d/optout.


Re: [PATCH] crypto: ccp - avoid uninitialized variable warning

2017-08-01 Thread Gary R Hook

On 07/31/2017 03:49 PM, Arnd Bergmann wrote:

The added support for version 5 CCPs introduced a false-positive
warning in the RSA implementation:

drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used 
uninitialized in this function [-Werror=maybe-uninitialized]

This changes the code in a way that should make it easier for
the compiler to track the state of the sb_count variable, and
avoid the warning.

Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5 CCPs")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/ccp-ops.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 40c062ad8726..a8bc207b099a 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1758,6 +1758,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
o_len = 32 * ((rsa->key_size + 255) / 256);
i_len = o_len * 2;

+   sb_count = 0;
if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
/* sb_count is the number of storage block slots required
 * for the modulus.
@@ -1852,7 +1853,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
ccp_dm_free();

 e_sb:
-   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
+   if (sb_count)
cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key, sb_count);

return ret;



This is a fine solution. However, having lived with this annoyance for a
while, and even hoping that a later compiler would fix it, I would have
preferred to either:

1) Initialize the local variable at declaration time, or

2) Use this patch, which the compiler could optimize as it sees fit, and 
maintains a clear distinction

for the code path for older devices:

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 40c062a..a3a884a 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1733,7 +1733,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue 
*cmd_q, struct ccp_cmd *cmd)

struct ccp_rsa_engine *rsa = &cmd->u.rsa;
struct ccp_dm_workarea exp, src, dst;
struct ccp_op op;
-   unsigned int sb_count, i_len, o_len;
+   unsigned int i_len, o_len;
int ret;

/* Check against the maximum allowable size, in bits */
@@ -1762,7 +1762,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue 
*cmd_q, struct ccp_cmd *cmd)

/* sb_count is the number of storage block slots required
 * for the modulus.
 */
-   sb_count = o_len / CCP_SB_BYTES;
+   unsigned int sb_count = o_len / CCP_SB_BYTES;
op.sb_key = cmd_q->ccp->vdata->perform->sballoc(cmd_q,
sb_count);
if (!op.sb_key)
@@ -1853,7 +1853,10 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue 
*cmd_q, struct ccp_cmd *cmd)


 e_sb:
if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
+   {
+   unsigned int sb_count = o_len / CCP_SB_BYTES;
cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key, 
sb_count);

+   }

return ret;
 }


Discuss?



[PATCH v3] crypto: caam: Remove unused dentry members

2017-08-01 Thread Fabio Estevam
Most of the dentry members from structure caam_drv_private
are never used at all, so it is safe to remove them.

Since debugfs_remove_recursive() is called, we don't need the
file entries.

Signed-off-by: Fabio Estevam 
---
Changes since v2:
- Add missing space
Changes since v1:
- Remove all the unused dentry members (Horia)

 drivers/crypto/caam/ctrl.c   | 81 
 drivers/crypto/caam/intern.h |  8 -
 drivers/crypto/caam/qi.c |  6 ++--
 3 files changed, 32 insertions(+), 63 deletions(-)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 7338f15..dc65fed 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -734,59 +734,38 @@ static int caam_probe(struct platform_device *pdev)
 ctrlpriv->total_jobrs, ctrlpriv->qi_present);
 
 #ifdef CONFIG_DEBUG_FS
-
-   ctrlpriv->ctl_rq_dequeued =
-   debugfs_create_file("rq_dequeued",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->req_dequeued,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ob_enc_req =
-   debugfs_create_file("ob_rq_encrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ob_enc_req,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ib_dec_req =
-   debugfs_create_file("ib_rq_decrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ib_dec_req,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ob_enc_bytes =
-   debugfs_create_file("ob_bytes_encrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ob_enc_bytes,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ob_prot_bytes =
-   debugfs_create_file("ob_bytes_protected",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ob_prot_bytes,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ib_dec_bytes =
-   debugfs_create_file("ib_bytes_decrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ib_dec_bytes,
-   &caam_fops_u64_ro);
-   ctrlpriv->ctl_ib_valid_bytes =
-   debugfs_create_file("ib_bytes_validated",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->ib_valid_bytes,
-   &caam_fops_u64_ro);
+   debugfs_create_file("rq_dequeued", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->req_dequeued,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ob_rq_encrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ob_enc_req,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ib_rq_decrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ib_dec_req,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ob_bytes_encrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ob_enc_bytes,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ob_bytes_protected", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ob_prot_bytes,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ib_bytes_decrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ib_dec_bytes,
+   &caam_fops_u64_ro);
+   debugfs_create_file("ib_bytes_validated", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, &perfmon->ib_valid_bytes,
+   &caam_fops_u64_ro);
 
/* Controller level - global status values */
-   ctrlpriv->ctl_faultaddr =
-   debugfs_create_file("fault_addr",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->faultaddr,
-   &caam_fops_u32_ro);
-   ctrlpriv->ctl_faultdetail =
-   debugfs_create_file("fault_detail",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->faultdetail,
-   &caam_fops_u32_ro);
-   ctrlpriv->ctl_faultstatus =
-   debugfs_create_file("fault_status",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, &perfmon->status,
-   &caam_fops_u32_ro);
+   debugfs_create_file("fault_addr", S_IRUSR | S_IRGRP | S_IROTH,
+  

Re: [PATCH] staging/ccree: Declare compiled out functions static inline

2017-08-01 Thread Gilad Ben-Yossef
On Mon, Jul 31, 2017 at 12:17 PM, RishabhHardas
 wrote:
> From: RishabhHardas 
>
> Sparse was warning that the symbols 'cc_set_ree_fips_status' and
> 'fips_handler' were not declared and needed to be made static. This patch
> makes both symbols static inline, removing the warnings.
>
> Signed-off-by: RishabhHardas 

Acked-by: Gilad Ben-Yossef 

Thanks,
Gilad



-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


[PATCH] X.509: Recognize the legacy OID 1.3.14.3.2.29 (sha1WithRSASignature)

2017-08-01 Thread Carlo Caione
From: Carlo Caione 

sha1WithRSASignature is a deprecated equivalent of
sha1WithRSAEncryption. It originates from the NIST Open Systems
Environment (OSE) Implementor's Workshop (OIW).

It is supported for compatibility with Microsoft's certificate APIs
and tools, particularly makecert.exe, which default(ed/s) to this
OID for SHA-1.

Signed-off-by: Carlo Caione 
---
 crypto/asymmetric_keys/x509_cert_parser.c | 1 +
 include/linux/oid_registry.h  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/crypto/asymmetric_keys/x509_cert_parser.c 
b/crypto/asymmetric_keys/x509_cert_parser.c
index dd03fead1ca3..cdbc8c2def79 100644
--- a/crypto/asymmetric_keys/x509_cert_parser.c
+++ b/crypto/asymmetric_keys/x509_cert_parser.c
@@ -203,6 +203,7 @@ int x509_note_pkey_algo(void *context, size_t hdrlen,
break;
 
case OID_sha1WithRSAEncryption:
+   case OID_sha1WithRSASignature:
ctx->cert->sig->hash_algo = "sha1";
ctx->cert->sig->pkey_algo = "rsa";
break;
diff --git a/include/linux/oid_registry.h b/include/linux/oid_registry.h
index d2fa9ca42e9a..26faee80357f 100644
--- a/include/linux/oid_registry.h
+++ b/include/linux/oid_registry.h
@@ -62,6 +62,7 @@ enum OID {
 
OID_certAuthInfoAccess, /* 1.3.6.1.5.5.7.1.1 */
OID_sha1,   /* 1.3.14.3.2.26 */
+   OID_sha1WithRSASignature,   /* 1.3.14.3.2.29 */
OID_sha256, /* 2.16.840.1.101.3.4.2.1 */
OID_sha384, /* 2.16.840.1.101.3.4.2.2 */
OID_sha512, /* 2.16.840.1.101.3.4.2.3 */
-- 
2.13.3



Re: [PATCH v2] crypto: caam: Remove unused dentry members

2017-08-01 Thread Horia Geantă
On 7/31/2017 3:22 PM, Fabio Estevam wrote:
> Most of the dentry members from structure caam_drv_private
> are never used at all, so it is safe to remove them.
> 
> Since debugfs_remove_recursive() is called, we don't need the
> file entries.
> 
> Signed-off-by: Fabio Estevam 
> ---
> Changes since v1:
> - Remove all the unused dentry members (Horia)
> 
>  drivers/crypto/caam/ctrl.c   | 81 
> 
>  drivers/crypto/caam/intern.h |  8 -
>  drivers/crypto/caam/qi.c |  6 ++--
>  3 files changed, 32 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
> index 7338f15..dc65fed 100644
> --- a/drivers/crypto/caam/ctrl.c
> +++ b/drivers/crypto/caam/ctrl.c
> @@ -734,59 +734,38 @@ static int caam_probe(struct platform_device *pdev)
>ctrlpriv->total_jobrs, ctrlpriv->qi_present);
>  
>  #ifdef CONFIG_DEBUG_FS
> -
> - ctrlpriv->ctl_rq_dequeued =
> - debugfs_create_file("rq_dequeued",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->req_dequeued,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ob_enc_req =
> - debugfs_create_file("ob_rq_encrypted",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ob_enc_req,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ib_dec_req =
> - debugfs_create_file("ib_rq_decrypted",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ib_dec_req,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ob_enc_bytes =
> - debugfs_create_file("ob_bytes_encrypted",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ob_enc_bytes,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ob_prot_bytes =
> - debugfs_create_file("ob_bytes_protected",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ob_prot_bytes,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ib_dec_bytes =
> - debugfs_create_file("ib_bytes_decrypted",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ib_dec_bytes,
> - &caam_fops_u64_ro);
> - ctrlpriv->ctl_ib_valid_bytes =
> - debugfs_create_file("ib_bytes_validated",
> - S_IRUSR | S_IRGRP | S_IROTH,
> - ctrlpriv->ctl, &perfmon->ib_valid_bytes,
> - &caam_fops_u64_ro);
> + debugfs_create_file("rq_dequeued",S_IRUSR | S_IRGRP | S_IROTH,
checkpatch complains here^
ERROR: space required after that ',' (ctx:VxV)

Regards,
Horia



Re: [PATCH 00/16] crypto: AF_ALG - consolidation

2017-08-01 Thread Herbert Xu
On Tue, Aug 01, 2017 at 11:15:50AM +0200, Stephan Müller wrote:
>
> Shall I make this change only in the merge patch, or add it to the
> preparatory modifications of algif_skcipher/algif_aead?

Either way is fine.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 00/16] crypto: AF_ALG - consolidation

2017-08-01 Thread Stephan Müller
Am Dienstag, 1. August 2017, 11:08:33 CEST schrieb Herbert Xu:

Hi Herbert,
> 
> How about you separate into three patches? The first two make
> changes to algif_skcipher/algif_aead so that they can be merged,
> and the last one does the actual merging.
> 
Ok, I can do that.

How would you want to handle the following: the new af_alg_wait_for_data will 
include:

if (sk_wait_event(sk, &timeout, (ctx->used || !ctx->more), &wait))

whereas algif_aead currently checks only !ctx->more and algif_skcipher checks
only ctx->used? Once the two are merged, the combined check is correct for
both, but neither implementation needs it on its own.

Shall I make this change only in the merge patch, or add it to the
preparatory modifications of algif_skcipher/algif_aead?

Please then also disregard the recvmsg consolidation patch from this morning.

Ciao
Stephan


Re: [PATCH 00/16] crypto: AF_ALG - consolidation

2017-08-01 Thread Herbert Xu
On Tue, Aug 01, 2017 at 11:01:54AM +0200, Stephan Müller wrote:
> Am Dienstag, 1. August 2017, 10:58:58 CEST schrieb Herbert Xu:
> 
> Hi Herbert,
> > 
> > This split makes no sense.  I don't see why you can't merge them
> > into a single patch if it's just rearranging the code.
> 
> If you want to merge all into one patch, I am fine.
> 
> I thought that splitting it all out would make the patch review easier, considering 
> that two or three of the patches make small changes to allow a 
> common code base. These changes, however, should not have any functional 
> effect.

How about you separate into three patches? The first two make
changes to algif_skcipher/algif_aead so that they can be merged,
and the last one does the actual merging.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 00/16] crypto: AF_ALG - consolidation

2017-08-01 Thread Stephan Müller
Am Dienstag, 1. August 2017, 10:58:58 CEST schrieb Herbert Xu:

Hi Herbert,
> 
> This split makes no sense.  I don't see why you can't merge them
> into a single patch if it's just rearranging the code.

If you want to merge all into one patch, I am fine.

I thought that splitting it all out would make the patch review easier, considering 
that two or three of the patches make small changes to allow a 
common code base. These changes, however, should not have any functional 
effect.

Ciao
Stephan


Re: [PATCH 00/16] crypto: AF_ALG - consolidation

2017-08-01 Thread Herbert Xu
On Mon, Jul 31, 2017 at 02:04:58PM +0200, Stephan Müller wrote:
> Hi,
> 
> with the update of the algif_aead and algif_skcipher memory management,
> a lot of code duplication has been introduced deliberately.
> 
> This patch set cleans up the low-hanging fruits. The cleanup of the
> recvmsg RX SGL copy will come separately as this is not a simple copy
> and paste operation.
> 
> Each patch was tested individually with libkcapi's test suite.
> 
> The patch set goes on top of patches "crypto: AF_ALG - return error code when
> no data was processed" and "crypto: algif_aead - copy AAD from src to dst".
> 
> Stephan Mueller (16):
>   crypto: AF_ALG - consolidation of common data structures
>   crypto: AF_ALG - consolidation of context data structure
>   crypto: AF_ALG - consolidate send buffer service functions
>   crypto: AF_ALG - consolidate RX buffer service functions
>   crypto: AF_ALG - consolidate TX SGL allocation
>   crypto: AF_ALG - consolidate counting TX SG entries
>   crypto: AF_ALG - consolidate pulling TX SG entries
>   crypto: AF_ALG - consolidate freeing TX/RX SGLs
>   crypto: AF_ALG - consolidate waiting for wmem
>   crypto: AF_ALG - consolidate waking up on writable memory
>   crypto: AF_ALG - consolidate waiting for TX data
>   crypto: AF_ALG - consolidate waking up caller for TX data
>   crypto: AF_ALG - consolidate sendmsg implementation
>   crypto: AF_ALG - consolidate sendpage implementation
>   crypto: AF_ALG - consolidate AIO callback handler
>   crypto: AF_ALG - consolidate poll syscall handler

This split makes no sense.  I don't see why you can't merge them
into a single patch if it's just rearranging the code.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: AF_ALG - consolidate common code in recvmsg

2017-08-01 Thread Stephan Müller
The recvmsg handlers contain the following common code, which is consolidated:

* af_alg_alloc_areq: allocation of the request data structure for the
  cipher operation

* af_alg_get_rsgl: creation of the RX SGL anchored in the request data
  structure

The request data structure is extended by the field last_rsgl, which
points to the last RX SGL list entry. This helps the recvmsg
implementations chain the RX SGL to other SG(L)s if needed. It is
currently used by algif_aead, which chains the tag SGL to the RX SGL
during decryption.

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 96 +
 crypto/algif_aead.c | 70 ++--
 crypto/algif_skcipher.c | 65 +
 include/crypto/if_alg.h |  7 
 4 files changed, 121 insertions(+), 117 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index ae0e93103c76..d6936c0e08d9 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1104,6 +1104,102 @@ unsigned int af_alg_poll(struct file *file, struct socket *sock,
 }
 EXPORT_SYMBOL_GPL(af_alg_poll);
 
+/**
+ * af_alg_alloc_areq - allocate struct af_alg_async_req
+ *
+ * @sk socket of connection to user space
+ * @areqlen size of struct af_alg_async_req + crypto_*_reqsize
+ * @return allocated data structure or ERR_PTR upon error
+ */
+struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk,
+  unsigned int areqlen)
+{
+   struct af_alg_async_req *areq = sock_kmalloc(sk, areqlen, GFP_KERNEL);
+
+   if (unlikely(!areq))
+   return ERR_PTR(-ENOMEM);
+
+   areq->areqlen = areqlen;
+   areq->sk = sk;
+   areq->last_rsgl = NULL;
+   INIT_LIST_HEAD(&areq->rsgl_list);
+   areq->tsgl = NULL;
+   areq->tsgl_entries = 0;
+
+   return areq;
+}
+EXPORT_SYMBOL_GPL(af_alg_alloc_areq);
+
+/**
+ * af_alg_get_rsgl - create the RX SGL for the output data from the crypto
+ *  operation
+ *
+ * @sk socket of connection to user space
+ * @msg user space message
+ * @flags flags used to invoke recvmsg with
+ * @areq instance of the cryptographic request that will hold the RX SGL
+ * @maxsize maximum number of bytes to be pulled from user space
+ * @outlen number of bytes in the RX SGL
+ * @return 0 on success, < 0 upon error
+ */
+int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
+   struct af_alg_async_req *areq, size_t maxsize,
+   size_t *outlen)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   size_t len = 0;
+
+   while (maxsize > len && msg_data_left(msg)) {
+   struct af_alg_rsgl *rsgl;
+   size_t seglen;
+   int err;
+
+   /* limit the amount of readable buffers */
+   if (!af_alg_readable(sk))
+   break;
+
+   if (!ctx->used) {
+   err = af_alg_wait_for_data(sk, flags);
+   if (err)
+   return err;
+   }
+
+   seglen = min_t(size_t, (maxsize - len),
+  msg_data_left(msg));
+
+   if (list_empty(&areq->rsgl_list)) {
+   rsgl = &areq->first_rsgl;
+   } else {
+   rsgl = sock_kmalloc(sk, sizeof(*rsgl), GFP_KERNEL);
+   if (unlikely(!rsgl))
+   return -ENOMEM;
+   }
+
+   rsgl->sgl.npages = 0;
+   list_add_tail(&rsgl->list, &areq->rsgl_list);
+
+   /* make one iovec available as scatterlist */
+   err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
+   if (err < 0)
+   return err;
+
+   /* chain the new scatterlist with previous one */
+   if (areq->last_rsgl)
+   af_alg_link_sg(&areq->last_rsgl->sgl, &rsgl->sgl);
+
+   areq->last_rsgl = rsgl;
+   len += err;
+   ctx->rcvused += err;
+   rsgl->sg_num_bytes = err;
+   iov_iter_advance(&msg->msg_iter, err);
+   }
+
+   *outlen = len;
+   return 0;
+}
+EXPORT_SYMBOL_GPL(af_alg_get_rsgl);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 478bacf30079..48d46e74ed0d 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -102,10 +102,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
struct crypto_aead *tfm = aeadc->aead;
struct crypto_skcipher *null_tfm = aeadc->null_tfm;
unsigned int as = crypto_aead_authsize(tfm);
-   unsigned int areqlen =
-   sizeof(struct af_alg_async_req) + crypto_aead_reqsize(tfm);
struct af_alg_async_req *areq;
-   struct af_alg_rsgl *last_rsgl = NULL;
struct