Re: [PATCH] crypto: af_alg - add async support to algif_aead
Hi Stephan,

On 01/27/2016 10:26 PM, Stephan Mueller wrote:
>> +	for (i = 0; i < areq->tsgls; i++)
>> +		put_page(sg_page(sg + i));
>
> Shouldn't the same logic apply here as in put_sgl? I.e.
>
> for (i = 0; i < sgl->cur; i++) {
> 	if (!sg_page(sg + i))
> 		continue;
>
> 	put_page(sg_page(sg + i));
> 	sg_assign_page(sg + i, NULL);
> }

Thanks for reviewing.
I don't think it is possible that there will ever be any gaps in the tsgl.
In fact, if there is such a possibility then it is a serious problem, because
it would mean that we are sending NULL ptrs to the ciphers (see line 640):

	sg_mark_end(sgl->sg + sgl->cur - 1);
	aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
			       used, ctx->iv);

I don't see any implementation checking for NULL in sgls. Most of them just
do:

	for_each_sg(sgl, sg, nents, i)
		sg_virt(sg)...

So it would Oops there. I think this check in put_sgl is redundant.
Thanks,
--
TS
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] crypto: af_alg - add async support to algif_aead
On Thursday, 28 January 2016, 08:00:25 Tadeusz Struk wrote:

Hi Tadeusz,

>Hi Stephan,
>
>On 01/27/2016 10:26 PM, Stephan Mueller wrote:
>>> +	for (i = 0; i < areq->tsgls; i++)
>>> +		put_page(sg_page(sg + i));
>>
>> Shouldn't the same logic apply here as in put_sgl? I.e.
>>
>> for (i = 0; i < sgl->cur; i++) {
>> 	if (!sg_page(sg + i))
>> 		continue;
>>
>> 	put_page(sg_page(sg + i));
>> 	sg_assign_page(sg + i, NULL);
>> }
>
>Thanks for reviewing.
>I don't think it is possible that there will ever be any gaps in the tsgl.
>In fact, if there is such a possibility then it is a serious problem, because
>it would mean that we are sending NULL ptrs to the ciphers (see line 640):
>
>	sg_mark_end(sgl->sg + sgl->cur - 1);
>	aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
>			       used, ctx->iv);
>
>I don't see any implementation checking for NULL in sgls. Most of them just
>do:
>
>	for_each_sg(sgl, sg, nents, i)
>		sg_virt(sg)...
>
>So it would Oops there. I think this check in put_sgl is redundant.
>Thanks,

algif_skcipher does a similar check in skcipher_pull_sgl:

	...
	if (!sg_page(sg + i))
		continue;
	...
	if (put)
		put_page(sg_page(sg + i));
	sg_assign_page(sg + i, NULL);
	...

Ciao
Stephan
Re: [PATCH] crypto: af_alg - add async support to algif_aead
On 01/28/2016 09:09 AM, Stephan Mueller wrote:
> On Thursday, 28 January 2016, 08:00:25 Tadeusz Struk wrote:
>
> Hi Tadeusz,
>
>> Hi Stephan,
>>
>> On 01/27/2016 10:26 PM, Stephan Mueller wrote:
>>>> +	for (i = 0; i < areq->tsgls; i++)
>>>> +		put_page(sg_page(sg + i));
>>>
>>> Shouldn't the same logic apply here as in put_sgl? I.e.
>>>
>>> for (i = 0; i < sgl->cur; i++) {
>>> 	if (!sg_page(sg + i))
>>> 		continue;
>>>
>>> 	put_page(sg_page(sg + i));
>>> 	sg_assign_page(sg + i, NULL);
>>> }
>>
>> Thanks for reviewing.
>> I don't think it is possible that there will ever be any gaps in the tsgl.
>> In fact, if there is such a possibility then it is a serious problem,
>> because it would mean that we are sending NULL ptrs to the ciphers
>> (see line 640):
>>
>> 	sg_mark_end(sgl->sg + sgl->cur - 1);
>> 	aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
>> 			       used, ctx->iv);
>>
>> I don't see any implementation checking for NULL in sgls. Most of them
>> just do:
>>
>> 	for_each_sg(sgl, sg, nents, i)
>> 		sg_virt(sg)...
>>
>> So it would Oops there. I think this check in put_sgl is redundant.
>> Thanks,
>
> algif_skcipher does a similar check in skcipher_pull_sgl:
>
> 	...
> 	if (!sg_page(sg + i))
> 		continue;
> 	...
> 	if (put)
> 		put_page(sg_page(sg + i));
> 	sg_assign_page(sg + i, NULL);
> 	...

Yes, that's true, but that is because skcipher_recvmsg_async() invokes the
crypt operation for each recv segment separately, and after each iteration
advances the tsgl forward and marks the sgs that have already been processed
with NULL. This is completely different from the aead use case, which always
sends everything in one go.
Thanks,
--
TS
Re: [kbuild-all] [PATCH] crypto: af_alg - add async support to algif_aead
On Wed, Jan 27, 2016 at 02:41:36PM -0800, Tadeusz Struk wrote:
> On 01/27/2016 02:29 PM, kbuild test robot wrote:
> > Hi Tadeusz,
> >
> > [auto build test ERROR on cryptodev/master]
> > [also build test ERROR on v4.5-rc1 next-20160127]
> > [if your patch is applied to the wrong git tree, please drop us a note
> > to help improve the system]
> >
> > url:    https://github.com/0day-ci/linux/commits/Tadeusz-Struk/crypto-af_alg-add-async-support-to-algif_aead/20160128-061818
> > base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
> > config: x86_64-randconfig-x011-01270835 (attached as .config)
> > reproduce:
> >         # save the attached .config to linux build tree
> >         make ARCH=x86_64
> >
> > All error/warnings (new ones prefixed by >>):
> >
> >    crypto/algif_aead.c: In function 'aead_async_cb':
> > >> crypto/algif_aead.c:386:29: error: implicit declaration of function 'aead_request_cast' [-Werror=implicit-function-declaration]
> >      struct aead_request *req = aead_request_cast(_req);
> >                                 ^
> > >> crypto/algif_aead.c:386:29: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
> >    cc1: some warnings being treated as errors
> >
> > vim +/aead_request_cast +386 crypto/algif_aead.c
> >
> >    380  static void aead_async_cb(struct crypto_async_request *_req, int err)
> >    381  {
> >    382          struct sock *sk = _req->data;
> >    383          struct alg_sock *ask = alg_sk(sk);
> >    384          struct aead_ctx *ctx = ask->private;
> >    385          struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
> >  > 386          struct aead_request *req = aead_request_cast(_req);
> >    387          struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
> >    388          struct scatterlist *sg = areq->tsgl;
> >    389          struct aead_async_rsgl *rsgl;
>
> Thanks for the report.
> It has a dependency on this one: https://patchwork.kernel.org/patch/8144731/
> which was sent one minute earlier.
> I do not want to squash them into one patch, because they are not really
> related.
> Herbert, how should this be handled?

Sorry for the noise -- you may either ignore the warning or send dependent
patches in the same patch series in the future.

Thanks,
Fengguang
Re: [PATCH] crypto: af_alg - add async support to algif_aead
On 01/27/2016 02:29 PM, kbuild test robot wrote:
> Hi Tadeusz,
>
> [auto build test ERROR on cryptodev/master]
> [also build test ERROR on v4.5-rc1 next-20160127]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Tadeusz-Struk/crypto-af_alg-add-async-support-to-algif_aead/20160128-061818
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
> config: x86_64-randconfig-x011-01270835 (attached as .config)
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64
>
> All error/warnings (new ones prefixed by >>):
>
>    crypto/algif_aead.c: In function 'aead_async_cb':
> >> crypto/algif_aead.c:386:29: error: implicit declaration of function 'aead_request_cast' [-Werror=implicit-function-declaration]
>      struct aead_request *req = aead_request_cast(_req);
>                                 ^
> >> crypto/algif_aead.c:386:29: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
>    cc1: some warnings being treated as errors
>
> vim +/aead_request_cast +386 crypto/algif_aead.c
>
>    380  static void aead_async_cb(struct crypto_async_request *_req, int err)
>    381  {
>    382          struct sock *sk = _req->data;
>    383          struct alg_sock *ask = alg_sk(sk);
>    384          struct aead_ctx *ctx = ask->private;
>    385          struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
>  > 386          struct aead_request *req = aead_request_cast(_req);
>    387          struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
>    388          struct scatterlist *sg = areq->tsgl;
>    389          struct aead_async_rsgl *rsgl;

Thanks for the report.
It has a dependency on this one: https://patchwork.kernel.org/patch/8144731/
which was sent one minute earlier.
I do not want to squash them into one patch, because they are not really
related.
Herbert, how should this be handled?
Thanks,
--
TS
Re: [PATCH] crypto: af_alg - add async support to algif_aead
Hi Tadeusz,

[auto build test ERROR on cryptodev/master]
[also build test ERROR on v4.5-rc1 next-20160127]
[if your patch is applied to the wrong git tree, please drop us a note to
help improve the system]

url:    https://github.com/0day-ci/linux/commits/Tadeusz-Struk/crypto-af_alg-add-async-support-to-algif_aead/20160128-061818
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
config: x86_64-randconfig-x011-01270835 (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All error/warnings (new ones prefixed by >>):

   crypto/algif_aead.c: In function 'aead_async_cb':
>> crypto/algif_aead.c:386:29: error: implicit declaration of function 'aead_request_cast' [-Werror=implicit-function-declaration]
     struct aead_request *req = aead_request_cast(_req);
                                ^
>> crypto/algif_aead.c:386:29: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
   cc1: some warnings being treated as errors

vim +/aead_request_cast +386 crypto/algif_aead.c

   380  static void aead_async_cb(struct crypto_async_request *_req, int err)
   381  {
   382          struct sock *sk = _req->data;
   383          struct alg_sock *ask = alg_sk(sk);
   384          struct aead_ctx *ctx = ask->private;
   385          struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
 > 386          struct aead_request *req = aead_request_cast(_req);
   387          struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
   388          struct scatterlist *sg = areq->tsgl;
   389          struct aead_async_rsgl *rsgl;

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                 Intel Corporation

.config.gz (binary attachment)
[PATCH] crypto: af_alg - add async support to algif_aead
Following the async change for algif_skcipher
this patch adds similar async read to algif_aead.

changes in v2:
- change internal data structures from fixed size arrays, limited to
  RSGL_MAX_ENTRIES, to linked list model with no artificial limitation.
- use sock_kmalloc instead of kmalloc for memory allocation
- use sock_hold instead of separate atomic ctr to wait for outstanding
  request

Signed-off-by: Tadeusz Struk
---
 crypto/algif_aead.c | 278 +--
 1 file changed, 248 insertions(+), 30 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 147069c..3fa1a95 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -29,15 +29,24 @@ struct aead_sg_list {
 	struct scatterlist sg[ALG_MAX_PAGES];
 };

+struct aead_async_rsgl {
+	struct af_alg_sgl sgl;
+	struct list_head list;
+};
+
+struct aead_async_req {
+	struct scatterlist *tsgl;
+	struct aead_async_rsgl first_rsgl;
+	struct list_head list;
+	struct kiocb *iocb;
+	unsigned int tsgls;
+	char iv[];
+};
+
 struct aead_ctx {
 	struct aead_sg_list tsgl;
-	/*
-	 * RSGL_MAX_ENTRIES is an artificial limit where user space at maximum
-	 * can cause the kernel to allocate RSGL_MAX_ENTRIES * ALG_MAX_PAGES
-	 * pages
-	 */
-#define RSGL_MAX_ENTRIES ALG_MAX_PAGES
-	struct af_alg_sgl rsgl[RSGL_MAX_ENTRIES];
+	struct aead_async_rsgl first_rsgl;
+	struct list_head list;

 	void *iv;

@@ -75,6 +84,17 @@ static inline bool aead_sufficient_data(struct aead_ctx *ctx)
 	return ctx->used >= ctx->aead_assoclen + as;
 }

+static void aead_reset_ctx(struct aead_ctx *ctx)
+{
+	struct aead_sg_list *sgl = &ctx->tsgl;
+
+	sg_init_table(sgl->sg, ALG_MAX_PAGES);
+	sgl->cur = 0;
+	ctx->used = 0;
+	ctx->more = 0;
+	ctx->merge = 0;
+}
+
 static void aead_put_sgl(struct sock *sk)
 {
 	struct alg_sock *ask = alg_sk(sk);
@@ -90,11 +110,6 @@ static void aead_put_sgl(struct sock *sk)
 		put_page(sg_page(sg + i));
 		sg_assign_page(sg + i, NULL);
 	}
-	sg_init_table(sg, ALG_MAX_PAGES);
-	sgl->cur = 0;
-	ctx->used = 0;
-	ctx->more = 0;
-	ctx->merge = 0;
 }

 static void aead_wmem_wakeup(struct sock *sk)
@@ -240,6 +255,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 		if (!aead_writable(sk)) {
 			/* user space sent too much data */
 			aead_put_sgl(sk);
+			aead_reset_ctx(ctx);
 			err = -EMSGSIZE;
 			goto unlock;
 		}
@@ -251,6 +267,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)

 			if (sgl->cur >= ALG_MAX_PAGES) {
 				aead_put_sgl(sk);
+				aead_reset_ctx(ctx);
 				err = -E2BIG;
 				goto unlock;
 			}
@@ -287,6 +304,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 	ctx->more = msg->msg_flags & MSG_MORE;
 	if (!ctx->more && !aead_sufficient_data(ctx)) {
 		aead_put_sgl(sk);
+		aead_reset_ctx(ctx);
 		err = -EMSGSIZE;
 	}

@@ -322,6 +340,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct page *page,
 	if (!aead_writable(sk)) {
 		/* user space sent too much data */
 		aead_put_sgl(sk);
+		aead_reset_ctx(ctx);
 		err = -EMSGSIZE;
 		goto unlock;
 	}
@@ -339,6 +358,7 @@ done:
 	ctx->more = flags & MSG_MORE;
 	if (!ctx->more && !aead_sufficient_data(ctx)) {
 		aead_put_sgl(sk);
+		aead_reset_ctx(ctx);
 		err = -EMSGSIZE;
 	}

@@ -349,23 +369,189 @@ unlock:
 	return err ?: size;
 }

-static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored, int flags)
+#define GET_ASYM_REQ(req, tfm) (struct aead_async_req *) \
+	((char *)req + sizeof(struct aead_request) + \
+	 crypto_aead_reqsize(tfm))
+
+#define GET_REQ_SIZE(tfm) sizeof(struct aead_async_req) + \
+	crypto_aead_reqsize(tfm) + crypto_aead_ivsize(tfm) + \
+	sizeof(struct aead_request)
+
+static void aead_async_cb(struct crypto_async_request *_req, int err)
+{
+	struct sock *sk = _req->data;
+	struct alg_sock *ask = alg_sk(sk);
+	struct aead_ctx *ctx = ask->private;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
+	struct aead_request *req = aead_request_cast(_req);
+	struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
+	struct scatterlist *sg = areq->tsgl;
+	struct aead_async_rsgl *rsgl;
+	struct kiocb *iocb =
Re: [PATCH] crypto: af_alg - add async support to algif_aead
On Wednesday, 27 January 2016, 14:10:31 Tadeusz Struk wrote:

Hi Tadeusz,

> Following the async change for algif_skcipher
> this patch adds similar async read to algif_aead.
>
> changes in v2:
> - change internal data structures from fixed size arrays, limited to
>   RSGL_MAX_ENTRIES, to linked list model with no artificial limitation.

Thank you for the dynamic allocation support, but I do have a question.

> - use sock_kmalloc instead of kmalloc for memory allocation
> - use sock_hold instead of separate atomic ctr to wait for outstanding
>   request
>
> Signed-off-by: Tadeusz Struk
> ---
>  crypto/algif_aead.c | 278 +--
>  1 file changed, 248 insertions(+), 30 deletions(-)
>
> diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
> index 147069c..3fa1a95 100644
> --- a/crypto/algif_aead.c
> +++ b/crypto/algif_aead.c
> @@ -29,15 +29,24 @@ struct aead_sg_list {
>  	struct scatterlist sg[ALG_MAX_PAGES];
>  };
>
> +struct aead_async_rsgl {
> +	struct af_alg_sgl sgl;
> +	struct list_head list;
> +};
> +
> +struct aead_async_req {
> +	struct scatterlist *tsgl;
> +	struct aead_async_rsgl first_rsgl;
> +	struct list_head list;
> +	struct kiocb *iocb;
> +	unsigned int tsgls;
> +	char iv[];
> +};
> +
>  struct aead_ctx {
>  	struct aead_sg_list tsgl;
> -	/*
> -	 * RSGL_MAX_ENTRIES is an artificial limit where user space at maximum
> -	 * can cause the kernel to allocate RSGL_MAX_ENTRIES * ALG_MAX_PAGES
> -	 * pages
> -	 */
> -#define RSGL_MAX_ENTRIES ALG_MAX_PAGES
> -	struct af_alg_sgl rsgl[RSGL_MAX_ENTRIES];
> +	struct aead_async_rsgl first_rsgl;
> +	struct list_head list;
>
>  	void *iv;
>
> @@ -75,6 +84,17 @@ static inline bool aead_sufficient_data(struct aead_ctx *ctx)
>  	return ctx->used >= ctx->aead_assoclen + as;
>  }
>
> +static void aead_reset_ctx(struct aead_ctx *ctx)
> +{
> +	struct aead_sg_list *sgl = &ctx->tsgl;
> +
> +	sg_init_table(sgl->sg, ALG_MAX_PAGES);
> +	sgl->cur = 0;
> +	ctx->used = 0;
> +	ctx->more = 0;
> +	ctx->merge = 0;
> +}
> +
>  static void aead_put_sgl(struct sock *sk)
>  {
>  	struct alg_sock *ask = alg_sk(sk);
> @@ -90,11 +110,6 @@ static void aead_put_sgl(struct sock *sk)
>  		put_page(sg_page(sg + i));
>  		sg_assign_page(sg + i, NULL);
>  	}
> -	sg_init_table(sg, ALG_MAX_PAGES);
> -	sgl->cur = 0;
> -	ctx->used = 0;
> -	ctx->more = 0;
> -	ctx->merge = 0;
>  }
>
>  static void aead_wmem_wakeup(struct sock *sk)
> @@ -240,6 +255,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
>  		if (!aead_writable(sk)) {
>  			/* user space sent too much data */
>  			aead_put_sgl(sk);
> +			aead_reset_ctx(ctx);
>  			err = -EMSGSIZE;
>  			goto unlock;
>  		}
> @@ -251,6 +267,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
>
>  			if (sgl->cur >= ALG_MAX_PAGES) {
>  				aead_put_sgl(sk);
> +				aead_reset_ctx(ctx);
>  				err = -E2BIG;
>  				goto unlock;
>  			}
> @@ -287,6 +304,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
>  	ctx->more = msg->msg_flags & MSG_MORE;
>  	if (!ctx->more && !aead_sufficient_data(ctx)) {
>  		aead_put_sgl(sk);
> +		aead_reset_ctx(ctx);
>  		err = -EMSGSIZE;
>  	}
>
> @@ -322,6 +340,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct page *page,
>  	if (!aead_writable(sk)) {
>  		/* user space sent too much data */
>  		aead_put_sgl(sk);
> +		aead_reset_ctx(ctx);
>  		err = -EMSGSIZE;
>  		goto unlock;
>  	}
> @@ -339,6 +358,7 @@ done:
>  	ctx->more = flags & MSG_MORE;
>  	if (!ctx->more && !aead_sufficient_data(ctx)) {
>  		aead_put_sgl(sk);
> +		aead_reset_ctx(ctx);
>  		err = -EMSGSIZE;
>  	}
>
> @@ -349,23 +369,189 @@ unlock:
>  	return err ?: size;
>  }
>
> -static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored, int flags)
> +#define GET_ASYM_REQ(req, tfm) (struct aead_async_req *) \
> +	((char *)req + sizeof(struct aead_request) + \
> +	 crypto_aead_reqsize(tfm))
> +
> +#define GET_REQ_SIZE(tfm) sizeof(struct aead_async_req) + \
> +	crypto_aead_reqsize(tfm) + crypto_aead_ivsize(tfm) + \
> +	sizeof(struct aead_request)
> +
> +static void aead_async_cb(struct crypto_async_request *_req, int err)
> +{
> +	struct sock *sk = _req->data;
> +	struct alg_sock *ask = alg_sk(sk);
> +	struct aead_ctx *ctx = ask->private;
> +	struct crypto_aead *tfm =
Re: [PATCH] crypto: af_alg - add async support to algif_aead
On Wed, Jan 20, 2016 at 12:18:24PM -0800, Tadeusz Struk wrote:
>
> I tried sock_kmalloc and it will not work. The sysctl_optmem_max by
> default is 20480 bytes. The aead ctx by itself takes more than half of
> it (11832 bytes). A single async request takes 11408 bytes.
> It means we need to use kmalloc or no async request could be allocated.
> I would opt to go with this version and I'll convert both algif_aead
> and algif_skcipher to use sock_hold later.

Then we have to make it smaller. The whole point of this is to stop the
user from allocating too much kernel memory, and if you are allocating
too much memory to start with...

So instead of allocating RSGL_MAX_ENTRIES you should turn it into a
linked list like algif_skcipher and then allocate it on demand using
sock_kmalloc.

Cheers,
--
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH] crypto: af_alg - add async support to algif_aead
Hi Herbert,

On 01/18/2016 04:34 PM, Herbert Xu wrote:
>> My understanding is that sock_kmalloc is mainly used for allocations
>> of the user-provided data, because it keeps track of how much memory
>> is allocated by a socket, and makes sure that it will not exceed the
>> sysctl_optmem_max limit. Usually the internal structures, with fixed
>> size, are allocated simply with kmalloc. I don't think that using
>> sock_kmalloc will give us any benefit here.
>
> If there is only ever one of them per-socket then kmalloc is fine,
> otherwise you should use sock_kmalloc.

I tried sock_kmalloc and it will not work. The sysctl_optmem_max by
default is 20480 bytes. The aead ctx by itself takes more than half of
it (11832 bytes). A single async request takes 11408 bytes.
It means we need to use kmalloc or no async request could be allocated.
I would opt to go with this version and I'll convert both algif_aead
and algif_skcipher to use sock_hold later.
Thanks,
--
TS