Re: [dm-devel] dm crypt: Fix reqsize in crypt_iv_eboiv_gen

2023-10-05 Thread Herbert Xu
On Thu, Oct 05, 2023 at 10:26:14PM -0400, Mike Snitzer wrote:
>
> Sure, please do.  Shouldn't your header Cc: stable given that the
> Fixes commit implies v6.5 needs this fix?

Sure I'll add it although it will be picked up automatically due
to the Fixes header.  I'll also fix the reporter's name.

> Reviewed-by: Mike Snitzer 

Thanks!
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel



[dm-devel] [PATCH] dm crypt: Fix reqsize in crypt_iv_eboiv_gen

2023-10-05 Thread Herbert Xu
On Fri, Oct 06, 2023 at 08:04:18AM +0700, Bagas Sanjaya wrote:
>
> > Git bisect lead me to:
> > # first bad commit: [e3023094dffb41540330fb0c74cd3a019cd525c2] dm crypt:
> > Avoid using MAX_CIPHER_BLOCKSIZE
> > 
> > If I git revert e3023094dffb41540330fb0c74cd3a019cd525c2 on current Linus'
> > git master, the issue goes away. So I'm personally not all that affected
> > anymore (if I'm ready to compile my kernels from now on), and I understand
> > that you have no clear way to reproduce this as it seems strongly bound to
> > hardware, but seems like this could point to a potentially serious security
> > issue since it involves both crypto and undefined behaviour.

Thanks for the report.  Sorry this is indeed my fault.  The allocated
buffer is too small as it's missing the size for the request object
itself.

Mike, would you be OK with me picking this fix up and pushing it to
Linus?

Cheers,

---8<---
A skcipher_request object is made up of struct skcipher_request
followed by a variable-sized trailer.  The allocation of the
skcipher_request and IV in crypt_iv_eboiv_gen is missing the
memory for struct skcipher_request.  Fix it by adding it to
reqsize.

Fixes: e3023094dffb ("dm crypt: Avoid using MAX_CIPHER_BLOCKSIZE")
Reported-by: Tatu Heikkilä 
Signed-off-by: Herbert Xu 

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f2662c21a6df..5315fd261c23 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -753,7 +753,8 @@ static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
int err;
u8 *buf;
 
-   reqsize = ALIGN(crypto_skcipher_reqsize(tfm), __alignof__(__le64));
+   reqsize = sizeof(*req) + crypto_skcipher_reqsize(tfm);
+   reqsize = ALIGN(reqsize, __alignof__(__le64));
 
req = kmalloc(reqsize + cc->iv_size, GFP_NOIO);
if (!req)


Re: [dm-devel] [bug report] dm-crypt setup failure with next-20230929 kernel

2023-10-02 Thread Herbert Xu
On Mon, Oct 02, 2023 at 08:08:23AM +, Shinichiro Kawasaki wrote:
> Hello there,
> 
> I ran the command below on top of linux-next kernel with the tag 
> next-20230929,
> and observed dm-crypt setup failed.
> 
>   $ sudo cryptsetup open --type=plain --key-file=/dev/zero /dev/nullb0 test
>   device-mapper: reload ioctl on test (253:0) failed: No such file or 
> directory
> 
> Kernel reported an error related to crypto.
> 
>   device-mapper: table: 253:0: crypt: Error allocating crypto tfm (-ENOENT)
>   device-mapper: ioctl: error adding target to table
> 
> The failure was observed with null_blk and SATA HDD. It looks independent of
> block device type.
> 
> I bisected and found that the commit 31865c4c4db2 ("crypto: skcipher - Add
> lskcipher") is the trigger. I reverted the commit from next-20230929 together
> with other four dependent commits below, and observed the failure disappears.
> 
>   705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher")
>   32a8dc4afcfb ("crypto: ecb - Convert from skcipher to lskcipher")
>   3dfe8786b11a ("crypto: testmgr - Add support for lskcipher algorithms")
>   8aee5d4ebd11 ("crypto: lskcipher - Add compatibility wrapper around ECB")
> 
> Is this a known issue?

Thanks for the report.  I'll look into this.

Cheers,



Re: [dm-devel] [PATCH 1/3] dm integrity: do not filter algos with CRYPTO_ALG_ALLOCATES_MEMORY

2023-07-05 Thread Herbert Xu
On Wed, Jul 05, 2023 at 09:57:54PM +0100, Giovanni Cabiddu wrote:
>
> Then we are then back to square one. We need to check how many entries
> are present in the scatterlists encrypted by crypt_journal() before
> adjusting the meaning of !CRYPTO_ALG_ALLOCATES_MEMORY.

Indeed.  I missed the fact that it was preallocating memory with
GFP_KERNEL.

So perhaps the answer is to adjust our API to allow the drivers to
pre-allocate memory.  I'll look into this.

Thanks,



Re: [dm-devel] dm crypt: Avoid using MAX_CIPHER_BLOCKSIZE

2023-06-09 Thread Herbert Xu
On Thu, Jun 01, 2023 at 03:10:18PM -0400, Mike Snitzer wrote:
>
> Strikes me as strange that open-coding skcipher_request_{alloc,free}
> is ideal, but dm-crypt is the only non-crypto consumer of
> MAX_CIPHER_BLOCKSIZE so really not worth standing up yet another
> interface wrapper.

It is pretty standard when you need to allocate data alongside the
request.  But yes if we could improve the helpers to handle this
that would be nice.

> Anyway, this code is certainly better for dm-crypt's needs.  I'm happy
> with you applying this change via your crypto tree.
> 
> Reviewed-by: Mike Snitzer 

Thanks!



[dm-devel] dm crypt: Avoid using MAX_CIPHER_BLOCKSIZE

2023-06-01 Thread Herbert Xu
MAX_CIPHER_BLOCKSIZE is an internal implementation detail and should
not be relied on by users of the Crypto API.

Instead of storing the IV on the stack, allocate it together with
the crypto request.

Signed-off-by: Herbert Xu 
---

 drivers/md/dm-crypt.c |   15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 40cb1719ae4d..0e7e443dde11 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -31,10 +31,10 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
+#include 
 #include  /* for struct rtattr and RTA macros only */
 #include 
 #include 
@@ -743,16 +743,23 @@ static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
struct dm_crypt_request *dmreq)
 {
-   u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
+   struct crypto_skcipher *tfm = any_tfm(cc);
struct skcipher_request *req;
struct scatterlist src, dst;
DECLARE_CRYPTO_WAIT(wait);
+   unsigned int reqsize;
int err;
+   u8 *buf;
 
-   req = skcipher_request_alloc(any_tfm(cc), GFP_NOIO);
+   reqsize = ALIGN(crypto_skcipher_reqsize(tfm), __alignof__(__le64));
+
+   req = kmalloc(reqsize + cc->iv_size, GFP_NOIO);
if (!req)
return -ENOMEM;
 
+   skcipher_request_set_tfm(req, tfm);
+
+   buf = (u8 *)req + reqsize;
memset(buf, 0, cc->iv_size);
*(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size);
 
@@ -761,7 +768,7 @@ static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf);
skcipher_request_set_callback(req, 0, crypto_req_done, );
err = crypto_wait_req(crypto_skcipher_encrypt(req), );
-   skcipher_request_free(req);
+   kfree_sensitive(req);
 
return err;
 }



[dm-devel] [v2 PATCH 10/17] crypto: api - Use data directly in completion function

2023-02-07 Thread Herbert Xu
v2 adds the actual algapi conversion which went missing.

---8<---
This patch does the final flag day conversion of all completion
functions which are now all contained in the Crypto API.

Signed-off-by: Herbert Xu 
---

 crypto/adiantum.c  |5 +---
 crypto/af_alg.c|6 ++---
 crypto/ahash.c |   12 +-
 crypto/api.c   |4 +--
 crypto/authenc.c   |   14 +---
 crypto/authencesn.c|   15 +---
 crypto/ccm.c   |9 +++
 crypto/chacha20poly1305.c  |   40 +-
 crypto/cryptd.c|   52 ++---
 crypto/cts.c   |   12 +-
 crypto/dh.c|5 +---
 crypto/essiv.c |8 +++---
 crypto/gcm.c   |   36 ++-
 crypto/hctr2.c |5 +---
 crypto/lrw.c   |4 +--
 crypto/pcrypt.c|4 +--
 crypto/rsa-pkcs1pad.c  |   15 +---
 crypto/seqiv.c |5 +---
 crypto/xts.c   |   12 +-
 drivers/crypto/atmel-sha.c |5 +---
 include/crypto/algapi.h|3 --
 include/crypto/if_alg.h|4 ---
 include/linux/crypto.h |   10 
 23 files changed, 133 insertions(+), 152 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 84450130cb6b..c33ba22a6638 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -308,10 +308,9 @@ static int adiantum_finish(struct skcipher_request *req)
return 0;
 }
 
-static void adiantum_streamcipher_done(struct crypto_async_request *areq,
-  int err)
+static void adiantum_streamcipher_done(void *data, int err)
 {
-   struct skcipher_request *req = areq->data;
+   struct skcipher_request *req = data;
 
if (!err)
err = adiantum_finish(req);
diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 0a4fa2a429e2..5f7252a5b7b4 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1186,7 +1186,7 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources);
 
 /**
  * af_alg_async_cb - AIO callback handler
- * @_req: async request info
+ * @data: async request completion data
  * @err: if non-zero, error result to be returned via ki_complete();
  *   otherwise return the AIO output length via ki_complete().
  *
@@ -1196,9 +1196,9 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources);
  * The number of bytes to be generated with the AIO operation must be set
  * in areq->outlen before the AIO callback handler is invoked.
  */
-void af_alg_async_cb(struct crypto_async_request *_req, int err)
+void af_alg_async_cb(void *data, int err)
 {
-   struct af_alg_async_req *areq = _req->data;
+   struct af_alg_async_req *areq = data;
struct sock *sk = areq->sk;
struct kiocb *iocb = areq->iocb;
unsigned int resultlen;
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 369447e483cd..5a0f21cb2059 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -240,9 +240,9 @@ static void ahash_restore_req(struct ahash_request *req, int err)
kfree_sensitive(subreq);
 }
 
-static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
+static void ahash_op_unaligned_done(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
 
if (err == -EINPROGRESS)
goto out;
@@ -330,9 +330,9 @@ int crypto_ahash_digest(struct ahash_request *req)
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_digest);
 
-static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
+static void ahash_def_finup_done2(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
 
if (err == -EINPROGRESS)
return;
@@ -360,9 +360,9 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
return err;
 }
 
-static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
+static void ahash_def_finup_done1(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
struct ahash_request *subreq;
 
if (err == -EINPROGRESS)
diff --git a/crypto/api.c b/crypto/api.c
index b022702f6436..e67cc63368ed 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -643,9 +643,9 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_alg);
 
-void crypto_req_done(struct crypto_async_request *req, int err)
+void crypto_req_done(void *data, int err)
 {
-   struct crypto_wait *wait = req->data;
+   struct crypto_wait *wait = data;
 
if (err == -EINPROGRESS)
return;
diff --git a/crypto/authenc.c b/crypto/authenc.c
index 17f674a7cdff..3326c7343e86 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -109,9 +109,9 @@ static int crypto_authenc_setkey(struc

Re: [dm-devel] [PATCH 0/17] crypto: api - Change completion callback argument to void star

2023-02-07 Thread Herbert Xu
On Tue, Feb 07, 2023 at 10:51:46AM -0800, Jakub Kicinski wrote:
> Any aes-gcm or chacha-poly implementations which would do that come 
> to mind? I'm asking 'cause we probably want to do stable if we know
> of a combination which would be broken, or the chances of one existing
> are high.

Good point.  I had a quick look at tls_sw.c and it *appears* to be
safe with the default software code.  As tls_sw only uses the generic
AEAD algorithms (rather than the IPsec-specific variants which aren't
safe), the software-only paths *should* be OK.

However, drivers that support these algorithms may require fallbacks
for esoteric reasons.  For example, drivers/crypto/amcc appears to
require a fallback for certain input parameters which may or may not
be possible with TLS.

To be on the safe side I would do a backport once this has been
in mainline for a little bit.

Cheers,



[dm-devel] [PATCH] tls: Pass rec instead of aead_req into tls_encrypt_done

2023-02-07 Thread Herbert Xu
On Mon, Feb 06, 2023 at 11:15:21PM -0800, Jakub Kicinski wrote:
>
> > aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> > - tls_encrypt_done, sk);
> > + tls_encrypt_done, aead_req);
> 
> ... let's just pass rec instead of aead_req here, then?

Good point.  Could we do this as a follow-up patch? Reposting
the whole series would disturb a lot of people.  Of course if
other major issues crop up I can fold this into the existing
patch.

Thanks!

---8<---
The function tls_encrypt_done only uses aead_req to get ahold of
the tls_rec object.  So we could pass that in instead of aead_req
to simplify the code.

Suggested-by: Jakub Kicinski 
Signed-off-by: Herbert Xu 

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 0515cda32fe2..6dfec2e8fdfa 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -430,18 +430,16 @@ int tls_tx_records(struct sock *sk, int flags)
 
 static void tls_encrypt_done(void *data, int err)
 {
-   struct aead_request *aead_req = data;
struct tls_sw_context_tx *ctx;
struct tls_context *tls_ctx;
struct tls_prot_info *prot;
+   struct tls_rec *rec = data;
struct scatterlist *sge;
struct sk_msg *msg_en;
-   struct tls_rec *rec;
bool ready = false;
struct sock *sk;
int pending;
 
-   rec = container_of(aead_req, struct tls_rec, aead_req);
msg_en = &rec->msg_encrypted;
 
sk = rec->sk;
@@ -536,7 +534,7 @@ static int tls_do_encryption(struct sock *sk,
   data_len, rec->iv_data);
 
aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- tls_encrypt_done, aead_req);
+ tls_encrypt_done, rec);
 
/* Add the record in tx_list */
list_add_tail((struct list_head *)&rec->list, &ctx->tx_list);



Re: [dm-devel] [PATCH 0/17] crypto: api - Change completion callback argument to void star

2023-02-07 Thread Herbert Xu
On Mon, Feb 06, 2023 at 11:10:08PM -0800, Jakub Kicinski wrote:
> On Mon, 6 Feb 2023 18:21:06 +0800 Herbert Xu wrote:
> > The crypto completion function currently takes a pointer to a
> > struct crypto_async_request object.  However, in reality the API
> > does not allow the use of any part of the object apart from the
> > data field.  For example, ahash/shash will create a fake object
> > on the stack to pass along a different data field.
> 
> "different data field" == copy the value to a different structure?
> A bit hard to parse TBH.

The word data here refers to the data field in struct crypto_async_request.
 
> Buggy means bug could be hit in real life or buggy == did not use 
> the API right?

Yes this bug is real.  If you hit a driver/algorithm that returns
a different request object (of which there are many in the API) then
you will be dereferencing random pointers.

Cheers,



[dm-devel] [PATCH 13/17] net: ipv4: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 net/ipv4/ah4.c  |8 
 net/ipv4/esp4.c |   16 
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index 1fc0231eb1ee..015c0f4ec5ba 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -117,11 +117,11 @@ static int ip_clear_mutable_options(const struct iphdr *iph, __be32 *daddr)
return 0;
 }
 
-static void ah_output_done(crypto_completion_data_t *data, int err)
+static void ah_output_done(void *data, int err)
 {
u8 *icv;
struct iphdr *iph;
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_state *x = skb_dst(skb)->xfrm;
struct ah_data *ahp = x->data;
struct iphdr *top_iph = ip_hdr(skb);
@@ -262,12 +262,12 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
return err;
 }
 
-static void ah_input_done(crypto_completion_data_t *data, int err)
+static void ah_input_done(void *data, int err)
 {
u8 *auth_data;
u8 *icv;
struct iphdr *work_iph;
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_state *x = xfrm_input_state(skb);
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 8abe07c1ff28..ba06ed42e428 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -244,9 +244,9 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
 }
 #endif
 
-static void esp_output_done(crypto_completion_data_t *data, int err)
+static void esp_output_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_offload *xo = xfrm_offload(skb);
void *tmp;
struct xfrm_state *x;
@@ -332,9 +332,9 @@ static struct ip_esp_hdr *esp_output_set_extra(struct sk_buff *skb,
return esph;
 }
 
-static void esp_output_done_esn(crypto_completion_data_t *data, int err)
+static void esp_output_done_esn(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
esp_output_restore_header(skb);
esp_output_done(data, err);
@@ -830,9 +830,9 @@ int esp_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp_input_done2);
 
-static void esp_input_done(crypto_completion_data_t *data, int err)
+static void esp_input_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
xfrm_input_resume(skb, esp_input_done2(skb, err));
 }
@@ -860,9 +860,9 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi)
}
 }
 
-static void esp_input_done_esn(crypto_completion_data_t *data, int err)
+static void esp_input_done_esn(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
esp_input_restore_header(skb);
esp_input_done(data, err);



[dm-devel] [PATCH 17/17] crypto: api - Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 include/linux/crypto.h |6 --
 1 file changed, 6 deletions(-)

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 80f6350fb588..bb1d9b0e1647 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -176,7 +176,6 @@ struct crypto_async_request;
 struct crypto_tfm;
 struct crypto_type;
 
-typedef void crypto_completion_data_t;
 typedef void (*crypto_completion_t)(void *req, int err);
 
 /**
@@ -596,11 +595,6 @@ struct crypto_wait {
 /*
  * Async ops completion helper functioons
  */
-static inline void *crypto_get_completion_data(void *data)
-{
-   return data;
-}
-
 void crypto_req_done(void *req, int err);
 
 static inline int crypto_wait_req(int err, struct crypto_wait *wait)



[dm-devel] [PATCH 16/17] tls: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 net/tls/tls_sw.c |8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 5b7f67a7d394..0515cda32fe2 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -179,9 +179,9 @@ static int tls_padding_length(struct tls_prot_info *prot, struct sk_buff *skb,
return sub;
 }
 
-static void tls_decrypt_done(crypto_completion_data_t *data, int err)
+static void tls_decrypt_done(void *data, int err)
 {
-   struct aead_request *aead_req = crypto_get_completion_data(data);
+   struct aead_request *aead_req = data;
struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
struct scatterlist *sgout = aead_req->dst;
struct scatterlist *sgin = aead_req->src;
@@ -428,9 +428,9 @@ int tls_tx_records(struct sock *sk, int flags)
return rc;
 }
 
-static void tls_encrypt_done(crypto_completion_data_t *data, int err)
+static void tls_encrypt_done(void *data, int err)
 {
-   struct aead_request *aead_req = crypto_get_completion_data(data);
+   struct aead_request *aead_req = data;
struct tls_sw_context_tx *ctx;
struct tls_context *tls_ctx;
struct tls_prot_info *prot;



[dm-devel] [PATCH 15/17] tipc: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 net/tipc/crypto.c |   12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
index ab356e7a3870..577fa5af33ec 100644
--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -267,10 +267,10 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, 
struct sk_buff *skb,
 struct tipc_bearer *b,
 struct tipc_media_addr *dst,
 struct tipc_node *__dnode);
-static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err);
+static void tipc_aead_encrypt_done(void *data, int err);
 static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
 struct sk_buff *skb, struct tipc_bearer *b);
-static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err);
+static void tipc_aead_decrypt_done(void *data, int err);
 static inline int tipc_ehdr_size(struct tipc_ehdr *ehdr);
 static int tipc_ehdr_build(struct net *net, struct tipc_aead *aead,
   u8 tx_key, struct sk_buff *skb,
@@ -830,9 +830,9 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
return rc;
 }
 
-static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err)
+static void tipc_aead_encrypt_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct tipc_crypto_tx_ctx *tx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
struct tipc_bearer *b = tx_ctx->bearer;
struct tipc_aead *aead = tx_ctx->aead;
@@ -954,9 +954,9 @@ static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
return rc;
 }
 
-static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err)
+static void tipc_aead_decrypt_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct tipc_crypto_rx_ctx *rx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
struct tipc_bearer *b = rx_ctx->bearer;
struct tipc_aead *aead = rx_ctx->aead;



[dm-devel] [PATCH 14/17] net: ipv6: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 net/ipv6/ah6.c  |8 
 net/ipv6/esp6.c |   16 
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index e43735578a76..01005035ad10 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -281,12 +281,12 @@ static int ipv6_clear_mutable_options(struct ipv6hdr *iph, int len, int dir)
return 0;
 }
 
-static void ah6_output_done(crypto_completion_data_t *data, int err)
+static void ah6_output_done(void *data, int err)
 {
int extlen;
u8 *iph_base;
u8 *icv;
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_state *x = skb_dst(skb)->xfrm;
struct ah_data *ahp = x->data;
struct ipv6hdr *top_iph = ipv6_hdr(skb);
@@ -451,12 +451,12 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
return err;
 }
 
-static void ah6_input_done(crypto_completion_data_t *data, int err)
+static void ah6_input_done(void *data, int err)
 {
u8 *auth_data;
u8 *icv;
u8 *work_iph;
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_state *x = xfrm_input_state(skb);
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index b9ee81c7dfcf..fddd0cbdede1 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -278,9 +278,9 @@ static void esp_output_encap_csum(struct sk_buff *skb)
}
 }
 
-static void esp_output_done(crypto_completion_data_t *data, int err)
+static void esp_output_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct xfrm_offload *xo = xfrm_offload(skb);
void *tmp;
struct xfrm_state *x;
@@ -368,9 +368,9 @@ static struct ip_esp_hdr *esp_output_set_esn(struct sk_buff *skb,
return esph;
 }
 
-static void esp_output_done_esn(crypto_completion_data_t *data, int err)
+static void esp_output_done_esn(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
esp_output_restore_header(skb);
esp_output_done(data, err);
@@ -879,9 +879,9 @@ int esp6_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp6_input_done2);
 
-static void esp_input_done(crypto_completion_data_t *data, int err)
+static void esp_input_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
xfrm_input_resume(skb, esp6_input_done2(skb, err));
 }
@@ -909,9 +909,9 @@ static void esp_input_set_header(struct sk_buff *skb, 
__be32 *seqhi)
}
 }
 
-static void esp_input_done_esn(crypto_completion_data_t *data, int err)
+static void esp_input_done_esn(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
 
esp_input_restore_header(skb);
esp_input_done(data, err);



[dm-devel] [PATCH 6/17] net: ipv6: Add scaffolding to change completion function signature

2023-02-06 Thread Herbert Xu
This patch adds temporary scaffolding so that the Crypto API
completion function can take a void * instead of crypto_async_request.
Once affected users have been converted this can be removed.

Signed-off-by: Herbert Xu 
---

 net/ipv6/ah6.c  |8 
 net/ipv6/esp6.c |   20 ++--
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 5228d2716289..e43735578a76 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -281,12 +281,12 @@ static int ipv6_clear_mutable_options(struct ipv6hdr *iph, int len, int dir)
return 0;
 }
 
-static void ah6_output_done(struct crypto_async_request *base, int err)
+static void ah6_output_done(crypto_completion_data_t *data, int err)
 {
int extlen;
u8 *iph_base;
u8 *icv;
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_state *x = skb_dst(skb)->xfrm;
struct ah_data *ahp = x->data;
struct ipv6hdr *top_iph = ipv6_hdr(skb);
@@ -451,12 +451,12 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
return err;
 }
 
-static void ah6_input_done(struct crypto_async_request *base, int err)
+static void ah6_input_done(crypto_completion_data_t *data, int err)
 {
u8 *auth_data;
u8 *icv;
u8 *work_iph;
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_state *x = xfrm_input_state(skb);
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 14ed868680c6..b9ee81c7dfcf 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -278,9 +278,9 @@ static void esp_output_encap_csum(struct sk_buff *skb)
}
 }
 
-static void esp_output_done(struct crypto_async_request *base, int err)
+static void esp_output_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_offload *xo = xfrm_offload(skb);
void *tmp;
struct xfrm_state *x;
@@ -368,12 +368,12 @@ static struct ip_esp_hdr *esp_output_set_esn(struct sk_buff *skb,
return esph;
 }
 
-static void esp_output_done_esn(struct crypto_async_request *base, int err)
+static void esp_output_done_esn(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
esp_output_restore_header(skb);
-   esp_output_done(base, err);
+   esp_output_done(data, err);
 }
 
 static struct ip_esp_hdr *esp6_output_udp_encap(struct sk_buff *skb,
@@ -879,9 +879,9 @@ int esp6_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp6_input_done2);
 
-static void esp_input_done(struct crypto_async_request *base, int err)
+static void esp_input_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
xfrm_input_resume(skb, esp6_input_done2(skb, err));
 }
@@ -909,12 +909,12 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi)
}
 }
 
-static void esp_input_done_esn(struct crypto_async_request *base, int err)
+static void esp_input_done_esn(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
esp_input_restore_header(skb);
-   esp_input_done(base, err);
+   esp_input_done(data, err);
 }
 
 static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)



[dm-devel] [PATCH 10/17] crypto: api - Use data directly in completion function

2023-02-06 Thread Herbert Xu
This patch does the final flag day conversion of all completion
functions which are now all contained in the Crypto API.

Signed-off-by: Herbert Xu 
---

 crypto/adiantum.c  |5 +---
 crypto/af_alg.c|6 ++---
 crypto/ahash.c |   12 +-
 crypto/api.c   |4 +--
 crypto/authenc.c   |   14 +---
 crypto/authencesn.c|   15 +---
 crypto/ccm.c   |9 +++
 crypto/chacha20poly1305.c  |   40 +-
 crypto/cryptd.c|   52 ++---
 crypto/cts.c   |   12 +-
 crypto/dh.c|5 +---
 crypto/essiv.c |8 +++---
 crypto/gcm.c   |   36 ++-
 crypto/hctr2.c |5 +---
 crypto/lrw.c   |4 +--
 crypto/pcrypt.c|4 +--
 crypto/rsa-pkcs1pad.c  |   15 +---
 crypto/seqiv.c |5 +---
 crypto/xts.c   |   12 +-
 drivers/crypto/atmel-sha.c |5 +---
 include/crypto/if_alg.h|4 ---
 include/linux/crypto.h |   10 
 22 files changed, 132 insertions(+), 150 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 84450130cb6b..c33ba22a6638 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -308,10 +308,9 @@ static int adiantum_finish(struct skcipher_request *req)
return 0;
 }
 
-static void adiantum_streamcipher_done(struct crypto_async_request *areq,
-  int err)
+static void adiantum_streamcipher_done(void *data, int err)
 {
-   struct skcipher_request *req = areq->data;
+   struct skcipher_request *req = data;
 
if (!err)
err = adiantum_finish(req);
diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 0a4fa2a429e2..5f7252a5b7b4 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1186,7 +1186,7 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources);
 
 /**
  * af_alg_async_cb - AIO callback handler
- * @_req: async request info
+ * @data: async request completion data
  * @err: if non-zero, error result to be returned via ki_complete();
  *   otherwise return the AIO output length via ki_complete().
  *
@@ -1196,9 +1196,9 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources);
  * The number of bytes to be generated with the AIO operation must be set
  * in areq->outlen before the AIO callback handler is invoked.
  */
-void af_alg_async_cb(struct crypto_async_request *_req, int err)
+void af_alg_async_cb(void *data, int err)
 {
-   struct af_alg_async_req *areq = _req->data;
+   struct af_alg_async_req *areq = data;
struct sock *sk = areq->sk;
struct kiocb *iocb = areq->iocb;
unsigned int resultlen;
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 369447e483cd..5a0f21cb2059 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -240,9 +240,9 @@ static void ahash_restore_req(struct ahash_request *req, 
int err)
kfree_sensitive(subreq);
 }
 
-static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
+static void ahash_op_unaligned_done(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
 
if (err == -EINPROGRESS)
goto out;
@@ -330,9 +330,9 @@ int crypto_ahash_digest(struct ahash_request *req)
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_digest);
 
-static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
+static void ahash_def_finup_done2(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
 
if (err == -EINPROGRESS)
return;
@@ -360,9 +360,9 @@ static int ahash_def_finup_finish1(struct ahash_request 
*req, int err)
return err;
 }
 
-static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
+static void ahash_def_finup_done1(void *data, int err)
 {
-   struct ahash_request *areq = req->data;
+   struct ahash_request *areq = data;
struct ahash_request *subreq;
 
if (err == -EINPROGRESS)
diff --git a/crypto/api.c b/crypto/api.c
index b022702f6436..e67cc63368ed 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -643,9 +643,9 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_alg);
 
-void crypto_req_done(struct crypto_async_request *req, int err)
+void crypto_req_done(void *data, int err)
 {
-   struct crypto_wait *wait = req->data;
+   struct crypto_wait *wait = data;
 
if (err == -EINPROGRESS)
return;
diff --git a/crypto/authenc.c b/crypto/authenc.c
index 17f674a7cdff..3326c7343e86 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -109,9 +109,9 @@ static int crypto_authenc_setkey(struct crypto_aead 
*authenc, const u8 *key,
return err;
 }
 
-static void authenc_geniv_ahash_done(struct

[dm-devel] [PATCH 12/17] net: macsec: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 drivers/net/macsec.c |8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index b7d9d487ccd2..becb04123d3e 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -528,9 +528,9 @@ static void count_tx(struct net_device *dev, int ret, int 
len)
}
 }
 
-static void macsec_encrypt_done(crypto_completion_data_t *data, int err)
+static void macsec_encrypt_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_tx_sa *sa = macsec_skb_cb(skb)->tx_sa;
@@ -835,9 +835,9 @@ static void count_rx(struct net_device *dev, int len)
u64_stats_update_end(&stats->syncp);
 }
 
-static void macsec_decrypt_done(crypto_completion_data_t *data, int err)
+static void macsec_decrypt_done(void *data, int err)
 {
-   struct sk_buff *skb = crypto_get_completion_data(data);
+   struct sk_buff *skb = data;
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_rx_sa *rx_sa = macsec_skb_cb(skb)->rx_sa;
--



[dm-devel] [PATCH 11/17] dm: Remove completion function scaffolding

2023-02-06 Thread Herbert Xu
This patch removes the temporary scaffolding now that the completion
function signature has been converted.

Signed-off-by: Herbert Xu 
---

 drivers/md/dm-crypt.c |6 +++---
 drivers/md/dm-integrity.c |4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 7609fe39ab8c..3aeeb8f2802f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1458,7 +1458,7 @@ static int crypt_convert_block_skcipher(struct 
crypt_config *cc,
return r;
 }
 
-static void kcryptd_async_done(crypto_completion_data_t *async_req, int error);
+static void kcryptd_async_done(void *async_req, int error);
 
 static int crypt_alloc_req_skcipher(struct crypt_config *cc,
 struct convert_context *ctx)
@@ -2146,9 +2146,9 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io 
*io)
crypt_dec_pending(io);
 }
 
-static void kcryptd_async_done(crypto_completion_data_t *data, int error)
+static void kcryptd_async_done(void *data, int error)
 {
-   struct dm_crypt_request *dmreq = crypto_get_completion_data(data);
+   struct dm_crypt_request *dmreq = data;
struct convert_context *ctx = dmreq->ctx;
struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
struct crypt_config *cc = io->cc;
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index eefe25ed841e..c58156deb2b1 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -955,9 +955,9 @@ static void xor_journal(struct dm_integrity_c *ic, bool 
encrypt, unsigned sectio
async_tx_issue_pending_all();
 }
 
-static void complete_journal_encrypt(crypto_completion_data_t *data, int err)
+static void complete_journal_encrypt(void *data, int err)
 {
-   struct journal_completion *comp = crypto_get_completion_data(data);
+   struct journal_completion *comp = data;
if (unlikely(err)) {
if (likely(err == -EINPROGRESS)) {
complete(&comp->ic->crypto_backoff);
--



[dm-devel] [PATCH 9/17] KEYS: DH: Use crypto_wait_req

2023-02-06 Thread Herbert Xu
This patch replaces the custom crypto completion function with
crypto_req_done.
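The pattern being switched to here can be modelled outside the kernel: a helper records the final status (ignoring -EINPROGRESS notifications) and a wrapper turns an -EINPROGRESS or -EBUSY return code into a synchronous wait. The sketch below is a rough userspace model of that DECLARE_CRYPTO_WAIT / crypto_req_done / crypto_wait_req pairing; every `demo_` name is invented for illustration, and the busy-wait loop stands in for sleeping on a struct completion.

```c
#include <assert.h>

/* Minimal stand-in for struct crypto_wait: a done flag plus the
 * final error code of the asynchronous operation. */
struct demo_wait {
	int done;
	int err;
};

#define DECLARE_DEMO_WAIT(w) struct demo_wait w = { 0, 0 }

/* What crypto_req_done boils down to: ignore -EINPROGRESS
 * notifications, record the final status and wake the waiter. */
static void demo_req_done(void *data, int err)
{
	struct demo_wait *wait = data;

	if (err == -115 /* -EINPROGRESS */)
		return;
	wait->err = err;
	wait->done = 1;
}

/* What crypto_wait_req boils down to: a synchronous return code is
 * passed straight through; -EINPROGRESS/-EBUSY means block until the
 * completion callback has fired, then return the recorded status. */
static int demo_wait_req(int err, struct demo_wait *wait)
{
	if (err != -115 /* -EINPROGRESS */ && err != -16 /* -EBUSY */)
		return err;
	while (!wait->done)  /* real code sleeps on a completion */
		;
	return wait->err;
}

int demo(void)
{
	DECLARE_DEMO_WAIT(w);

	/* Simulate an async operation that completes before we wait. */
	demo_req_done(&w, 0);
	return demo_wait_req(-115, &w);
}
```

With this helper pair a caller can drop its private completion struct and callback, which is exactly what the diff below does for security/keys/dh.c.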

Signed-off-by: Herbert Xu 
---

 security/keys/dh.c |   30 +-
 1 file changed, 5 insertions(+), 25 deletions(-)

diff --git a/security/keys/dh.c b/security/keys/dh.c
index b339760a31dd..da64c358474b 100644
--- a/security/keys/dh.c
+++ b/security/keys/dh.c
@@ -64,22 +64,6 @@ static void dh_free_data(struct dh *dh)
kfree_sensitive(dh->g);
 }
 
-struct dh_completion {
-   struct completion completion;
-   int err;
-};
-
-static void dh_crypto_done(struct crypto_async_request *req, int err)
-{
-   struct dh_completion *compl = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   compl->err = err;
-   complete(&compl->completion);
-}
-
 static int kdf_alloc(struct crypto_shash **hash, char *hashname)
 {
struct crypto_shash *tfm;
@@ -146,7 +130,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user 
*params,
struct keyctl_dh_params pcopy;
struct dh dh_inputs;
struct scatterlist outsg;
-   struct dh_completion compl;
+   DECLARE_CRYPTO_WAIT(compl);
struct crypto_kpp *tfm;
struct kpp_request *req;
uint8_t *secret;
@@ -266,22 +250,18 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user 
*params,
 
kpp_request_set_input(req, NULL, 0);
kpp_request_set_output(req, &outsg, outlen);
-   init_completion(&compl.completion);
kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
 CRYPTO_TFM_REQ_MAY_SLEEP,
-dh_crypto_done, &compl);
+crypto_req_done, &compl);
 
/*
 * For DH, generate_public_key and generate_shared_secret are
 * the same calculation
 */
ret = crypto_kpp_generate_public_key(req);
-   if (ret == -EINPROGRESS) {
-   wait_for_completion(&compl.completion);
-   ret = compl.err;
-   if (ret)
-   goto out6;
-   }
+   ret = crypto_wait_req(ret, &compl);
+   if (ret)
+   goto out6;
 
if (kdfcopy) {
/*
--



[dm-devel] [PATCH 7/17] tipc: Add scaffolding to change completion function signature

2023-02-06 Thread Herbert Xu
This patch adds temporary scaffolding so that the Crypto API
completion function can take a void * instead of crypto_async_request.
Once affected users have been converted this can be removed.
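The scaffolding visible in this diff, the `crypto_completion_data_t` typedef plus the `crypto_get_completion_data()` accessor, works because both live in one place and can be redefined together once the flag day arrives. A standalone sketch of the idea follows; all `demo_` names are invented, and only phase 1 is shown (in phase 2 the typedef would flip to `void` and the accessor would become the identity, with no further churn in the callbacks).

```c
#include <assert.h>

/* Stand-in for struct crypto_async_request. */
struct demo_async_request {
	void *data;
};

/* Phase 1 of the flag-day conversion: the typedef still names the old
 * request type, and the accessor digs out ->data.  Callbacks converted
 * to the new spelling therefore keep the old behaviour for now. */
typedef struct demo_async_request demo_completion_data_t;

static void *demo_get_completion_data(demo_completion_data_t *req)
{
	return req->data;
}

/* A callback already converted to the scaffolded signature. */
static void demo_done(demo_completion_data_t *data, int err)
{
	int *out = demo_get_completion_data(data);

	*out = err;
}

int demo(void)
{
	int out = -1;
	struct demo_async_request req = { .data = &out };

	demo_done(&req, 0);
	return out;
}
```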

Signed-off-by: Herbert Xu 
---

 net/tipc/crypto.c |   12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
index d67440de011e..ab356e7a3870 100644
--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -267,10 +267,10 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, 
struct sk_buff *skb,
 struct tipc_bearer *b,
 struct tipc_media_addr *dst,
 struct tipc_node *__dnode);
-static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err);
+static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err);
 static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
 struct sk_buff *skb, struct tipc_bearer *b);
-static void tipc_aead_decrypt_done(struct crypto_async_request *base, int err);
+static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err);
 static inline int tipc_ehdr_size(struct tipc_ehdr *ehdr);
 static int tipc_ehdr_build(struct net *net, struct tipc_aead *aead,
   u8 tx_key, struct sk_buff *skb,
@@ -830,9 +830,9 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct 
sk_buff *skb,
return rc;
 }
 
-static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err)
+static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct tipc_crypto_tx_ctx *tx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
struct tipc_bearer *b = tx_ctx->bearer;
struct tipc_aead *aead = tx_ctx->aead;
@@ -954,9 +954,9 @@ static int tipc_aead_decrypt(struct net *net, struct 
tipc_aead *aead,
return rc;
 }
 
-static void tipc_aead_decrypt_done(struct crypto_async_request *base, int err)
+static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct tipc_crypto_rx_ctx *rx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
struct tipc_bearer *b = rx_ctx->bearer;
struct tipc_aead *aead = rx_ctx->aead;
--



[dm-devel] [PATCH 8/17] tls: Only use data field in crypto completion function

2023-02-06 Thread Herbert Xu
The crypto_async_request passed to the completion is not guaranteed
to be the original request object.  Only the data field can be relied
upon.

Fix this by storing the socket pointer with the AEAD request.
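The layout this fix relies on can be modelled in plain C: the caller allocates the variable-sized request with its own context placed at an aligned offset behind it, and the completion recomputes that same offset to find the context again, exactly as tls_decrypt_done does below with ALIGN and crypto_aead_reqsize(). All `demo_` names and sizes in this sketch are invented for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Invented stand-ins; the sizes are arbitrary for the sketch. */
struct demo_aead_request { char opaque[24]; };
struct demo_decrypt_ctx { int sk_id; };

/* Mirror of the patch's layout: the variable-sized request is
 * followed, suitably aligned, by the caller's own context. */
static size_t demo_ctx_offset(size_t reqsize)
{
	size_t aead_size = sizeof(struct demo_aead_request) + reqsize;
	size_t align = _Alignof(struct demo_decrypt_ctx);

	return (aead_size + align - 1) & ~(align - 1);
}

/* The completion only gets the request pointer; the context is
 * recovered by recomputing the same aligned offset. */
static int demo_done(struct demo_aead_request *req, size_t reqsize)
{
	struct demo_decrypt_ctx *dctx =
		(void *)((uint8_t *)req + demo_ctx_offset(reqsize));

	return dctx->sk_id;
}

int demo(void)
{
	size_t reqsize = 40;  /* pretend crypto_aead_reqsize() result */
	size_t off = demo_ctx_offset(reqsize);
	uint8_t *buf = calloc(1, off + sizeof(struct demo_decrypt_ctx));
	struct demo_aead_request *req = (void *)buf;
	struct demo_decrypt_ctx *dctx = (void *)(buf + off);
	int id;

	dctx->sk_id = 42;        /* caller stashes its context */
	id = demo_done(req, reqsize);  /* completion recovers it */
	free(buf);
	return id;
}
```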

Signed-off-by: Herbert Xu 
---

 net/tls/tls.h|2 ++
 net/tls/tls_sw.c |   40 +---
 2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/net/tls/tls.h b/net/tls/tls.h
index 0e840a0c3437..804c3880d028 100644
--- a/net/tls/tls.h
+++ b/net/tls/tls.h
@@ -70,6 +70,8 @@ struct tls_rec {
char content_type;
struct scatterlist sg_content_type;
 
+   struct sock *sk;
+
char aad_space[TLS_AAD_SPACE_SIZE];
u8 iv_data[MAX_IV_SIZE];
struct aead_request aead_req;
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 9ed978634125..5b7f67a7d394 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -57,6 +58,7 @@ struct tls_decrypt_arg {
 };
 
 struct tls_decrypt_ctx {
+   struct sock *sk;
u8 iv[MAX_IV_SIZE];
u8 aad[TLS_MAX_AAD_SIZE];
u8 tail;
@@ -177,18 +179,25 @@ static int tls_padding_length(struct tls_prot_info *prot, 
struct sk_buff *skb,
return sub;
 }
 
-static void tls_decrypt_done(struct crypto_async_request *req, int err)
+static void tls_decrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct aead_request *aead_req = (struct aead_request *)req;
+   struct aead_request *aead_req = crypto_get_completion_data(data);
+   struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
struct scatterlist *sgout = aead_req->dst;
struct scatterlist *sgin = aead_req->src;
struct tls_sw_context_rx *ctx;
+   struct tls_decrypt_ctx *dctx;
struct tls_context *tls_ctx;
struct scatterlist *sg;
unsigned int pages;
struct sock *sk;
+   int aead_size;
 
-   sk = (struct sock *)req->data;
+   aead_size = sizeof(*aead_req) + crypto_aead_reqsize(aead);
+   aead_size = ALIGN(aead_size, __alignof__(*dctx));
+   dctx = (void *)((u8 *)aead_req + aead_size);
+
+   sk = dctx->sk;
tls_ctx = tls_get_ctx(sk);
ctx = tls_sw_ctx_rx(tls_ctx);
 
@@ -240,7 +249,7 @@ static int tls_do_decryption(struct sock *sk,
if (darg->async) {
aead_request_set_callback(aead_req,
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- tls_decrypt_done, sk);
+ tls_decrypt_done, aead_req);
atomic_inc(&ctx->decrypt_pending);
} else {
aead_request_set_callback(aead_req,
@@ -336,6 +345,8 @@ static struct tls_rec *tls_get_rec(struct sock *sk)
sg_set_buf(&rec->sg_aead_out[0], rec->aad_space, prot->aad_size);
sg_unmark_end(&rec->sg_aead_out[1]);
 
+   rec->sk = sk;
+
return rec;
 }
 
@@ -417,22 +428,27 @@ int tls_tx_records(struct sock *sk, int flags)
return rc;
 }
 
-static void tls_encrypt_done(struct crypto_async_request *req, int err)
+static void tls_encrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct aead_request *aead_req = (struct aead_request *)req;
-   struct sock *sk = req->data;
-   struct tls_context *tls_ctx = tls_get_ctx(sk);
-   struct tls_prot_info *prot = &tls_ctx->prot_info;
-   struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
+   struct aead_request *aead_req = crypto_get_completion_data(data);
+   struct tls_sw_context_tx *ctx;
+   struct tls_context *tls_ctx;
+   struct tls_prot_info *prot;
struct scatterlist *sge;
struct sk_msg *msg_en;
struct tls_rec *rec;
bool ready = false;
+   struct sock *sk;
int pending;
 
rec = container_of(aead_req, struct tls_rec, aead_req);
msg_en = &rec->msg_encrypted;
 
+   sk = rec->sk;
+   tls_ctx = tls_get_ctx(sk);
+   prot = &tls_ctx->prot_info;
+   ctx = tls_sw_ctx_tx(tls_ctx);
+
sge = sk_msg_elem(msg_en, msg_en->sg.curr);
sge->offset -= prot->prepend_size;
sge->length += prot->prepend_size;
@@ -520,7 +536,7 @@ static int tls_do_encryption(struct sock *sk,
   data_len, rec->iv_data);
 
aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- tls_encrypt_done, sk);
+ tls_encrypt_done, aead_req);
 
/* Add the record in tx_list */
list_add_tail((struct list_head *)&rec->list, &ctx->tx_list);
@@ -1485,6 +1501,7 @@ static int tls_decrypt_sg(struct sock *sk, struct 
iov_iter *out_iov,
 * Both structs are variable length.
 */
aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
+   aead_size = ALIGN(aead_size, __alignof__(*dctx));

[dm-devel] [PATCH 5/17] net: ipv4: Add scaffolding to change completion function signature

2023-02-06 Thread Herbert Xu
This patch adds temporary scaffolding so that the Crypto API
completion function can take a void * instead of crypto_async_request.
Once affected users have been converted this can be removed.

Signed-off-by: Herbert Xu 
---

 net/ipv4/ah4.c  |8 
 net/ipv4/esp4.c |   20 ++--
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index ee4e578c7f20..1fc0231eb1ee 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -117,11 +117,11 @@ static int ip_clear_mutable_options(const struct iphdr 
*iph, __be32 *daddr)
return 0;
 }
 
-static void ah_output_done(struct crypto_async_request *base, int err)
+static void ah_output_done(crypto_completion_data_t *data, int err)
 {
u8 *icv;
struct iphdr *iph;
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_state *x = skb_dst(skb)->xfrm;
struct ah_data *ahp = x->data;
struct iphdr *top_iph = ip_hdr(skb);
@@ -262,12 +262,12 @@ static int ah_output(struct xfrm_state *x, struct sk_buff 
*skb)
return err;
 }
 
-static void ah_input_done(struct crypto_async_request *base, int err)
+static void ah_input_done(crypto_completion_data_t *data, int err)
 {
u8 *auth_data;
u8 *icv;
struct iphdr *work_iph;
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_state *x = xfrm_input_state(skb);
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 52c8047efedb..8abe07c1ff28 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -244,9 +244,9 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct 
sk_buff *skb)
 }
 #endif
 
-static void esp_output_done(struct crypto_async_request *base, int err)
+static void esp_output_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct xfrm_offload *xo = xfrm_offload(skb);
void *tmp;
struct xfrm_state *x;
@@ -332,12 +332,12 @@ static struct ip_esp_hdr *esp_output_set_extra(struct 
sk_buff *skb,
return esph;
 }
 
-static void esp_output_done_esn(struct crypto_async_request *base, int err)
+static void esp_output_done_esn(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
esp_output_restore_header(skb);
-   esp_output_done(base, err);
+   esp_output_done(data, err);
 }
 
 static struct ip_esp_hdr *esp_output_udp_encap(struct sk_buff *skb,
@@ -830,9 +830,9 @@ int esp_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp_input_done2);
 
-static void esp_input_done(struct crypto_async_request *base, int err)
+static void esp_input_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
xfrm_input_resume(skb, esp_input_done2(skb, err));
 }
@@ -860,12 +860,12 @@ static void esp_input_set_header(struct sk_buff *skb, 
__be32 *seqhi)
}
 }
 
-static void esp_input_done_esn(struct crypto_async_request *base, int err)
+static void esp_input_done_esn(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
 
esp_input_restore_header(skb);
-   esp_input_done(base, err);
+   esp_input_done(data, err);
 }
 
 /*
--



[dm-devel] [PATCH 4/17] Bluetooth: Use crypto_wait_req

2023-02-06 Thread Herbert Xu
This patch replaces the custom crypto completion function with
crypto_req_done.

Signed-off-by: Herbert Xu 
---

 net/bluetooth/ecdh_helper.c |   37 ++---
 1 file changed, 6 insertions(+), 31 deletions(-)

diff --git a/net/bluetooth/ecdh_helper.c b/net/bluetooth/ecdh_helper.c
index 989401f116e9..0efc93fdae8a 100644
--- a/net/bluetooth/ecdh_helper.c
+++ b/net/bluetooth/ecdh_helper.c
@@ -25,22 +25,6 @@
 #include 
 #include 
 
-struct ecdh_completion {
-   struct completion completion;
-   int err;
-};
-
-static void ecdh_complete(struct crypto_async_request *req, int err)
-{
-   struct ecdh_completion *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static inline void swap_digits(u64 *in, u64 *out, unsigned int ndigits)
 {
int i;
@@ -60,9 +44,9 @@ static inline void swap_digits(u64 *in, u64 *out, unsigned 
int ndigits)
 int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 public_key[64],
u8 secret[32])
 {
+   DECLARE_CRYPTO_WAIT(result);
struct kpp_request *req;
u8 *tmp;
-   struct ecdh_completion result;
struct scatterlist src, dst;
int err;
 
@@ -76,8 +60,6 @@ int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 
public_key[64],
goto free_tmp;
}
 
-   init_completion(&result.completion);
-
swap_digits((u64 *)public_key, (u64 *)tmp, 4); /* x */
swap_digits((u64 *)&public_key[32], (u64 *)&tmp[32], 4); /* y */
 
@@ -86,12 +68,9 @@ int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 
public_key[64],
kpp_request_set_input(req, &src, 64);
kpp_request_set_output(req, &dst, 32);
kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-ecdh_complete, &result);
+crypto_req_done, &result);
err = crypto_kpp_compute_shared_secret(req);
-   if (err == -EINPROGRESS) {
-   wait_for_completion(&result.completion);
-   err = result.err;
-   }
+   err = crypto_wait_req(err, &result);
if (err < 0) {
pr_err("alg: ecdh: compute shared secret failed. err %d\n",
   err);
@@ -165,9 +144,9 @@ int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 
private_key[32])
  */
 int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64])
 {
+   DECLARE_CRYPTO_WAIT(result);
struct kpp_request *req;
u8 *tmp;
-   struct ecdh_completion result;
struct scatterlist dst;
int err;
 
@@ -181,18 +160,14 @@ int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 
public_key[64])
goto free_tmp;
}
 
-   init_completion(&result.completion);
sg_init_one(&dst, tmp, 64);
kpp_request_set_input(req, NULL, 0);
kpp_request_set_output(req, &dst, 64);
kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-ecdh_complete, &result);
+crypto_req_done, &result);
 
err = crypto_kpp_generate_public_key(req);
-   if (err == -EINPROGRESS) {
-   wait_for_completion(&result.completion);
-   err = result.err;
-   }
+   err = crypto_wait_req(err, &result);
if (err < 0)
goto free_all;
 
--



[dm-devel] [PATCH 3/17] fs: ecryptfs: Use crypto_wait_req

2023-02-06 Thread Herbert Xu
This patch replaces the custom crypto completion function with
crypto_req_done.

Signed-off-by: Herbert Xu 
---

 fs/ecryptfs/crypto.c |   30 +++---
 1 file changed, 3 insertions(+), 27 deletions(-)

diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
index e3f5d7f3c8a0..c3057539f088 100644
--- a/fs/ecryptfs/crypto.c
+++ b/fs/ecryptfs/crypto.c
@@ -260,22 +260,6 @@ int virt_to_scatterlist(const void *addr, int size, struct 
scatterlist *sg,
return i;
 }
 
-struct extent_crypt_result {
-   struct completion completion;
-   int rc;
-};
-
-static void extent_crypt_complete(struct crypto_async_request *req, int rc)
-{
-   struct extent_crypt_result *ecr = req->data;
-
-   if (rc == -EINPROGRESS)
-   return;
-
-   ecr->rc = rc;
-   complete(&ecr->completion);
-}
-
 /**
  * crypt_scatterlist
  * @crypt_stat: Pointer to the crypt_stat struct to initialize.
@@ -293,7 +277,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat 
*crypt_stat,
 unsigned char *iv, int op)
 {
struct skcipher_request *req = NULL;
-   struct extent_crypt_result ecr;
+   DECLARE_CRYPTO_WAIT(ecr);
int rc = 0;
 
if (unlikely(ecryptfs_verbosity > 0)) {
@@ -303,8 +287,6 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat 
*crypt_stat,
  crypt_stat->key_size);
}
 
-   init_completion(&ecr.completion);
-
mutex_lock(&crypt_stat->cs_tfm_mutex);
req = skcipher_request_alloc(crypt_stat->tfm, GFP_NOFS);
if (!req) {
@@ -315,7 +297,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat 
*crypt_stat,
 
skcipher_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-   extent_crypt_complete, &ecr);
+   crypto_req_done, &ecr);
/* Consider doing this once, when the file is opened */
if (!(crypt_stat->flags & ECRYPTFS_KEY_SET)) {
rc = crypto_skcipher_setkey(crypt_stat->tfm, crypt_stat->key,
@@ -334,13 +316,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat 
*crypt_stat,
skcipher_request_set_crypt(req, src_sg, dst_sg, size, iv);
rc = op == ENCRYPT ? crypto_skcipher_encrypt(req) :
 crypto_skcipher_decrypt(req);
-   if (rc == -EINPROGRESS || rc == -EBUSY) {
-   struct extent_crypt_result *ecr = req->base.data;
-
-   wait_for_completion(&ecr->completion);
-   rc = ecr->rc;
-   reinit_completion(&ecr->completion);
-   }
+   rc = crypto_wait_req(rc, &ecr);
 out:
skcipher_request_free(req);
return rc;
--



[dm-devel] [PATCH 1/17] dm: Add scaffolding to change completion function signature

2023-02-06 Thread Herbert Xu
This patch adds temporary scaffolding so that the Crypto API
completion function can take a void * instead of crypto_async_request.
Once affected users have been converted this can be removed.

Signed-off-by: Herbert Xu 
---

 drivers/md/dm-crypt.c |8 +++-
 drivers/md/dm-integrity.c |4 ++--
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 2653516bcdef..7609fe39ab8c 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1458,8 +1458,7 @@ static int crypt_convert_block_skcipher(struct 
crypt_config *cc,
return r;
 }
 
-static void kcryptd_async_done(struct crypto_async_request *async_req,
-  int error);
+static void kcryptd_async_done(crypto_completion_data_t *async_req, int error);
 
 static int crypt_alloc_req_skcipher(struct crypt_config *cc,
 struct convert_context *ctx)
@@ -2147,10 +2146,9 @@ static void kcryptd_crypt_read_convert(struct 
dm_crypt_io *io)
crypt_dec_pending(io);
 }
 
-static void kcryptd_async_done(struct crypto_async_request *async_req,
-  int error)
+static void kcryptd_async_done(crypto_completion_data_t *data, int error)
 {
-   struct dm_crypt_request *dmreq = async_req->data;
+   struct dm_crypt_request *dmreq = crypto_get_completion_data(data);
struct convert_context *ctx = dmreq->ctx;
struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
struct crypt_config *cc = io->cc;
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 1388ee35571e..eefe25ed841e 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -955,9 +955,9 @@ static void xor_journal(struct dm_integrity_c *ic, bool 
encrypt, unsigned sectio
async_tx_issue_pending_all();
 }
 
-static void complete_journal_encrypt(struct crypto_async_request *req, int err)
+static void complete_journal_encrypt(crypto_completion_data_t *data, int err)
 {
-   struct journal_completion *comp = req->data;
+   struct journal_completion *comp = crypto_get_completion_data(data);
if (unlikely(err)) {
if (likely(err == -EINPROGRESS)) {
complete(&comp->ic->crypto_backoff);
--



[dm-devel] [PATCH 2/17] net: macsec: Add scaffolding to change completion function signature

2023-02-06 Thread Herbert Xu
This patch adds temporary scaffolding so that the Crypto API
completion function can take a void * instead of crypto_async_request.
Once affected users have been converted this can be removed.

Signed-off-by: Herbert Xu 
---

 drivers/net/macsec.c |8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index bf8ac7a3ded7..b7d9d487ccd2 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -528,9 +528,9 @@ static void count_tx(struct net_device *dev, int ret, int 
len)
}
 }
 
-static void macsec_encrypt_done(struct crypto_async_request *base, int err)
+static void macsec_encrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_tx_sa *sa = macsec_skb_cb(skb)->tx_sa;
@@ -835,9 +835,9 @@ static void count_rx(struct net_device *dev, int len)
u64_stats_update_end(&stats->syncp);
 }
 
-static void macsec_decrypt_done(struct crypto_async_request *base, int err)
+static void macsec_decrypt_done(crypto_completion_data_t *data, int err)
 {
-   struct sk_buff *skb = base->data;
+   struct sk_buff *skb = crypto_get_completion_data(data);
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_rx_sa *rx_sa = macsec_skb_cb(skb)->rx_sa;
--



[dm-devel] [PATCH 0/17] crypto: api - Change completion callback argument to void star

2023-02-06 Thread Herbert Xu
Hi:

The crypto completion function currently takes a pointer to a
struct crypto_async_request object.  However, in reality the API
does not allow the use of any part of the object apart from the
data field.  For example, ahash/shash will create a fake object
on the stack to pass along a different data field.

This leads to potential bugs where the user may try to dereference
or otherwise use the crypto_async_request object.

This series changes the completion function to take a void *
argument instead of crypto_async_request.
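As a standalone illustration of the signature change (all `demo_` names here are invented; this is not kernel code):

```c
#include <assert.h>

/* Old-style request object as seen by a completion function. */
struct demo_async_request {
	void *data;          /* the only field a completion may rely on */
	int internal_state;  /* NOT safe to touch from a completion */
};

/* Old signature: receives the request object, but the API contract
 * only permits reading req->data, which is easy to get wrong. */
static void old_done(struct demo_async_request *req, int err)
{
	int *result = req->data;

	*result = err;
}

/* New signature: the API hands over the data pointer directly, so a
 * completion cannot accidentally dereference the request object. */
static void new_done(void *data, int err)
{
	int *result = data;

	*result = err;
}

int demo(void)
{
	int old_result = -1, new_result = -1;
	struct demo_async_request req = { .data = &old_result };

	old_done(&req, 0);        /* old convention */
	new_done(&new_result, 0); /* new convention */
	return (old_result == 0 && new_result == 0) ? 0 : 1;
}
```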

This series touches code in a number of different subsystems.
Most of them are trivial except for tls which was actually buggy
as it did exactly what was described above.

I'd like to pull all the changes through the crypto tree.  But
feel free to object if you'd like the relevant patches to go
through your trees instead and I'll split this up.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--



Re: [dm-devel] Intel QAT on A2SDi-8C-HLN4F causes massive data corruption with dm-crypt + xfs

2022-03-16 Thread Herbert Xu
On Wed, Mar 16, 2022 at 02:38:10PM -0700, Kyle Sanderson wrote:
> > Makes sense. I'm going to send it upstream and Cc stable as documented
> > in 
> > https://www.kernel.org/doc/html/v4.10/process/stable-kernel-rules.html#option-1
> > I will then revert this change in the set that fixes the problem.
> 
> Did this go anywhere? I'm still not seeing it in any of the stable trees.

It's in the cryptodev tree which should hit mainline when the merge
window opens.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--



Re: [dm-devel] Intel QAT on A2SDi-8C-HLN4F causes massive data corruption with dm-crypt + xfs

2022-03-02 Thread Herbert Xu
On Wed, Mar 02, 2022 at 10:42:20PM +, Giovanni Cabiddu wrote:
>
> I was thinking, as an alternative, to lower the cra_priority in the QAT
> driver for the algorithms used by dm-crypt so they are not used by
> default.
> Is that a viable option?

Yes I think that should work too.

Thanks,



Re: [dm-devel] Intel QAT on A2SDi-8C-HLN4F causes massive data corruption with dm-crypt + xfs

2022-03-02 Thread Herbert Xu
On Wed, Mar 02, 2022 at 03:56:36PM +0100, Greg KH wrote:
>
> > If not, then these are the patches that should be backported:
> > 7bcb2c99f8ed crypto: algapi - use common mechanism for inheriting flags
> > 2eb27c11937e crypto: algapi - add NEED_FALLBACK to INHERITED_FLAGS
> > fbb6cda44190 crypto: algapi - introduce the flag 
> > CRYPTO_ALG_ALLOCATES_MEMORY
> > b8aa7dc5c753 crypto: drivers - set the flag CRYPTO_ALG_ALLOCATES_MEMORY
> > cd74693870fb dm crypt: don't use drivers that have 
> > CRYPTO_ALG_ALLOCATES_MEMORY
> > Herbert, correct me if I'm wrong here.
> 
> These need to be manually backported as they do not apply cleanly.  Can
> you provide such a set?  Or should I just disable a specific driver here
> instead which would be easier overall?

I think the safest thing is to disable qat in stable (possibly only
when DM_CRYPT is enabled/modular).  The patches in question, while
good, may have too wide an effect for the stable kernel series.

Giovanni, could you send Greg a Kconfig patch to do that?

Thanks,



Re: [dm-devel] Intel QAT on A2SDi-8C-HLN4F causes massive data corruption with dm-crypt + xfs

2022-02-28 Thread Herbert Xu
On Mon, Feb 28, 2022 at 05:12:20PM -0800, Linus Torvalds wrote:
> 
> It sounds like it was incidental and almost accidental that it fixed
> that thing, and nobody realized it should perhaps be also moved to
> stable.

Yes, this was incidental.  The patch in question fixes an issue in
OOM situations, where drivers that must allocate memory on each
request can deadlock, so it's not really targeted at qat.

Cheers,



Re: [dm-devel] Intel QAT on A2SDi-8C-HLN4F causes massive data corruption with dm-crypt + xfs

2022-02-28 Thread Herbert Xu
On Mon, Feb 28, 2022 at 08:39:11PM +, Giovanni Cabiddu wrote:
>
> The dm-crypt + QAT use-case is already disabled since kernel 5.10 due to
> a different issue.

Indeed, qat has been disabled for dm-crypt since

commit b8aa7dc5c7535f9abfca4bceb0ade9ee10cf5f54
Author: Mikulas Patocka 
Date:   Thu Jul 9 23:20:41 2020 -0700

crypto: drivers - set the flag CRYPTO_ALG_ALLOCATES_MEMORY

So this should no longer be an issue with an up-to-date kernel.

Cheers,



[dm-devel] [PATCH] crypto: scatterwalk - Remove obsolete PageSlab check

2021-06-24 Thread Herbert Xu
On Fri, Jun 18, 2021 at 11:12:58AM -0700, Ira Weiny wrote:
>
> Interesting!  Thanks!
> 
> Digging around a bit more I found:
> 
> https://lore.kernel.org/patchwork/patch/439637/

Nice find.  So we can at least get rid of the PageSlab call from
the Crypto API.

---8<---
As it is now legal to call flush_dcache_page on slab pages we
no longer need to do the check in the Crypto API.

Reported-by: Ira Weiny 
Signed-off-by: Herbert Xu 

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index c837d0775474..7af08174a721 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -81,12 +81,7 @@ static inline void scatterwalk_pagedone(struct scatter_walk 
*walk, int out,
struct page *page;
 
page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT);
-   /* Test ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE first as
-* PageSlab cannot be optimised away per se due to
-* use of volatile pointer.
-*/
-   if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE && !PageSlab(page))
-   flush_dcache_page(page);
+   flush_dcache_page(page);
}
 
if (more && walk->offset >= walk->sg->offset + walk->sg->length)



Re: [dm-devel] [PATCH 01/18] mm: add a kunmap_local_dirty helper

2021-06-17 Thread Herbert Xu
On Thu, Jun 17, 2021 at 08:01:57PM -0700, Ira Weiny wrote:
>
> > +   flush_kernel_dcache_page(__page);   \
> 
> Is this required on 32bit systems?  Why is kunmap_flush_on_unmap() not
> sufficient on 64bit systems?  The normal kunmap_local() path does that.
> 
> I'm sorry but I did not see a conclusion to my query on V1. Herbert implied 
> that
> he just copied from the crypto code.[1]  I'm concerned that this _dirty() call
> is just going to confuse the users of kmap even more.  So why can't we get to
> the bottom of why flush_kernel_dcache_page() needs so much logic around it
> before complicating the general kernel users.
> 
> I would like to see it go away if possible.

This thread may be related:

https://lwn.net/Articles/240249/

Cheers,



Re: [dm-devel] [PATCH 09/16] ps3disk: use memcpy_{from,to}_bvec

2021-06-14 Thread Herbert Xu
On Fri, Jun 11, 2021 at 09:07:43PM -0700, Ira Weiny wrote:
>
> More recently this was added:
> 
> 7e34e0bbc644 crypto: omap-crypto - fix userspace copied buffer access
> 
> I'm CC'ing Tero and Herbert to see why they added the SLAB check.

Probably because the generic Crypto API has the same check.  This
all goes back to

commit 4f3e797ad07d52d34983354a77b365dfcd48c1b4
Author: Herbert Xu 
Date:   Mon Feb 9 14:22:14 2009 +1100

crypto: scatterwalk - Avoid flush_dcache_page on slab pages

It's illegal to call flush_dcache_page on slab pages on a number
of architectures.  So this patch avoids doing so if PageSlab is
true.

In future we can move the flush_dcache_page call to those page
cache users that actually need it.

Reported-by: David S. Miller 
    Signed-off-by: Herbert Xu 

But I can't find any emails discussing this so let me ask Dave
directly and see if he can tell us what the issue was or might
have been.

Thanks,



Re: [dm-devel] dm-crypt with no_read_workqueue and no_write_workqueue + btrfs scrub = BUG()

2020-12-23 Thread Herbert Xu
On Wed, Dec 23, 2020 at 04:37:34PM +0100, Maciej S. Szmigiero wrote:
> 
> It looks like to me that the skcipher API might not be safe to
> call from a softirq context, after all.

skcipher is safe to use in a softirq.  The problem is only in
dm-crypt where it tries to allocate memory with GFP_NOIO.

Cheers,



Re: [dm-devel] [PATCH 3/4] dm crypt: switch to EBOIV crypto API template

2020-10-28 Thread Herbert Xu
On Wed, Oct 28, 2020 at 01:41:28PM +0200, Gilad Ben-Yossef wrote:
>
> Sorry if I'm being daft, but what did you refer to be "an existing
> option"? there was no CONFIG_EBOIV before my patchset, it was simply
> built as part of dm-crypt so it seems that setting CONFIG_EBOIV
> default to dm-crypto Kconfig option value does solves the problem, or
> have I missed something?

Oh I'm mistaken then.  I thought it was an existing option.  If
it's a new option then a default depending on dm-crypt should be
sufficient.

Thanks,



Re: [dm-devel] [PATCH 3/4] dm crypt: switch to EBOIV crypto API template

2020-10-26 Thread Herbert Xu
On Mon, Oct 26, 2020 at 11:39:36AM -0700, Eric Biggers wrote:
> 
> CONFIG_DM_CRYPT can either select every weird combination of algorithms anyone
> can ever be using, or it can select some defaults and require any other needed
> algorithms to be explicitly selected.
> 
> In reality, dm-crypt has never even selected any particular block ciphers, 
> even
> AES.  Nor has it ever selected XTS.  So it's actually always made users (or
> kernel distributors) explicitly select algorithms.  Why the Bitlocker support
> suddenly different?
> 
> I'd think a lot of dm-crypt users don't want to bloat their kernels with 
> random
> legacy algorithms.

The point is that people rebuilding their kernel can end up with a
broken system.  Just set a default on EBOIV if dm-crypt is on.

Cheers,



Re: [dm-devel] [PATCH 3/4] dm crypt: switch to EBOIV crypto API template

2020-10-26 Thread Herbert Xu
On Tue, Oct 27, 2020 at 05:41:55AM +1100, Herbert Xu wrote:
> 
> The point is that people rebuilding their kernel can end up with a
> broken system.  Just set a default on EBOIV if dm-crypt is on.

That's not enough as it's an existing option.  So we need to
add a new Kconfig option with a default equal to dm-crypt.
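
A hedged sketch of what such an entry could look like (the symbol name, prompt, and selects are hypothetical; the real option may differ):

```kconfig
# Hypothetical sketch: a new Kconfig symbol whose default tracks
# dm-crypt, so a rebuilt kernel keeps EBOIV available.
config CRYPTO_EBOIV
	tristate "EBOIV support for block encryption"
	select CRYPTO_CBC
	default DM_CRYPT
```

With `default DM_CRYPT`, an existing config that has dm-crypt built in or modular picks the template up automatically on `make oldconfig`, which is the "broken system after rebuild" concern being addressed here.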

Cheers,



Re: [dm-devel] [PATCH v2 0/7] crypto: add CRYPTO_ALG_ALLOCATES_MEMORY

2020-07-22 Thread Herbert Xu
On Fri, Jul 17, 2020 at 05:42:43PM +0300, Horia Geantă wrote:
>
> Looks like there's no mention of a limit on src, dst scatterlists size
> that crypto implementations could use when pre-allocating memory
> and crypto users needing CRYPTO_ALG_ALLOCATES_MEMORY should be aware of
> (for the contract to be honoured):
> https://lore.kernel.org/linux-crypto/780cb500-2241-61bc-eb44-6f872ad56...@nxp.com

Good point.  I think we should limit this flag only to the cases
applicable to dm-crypt, which seems to be 4 entries maximum.

Anything else should be allowed to allocate extra memory as needed.

Thanks,

Re: [dm-devel] [PATCH v2 0/7] crypto: add CRYPTO_ALG_ALLOCATES_MEMORY

2020-07-16 Thread Herbert Xu
|   3 +-
> drivers/crypto/picoxcell_crypto.c |  17 ++-
> drivers/crypto/qat/qat_common/qat_algs.c  |  13 +-
> drivers/crypto/qce/skcipher.c |   1 +
> drivers/crypto/talitos.c  | 117 --
> drivers/crypto/virtio/virtio_crypto_algs.c|   3 +-
> drivers/crypto/xilinx/zynqmp-aes-gcm.c|   1 +
> drivers/md/dm-crypt.c         |  17 ++-
> include/crypto/algapi.h   |  25 ++--
> include/crypto/internal/geniv.h   |   2 +-
> include/linux/crypto.h|  36 +-
> 62 files changed, 562 insertions(+), 405 deletions(-)

Patches 1-6 applied.  Thanks.



Re: [dm-devel] [PATCH 2/6] crypto: algapi - use common mechanism for inheriting flags

2020-07-10 Thread Herbert Xu
On Thu, Jul 09, 2020 at 11:24:03PM -0700, Eric Biggers wrote:
>
> I decided to make crypto_check_attr_type() return the mask instead, and do so
> via a pointer argument instead of the return value (so that we don't overload 
> an
> errno return value and prevent flag 0x8000 from working).
> Please take a look at v2.  Thanks!

Looks good.  Thanks!



Re: [dm-devel] [PATCH 2/6] crypto: algapi - use common mechanism for inheriting flags

2020-07-08 Thread Herbert Xu
Eric Biggers  wrote:
>
> @@ -875,14 +873,21 @@ static void cbcmac_exit_tfm(struct crypto_tfm *tfm)
> 
> static int cbcmac_create(struct crypto_template *tmpl, struct rtattr **tb)
> {
> +   struct crypto_attr_type *algt;
>struct shash_instance *inst;
>struct crypto_cipher_spawn *spawn;
>struct crypto_alg *alg;
> +   u32 mask;
>int err;
> 
> -   err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH);
> -   if (err)
> -   return err;
> +   algt = crypto_get_attr_type(tb);
> +   if (IS_ERR(algt))
> +   return PTR_ERR(algt);
> +
> +   if ((algt->type ^ CRYPTO_ALG_TYPE_SHASH) & algt->mask)
> +   return -EINVAL;
> +
> +   mask = crypto_algt_inherited_mask(algt);

How about moving the types check into crypto_algt_inherited_mask,
e.g.,

u32 mask;
int err;

err = crypto_algt_inherited_mask(tb, CRYPTO_ALG_TYPE_SHASH);
if (err < 0)
return err;

mask = err;

This could then be used to simplify other templates too, such as
gcm.

Thanks,



Re: [dm-devel] [PATCH 1/3 v6] crypto: introduce the flag CRYPTO_ALG_ALLOCATES_MEMORY

2020-06-30 Thread Herbert Xu
On Tue, Jun 30, 2020 at 02:15:55PM -0400, Mikulas Patocka wrote:
>
> Index: linux-2.6/crypto/pcrypt.c
> ===
> --- linux-2.6.orig/crypto/pcrypt.c2020-06-29 16:03:07.346417000 +0200
> +++ linux-2.6/crypto/pcrypt.c 2020-06-30 20:11:56.636417000 +0200
> @@ -225,19 +225,21 @@ static int pcrypt_init_instance(struct c
>   return 0;
>  }
>  
> -static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr 
> **tb,
> -   u32 type, u32 mask)
> +static int pcrypt_create_aead(struct crypto_template *tmpl, struct rtattr 
> **tb)
>  {

Rather than removing these two arguments, I think you should pass
along algt instead.

>   struct pcrypt_instance_ctx *ctx;
>   struct crypto_attr_type *algt;
>   struct aead_instance *inst;
>   struct aead_alg *alg;
> + u32 mask;
>   int err;
>  
>   algt = crypto_get_attr_type(tb);
>   if (IS_ERR(algt))
>   return PTR_ERR(algt);

Then we could remove this bit.

Cheers,



Re: [dm-devel] [PATCH v2] dm crypt: add flags to optionally bypass dm-crypt workqueues

2020-06-29 Thread Herbert Xu
On Tue, Jun 30, 2020 at 02:51:17AM +, Damien Le Moal wrote:
>
> > @@ -1463,12 +1465,12 @@ static void crypt_alloc_req_skcipher(struct 
> > crypt_config *cc,
> >  * requests if driver request queue is full.
> >  */
> > skcipher_request_set_callback(ctx->r.req,
> > -   CRYPTO_TFM_REQ_MAY_BACKLOG,
> > +   nobacklog ? 0 : CRYPTO_TFM_REQ_MAY_BACKLOG,
> > kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
> 
> Will not specifying CRYPTO_TFM_REQ_MAY_BACKLOG always cause the crypto API to
> return -EBUSY ? From the comment above the skcipher_request_set_callback(), it
> seems that this will be the case only if the skcipher diver queue is full. So 
> in
> other word, keeping the kcryptd_async_done() callback and executing the 
> skcipher
> request through crypt_convert() and crypt_convert_block_skcipher() may still 
> end
> up being an asynchronous operation. Can you confirm this and is it what you
> intended to implement ?

The purpose of MAY_BACKLOG is to make the crypto request reliable.
It has nothing to do with whether the request will be synchronous
or not.

Without the backlog flag, if the hardware queue is full the request
will simply be dropped, which is appropriate in the network stack
with IPsec where congestion can be dealt with at the source.

Block layer on the other hand should always use the backlog flag
and stop sending more requests to the crypto API until the congestion
goes away.

Cheers,



Re: [dm-devel] [PATCH 1/2] cpt-crypto: don't sleep if CRYPTO_TFM_REQ_MAY_SLEEP was not specified

2020-06-26 Thread Herbert Xu
On Wed, Jun 17, 2020 at 09:48:56AM -0400, Mikulas Patocka wrote:
> There is this call chain:
> cvm_encrypt -> cvm_enc_dec -> cptvf_do_request -> process_request -> kzalloc
> where we call a sleeping allocator function even if CRYPTO_TFM_REQ_MAY_SLEEP 
> was not specified.
> 
> Signed-off-by: Mikulas Patocka 
> Cc: sta...@vger.kernel.org # v4.11+
> Fixes: c694b233295b ("crypto: cavium - Add the Virtual Function driver for 
> CPT")
> 
> ---
>  drivers/crypto/cavium/cpt/cptvf_algs.c   |1 +
>  drivers/crypto/cavium/cpt/cptvf_reqmanager.c |   12 ++--
>  drivers/crypto/cavium/cpt/request_manager.h  |2 ++
>  3 files changed, 9 insertions(+), 6 deletions(-)

All applied.  Thanks.



Re: [dm-devel] [PATCH 1/3] crypto: pass the flag CRYPTO_ALG_ALLOCATES_MEMORY

2020-06-25 Thread Herbert Xu
On Wed, Jun 17, 2020 at 11:09:28AM -0400, Mikulas Patocka wrote:
>
> Index: linux-2.6/include/linux/crypto.h
> ===
> --- linux-2.6.orig/include/linux/crypto.h
> +++ linux-2.6/include/linux/crypto.h
> @@ -97,9 +97,18 @@
>  #define CRYPTO_ALG_OPTIONAL_KEY  0x4000
>  
>  /*
> + * The driver may allocate memory during request processing, so it shouldn't 
> be
> + * used in cases where memory allocation failures aren't acceptable, such as
> + * during block device encryption.
> + */
> +#define CRYPTO_ALG_ALLOCATES_MEMORY  0x8000
> +
> +/*
>   * Don't trigger module loading
>   */
> -#define CRYPTO_NOLOAD  0x8000
> +#define CRYPTO_NOLOAD  0x0001
> +
> +#define CRYPTO_ALG_INHERITED_FLAGS   (CRYPTO_ALG_ASYNC | 
> CRYPTO_ALG_ALLOCATES_MEMORY)

Any reason why you need to renumber NOLOAD? If not please keep
the existing values.

Thanks,



Re: [dm-devel] [RFC PATCH 0/1] dm-crypt excessive overhead

2020-06-23 Thread Herbert Xu
On Tue, Jun 23, 2020 at 05:24:39PM +0100, Ignat Korchagin wrote:
> 
> I may be misunderstanding the terminology, but tasklets execute in
> soft IRQ, don't they? What we care about is to execute the decryption
> as fast as possible, but we can't do it in a hard IRQ context (that
> is, the interrupt context where other interrupts are disabled). As far
> as I understand, tasklets are executed right after the hard IRQ
> context, but with interrupts enabled - which is the first safe-ish
> place to do more lengthy processing without the risk of missing an
> interrupt.

Yes you are absolutely right.  In general high-performance work
should be carried out in softirq context.  That's how the networking
stack works for example.

Cheers,



Re: [dm-devel] [RFC PATCH 0/1] dm-crypt excessive overhead

2020-06-19 Thread Herbert Xu
On Fri, Jun 19, 2020 at 02:39:39PM -0400, Mikulas Patocka wrote:
>
> I'm looking at this and I'd like to know why does the crypto API fail in 
> hard-irq context and why does it work in tasklet context. What's the exact 
> reason behind this?

You're not supposed to do any real work in IRQ handlers.  All
the substantial work should be postponed to softirq context.

Why do you need to do work in hard IRQ context?

Cheers,



Re: [dm-devel] crypto API and GFP_ATOMIC

2020-06-10 Thread Herbert Xu
On Wed, Jun 10, 2020 at 08:02:23AM -0400, Mikulas Patocka wrote:
>
> Yes, fixing the drivers would be the best - but you can hardly find any 
> person who has all the crypto hardware and who is willing to rewrite all 
> the drivers for it.

We don't have to rewrite them straight away.  We could mark the
known broken ones (or the known working ones) and then dm-crypt
can allocate only those using the types/mask to crypto_alloc.

Cheers,



Re: [dm-devel] crypto API and GFP_ATOMIC

2020-06-09 Thread Herbert Xu
On Tue, Jun 09, 2020 at 01:11:05PM -0400, Mikulas Patocka wrote:
>
> Do you have another idea how to solve this problem?

I think the better approach would be to modify the drivers to not
allocate any memory.  In general, any memory needed by the driver
to fulfil a request *should* be allocated within the crypto request
object.  That's why we have the reqsize field to indicate how much
memory could be needed per request.

Thanks,



Re: [dm-devel] [PATCH 1/4] qat: fix misunderstood -EBUSY return code

2020-06-03 Thread Herbert Xu
On Wed, Jun 03, 2020 at 04:31:54AM -0400, Mikulas Patocka wrote:
>
> > Should we just retry a number of times and then fail?
> 
> It's better to get stuck in an infinite loop than to cause random I/O 
> errors. The infinite loop requires reboot, but it doesn't damage data on 
> disks.
> 
> The proper solution would be to add the request to a queue and process the 
> queue when some other request ended - but it would need substantial 
> rewrite of the driver. Do you want to rewrite it using a queue?
> 
> > Or, should we just move to the crypto-engine?
> 
> What do you mean by the crypto-engine?

crypto-engine is the generic queueing mechanism that any crypto
driver can use to implement the queueing.

Thanks,



Re: [dm-devel] [PATCH v13 0/6] crypto: switch to crypto API for ESSIV generation

2019-08-30 Thread Herbert Xu
 my
>   testing
> - some cosmetic tweaks to the ESSIV template skcipher setkey function
>   to align it with the aead one
> - add a test case for essiv(cbc(aes),aes,sha256)
> - add an accelerated implementation for arm64 that combines the IV
>   derivation and the actual en/decryption in a single asm routine
> 
> Code can be found here
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=essiv-v11
> 
> Cc: Herbert Xu 
> Cc: Eric Biggers 
> Cc: dm-devel@redhat.com
> Cc: linux-fscr...@vger.kernel.org
> Cc: Gilad Ben-Yossef 
> Cc: Milan Broz 
> 
> Ard Biesheuvel (6):
>   crypto: essiv - create wrapper template for ESSIV generation
>   crypto: essiv - add tests for essiv in cbc(aes)+sha256 mode
>   crypto: arm64/aes-cts-cbc - factor out CBC en/decryption of a walk
>   crypto: arm64/aes - implement accelerated ESSIV/CBC mode
>   md: dm-crypt: switch to ESSIV crypto API template
>   md: dm-crypt: omit parsing of the encapsulated cipher
> 
>  arch/arm64/crypto/aes-glue.c  | 206 --
>  arch/arm64/crypto/aes-modes.S |  28 +
>  crypto/Kconfig|  28 +
>  crypto/Makefile   |   1 +
>  crypto/essiv.c| 663 ++++++++
>  crypto/tcrypt.c   |   9 +
>  crypto/testmgr.c  |  14 +
>  crypto/testmgr.h  | 497 +++
>  drivers/md/Kconfig|   1 +
>  drivers/md/dm-crypt.c | 271 ++--
>  10 files changed, 1448 insertions(+), 270 deletions(-)
>  create mode 100644 crypto/essiv.c

Patches 2-4 applied.  Thanks.


Re: [dm-devel] [PATCH v13 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-08-29 Thread Herbert Xu
On Mon, Aug 19, 2019 at 05:17:33PM +0300, Ard Biesheuvel wrote:
> Implement a template that wraps a (skcipher,shash) or (aead,shash) tuple
> so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and
> move it into the crypto API. This will result in better test coverage, and
> will allow future changes to make the bare cipher interface internal to the
> crypto subsystem, in order to increase robustness of the API against misuse.
> 
> Signed-off-by: Ard Biesheuvel 
> ---
>  crypto/Kconfig  |  28 +
>  crypto/Makefile |   1 +
>  crypto/essiv.c  | 663 
>  3 files changed, 692 insertions(+)

Acked-by: Herbert Xu 


Re: [dm-devel] [PATCH 6/9] dm crypt: support diskcipher

2019-08-22 Thread Herbert Xu
On Fri, Aug 23, 2019 at 01:20:37PM +0900, boojin.kim wrote:
>
> If yes, I think the following API needs to be added to skcipher:  
> - _set(): BIO submitter (dm-crypt, f2fs, ext4) sets cipher to BIO.
> - _mergeable(): Block layer checks if two BIOs have the same cipher.
> - _get(): Storage driver gets cipher from BIO.
> - _set_crypt(): Storage driver gets crypto information from cipher and 
> writes it on the descriptor of Storage controller.
> Is it acceptable to skcipher ?

No.  If you're after total offload then the crypto API is not for
you.  What we can support is the offloading of encryption/decryption
over many sectors.

Cheers,


Re: [dm-devel] [PATCH 6/9] dm crypt: support diskcipher

2019-08-21 Thread Herbert Xu
On Wed, Aug 21, 2019 at 04:57:41PM +0900, boojin.kim wrote:
>
> Can you tell me which patch you mentioned? Is this?
> https://patches.linaro.org/project/linux-crypto/list/?series=22762

Yes this is the one.

Cheers,


Re: [PATCH 6/9] dm crypt: support diskcipher

2019-08-21 Thread Herbert Xu
On Wed, Aug 21, 2019 at 09:13:36AM +0200, Milan Broz wrote:
>
> NACK.
> 
> The whole principle of dm-crypt target is that it NEVER EVER submits
> plaintext data down the stack in bio.
> 
> If you want to do some lower/higher layer encryption, use key management
> on a different layer.
> So here, just setup encryption for fs, do not stack it with dm-crypt.
> 
> Also, dm-crypt is software-independent solution
> (software-based full disk encryption), it must not depend on
> any underlying hardware.
> Hardware can be of course used used for acceleration, but then
> just implement proper crypto API module that accelerates particular cipher.

I agree.  Please take a look at the recent ESSIV patches on
linux-crypto and build multi-block operations on top of them
which can then be implemented by the hardware.

Cheers,


Re: [PATCH v12 1/4] crypto: essiv - create wrapper template for ESSIV generation

2019-08-19 Thread Herbert Xu
On Thu, Aug 15, 2019 at 10:28:55PM +0300, Ard Biesheuvel wrote:
>
> + /* Synchronous hash, e.g., "sha256" */
> + ictx->hash = crypto_alloc_shash(shash_name, 0, 0);
> + if (IS_ERR(ictx->hash)) {
> + err = PTR_ERR(ictx->hash);
> + goto out_drop_skcipher;
> + }

Holding a reference to this algorithm for the life-time of the
instance is not nice.  How about just doing a lookup as you were
doing before with crypto_alg_mod_lookup and getting the cra_name
from that?

Thanks,


Re: [PATCH v11 1/4] crypto: essiv - create wrapper template for ESSIV generation

2019-08-15 Thread Herbert Xu
On Thu, Aug 15, 2019 at 08:15:29AM +0300, Ard Biesheuvel wrote:
> 
> So what about checking that the cipher key size matches the shash
> digest size, or that the cipher block size matches the skcipher IV
> size? This all moves to the TFM init function?

I don't think you need to check those things.  If the shash produces
an incorrect key size the setkey will just fail naturally.  As to
the block size matching the IV size, in the kernel it's not actually
possible to get an underlying cipher with different block size
than the cbc mode that you used to derive it.

The size checks that we have in general are to stop people from
making crazy combinations such as lrw(des3_ede), it's not there
to test the correctness of a given implementation.  That is,
we assume that whoever provides "aes" will give it the correct
geometry for it.

Sure we haven't made it explicit (which we should at some point)
but as it stands, it can only occur if we have a bug or someone
loads a malicious kernel module in which case none of this matters.

> Are there any existing templates that use this approach?

I'm not sure of any templates doing this, but it is similar to fallbacks.
In fact we don't check any geometry on the fallbacks at all.

Cheers,


Re: [dm-devel] [PATCH v11 1/4] crypto: essiv - create wrapper template for ESSIV generation

2019-08-14 Thread Herbert Xu
On Thu, Aug 15, 2019 at 07:59:34AM +0300, Ard Biesheuvel wrote:
>
> So how do I ensure that the cipher and shash are actually loaded at
> create() time, and that they are still loaded at TFM init time?

If they're not available at TFM init then you just fail the init
and therefore the TFM allocation.  What is the problem?

IOW the presence of the block cipher and unkeyed hash does not
affect the creation of the instance object.

Cheers,


Re: [dm-devel] [PATCH v11 1/4] crypto: essiv - create wrapper template for ESSIV generation

2019-08-14 Thread Herbert Xu
On Wed, Aug 14, 2019 at 07:37:43PM +0300, Ard Biesheuvel wrote:
>
> + /* Block cipher, e.g., "aes" */
> + crypto_set_spawn(&ictx->essiv_cipher_spawn, inst);
> + err = crypto_grab_spawn(&ictx->essiv_cipher_spawn, essiv_cipher_name,
> + CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
> + if (err)
> + goto out_drop_skcipher;
> + essiv_cipher_alg = ictx->essiv_cipher_spawn.alg;
> +
> + /* Synchronous hash, e.g., "sha256" */
> + _hash_alg = crypto_alg_mod_lookup(shash_name,
> +   CRYPTO_ALG_TYPE_SHASH,
> +   CRYPTO_ALG_TYPE_MASK);
> + if (IS_ERR(_hash_alg)) {
> + err = PTR_ERR(_hash_alg);
> + goto out_drop_essiv_cipher;
> + }
> + hash_alg = __crypto_shash_alg(_hash_alg);
> + err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, inst);
> + if (err)
> + goto out_put_hash;

I wouldn't use spawns for these two algorithms.  The point of
spawns is mainly to serve as a notification channel so we can
tear down the top-level instance when a better underlying spawn
implementation is added to the system.

For these two algorithms, we don't really care about their performance
to do such a tear-down since they only operate on small pieces of
data.

Therefore just keep things simple and allocate them in the tfm
init function.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [PATCH v8 1/7] crypto: essiv - create wrapper template for ESSIV generation

2019-08-14 Thread Herbert Xu
On Sat, Aug 03, 2019 at 10:36:44AM +0300, Ard Biesheuvel wrote:
> 
> To use your GCM analogy: gcm_base(ctr-ppc-spe, ghash-generic) is a
> supported aead identifier, but  there is nothing in the name that
> identifies the skcipher as one that encapsulates AES.

I would've thought that you would first grab (literally :) ahold
of ctr-ppc-spe, at which point you could query its cra_name and then
derive AES from that.  Is that going to be a problem?

> > So I would envisage something similar for essiv where essiv just has
> > U, X and Y (as you said that U and X are independent) while you can
> > then have an essiv_base that spells everything out in detail.
> >
> 
> That might be a useful enhancement by itself, but it does not fix the
> issue above. The only way to instantiate the same cipher as the one
> encapsulated by "cbc-ppc-spe" is to instantiate the latter, parse the
> cipher name and pass it to crypto_alloc_cipher()

That's pretty much what I'm suggesting.  Except that I would point
out that you don't need to instantiate it (i.e., allocate a tfm),
you just need to grab ahold of the algorithm object.

The actual allocation of the AES cipher can be done in the cra_init
function.

We only need to grab algorithms that form a core part of the
resultant instance.  IOW, if the underlying algorithm is replaced,
you would always recreate the instance on top of it.  This is not
the case with AES here, since it's just used for a very small part
in the instance and we don't really care about recreating the essiv
instance when the block AES algorithm changes.

> > Also, do we allow anything other than HMAC for X? If not then that
> > should be encoded either into the name by dropping the hmac in the
> > algorithm name and adding it through the driver name, or by checking
> > for it in the template creation function.
> >
> > What I'd like to achieve is a state where we support only what is
> > currently supported and no more.
> >
> 
> Yeah, that makes sense. But we have h/w drivers that instantiate
> authenc(X,Y) in its entirety, and passing those driver names is
> something that is supported today, so we can't just remove that.

Sure, we only need to impose a restriction on the cra_name, not
on the driver name.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [dm-devel] xts fuzz testing and lack of ciphertext stealing support

2019-08-11 Thread Herbert Xu
On Sun, Aug 11, 2019 at 09:29:38PM +, Pascal Van Leeuwen wrote:
>
> It will very likely fail with that CAAM h/w, but that only proves that you
> should not use plain64be IV's together with CAAM h/w. Which should be

It doesn't matter whether it's wrong or not.

The fact is that this is an interface that we export to user-space
and we must NEVER break that.  So if your driver is breaking this
interface then your driver is broken and needs to be fixed.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [dm-devel] xts fuzz testing and lack of ciphertext stealing support

2019-08-08 Thread Herbert Xu
On Thu, Aug 08, 2019 at 06:01:49PM +, Horia Geanta wrote:
>
> -- >8 --
> 
> Subject: [PATCH] crypto: testmgr - Add additional AES-XTS vectors for covering
>  CTS (part II)

Patchwork doesn't like it when you do this and it'll discard
your patch.  To make it into patchwork you need to put the new
Subject in the email headers.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v8 1/7] crypto: essiv - create wrapper template for ESSIV generation

2019-08-01 Thread Herbert Xu
On Fri, Jul 26, 2019 at 12:00:20PM +0300, Ard Biesheuvel wrote:
>
> For Y and Z, it is not straightforward either: since the crypto API
> permits the use of driver names in addition to the plain CRA names,
> we'd have to infer from the first parameter which cipher is being
> used.

We don't really permit that.  It might work but it is certainly not
guaranteed to work.  The only thing we guarantee is that the
algorithm name and the canonical driver name will work.  For example,
with gcm you can either say gcm(aes) or gcm_base(drv_name1, drv_name2).

Anything else is not supported.

So I would envisage something similar for essiv where essiv just has
U, X and Y (as you said that U and X are independent) while you can
then have an essiv_base that spells everything out in detail.

Also, do we allow anything other than HMAC for X? If not then that
should be encoded either into the name by dropping the hmac in the
algorithm name and adding it through the driver name, or by checking
for it in the template creation function.

What I'd like to achieve is a state where we support only what is
currently supported and no more.

> > Because this is legacy stuff, I don't want it to support any more
> > than what is currently being supported by dm-crypt.
> >
> > Similarly for the skcipher case, given
> >
> > essiv(cbc(X),Y,Z)
> >
> > is it ever possible for X != Y? If not then we should just make the
> > algorithm name essiv(X,Z).
> >
> 
> Same problem. We'd need to instantiate the skcipher and parse the cra_name.
> 
> Perhaps we should introduce a crypto API call that infers the cra_name
> from a cra_driver_name?

You don't need to do that.  Just copy whatever gcm does in its
creation function when you invoke it as gcm_base.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v8 1/7] crypto: essiv - create wrapper template for ESSIV generation

2019-07-25 Thread Herbert Xu
Ard Biesheuvel  wrote:
>
> + * The typical use of this template is to instantiate the skcipher
> + * 'essiv(cbc(aes),aes,sha256)', which is the only instantiation used by
> + * fscrypt, and the most relevant one for dm-crypt. However, dm-crypt
> + * also permits ESSIV to be used in combination with the authenc template,
> + * e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),aes,sha256)', in which case
> + * we need to instantiate an aead that accepts the same special key format
> + * as the authenc template, and deals with the way the encrypted IV is
> + * embedded into the AAD area of the aead request. This means the AEAD
> + * flavor produced by this template is tightly coupled to the way dm-crypt
> + * happens to use it.

IIRC only authenc is allowed in dm-crypt currently in conjunction
with ESSIV.  Does it ever allow a different hash algorithm in
authenc compared to the one used for ESSIV? IOW given

essiv(authenc(hmac(X),cbc(Y)),Z,U)

is it currently possible for X != U or Y != Z? If not then let's
just make the algorithm name be essiv(Y,X).

Because this is legacy stuff, I don't want it to support any more
than what is currently being supported by dm-crypt.

Similarly for the skcipher case, given

essiv(cbc(X),Y,Z)

is it ever possible for X != Y? If not then we should just make the
algorithm name essiv(X,Z).
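For reference, every essiv(...) spelling above shares the same IV construction: hash the disk key, then encrypt the zero-padded little-endian sector number under the hashed key. A toy Python sketch of that data flow (the block transform is a hash-based stand-in, NOT real AES — it only makes the structure runnable):

```python
import hashlib
import struct

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for one AES block encryption keyed with `key`.
    # NOT a real cipher; it only makes the data flow executable.
    return hashlib.sha256(key + block).digest()[:16]

def essiv_iv(disk_key: bytes, sector: int) -> bytes:
    """ESSIV: IV = E_{H(disk_key)}(sector number).

    For essiv(cbc(aes),aes,sha256), H is sha256 and E is AES keyed with
    the full digest; dm-crypt zero-pads the little-endian 64-bit sector
    number to one cipher block.
    """
    salt = hashlib.sha256(disk_key).digest()          # the hash leg
    sector_block = struct.pack("<Q", sector) + b"\x00" * 8
    return toy_block_encrypt(salt, sector_block)      # the cipher leg
```

Because the IV key is derived from the data key, the hash and cipher legs are fixed once the outer cipher is chosen — the redundancy the shorter essiv(...) spellings discussed here exploit.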

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 04:35:26PM +, Pascal Van Leeuwen wrote:
>
> Then there's still the issue of advancing that last tweak. From what
> I've seen, the xts implementation does not output the last tweak, so
> you would have to recompute it locally in cts.c as
> block_cipher_enc(iv) * alpha^j. Which is wasteful.
> Of course this could be solved by redefining xts to output the last
> tweak like CBC/CTR output their last IV ... which would probably give
> us HW guys trouble again because our HW cannot do *that* ... (While
> the HW very likely *does* implement the ciphertext stealing properly,
> just to be able to boast about IEEE P1619 compliance ...)

If your hardware supports XTS properly then you wouldn't even need
to use this new template.  You would directly export the xts
interface.
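The block_cipher_enc(iv) * alpha^j recomputation quoted above costs one shift-and-conditional-XOR per block in GF(2^128). A Python sketch of the little-endian doubling (the IEEE P1619 convention, cf. the kernel's gf128mul_x_ble; tweak_for_block is an illustrative helper, not a kernel function):

```python
def gf128mul_x_ble(tweak: bytes) -> bytes:
    """Multiply a 16-byte XTS tweak by alpha (the polynomial x) in
    GF(2^128), little-endian byte order, reduction polynomial
    x^128 + x^7 + x^2 + x + 1 (the IEEE P1619 convention)."""
    t = int.from_bytes(tweak, "little")
    carry = t >> 127                 # bit that falls off the top
    t = (t << 1) & ((1 << 128) - 1)  # shift left by one
    if carry:
        t ^= 0x87                    # reduce modulo the field polynomial
    return t.to_bytes(16, "little")

def tweak_for_block(encrypted_iv: bytes, j: int) -> bytes:
    """Recompute the tweak for block j as E_K2(iv) * alpha^j."""
    t = encrypted_iv
    for _ in range(j):
        t = gf128mul_x_ble(t)
    return t
```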

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [dm-devel] xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 06:19:24PM +0200, Ard Biesheuvel wrote:
>
> Note that for software algorithms such as the bit sliced NEON
> implementation of AES, which can only operate on 8 AES blocks at a
> time, doing the final 2 blocks sequentially is going to seriously
> impact performance. This means whatever wrapper we invent around xex()
> (or whatever we call it) should go out of its way to ensure that the
> common, non-CTS case does not regress in performance, and the special
> handling is only invoked when necessary (which will be never).

Agreed.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 03:43:28PM +, Pascal Van Leeuwen wrote:
>
> Hmmm ... so the generic CTS template would have to figure out whether
> it is wrapped around ECB, CBC, XTS or whatever and then adjust to
> that?

That's not hard to do.  Right now cts only supports cbc.  IOW
if you pass it anything else it will refuse to instantiate.

> For XTS, you have this additional curve ball being thrown in called
> the "tweak". For encryption, the underlying "xts" would need to be
> able to chain the tweak; from what I've seen of the source, the
> implementation cannot do that.

You simply use the underlying xts for the first n - 2 blocks and
do the last two by hand.
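The split above — underlying xts for the bulk, last two blocks by hand — can be sketched end to end. In the toy below, the per-block transform is a self-inverse keyed XOR pad standing in for a real tweaked block cipher (it is NOT AES-XTS, and the tweak is reduced to a bare block index); only the ciphertext-stealing data movement for the final two blocks is the point:

```python
import hashlib

def toy_block(key: bytes, tweak_index: int, block: bytes) -> bytes:
    # NOT a real cipher: a keyed XOR pad standing in for one tweaked
    # block operation.  Self-inverse, so it serves as both "encrypt"
    # and "decrypt" in this sketch.
    pad = hashlib.sha256(key + tweak_index.to_bytes(8, "little")).digest()[:16]
    return bytes(a ^ b for a, b in zip(block, pad))

def cts_encrypt(key: bytes, pt: bytes) -> bytes:
    m, r = divmod(len(pt), 16)
    assert m >= 1, "need at least one full block"
    blocks = [pt[i*16:(i+1)*16] for i in range(m)]
    tail = pt[m*16:]
    out = [toy_block(key, i, blocks[i]) for i in range(m - 1)]
    cc = toy_block(key, m - 1, blocks[m - 1])
    if r == 0:
        return b"".join(out) + cc
    # steal: the last full ciphertext block donates its final 16-r
    # bytes to pad the partial plaintext block up to a full block
    out.append(toy_block(key, m, tail + cc[r:]))
    return b"".join(out) + cc[:r]

def cts_decrypt(key: bytes, ct: bytes) -> bytes:
    m, r = divmod(len(ct), 16)
    blocks = [ct[i*16:(i+1)*16] for i in range(m)]
    tail = ct[m*16:]
    out = [toy_block(key, i, blocks[i]) for i in range(m - 1)]
    if r == 0:
        return b"".join(out) + toy_block(key, m - 1, blocks[m - 1])
    pp = toy_block(key, m, blocks[m - 1])      # tail plaintext + stolen bytes
    cc = tail + pp[r:]                         # reassemble the stolen block
    out.append(toy_block(key, m - 1, cc))
    return b"".join(out) + pp[:r]
```

Note that the first m-2 blocks go through the transform untouched, which is exactly what lets an existing xts implementation handle them unchanged.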

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 10:40:54AM +, Pascal Van Leeuwen wrote:
>
> In fact, using the current cts template around the current xts
> template actually does NOT implement standards-compliant XTS at all,
> as the CTS *implementation* for XTS is different from the one for CBC
> as implemented by the current CTS template.

The template is just a name.  The implementation can do whatever it
wants for each instance.  So obviously we would employ a different
implementation for xts compared to cbc.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 01:19:41PM +0200, Milan Broz wrote:
>
> Also, I would like to avoid another "just because it is nicer" module
> dependence (XTS->XEX->ECB).
> Last time (when XTS was reimplemented using ECB) we had many reports
> of initramfs missing the ECB module, preventing boot from an AES-XTS
> encrypted root after a kernel upgrade...
> Just saying. (Despite that last time it was the keyring that broke
> encrypted boot ;-)
> 
> (That said, I will try to find a volunteer to help with the CTS in
> XTS implementation, if needed.)

Well the main advantage of doing it on top of the existing xts is
that you can retain the existing ARM implementations without any
changes.  This would also apply to any existing xts drivers that
also don't implement CTS (I'm not aware of the status on these so
someone will need to check them one by one).

But if you were going to volunteer to change them all in one swoop
then it wouldn't matter.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 09:28:03AM +0200, Ard Biesheuvel wrote:
>
> If we were adding XTS to the kernel today, then I would agree with
> you. But xts() has an established meaning now, and I don't think it
> makes sense to update all implementations for a theoretical use case,
> given that no portable userland code can rely on the correct semantics
> today, since CAAM is the only one that implements them correctly.
> 
> In any case, I won't have time to fix the ARM or arm64 implementations
> (or review the changes if someone else steps up) until the end of
> September.

I'm not asking you or anyone to fix this right away.  I'm just
saying that this is the direction we should be moving in.

After all, there is no immediate crisis as all that is broken
today is a fuzz test.

It should be possible to do this without causing performance
regressions for ARM.  We could rename the existing xts to a
new name (xek perhaps) and add xts into the cts template as
a wrapper around xek.

That way you don't have to touch the ARM code at all and it
should just work.

PS should we mark xek or whatever it's called as internal so
it isn't visible to user-space?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Thu, Jul 18, 2019 at 09:15:39AM +0200, Ard Biesheuvel wrote:
>
> Not just the generic implementation: there are numerous synchronous
> and asynchronous implementations of xts(aes) in the kernel that would
> have to be fixed, while there are no in-kernel users that actually
> rely on CTS. Also, in the cbc case, we support CTS by wrapping it into
> another template, i.e., cts(cbc(aes)).
> 
> So retroactively redefining what xts(...) means seems like a bad idea
> to me. If we want to support XTS ciphertext stealing for the benefit
> of userland, let's do so via the existing cts template, and add
> support for wrapping XTS to it.

XTS without stealing should be renamed as XEX.  Sure you can then
wrap it inside cts to form xts but the end result needs to be called
xts.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [dm-devel] xts fuzz testing and lack of ciphertext stealing support

2019-07-18 Thread Herbert Xu
On Wed, Jul 17, 2019 at 08:08:27PM +0200, Ard Biesheuvel wrote:
>
> Since the kernel does not support CTS for XTS any way, and since no
> AF_ALG users can portably rely on this, I agree with Eric that the
> only sensible way to address this is to disable this functionality in
> the driver.

But the whole point of XTS is that it supports sizes that are
not multiples of the block size.  So implementing it without
supporting ciphertext stealing is just wrong.

So let's fix the generic implementation rather than breaking
the caam driver.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [PATCH 3/3] dm-crypt: Implement eboiv - encrypted byte-offset initialization vector.

2019-07-05 Thread Herbert Xu
On Fri, Jul 05, 2019 at 08:32:03AM +0200, Ard Biesheuvel wrote:
>
> > AFAICS this is using the same key as the actual data.  So why
> > don't you combine it with the actual data when encrypting/decrypting?
> >
> > That is, add a block at the front of the actual data containing
> > the little-endian byte offset and then use an IV of zero.
> >
> 
> That would only work for encryption.

True.  So this doesn't obviate the need to access the single-block
cipher.  But the code probably should still do it that way for
encryption for performance reasons.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 3/3] dm-crypt: Implement eboiv - encrypted byte-offset initialization vector.

2019-07-04 Thread Herbert Xu
On Thu, Jul 04, 2019 at 08:11:46PM +0200, Ard Biesheuvel wrote:
> 
> To be clear, making the cipher API internal only is something that I
> am proposing but hasn't been widely discussed yet. So if you make a
> good argument why it is a terrible idea, I'm sure it will be taken
> into account.
> 
> The main issue is that the cipher API is suboptimal if you process
> many blocks in sequence, since SIMD implementations will not be able
> to amortize the cost of kernel_fpu_begin()/end(). This is something
> that we might be able to fix in other ways (and a SIMD get/put
> interface has already been proposed which looks suitable for this as
> well) but that would still involve an API change for something that
> isn't the correct abstraction in the first place in many cases. (There
> are multiple implementations of ccm(aes) using cipher_encrypt_one() in
> a loop, for instance, and these are not able to benefit from, e.g.,
> the accelerated implementation that I created for arm64, since it open
> codes the CCM operations)

I agree with what you guys have concluded so far.  But I do have
something I want to say about eboiv's implementation itself.

AFAICS this is using the same key as the actual data.  So why
don't you combine it with the actual data when encrypting/decrypting?

That is, add a block at the front of the actual data containing
the little-endian byte offset and then use an IV of zero.
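The equivalence behind this suggestion: with CBC under a zero IV, the prepended offset block encrypts to E_K(offset), which then chains into the data exactly as if that value had been supplied as the IV. A toy sketch (hash-based stand-in for the block cipher, NOT real CBC-AES; the zero-padded 64-bit offset layout is an illustrative assumption):

```python
import hashlib
import struct

def toy_e(key: bytes, block: bytes) -> bytes:
    # Stand-in for one AES block encryption (NOT a real cipher).
    return hashlib.sha256(key + block).digest()[:16]

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Textbook CBC encryption: C_i = E(P_i XOR C_{i-1}), C_{-1} = IV."""
    assert len(data) % 16 == 0
    out, prev = [], iv
    for i in range(0, len(data), 16):
        prev = toy_e(key, bytes(x ^ y for x, y in zip(data[i:i+16], prev)))
        out.append(prev)
    return b"".join(out)

def eboiv(key: bytes, byte_offset: int) -> bytes:
    # EBOIV: encrypt the zero-padded little-endian byte offset with the
    # data key itself.
    return toy_e(key, struct.pack("<Q", byte_offset) + b"\x00" * 8)
```

Dropping the first ciphertext block of the prefixed message yields exactly the EBOIV-keyed ciphertext — which is why the trick works for encryption but not for decryption, where the prefix ciphertext block is not stored on disk.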

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

2019-06-23 Thread Herbert Xu
On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
>
> So with that in mind, I think we should decouple the multi-sector
> discussion and leave it for a followup series, preferably proposed by
> someone who also has access to some hardware to prototype it on.

Yes that makes sense.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-20 Thread Herbert Xu
On Thu, Jun 20, 2019 at 03:53:45PM +0200, Ard Biesheuvel wrote:
>
> We'd need at least 512 and 4k for dm-crypt, but I don't think the
> sector size is limited at all tbh

In that case my preference would be to encode this into the key
and hardware that encounters unsupported sector sizes can use a
fallback.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-20 Thread Herbert Xu
On Thu, Jun 20, 2019 at 03:02:04PM +0200, Ard Biesheuvel wrote:
>
> It also depend on how realistic it is that we will need to support
> arbitrary sector sizes in the future. I mean, if we decide today that
> essiv() uses an implicit sector size of 4k, we can always add
> essiv64k() later, rather than adding lots of complexity now that we
> are never going to use. Note that ESSIV is already more or less
> deprecated, so there is really no point in inventing these weird and
> wonderful things if we want people to move to XTS and plain IV
> generation instead.

Well whatever we do for ESSIV should also extend to other IV
generators in dm-crypt so that potentially we can have a single
interface for dm-crypt multi-sector processing in future (IOW
you don't have special code for ESSIV vs. other algos).

That is why we should get the ESSIV interface right as it could
serve as an example for future implementations.

What do the dm-crypt people think? Are you ever going to need
processing in units other than 4K?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-20 Thread Herbert Xu
On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
>
> Is this the right approach? Or are there better ways to convey this
> information when instantiating the template?
> Also, it seems to me that the dm-crypt and fscrypt layers would
> require major surgery in order to take advantage of this.

Oh and you don't have to make dm-crypt use it from the start.  That
is, you can just make things simple by doing it one sector at a
time in the dm-crypt code even though the underlying essiv code
supports multiple sectors.

Someone who cares about this is sure to come along and fix it later.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-20 Thread Herbert Xu
On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
>
> Is this the right approach? Or are there better ways to convey this
> information when instantiating the template?
> Also, it seems to me that the dm-crypt and fscrypt layers would
> require major surgery in order to take advantage of this.

My preference would be to encode the sector size into the key.
Hardware that can only support some sector sizes can use fallbacks
as usual.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-19 Thread Herbert Xu
On Thu, Jun 20, 2019 at 09:13:25AM +0800, Herbert Xu wrote:
> On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
> >
> > > +#define ESSIV_IV_SIZE      sizeof(u64)  // IV size of the outer algo
> > > +#define MAX_INNER_IV_SIZE  16           // max IV size of inner algo
> > 
> > Why does the outer algorithm declare a smaller IV size?  Shouldn't it
> > just be the same as the inner algorithm's?
> 
> In general we allow outer algorithms to have distinct IV sizes
> compared to the inner algorithm.  For example, rfc4106 has a
> different IV size compared to gcm.
> 
> In this case, the outer IV size is the block number so that's
> presumably why 64 bits is sufficient.  Do you foresee a case where
> we need 128-bit block numbers?

Actually this reminds me, the essiv template needs to be able to
handle multiple blocks/sectors, as otherwise this will still only
be able to push a single block/sector to the hardware at a time.
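To make the multi-block/sector requirement concrete, here is a minimal sketch of the per-sector loop such a template would absorb; iv_gen, encrypt_sector and the flat-buffer layout are illustrative assumptions, not dm-crypt's actual interfaces:

```python
def process_sectors(data, start_sector, sector_size, iv_gen, encrypt_sector):
    """Walk a request sector by sector, deriving a fresh IV for each --
    the loop dm-crypt currently runs in software, and which a
    multi-sector essiv() template could push down to hardware in one
    call instead of one call per sector."""
    assert len(data) % sector_size == 0
    out = []
    for i in range(0, len(data), sector_size):
        iv = iv_gen(start_sector + i // sector_size)
        out.append(encrypt_sector(data[i:i + sector_size], iv))
    return b"".join(out)
```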

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

2019-06-19 Thread Herbert Xu
On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
>
> > +#define ESSIV_IV_SIZE      sizeof(u64)  // IV size of the outer algo
> > +#define MAX_INNER_IV_SIZE  16           // max IV size of inner algo
> 
> Why does the outer algorithm declare a smaller IV size?  Shouldn't it just be
> the same as the inner algorithm's?

In general we allow outer algorithms to have distinct IV sizes
compared to the inner algorithm.  For example, rfc4106 has a
different IV size compared to gcm.

In this case, the outer IV size is the block number so that's
presumably why 64 bits is sufficient.  Do you foresee a case where
we need 128-bit block numbers?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC PATCH 0/3] crypto: switch to shash for ESSIV generation

2019-06-17 Thread Herbert Xu
On Mon, Jun 17, 2019 at 11:15:01AM +0200, Ard Biesheuvel wrote:
>
> Ah yes, thanks for reminding me. There was some debate in the past
> about this, but I don't remember the details.

I think there was some controversy regarding whether the resulting
code should live in drivers/md or crypto.  But as long as drivers/md
is the only user of said code, having it in drivers/md should
be fine.

However, if we end up using/reusing the same code for others such as
fs/crypto then it might make more sense to have them in crypto.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [dm-devel] [PATCH v4 0/4] crypto: lrw - Fixes and improvements

2018-09-20 Thread Herbert Xu
On Thu, Sep 13, 2018 at 10:51:30AM +0200, Ondrej Mosnacek wrote:
> This patchset contains a corner-case fix and several improvements for
> the LRW template.
> 
> The first patch fixes an out-of-bounds array access (and subsequently
> incorrect cipher output) when the LRW counter goes from all ones to all
> zeros. This patch should be applied to the crypto-2.6 tree and also go
> to stable.
> 
> The second patch adds a test vector for lrw(aes) that covers the above
> bug.
> 
> The third patch is a small optimization of the LRW tweak computation.
> 
> The fourth patch is a follow-up to a similar patch for XTS (it
> simplifies away the use of dynamically allocated auxiliary buffer to
> cache the computed tweak values):
> https://patchwork.kernel.org/patch/10588775/
> 
> Patches 2-4 should be applied only to cryptodev-2.6, but they all depend
> on the first patch.
> 
> Changes in v4:
>   - applied various corrections/suggestions from Eric Biggers
>   - added a fix for buggy behavior on counter wrap-around (+ test vector)
> 
> v3: https://www.spinics.net/lists/linux-crypto/msg34946.html
> Changes in v3:
>   - fix a copy-paste error
> 
> v2: https://www.spinics.net/lists/linux-crypto/msg34890.html
> Changes in v2:
>   - small cleanup suggested by Eric Biggers
> 
> v1: https://www.spinics.net/lists/linux-crypto/msg34871.html
> 
> Ondrej Mosnacek (4):
>   crypto: lrw - Fix out-of bounds access on counter overflow
>   crypto: testmgr - Add test for LRW counter wrap-around
>   crypto: lrw - Optimize tweak computation
>   crypto: lrw - Do not use auxiliary buffer
> 
>  crypto/lrw.c | 342 +--
>  crypto/testmgr.h |  21 +++
>  2 files changed, 112 insertions(+), 251 deletions(-)

All applied.  Thanks.
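For context on the first patch of the series: the LRW code steps the tweak incrementally by XORing in a precomputed table entry indexed by the number of trailing one bits of the block counter. With a 128-entry table, a counter of all ones (the wrap-around to all zeros mentioned above) produces index 128 — the out-of-bounds access. A Python sketch of the index computation and the clamp the fix applies (pure illustration, not the kernel code):

```python
def next_index(counter: int) -> int:
    """Trailing-one-bit count of the block counter: the index into the
    precomputed multiplication table used to step the LRW tweak from
    block i to block i+1."""
    ones = 0
    while counter & 1:
        ones += 1
        counter >>= 1
    return ones

def next_index_fixed(counter: int, nbits: int = 128) -> int:
    # The fix: a counter of all ones (about to wrap to zero) would
    # yield nbits, one past the end of the nbits-entry table, so the
    # result is clamped to the last valid index.
    return min(next_index(counter & ((1 << nbits) - 1)), nbits - 1)
```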
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] [PATCH v5] crypto: xts - Drop use of auxiliary buffer

2018-09-20 Thread Herbert Xu
On Tue, Sep 11, 2018 at 09:40:08AM +0200, Ondrej Mosnacek wrote:
> Since commit acb9b159c784 ("crypto: gf128mul - define gf128mul_x_* in
> gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore
> caching the computed XTS tweaks has only negligible advantage over
> computing them twice.
> 
> In fact, since the current caching implementation limits the size of
> the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B),
> it is often actually slower than the simple recomputing implementation.
> 
> This patch simplifies the XTS template to recompute the XTS tweaks from
> scratch in the second pass and thus also removes the need to allocate a
> dynamic buffer using kmalloc().
> 
> As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.
> 
> PERFORMANCE RESULTS
> I measured time to encrypt/decrypt a memory buffer of varying sizes with
> xts(ecb-aes-aesni) using a tool I wrote ([2]) and the results suggest
> that after this patch the performance is either better or comparable for
> both small and large buffers. Note that there is a lot of noise in the
> measurements, but the overall difference is easy to see.
> 
> Old code:
>    ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
>     xts(aes)      256        64            331            328
>     xts(aes)      384        64            332            333
>     xts(aes)      512        64            338            348
>     xts(aes)      256       512            889            920
>     xts(aes)      384       512           1019            993
>     xts(aes)      512       512           1032            990
>     xts(aes)      256      4096           2152           2292
>     xts(aes)      384      4096           2453           2597
>     xts(aes)      512      4096           3041           2641
>     xts(aes)      256     16384           9443           8027
>     xts(aes)      384     16384           8536           8925
>     xts(aes)      512     16384           9232           9417
>     xts(aes)      256     32768          16383          14897
>     xts(aes)      384     32768          17527          16102
>     xts(aes)      512     32768          18483          17322
> 
> New code:
>    ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
>     xts(aes)      256        64            328            324
>     xts(aes)      384        64            324            319
>     xts(aes)      512        64            320            322
>     xts(aes)      256       512            476            473
>     xts(aes)      384       512            509            492
>     xts(aes)      512       512            531            514
>     xts(aes)      256      4096           2132           1829
>     xts(aes)      384      4096           2357           2055
>     xts(aes)      512      4096           2178           2027
>     xts(aes)      256     16384           6920           6983
>     xts(aes)      384     16384           8597           7505
>     xts(aes)      512     16384           7841           8164
>     xts(aes)      256     32768          13468          12307
>     xts(aes)      384     32768          14808          13402
>     xts(aes)      512     32768          15753          14636
> 
> [1] https://lkml.org/lkml/2018/8/23/1315
> [2] https://gitlab.com/omos/linux-crypto-bench
> 
> Signed-off-by: Ondrej Mosnacek 
> ---
>  crypto/xts.c | 269 +--
>  1 file changed, 46 insertions(+), 223 deletions(-)
> 
> Changes in v5:
>   - fix dumb mistakes

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] [PATCH v8 5/9] dm: Remove VLA usage from hashes

2018-09-14 Thread Herbert Xu
On Thu, Sep 13, 2018 at 01:54:39PM -0400, Mike Snitzer wrote:
>
> Acked-by: Mike Snitzer 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] [PATCH] crypto: xts - Drop use of auxiliary buffer

2018-09-05 Thread Herbert Xu
Ondrej Mosnacek  wrote:
> Hi Herbert, Mikulas,
> 
> I noticed the discussion about using kmalloc() inside crypto code and it
> made me wonder if the code in xts.c can actually be simplified to not
> require kmalloc() at all, while not badly affecting performace. I played
> around with it a little and it turns out we can drop the whole caching
> of tweak blocks, reducing code size by ~200 lines and not only preserve,
> but even improve the performance in some cases. See the full patch
> below.
> 
> Obviously, this doesn't solve the general issue of using kmalloc() in
> crypto API routines, but it does resolve the original reported problem
> and also makes the XTS code easier to maintain.

Nice work.  Unfortunately it doesn't apply against the latest
cryptodev tree.  Please rebase and resubmit.

Thanks,


Re: [dm-devel] [PATCH v8 5/9] dm: Remove VLA usage from hashes

2018-09-05 Thread Herbert Xu
On Tue, Aug 07, 2018 at 02:18:39PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the new HASH_MAX_DIGESTSIZE from the crypto layer to allocate the upper
> bounds on stack usage.
> 
> [1] 
> https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com
> 
> Signed-off-by: Kees Cook 

Can the dm folks please review this patch?

Thanks,


Re: [dm-devel] [PATCH v8 9/9] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

2018-09-05 Thread Herbert Xu
On Tue, Aug 07, 2018 at 02:18:43PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> caps the skcipher request size similar to other limits and adds a sanity
> check at registration. Looking at instrumented tcrypt output, the largest
> is for lrw:
> 
>   crypt: testing lrw(aes)
>   crypto_skcipher_set_reqsize: 8
>   crypto_skcipher_set_reqsize: 88
>   crypto_skcipher_set_reqsize: 472
> 
> [1] 
> https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com
> 
> Signed-off-by: Kees Cook 
> ---
>  include/crypto/internal/skcipher.h | 1 +
>  include/crypto/skcipher.h  | 4 +++-
>  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/include/crypto/internal/skcipher.h 
> b/include/crypto/internal/skcipher.h
> index e42f7063f245..5035482cbe68 100644
> --- a/include/crypto/internal/skcipher.h
> +++ b/include/crypto/internal/skcipher.h
> @@ -130,6 +130,7 @@ static inline struct crypto_skcipher 
> *crypto_spawn_skcipher(
>  static inline void crypto_skcipher_set_reqsize(
>   struct crypto_skcipher *skcipher, unsigned int reqsize)
>  {
> + BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE);

Please do not add these BUG_ONs.  Instead allow this function to
fail and check for the failure in the caller.
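A self-contained sketch of the shape being asked for — a setter that reports failure instead of crashing. The struct and the 472-byte cap (the largest reqsize quoted in the patch) are local stand-ins here, not the real kernel definitions:

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for the kernel's skcipher handle; real definitions live in
 * include/crypto/skcipher.h. */
#define SKCIPHER_MAX_REQSIZE 472

struct crypto_skcipher {
	unsigned int reqsize;
};

/* Instead of BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE), return -EINVAL and
 * let the algorithm's init/registration path propagate the error. */
static int crypto_skcipher_set_reqsize_checked(struct crypto_skcipher *tfm,
					       unsigned int reqsize)
{
	if (reqsize > SKCIPHER_MAX_REQSIZE)
		return -EINVAL;	/* caller must check and propagate */
	tfm->reqsize = reqsize;
	return 0;
}
```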

Thanks,


Re: [dm-devel] [PATCH v8 0/9] crypto: Remove VLA usage

2018-09-05 Thread Herbert Xu
On Tue, Aug 07, 2018 at 02:18:34PM -0700, Kees Cook wrote:
> v8 cover letter:
> 
> I continue to hope this can land in v4.19, but I realize that's unlikely.
> It would be nice, though, if some of the "trivial" patches could get taken
> (e.g. cbc, xcbc, ccm VLA removals) so I don't have to keep repeating them.
> *fingers crossed*
> 
> Series cover letter:
> 
> This is nearly the last of the VLA removals[1], but it's one of the
> largest because crypto gets used in lots of places. After looking
> through code, usage, reading the threads Gustavo started, and comparing
> the use-cases to the other VLA removals that have landed in the kernel,
> I think this series is likely the best way forward to shut the door on
> VLAs forever.
> 
> For background, the crypto stack usage is for callers to do an immediate
> bit of work that doesn't allocate new memory. This means that other VLA
> removal techniques (like just using kmalloc) aren't workable, and the
> next common technique is needed: examination of maximum stack usage and
> the addition of sanity checks. This series does that, and in several
> cases, these maximums were already implicit in the code.
> 
> This series is intended to land via the crypto tree for 4.19, though it
> touches dm, networking, and a few other things as well, since there are
> dependent patches (new crypto #defines being used, etc).

I have applied patches 1-4 and 6-8.  I'd like to get an ack from
the dm folks regarding patch 5.  As to patch 9, please fix it so
it doesn't rely on the BUG_ON to catch things.

Thanks,


Re: [dm-devel] Deadlock when using crypto API for block devices

2018-08-24 Thread Herbert Xu
On Fri, Aug 24, 2018 at 09:21:45PM +0800, Herbert Xu wrote:
> On Fri, Aug 24, 2018 at 09:00:00AM -0400, Mikulas Patocka wrote:
> >
> > BTW. gcmaes_crypt_by_sg also contains GFP_ATOMIC and -ENOMEM, behind a 
> > pretty complex condition. Do you mean that this condition is part of the 
> > contract that the crypto API provides?
> 
> This is an implementation defect.  I think for this case we should
> fall back to software GCM if the accelerated version fails.
> 
> > Should "req->src->offset + req->src->length < PAGE_SIZE" use "<=" instead? 
> > Because if the data ends up at page boundary, it will use the atomic 
> > allocation that can fail.
> 
> This condition does look strange.  It's introduced by the commit
> e845520707f85c539ce04bb73c6070e9441480be.   Dave, what exactly is
> it meant to do?

I forgot to Cc Dave Watson.

Thanks,


Re: [dm-devel] Deadlock when using crypto API for block devices

2018-08-24 Thread Herbert Xu
On Fri, Aug 24, 2018 at 09:00:00AM -0400, Mikulas Patocka wrote:
>
> BTW. gcmaes_crypt_by_sg also contains GFP_ATOMIC and -ENOMEM, behind a 
> pretty complex condition. Do you mean that this condition is part of the 
> contract that the crypto API provides?

This is an implementation defect.  I think for this case we should
fall back to software GCM if the accelerated version fails.

> Should "req->src->offset + req->src->length < PAGE_SIZE" use "<=" instead? 
> Because if the data ends up at page boundary, it will use the atomic 
> allocation that can fail.

This condition does look strange.  It's introduced by the commit
e845520707f85c539ce04bb73c6070e9441480be.   Dave, what exactly is
it meant to do?
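A toy model of the suggested behaviour: when the accelerated path's atomic scratch allocation fails, fall through to the generic software path rather than returning -ENOMEM. All names here are hypothetical, not the gcmaes code itself:

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for the generic gcm(aes) software path. */
static int sw_gcm(const unsigned char *in, unsigned char *out, size_t n)
{
	memcpy(out, in, n);
	return 0;
}

/* Stand-in for the SIMD path; simulate_oom models kmalloc(..., GFP_ATOMIC)
 * failing under memory pressure. */
static int accel_gcm(const unsigned char *in, unsigned char *out,
		     size_t n, int simulate_oom)
{
	unsigned char *scratch = simulate_oom ? NULL : malloc(n);

	if (!scratch)
		return sw_gcm(in, out, n);	/* degrade, don't error */

	memcpy(scratch, in, n);
	memcpy(out, scratch, n);
	free(scratch);
	return 0;
}
```

The point is that the caller never sees an I/O error either way; only the performance differs.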

Thanks,


Re: [dm-devel] Deadlock when using crypto API for block devices

2018-08-24 Thread Herbert Xu
On Fri, Aug 24, 2018 at 07:22:19AM -0400, Mikulas Patocka wrote:
>
> And also ablkcipher_next_slow, ablkcipher_copy_iv, skcipher_next_slow, 
> skcipher_next_copy, skcipher_copy_iv, blkcipher_next_slow, 
> blkcipher_copy_iv.
> 
> So, I think that dropping CRYPTO_TFM_REQ_MAY_SLEEP is not possible.

These should never trigger for dm-crypt AFAICS since it only
happens if you provide unaligned input/output.

Cheers,


Re: [dm-devel] Deadlock when using crypto API for block devices

2018-08-23 Thread Herbert Xu
On Thu, Aug 23, 2018 at 04:39:23PM -0400, Mikulas Patocka wrote:
>
> 1. don't set CRYPTO_TFM_REQ_MAY_SLEEP in dm-crypt
> =
> If we don't set it, dm-crypt will use GFP_ATOMIC and GFP_ATOMIC may fail. 
> The function init_crypt in xts.c handles the failure gracefully - but the 
> question is - does the whole crypto code handle allocation failures 
> gracefully? If not and if it returns -ENOMEM somewhere, it would result in 
> I/O errors and data corruption.

It is safe to use GFP_ATOMIC.  First of the allocation is really
small (less than a page) so it will only fail when the system is
almost completely out of memory.  Even when it does fail the crypto
operation will still succeed, albeit it will process things at a
smaller granularity so the performance will degrade.

The sleeping part of that flag is also not an issue because it
only kicks in once per page.  As this is going to be less than
or equal to a page it shouldn't matter.
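A toy model of the degradation described above: if the large bounce buffer cannot be allocated (the GFP_ATOMIC case), fall back to a small fixed buffer and process the data in smaller steps. The walk still succeeds; only the granularity, and hence speed, drops. The names and the copy-as-cipher stand-in are illustrative only:

```c
#include <stdlib.h>
#include <string.h>

enum { SMALL_STEP = 16 };

/* Returns the granularity actually used; simulate_oom models the
 * atomic allocation failing. */
static size_t walk_copy(const unsigned char *src, unsigned char *dst,
			size_t len, int simulate_oom)
{
	unsigned char small[SMALL_STEP];
	unsigned char *bounce = simulate_oom ? NULL : malloc(len);
	size_t step = bounce ? len : SMALL_STEP;

	if (!bounce)
		bounce = small;	/* allocation failed: small fixed buffer */

	for (size_t off = 0; off < len; off += step) {
		size_t n = len - off < step ? len - off : step;

		memcpy(bounce, src + off, n);
		memcpy(dst + off, bounce, n);	/* cipher step stand-in */
	}
	if (bounce != small)
		free(bounce);
	return step;
}
```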
 
> 3. introduce new flag CRYPTO_TFM_REQ_MAY_SLEEP_NOIO
> ===
> Would you like to introduce it?

For now I don't think this is necessary given that this allocation
and similar ones in lrw and other places in the crypto API are just
performance enhancements and unlikely to fail except in very dire
situations.

But if new problems arise I'm certainly not opposed to this.

Thanks,


Re: [dm-devel] [PATCH v7 2/9] crypto: cbc: Remove VLA usage

2018-08-07 Thread Herbert Xu
On Thu, Aug 02, 2018 at 03:51:45PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> uses the upper bounds on blocksize. Since this is always a cipher
> blocksize, use the existing cipher max blocksize.
> 
> [1] 
> https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com
> 
> Signed-off-by: Kees Cook 
> ---
>  include/crypto/cbc.h | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/include/crypto/cbc.h b/include/crypto/cbc.h
> index f5b8bfc22e6d..47db0aac2ab9 100644
> --- a/include/crypto/cbc.h
> +++ b/include/crypto/cbc.h
> @@ -113,7 +113,9 @@ static inline int crypto_cbc_decrypt_inplace(
>   unsigned int bsize = crypto_skcipher_blocksize(tfm);
>   unsigned int nbytes = walk->nbytes;
>   u8 *src = walk->src.virt.addr;
> - u8 last_iv[bsize];
> + u8 last_iv[MAX_CIPHER_BLOCKSIZE];
> +
> + BUG_ON(bsize > sizeof(last_iv));

Ugh, please don't add these BUG_ONs.  Add them to the places where
the algorithm is created (if they aren't checking that already).
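A sketch of the pattern being preferred here: validate the blocksize once when the algorithm instance is created, so the per-call path can use a fixed-size buffer with no runtime BUG_ON(). The 16-byte cap mirrors MAX_CIPHER_BLOCKSIZE; the structs and functions are local stand-ins, not the cbc.h code:

```c
#include <errno.h>
#include <string.h>

#define MAX_CIPHER_BLOCKSIZE 16

struct toy_alg {
	unsigned int blocksize;
};

static int toy_alg_create(struct toy_alg *alg, unsigned int bsize)
{
	if (bsize == 0 || bsize > MAX_CIPHER_BLOCKSIZE)
		return -EINVAL;	/* rejected once, at creation time */
	alg->blocksize = bsize;
	return 0;
}

static void toy_decrypt_inplace(const struct toy_alg *alg,
				unsigned char *last_iv)
{
	unsigned char buf[MAX_CIPHER_BLOCKSIZE];

	/* alg->blocksize <= sizeof(buf) is guaranteed by toy_alg_create(),
	 * so no per-call check is needed here. */
	memcpy(buf, last_iv, alg->blocksize);
	memcpy(last_iv, buf, alg->blocksize);
}
```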

Thanks,


Re: [dm-devel] [PATCH 4/5] crypto: Add IV generation templates

2018-07-19 Thread Herbert Xu
On Thu, Jul 19, 2018 at 10:50:12AM +0200, Arnd Bergmann wrote:
>
> There seems to be some support for at least essiv(aes) in
> drivers/crypto/ccree/cc_cipher.c, is that compatible with your essiv(*)
> template, or is that something else?

Whatever it is it should be removed.  We should not be adding
hardware algorithms for which there is no software equivalent.

Thanks,


Re: [dm-devel] [PATCH v5 01/11] crypto: xcbc: Remove VLA usage

2018-07-17 Thread Herbert Xu
On Mon, Jul 16, 2018 at 09:21:40PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the maximum blocksize and adds a sanity check. For xcbc, the blocksize
> must always be 16, so use that, since it's already being enforced during
> instantiation.
> 
> [1] 
> https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com
> 
> Signed-off-by: Kees Cook 
> ---
>  crypto/xcbc.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/crypto/xcbc.c b/crypto/xcbc.c
> index 25c75af50d3f..7aa03beed795 100644
> --- a/crypto/xcbc.c
> +++ b/crypto/xcbc.c
> @@ -57,6 +57,8 @@ struct xcbc_desc_ctx {
>   u8 ctx[];
>  };
>  
> +#define XCBC_BLOCKSIZE   16
> +
>  static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
>const u8 *inkey, unsigned int keylen)
>  {
> @@ -65,7 +67,10 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash 
> *parent,
>   int bs = crypto_shash_blocksize(parent);
>   u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
>   int err = 0;
> - u8 key1[bs];
> + u8 key1[XCBC_BLOCKSIZE];
> +
> + if (WARN_ON(bs > sizeof(key1)))
> + return -EINVAL;

Please remove this.  You already checked it in xcbc_create.

In fact, you should just make bs XCBC_BLOCKSIZE unconditionally
in this function.
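The second suggestion, sketched with stand-in names (the XOR "derivation" is illustrative, not the real xcbc key schedule): since instantiation already enforces a 16-byte blocksize, setkey can use the constant directly instead of querying the tfm and re-checking it:

```c
#define XCBC_BLOCKSIZE 16

static void toy_xcbc_setkey(const unsigned char *inkey, unsigned char *key1)
{
	const int bs = XCBC_BLOCKSIZE;	/* unconditional, no WARN_ON needed */
	int i;

	for (i = 0; i < bs; i++)
		key1[i] = inkey[i] ^ 0x01;	/* stand-in derivation step */
}
```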

Cheers,


Re: [dm-devel] [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK

2018-07-15 Thread Herbert Xu
On Sat, Jul 14, 2018 at 07:59:09PM -0700, Kees Cook wrote:
> On Sat, Jul 14, 2018 at 7:44 PM, Herbert Xu  
> wrote:
> > On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
> >>
> >> On a plane today I started converting all these to shash. IIUC, it
> >> just looks like this (apologies for whitespace damage):
> >
> > Yes if it doesn't actually make use of SGs then shash would be
> > the way to go.  However, for SG users ahash is the best interface.
> 
> Nearly all of them artificially build an sg explicitly to use the
> ahash interface. :P
> 
> So, I'll take that as a "yes, do these conversions." :) Thanks!

Yeah anything that's doing a single-element SG list should just
be converted.
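A toy model of the conversion being endorsed: a caller that builds a one-element scatterlist only to satisfy the ahash interface can call an shash-style digest on the flat buffer directly. The types and the digest itself are stand-ins, not the kernel API:

```c
#include <stddef.h>

struct toy_sg {
	const unsigned char *buf;
	size_t len;
};

static unsigned int toy_digest(const unsigned char *buf, size_t len)
{
	unsigned int h = 0;

	for (size_t i = 0; i < len; i++)
		h = h * 31 + buf[i];
	return h;
}

/* ahash-style: the buffer is wrapped first (sg_init_one() analogue) */
static unsigned int digest_via_sg(const unsigned char *buf, size_t len)
{
	struct toy_sg sg = { buf, len };

	return toy_digest(sg.buf, sg.len);
}

/* shash-style: the artificial single-element list adds nothing */
static unsigned int digest_direct(const unsigned char *buf, size_t len)
{
	return toy_digest(buf, len);
}
```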

Thanks,


  1   2   >