Re: [dm-devel] [PATCH] fix an integer overflow in __blkdev_sectors_to_bio_pages

2017-08-14 Thread Damien Le Moal

On 8/15/17 09:01, Mikulas Patocka wrote:
> 
> 
> On Mon, 14 Aug 2017, Damien Le Moal wrote:
> 
>> On Sun, 2017-08-13 at 22:47 -0400, Mikulas Patocka wrote:
>>>
>>> On Wed, 9 Aug 2017, h...@lst.de wrote:
>>>
>>>> Does commit 615d22a51c04856efe62af6e1d5b450aaf5cc2c0
>>>> "block: Fix __blkdev_issue_zeroout loop" fix the issue for you?
>>>>
>>>> --
>>>> dm-devel mailing list
>>>> dm-devel@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/dm-devel
>>>
>>> I think that patch is incorrect. sector_t may be a 32-bit type and 
>>> nr_sects << 9 may overflow.
>>>
>>> static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
>>> {
>>>sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
>>>
>>>return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
>>> }
>>>
>>> Mikulas
>>
>> Mikulas,
>>
>> Does the following patch fix the problem?
>>
>> From 947b3cf41e759b2b23f684e215e651d0c8037f88 Mon Sep 17 00:00:00 2001
>> From: Damien Le Moal 
>> Date: Mon, 14 Aug 2017 13:01:16 +0900
>> Subject: [PATCH] block: Fix __blkdev_sectors_to_bio_pages()
>>
>> On 32-bit systems, where sector_t is a 32-bit type, the calculation of
>> bytes may overflow. Use the u64 type for the local calculation to avoid
>> overflows.
>>
>> Signed-off-by: Damien Le Moal 
>> ---
>>  block/blk-lib.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/blk-lib.c b/block/blk-lib.c
>> index 3fe0aec90597..ccf22dba21f0 100644
>> --- a/block/blk-lib.c
>> +++ b/block/blk-lib.c
>> @@ -269,9 +269,9 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
>>   */
>>  static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
>>  {
>> -sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
>> +u64 bytes = ((u64)nr_sects << 9) + PAGE_SIZE - 1;
>>  
>> -return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
>> +return min(bytes >> PAGE_SHIFT, (u64)BIO_MAX_PAGES);
>>  }
>>  
> 
> It's OK, but 64-bit arithmetic is not needed here if all we need is to
> shift the value right. Here I submit a simplified patch using the macro
> DIV_ROUND_UP_SECTOR_T (the macro gets optimized to just an addition and
> a right shift).
> 
> 
> 
> From: Mikulas Patocka 
> 
> Fix possible integer overflow in __blkdev_sectors_to_bio_pages if sector_t 
> is 32-bit.
> 
> Signed-off-by: Mikulas Patocka 
> Fixes: 615d22a51c04 ("block: Fix __blkdev_issue_zeroout loop")
> 
> ---
>  block/blk-lib.c |4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6/block/blk-lib.c
> ===================================================================
> --- linux-2.6.orig/block/blk-lib.c
> +++ linux-2.6/block/blk-lib.c
> @@ -269,9 +269,9 @@ static int __blkdev_issue_write_zeroes(s
>   */
>  static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
>  {
> - sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
> + sector_t pages = DIV_ROUND_UP_SECTOR_T(nr_sects, PAGE_SIZE / 512);
>  
> - return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
> + return min(pages, (sector_t)BIO_MAX_PAGES);
>  }
>  
>  /**
> 

Nice! Thank you.

Reviewed-by: Damien Le Moal 

-- 
Damien Le Moal,
Western Digital

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] [PATCH] dm-crypt: limit the number of allocated pages

2017-08-14 Thread Mikulas Patocka


On Tue, 15 Aug 2017, Tom Yan wrote:

> Just tested the patch with kernel 4.12.6. Well, it sort of worked. No
> more OOM or kernel panic. Memory usage is around 250M on a machine
> with 8G RAM. However, I keep getting this:
> 
> Aug 15 04:04:10 archlinux kernel: INFO: task blkdiscard:538 blocked
> for more than 120 seconds.
> Aug 15 04:04:10 archlinux kernel:   Tainted: P C O
> 4.12.6-1-ARCH #1
> Aug 15 04:04:10 archlinux kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Aug 15 04:04:10 archlinux kernel: blkdiscard  D0   538537 
> 0x
> Aug 15 04:04:10 archlinux kernel: Call Trace:
> Aug 15 04:04:10 archlinux kernel:  __schedule+0x236/0x870
> Aug 15 04:04:10 archlinux kernel:  schedule+0x3d/0x90
> Aug 15 04:04:10 archlinux kernel:  schedule_timeout+0x21f/0x330
> Aug 15 04:04:10 archlinux kernel:  io_schedule_timeout+0x1e/0x50
> Aug 15 04:04:10 archlinux kernel:  ? io_schedule_timeout+0x1e/0x50
> Aug 15 04:04:10 archlinux kernel:  wait_for_completion_io+0xa5/0x120
> Aug 15 04:04:10 archlinux kernel:  ? wake_up_q+0x80/0x80
> Aug 15 04:04:10 archlinux kernel:  submit_bio_wait+0x68/0x90
> Aug 15 04:04:10 archlinux kernel:  blkdev_issue_zeroout+0x80/0xc0
> Aug 15 04:04:10 archlinux kernel:  blkdev_ioctl+0x707/0x940
> Aug 15 04:04:10 archlinux kernel:  ? blkdev_ioctl+0x707/0x940
> Aug 15 04:04:10 archlinux kernel:  block_ioctl+0x3d/0x50
> Aug 15 04:04:10 archlinux kernel:  do_vfs_ioctl+0xa5/0x600
> Aug 15 04:04:10 archlinux kernel:  ? SYSC_newfstat+0x44/0x70
> Aug 15 04:04:10 archlinux kernel:  ? getrawmonotonic64+0x36/0xc0
> Aug 15 04:04:10 archlinux kernel:  SyS_ioctl+0x79/0x90
> Aug 15 04:04:10 archlinux kernel:  entry_SYSCALL_64_fastpath+0x1a/0xa5
> Aug 15 04:04:10 archlinux kernel: RIP: 0033:0x7f2b463378b7
> Aug 15 04:04:10 archlinux kernel: RSP: 002b:7fffb2dad8b8 EFLAGS:
> 0246 ORIG_RAX: 0010
> Aug 15 04:04:10 archlinux kernel: RAX: ffda RBX:
> 00568a3922c8 RCX: 7f2b463378b7
> Aug 15 04:04:10 archlinux kernel: RDX: 7fffb2dad910 RSI:
> 127f RDI: 0003
> Aug 15 04:04:10 archlinux kernel: RBP:  R08:
> 0200 R09: 
> Aug 15 04:04:10 archlinux kernel: R10: 7fffb2dad870 R11:
> 0246 R12: 
> Aug 15 04:04:10 archlinux kernel: R13: 0003 R14:
> 7fffb2dadae8 R15: 
> 
> which I do not get if I do `blkdiscard -z` on the underlying device
> (which does not support SCSI WRITE SAME) instead of the dm-crypt
> container.

It happens because encryption is too slow: it takes more than 120 seconds 
to clear the device. I don't know what else it should do. It is not easy 
to interrupt the operation, because you would then have to deal with 
dangling bios.

> In the first trial I got some more lower-level errors. blkdiscard
> could not exit after the job was seemingly finished (no more writes
> according to iostat). The container could not be closed either, so I
> had to just disconnect the drive. I could not reproduce it in the
> second trial though, so I am not sure if it was just a coincidental
> hardware hiccup. My systemd journal happened to be broken as well, so
> the log of that trial was lost.

So, it's probably a bug in error handling in the underlying device (or in 
dm-crypt). Was the device that experienced errors the same as the root 
device of the system?

> I also have doubts about the approach. Can't we split the bio chain as
> it was chained and allocate memory bio by bio, and, if that's not
> enough, also limit the memory allocation with a maximum number
> (arbitrary or not) of bios?

The number of in-flight bios could also be limited in next_bio (because if 
you have really small memory and a large device, the memory could be 
exhausted by the allocation of the bio structures). But this is not your case.

Note that no pages are allocated in the function that does the zeroing 
(__blkdev_issue_zeroout). The function creates a large number of bios, and 
all the bios point to the same page containing all zeroes. The memory 
allocation happens when dm-crypt attempts to allocate the pages that hold 
the encrypted data.

There are other cases where dm-crypt can cause memory exhaustion, so the 
fix in dm-crypt is appropriate. Another case where it blows up the memory 
is when you create a device that has an odd number of sectors (so a block 
size of 512 is used), use dm-crypt on this device, and write to the 
encrypted device through the block device's page cache. In this case, 
dm-crypt receives a large number of 512-byte bios, allocates a full 4k 
page for each bio, and so it also exhausts memory. This patch fixes this 
problem as well.
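The arithmetic behind both failure modes is easy to check. A minimal userspace sketch (the one-page-per-incoming-bio behaviour is taken from the description above; PAGE_SIZE and the function names are illustrative assumptions):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Pages dm-crypt must allocate to encrypt a device of dev_bytes when each
 * incoming bio carries bio_bytes of payload: one full page per bio, even
 * when the bio carries only 512 bytes (the odd-sized-device case). */
static uint64_t dm_crypt_pages_needed(uint64_t dev_bytes, uint32_t bio_bytes)
{
	return dev_bytes / bio_bytes;	/* one page allocated per incoming bio */
}

/* Bytes actually allocated for those pages. */
static uint64_t dm_crypt_bytes_allocated(uint64_t dev_bytes, uint32_t bio_bytes)
{
	return dm_crypt_pages_needed(dev_bytes, bio_bytes) * PAGE_SIZE;
}
```

With 4k bios the allocation equals the device size (the "blkdiscard -z" case); with 512-byte bios it is eight times the device size.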

> On 14 August 2017 at 10:45, Mikulas Patocka  wrote:
> > dm-crypt consumes an excessive amount of memory when the user attempts to zero
> > a dm-crypt device with "blkdiscard -z". The command "blkdiscard -z" calls
> > the BLKZEROOUT ioctl, it goes to the function 

Re: [dm-devel] [PATCH] dm-crypt: limit the number of allocated pages

2017-08-14 Thread Mikulas Patocka


On Mon, 14 Aug 2017, John Stoffel wrote:

> > "Mikulas" == Mikulas Patocka  writes:
> 
> Mikulas> dm-crypt consumes an excessive amount of memory when the user
> Mikulas> attempts to zero a dm-crypt device with "blkdiscard -z". The
> Mikulas> command "blkdiscard -z" calls the BLKZEROOUT ioctl, which goes to
> Mikulas> the function __blkdev_issue_zeroout; __blkdev_issue_zeroout sends
> Mikulas> a large amount of write bios that contain the zero page as their
> Mikulas> payload.
> 
> Mikulas> For each incoming page, dm-crypt allocates another page that holds
> Mikulas> the encrypted data, so when processing "blkdiscard -z", dm-crypt
> Mikulas> tries to allocate an amount of memory equal to the size of the
> Mikulas> device. This can trigger the OOM killer or cause a system crash.
> 
> Mikulas> This patch fixes the bug by limiting the amount of memory that
> Mikulas> dm-crypt allocates to 2% of total system memory.
> 
> Is this limit per-device or system-wide?  It's not clear to me whether
> the minimum is just one page or more, so it might be nice to clarify
> that.

The limit is system-wide. The limit is divided by the number of active 
dm-crypt devices and each device receives an equal share.

There is a lower bound of BIO_MAX_PAGES * 16. The per-device limit is 
never lower than this.
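As a rough userspace sketch of the computation just described (BIO_MAX_PAGES = 256 matches kernels of that era; the function name is hypothetical):

```c
#include <assert.h>

#define BIO_MAX_PAGES 256
#define DM_CRYPT_MEMORY_PERCENT 2
#define DM_CRYPT_MIN_PAGES_PER_CLIENT (BIO_MAX_PAGES * 16)

/* Model of crypt_calculate_pages_per_client(): 2% of low memory is split
 * evenly among active dm-crypt devices, floored at 16 full bios. */
static unsigned long pages_per_client(unsigned long total_pages,
				      unsigned int n_clients)
{
	unsigned long pages = total_pages * DM_CRYPT_MEMORY_PERCENT / 100;

	if (!n_clients)
		return 0;	/* no active clients; nothing to divide */
	pages /= n_clients;
	if (pages < DM_CRYPT_MIN_PAGES_PER_CLIENT)
		pages = DM_CRYPT_MIN_PAGES_PER_CLIENT;
	return pages;
}
```

For an 8G machine (2097152 4k pages), one device gets 41943 pages (~164M); with a hundred devices the per-device floor of 4096 pages kicks in.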

> Also, for large memory systems, that's still a lot of pages.  Maybe it
> should be more exponential in its clamping as memory sizes go up?  2%
> of 2T is 40G, which is pretty darn big...

The more we restrict it, the less parallelism is used in dm-crypt, so it 
makes sense to have a big value on machines with many cores.

> Mikulas> Signed-off-by: Mikulas Patocka 
> Mikulas> Cc: sta...@vger.kernel.org
> 
> Mikulas> ---
> Mikulas>  drivers/md/dm-crypt.c |   60 +-
> Mikulas>  1 file changed, 59 insertions(+), 1 deletion(-)
> 
> Mikulas> Index: linux-2.6/drivers/md/dm-crypt.c
> Mikulas> ===================================================================
> Mikulas> --- linux-2.6.orig/drivers/md/dm-crypt.c
> Mikulas> +++ linux-2.6/drivers/md/dm-crypt.c
> Mikulas> @@ -148,6 +148,8 @@ struct crypt_config {
> Mikulas>  mempool_t *tag_pool;
> Mikulas>  unsigned tag_pool_max_sectors;
>  
> Mikulas> +struct percpu_counter n_allocated_pages;
> Mikulas> +
> Mikulas>  struct bio_set *bs;
> Mikulas>  struct mutex bio_alloc_lock;
>  
> Mikulas> @@ -219,6 +221,12 @@ struct crypt_config {
> Mikulas>  #define MAX_TAG_SIZE480
> Mikulas>  #define POOL_ENTRY_SIZE 512
>  
> Mikulas> +static DEFINE_SPINLOCK(dm_crypt_clients_lock);
> Mikulas> +static unsigned dm_crypt_clients_n = 0;
> Mikulas> +static volatile unsigned long dm_crypt_pages_per_client;
> Mikulas> +#define DM_CRYPT_MEMORY_PERCENT 2
> Mikulas> +#define DM_CRYPT_MIN_PAGES_PER_CLIENT   (BIO_MAX_PAGES * 16)
> Mikulas> +
> Mikulas>  static void clone_init(struct dm_crypt_io *, struct bio *);
> Mikulas>  static void kcryptd_queue_crypt(struct dm_crypt_io *io);
> Mikulas>  static struct scatterlist *crypt_get_sg_data(struct crypt_config *cc,
> Mikulas> @@ -2158,6 +2166,37 @@ static int crypt_wipe_key(struct crypt_c
> Mikulas>  return r;
> Mikulas>  }
>  
> Mikulas> +static void crypt_calculate_pages_per_client(void)
> Mikulas> +{
> Mikulas> +unsigned long pages = (totalram_pages - totalhigh_pages) * DM_CRYPT_MEMORY_PERCENT / 100;
> Mikulas> +if (!dm_crypt_clients_n)
> Mikulas> +return;
> Mikulas> +pages /= dm_crypt_clients_n;
> 
> Would it make sense to use a shift here, or does this only run once on
> setup?  And shouldn't you return a value here, if only to make it
> clear what you're defaulting to?  

This piece of code is executed only when a dm-crypt device is created or 
deactivated, so performance doesn't matter.

> Mikulas> +if (pages < DM_CRYPT_MIN_PAGES_PER_CLIENT)
> Mikulas> +pages = DM_CRYPT_MIN_PAGES_PER_CLIENT;
> Mikulas> +dm_crypt_pages_per_client = pages;
> Mikulas> +}
> Mikulas> +
> Mikulas> +static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
> Mikulas> +{
> Mikulas> +struct crypt_config *cc = pool_data;
> Mikulas> +struct page *page;
> Mikulas> +if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
> Mikulas> +likely(gfp_mask & __GFP_NORETRY))
> Mikulas> +return NULL;
> Mikulas> +page = alloc_page(gfp_mask);
> Mikulas> +if (likely(page != NULL))
> Mikulas> +percpu_counter_add(&cc->n_allocated_pages, 1);
> Mikulas> +return page;
> Mikulas> +}
> Mikulas> +
> Mikulas> +static void crypt_page_free(void *page, void *pool_data)
> Mikulas> +{
> Mikulas> +struct crypt_config *cc = pool_data;
> Mikulas> +__free_page(page);
> Mikulas> +percpu_counter_sub(&cc->n_allocated_pages, 1);
> Mikulas> +}
> Mikulas> +
> Mikulas>  static void crypt_dtr(struct dm_target 

[dm-devel] [PATCH] fix an integer overflow in __blkdev_sectors_to_bio_pages

2017-08-14 Thread Mikulas Patocka


On Mon, 14 Aug 2017, Damien Le Moal wrote:

> On Sun, 2017-08-13 at 22:47 -0400, Mikulas Patocka wrote:
> > 
> > On Wed, 9 Aug 2017, h...@lst.de wrote:
> > 
> > > Does commit 615d22a51c04856efe62af6e1d5b450aaf5cc2c0
> > > "block: Fix __blkdev_issue_zeroout loop" fix the issue for you?
> > > 
> > > --
> > > dm-devel mailing list
> > > dm-devel@redhat.com
> > > https://www.redhat.com/mailman/listinfo/dm-devel
> > 
> > I think that patch is incorrect. sector_t may be a 32-bit type and 
> > nr_sects << 9 may overflow.
> > 
> > static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
> > {
> >sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
> > 
> >return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
> > }
> > 
> > Mikulas
> 
> Mikulas,
> 
> Does the following patch fix the problem?
> 
> From 947b3cf41e759b2b23f684e215e651d0c8037f88 Mon Sep 17 00:00:00 2001
> From: Damien Le Moal 
> Date: Mon, 14 Aug 2017 13:01:16 +0900
> Subject: [PATCH] block: Fix __blkdev_sectors_to_bio_pages()
> 
> On 32-bit systems, where sector_t is a 32-bit type, the calculation of
> bytes may overflow. Use the u64 type for the local calculation to avoid
> overflows.
> 
> Signed-off-by: Damien Le Moal 
> ---
>  block/blk-lib.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index 3fe0aec90597..ccf22dba21f0 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -269,9 +269,9 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
>   */
>  static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
>  {
> - sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
> + u64 bytes = ((u64)nr_sects << 9) + PAGE_SIZE - 1;
>  
> - return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
> + return min(bytes >> PAGE_SHIFT, (u64)BIO_MAX_PAGES);
>  }
>  

It's OK, but 64-bit arithmetic is not needed here if all we need is to
shift the value right. Here I submit a simplified patch using the macro
DIV_ROUND_UP_SECTOR_T (the macro gets optimized to just an addition and
a right shift).
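To illustrate why no 64-bit arithmetic is needed: with a 32-bit sector_t, rounding sectors up to pages can be done without ever forming a byte count. A minimal userspace model (PAGE_SIZE and BIO_MAX_PAGES values and the helper name are illustrative assumptions, not the kernel macro itself):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define BIO_MAX_PAGES 256u

/* 32-bit sector_t model: round nr_sects up to whole pages without ever
 * forming a byte count, so nothing can overflow.  Since PAGE_SIZE / 512
 * is a power of two, the division compiles to an add and a right shift. */
static uint32_t sectors_to_bio_pages_32(uint32_t nr_sects)
{
	uint32_t per_page = PAGE_SIZE / 512;	/* 8 sectors per page */
	uint32_t pages = nr_sects / per_page + (nr_sects % per_page != 0);

	return pages < BIO_MAX_PAGES ? pages : BIO_MAX_PAGES;
}
```

The div/mod form is used here because the usual (n + d - 1) / d could itself wrap for nr_sects at the very top of the 32-bit range.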



From: Mikulas Patocka 

Fix possible integer overflow in __blkdev_sectors_to_bio_pages if sector_t 
is 32-bit.

Signed-off-by: Mikulas Patocka 
Fixes: 615d22a51c04 ("block: Fix __blkdev_issue_zeroout loop")

---
 block/blk-lib.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6/block/blk-lib.c
===================================================================
--- linux-2.6.orig/block/blk-lib.c
+++ linux-2.6/block/blk-lib.c
@@ -269,9 +269,9 @@ static int __blkdev_issue_write_zeroes(s
  */
 static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
 {
-   sector_t bytes = (nr_sects << 9) + PAGE_SIZE - 1;
+   sector_t pages = DIV_ROUND_UP_SECTOR_T(nr_sects, PAGE_SIZE / 512);
 
-   return min(bytes >> PAGE_SHIFT, (sector_t)BIO_MAX_PAGES);
+   return min(pages, (sector_t)BIO_MAX_PAGES);
 }
 
 /**



[dm-devel] [PATCH 1/3] kpartx/devmapper.h: remove dm_no_partitions

2017-08-14 Thread Martin Wilck
This removes the declaration of the deleted function from the header file.

Fixes: 2ea69fc9 'kpartx: remove "no_partitions" support'
Signed-off-by: Martin Wilck 
---
 kpartx/devmapper.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kpartx/devmapper.h b/kpartx/devmapper.h
index 2e28c780..f00373e0 100644
--- a/kpartx/devmapper.h
+++ b/kpartx/devmapper.h
@@ -19,6 +19,5 @@ dev_t dm_get_first_dep(char *devname);
 char * dm_mapuuid(const char *mapname);
 int dm_devn (const char * mapname, int *major, int *minor);
 int dm_remove_partmaps (char * mapname, char *uuid, dev_t devt, int verbose);
-int dm_no_partitions(char * mapname);
 
 #endif /* _KPARTX_DEVMAPPER_H */
-- 
2.14.0



[dm-devel] [PATCH 2/3] libmultipath: cli_add_map: Use CMD_NONE

2017-08-14 Thread Martin Wilck
Commands issued from the daemon itself should always pass
CMD_NONE to coalesce_paths. Otherwise, the daemon mistakenly
tries to talk to itself via the socket, causing deadlock.

Reproduce simply by calling "multipathd add map $X".

Fixes: d19936f4 "libmultipath: Do not access 'conf->cmd' in domap()"
Signed-off-by: Martin Wilck 
---
 multipathd/cli_handlers.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/multipathd/cli_handlers.c b/multipathd/cli_handlers.c
index b4a95e37..f5ba4cfb 100644
--- a/multipathd/cli_handlers.c
+++ b/multipathd/cli_handlers.c
@@ -782,7 +782,7 @@ cli_add_map (void * v, char ** reply, int * len, void * data)
 vecs->pathvec, &refwwid);
if (refwwid) {
if (coalesce_paths(vecs, NULL, refwwid,
-  FORCE_RELOAD_NONE, 1))
+  FORCE_RELOAD_NONE, CMD_NONE))
condlog(2, "%s: coalesce_paths failed",
param);
dm_lib_release();
-- 
2.14.0



[dm-devel] [PATCH 0/3] some minor multipath-tools fixes

2017-08-14 Thread Martin Wilck
Fix some glitches that were discovered recently here at SUSE.

Martin Wilck (3):
  kpartx/devmapper.h: remove dm_no_partitions
  libmultipath: cli_add_map: Use CMD_NONE
  multipath-tools: link internal libraries before foreigns

 kpartx/devmapper.h| 1 -
 mpathpersist/Makefile | 4 ++--
 multipath/Makefile| 4 ++--
 multipathd/Makefile   | 4 ++--
 multipathd/cli_handlers.c | 2 +-
 5 files changed, 7 insertions(+), 8 deletions(-)

-- 
2.14.0



[dm-devel] [PATCH 3/3] multipath-tools: link internal libraries before foreigns

2017-08-14 Thread Martin Wilck
Otherwise, the runtime linker may resolve foreign symbols instead of
internal ones for certain symbol names (observed with xfree()
from libreadline).

Reported-by: nikola.pajkov...@suse.com
Signed-off-by: Martin Wilck 
---
 mpathpersist/Makefile | 4 ++--
 multipath/Makefile| 4 ++--
 multipathd/Makefile   | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mpathpersist/Makefile b/mpathpersist/Makefile
index bd1c0df9..6e5acd3b 100644
--- a/mpathpersist/Makefile
+++ b/mpathpersist/Makefile
@@ -3,8 +3,8 @@ include ../Makefile.inc
 CFLAGS += $(BIN_CFLAGS) -I$(multipathdir) -I$(mpathpersistdir)
 LDFLAGS += $(BIN_LDFLAGS)
 
-LIBDEPS += -lpthread -ldevmapper -L$(mpathpersistdir) -lmpathpersist \
-  -L$(multipathdir) -L$(mpathcmddir) -lmpathcmd -lmultipath -ludev
+LIBDEPS += -L$(mpathpersistdir) -lmpathpersist -L$(multipathdir) -lmultipath \
+   -L$(mpathcmddir) -lmpathcmd -lpthread -ldevmapper -ludev
 
 EXEC = mpathpersist
 
diff --git a/multipath/Makefile b/multipath/Makefile
index c85314e3..468c056d 100644
--- a/multipath/Makefile
+++ b/multipath/Makefile
@@ -5,8 +5,8 @@ include ../Makefile.inc
 
 CFLAGS += $(BIN_CFLAGS) -I$(multipathdir) -I$(mpathcmddir)
 LDFLAGS += $(BIN_LDFLAGS)
-LIBDEPS += -lpthread -ldevmapper -ldl -L$(multipathdir) -lmultipath -ludev \
-  -L$(mpathcmddir) -lmpathcmd
+LIBDEPS += -L$(multipathdir) -lmultipath -L$(mpathcmddir) -lmpathcmd \
+   -lpthread -ldevmapper -ldl -ludev
 
 EXEC = multipath
 
diff --git a/multipathd/Makefile b/multipathd/Makefile
index d5782a18..e6f140bf 100644
--- a/multipathd/Makefile
+++ b/multipathd/Makefile
@@ -9,8 +9,8 @@ include ../Makefile.inc
 CFLAGS += $(BIN_CFLAGS) -I$(multipathdir) -I$(mpathpersistdir) \
  -I$(mpathcmddir) -I$(thirdpartydir)
 LDFLAGS += $(BIN_LDFLAGS)
-LIBDEPS += -ludev -ldl -L$(multipathdir) -lmultipath -L$(mpathpersistdir) \
-  -lmpathpersist -L$(mpathcmddir) -lmpathcmd -lurcu -lpthread \
+LIBDEPS += -L$(multipathdir) -lmultipath -L$(mpathpersistdir) -lmpathpersist \
+  -L$(mpathcmddir) -lmpathcmd -ludev -ldl -lurcu -lpthread \
   -ldevmapper -lreadline
 
 ifdef SYSTEMD
-- 
2.14.0



Re: [dm-devel] [PATCH] dm-crypt: limit the number of allocated pages

2017-08-14 Thread John Stoffel
> "Mikulas" == Mikulas Patocka  writes:

Mikulas> dm-crypt consumes an excessive amount of memory when the user
Mikulas> attempts to zero a dm-crypt device with "blkdiscard -z". The command
Mikulas> "blkdiscard -z" calls the BLKZEROOUT ioctl, which goes to the function
Mikulas> __blkdev_issue_zeroout; __blkdev_issue_zeroout sends a large amount of
Mikulas> write bios that contain the zero page as their payload.

Mikulas> For each incoming page, dm-crypt allocates another page that holds the
Mikulas> encrypted data, so when processing "blkdiscard -z", dm-crypt tries to
Mikulas> allocate an amount of memory equal to the size of the device.
Mikulas> This can trigger the OOM killer or cause a system crash.

Mikulas> This patch fixes the bug by limiting the amount of memory that dm-crypt
Mikulas> allocates to 2% of total system memory.

Is this limit per-device or system-wide?  It's not clear to me whether
the minimum is just one page or more, so it might be nice to clarify
that.

Also, for large memory systems, that's still a lot of pages.  Maybe it
should be more exponential in its clamping as memory sizes go up?  2%
of 2T is 40G, which is pretty darn big...


Mikulas> Signed-off-by: Mikulas Patocka 
Mikulas> Cc: sta...@vger.kernel.org

Mikulas> ---
Mikulas>  drivers/md/dm-crypt.c |   60 +-
Mikulas>  1 file changed, 59 insertions(+), 1 deletion(-)

Mikulas> Index: linux-2.6/drivers/md/dm-crypt.c
Mikulas> ===================================================================
Mikulas> --- linux-2.6.orig/drivers/md/dm-crypt.c
Mikulas> +++ linux-2.6/drivers/md/dm-crypt.c
Mikulas> @@ -148,6 +148,8 @@ struct crypt_config {
Mikulas>mempool_t *tag_pool;
Mikulas>unsigned tag_pool_max_sectors;
 
Mikulas> +  struct percpu_counter n_allocated_pages;
Mikulas> +
Mikulas>struct bio_set *bs;
Mikulas>struct mutex bio_alloc_lock;
 
Mikulas> @@ -219,6 +221,12 @@ struct crypt_config {
Mikulas>  #define MAX_TAG_SIZE  480
Mikulas>  #define POOL_ENTRY_SIZE   512
 
Mikulas> +static DEFINE_SPINLOCK(dm_crypt_clients_lock);
Mikulas> +static unsigned dm_crypt_clients_n = 0;
Mikulas> +static volatile unsigned long dm_crypt_pages_per_client;
Mikulas> +#define DM_CRYPT_MEMORY_PERCENT   2
Mikulas> +#define DM_CRYPT_MIN_PAGES_PER_CLIENT (BIO_MAX_PAGES * 16)
Mikulas> +
Mikulas>  static void clone_init(struct dm_crypt_io *, struct bio *);
Mikulas>  static void kcryptd_queue_crypt(struct dm_crypt_io *io);
Mikulas>  static struct scatterlist *crypt_get_sg_data(struct crypt_config *cc,
Mikulas> @@ -2158,6 +2166,37 @@ static int crypt_wipe_key(struct crypt_c
Mikulas>return r;
Mikulas>  }
 
Mikulas> +static void crypt_calculate_pages_per_client(void)
Mikulas> +{
Mikulas> +  unsigned long pages = (totalram_pages - totalhigh_pages) * DM_CRYPT_MEMORY_PERCENT / 100;
Mikulas> +  if (!dm_crypt_clients_n)
Mikulas> +  return;
Mikulas> +  pages /= dm_crypt_clients_n;

Would it make sense to use a shift here, or does this only run once on
setup?  And shouldn't you return a value here, if only to make it
clear what you're defaulting to?  

Mikulas> +  if (pages < DM_CRYPT_MIN_PAGES_PER_CLIENT)
Mikulas> +  pages = DM_CRYPT_MIN_PAGES_PER_CLIENT;
Mikulas> +  dm_crypt_pages_per_client = pages;
Mikulas> +}
Mikulas> +
Mikulas> +static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
Mikulas> +{
Mikulas> +  struct crypt_config *cc = pool_data;
Mikulas> +  struct page *page;
Mikulas> +  if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
Mikulas> +  likely(gfp_mask & __GFP_NORETRY))
Mikulas> +  return NULL;
Mikulas> +  page = alloc_page(gfp_mask);
Mikulas> +  if (likely(page != NULL))
Mikulas> +  percpu_counter_add(&cc->n_allocated_pages, 1);
Mikulas> +  return page;
Mikulas> +}
Mikulas> +
Mikulas> +static void crypt_page_free(void *page, void *pool_data)
Mikulas> +{
Mikulas> +  struct crypt_config *cc = pool_data;
Mikulas> +  __free_page(page);
Mikulas> +  percpu_counter_sub(&cc->n_allocated_pages, 1);
Mikulas> +}
Mikulas> +
Mikulas>  static void crypt_dtr(struct dm_target *ti)
Mikulas>  {
Mikulas>struct crypt_config *cc = ti->private;
Mikulas> @@ -2184,6 +2223,10 @@ static void crypt_dtr(struct dm_target *
Mikulas>mempool_destroy(cc->req_pool);
Mikulas>mempool_destroy(cc->tag_pool);
 
Mikulas> +  if (cc->page_pool)
Mikulas> +  WARN_ON(percpu_counter_sum(&cc->n_allocated_pages) != 0);
Mikulas> +  percpu_counter_destroy(&cc->n_allocated_pages);
Mikulas> +
Mikulas>if (cc->iv_gen_ops && cc->iv_gen_ops->dtr)
Mikulas>cc->iv_gen_ops->dtr(cc);
 
Mikulas> @@ -2198,6 +2241,12 @@ static void crypt_dtr(struct dm_target *
 
Mikulas>/* Must zero key material before freeing */
Mikulas>kzfree(cc);
Mikulas> +
Mikulas> + 

[dm-devel] [PATCH v5 08/19] crypto: move drbg to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
DRBG starts an async crypto op and waits for it to complete.
Move it over to generic code doing the same.

The code now also passes the CRYPTO_TFM_REQ_MAY_SLEEP flag, indicating
that crypto request memory allocation may use GFP_KERNEL, which should
be perfectly fine as the code is obviously sleeping for the completion
of the request anyway.
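For readers unfamiliar with the generic helpers, here is a hypothetical single-threaded userspace model of the pattern this series converts drivers to (the real kernel crypto_req_done() takes a struct crypto_async_request and the waiter sleeps on a struct completion; this sketch only models the control flow):

```c
#include <assert.h>

#define EINPROGRESS 115
#define EBUSY 16

/* Simplified stand-in for the kernel's struct crypto_wait: a completion
 * plus the final error code of the async operation. */
struct crypto_wait {
	int done;
	int err;
};

static void crypto_init_wait(struct crypto_wait *wait)
{
	wait->done = 0;
	wait->err = 0;
}

/* Completion callback attached to the request: ignores the interim
 * -EINPROGRESS notification, records the final status, signals done. */
static void crypto_req_done(struct crypto_wait *wait, int err)
{
	if (err == -EINPROGRESS)
		return;		/* op still running asynchronously */
	wait->err = err;
	wait->done = 1;
}

/* Collapse the sync and async return paths into one synchronous result,
 * the way the converted callers consume crypto_skcipher_encrypt(). */
static int crypto_wait_req(int ret, struct crypto_wait *wait)
{
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		/* the kernel sleeps on the completion here; in this
		 * single-threaded model the callback has already fired */
		assert(wait->done);
		ret = wait->err;
	}
	return ret;
}
```

This is exactly the boilerplate (completion struct, callback, EINPROGRESS/EBUSY switch) that each driver used to open-code and that the generic helpers now factor out.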

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/drbg.c | 36 +---
 include/crypto/drbg.h |  3 +--
 2 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 633a88e..c522251 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1651,16 +1651,6 @@ static int drbg_fini_sym_kernel(struct drbg_state *drbg)
return 0;
 }
 
-static void drbg_skcipher_cb(struct crypto_async_request *req, int error)
-{
-   struct drbg_state *drbg = req->data;
-
-   if (error == -EINPROGRESS)
-   return;
-   drbg->ctr_async_err = error;
-   complete(&drbg->ctr_completion);
-}
-
 static int drbg_init_sym_kernel(struct drbg_state *drbg)
 {
struct crypto_cipher *tfm;
@@ -1691,7 +1681,7 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
return PTR_ERR(sk_tfm);
}
drbg->ctr_handle = sk_tfm;
-   init_completion(&drbg->ctr_completion);
+   crypto_init_wait(&drbg->ctr_wait);
 
req = skcipher_request_alloc(sk_tfm, GFP_KERNEL);
if (!req) {
@@ -1700,8 +1690,9 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
return -ENOMEM;
}
drbg->ctr_req = req;
-   skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-   drbg_skcipher_cb, drbg);
+   skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+   CRYPTO_TFM_REQ_MAY_SLEEP,
+   crypto_req_done, &drbg->ctr_wait);
 
alignmask = crypto_skcipher_alignmask(sk_tfm);
drbg->ctr_null_value_buf = kzalloc(DRBG_CTR_NULL_LEN + alignmask,
@@ -1762,21 +1753,12 @@ static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
/* Output buffer may not be valid for SGL, use scratchpad */
 skcipher_request_set_crypt(drbg->ctr_req, &sg_in, &sg_out,
   cryptlen, drbg->V);
-   ret = crypto_skcipher_encrypt(drbg->ctr_req);
-   switch (ret) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&drbg->ctr_completion);
-   if (!drbg->ctr_async_err) {
-   reinit_completion(&drbg->ctr_completion);
-   break;
-   }
-   default:
+   ret = crypto_wait_req(crypto_skcipher_encrypt(drbg->ctr_req),
+   &drbg->ctr_wait);
+   if (ret)
goto out;
-   }
-   init_completion(&drbg->ctr_completion);
+
+   crypto_init_wait(&drbg->ctr_wait);
 
memcpy(outbuf, drbg->outscratchpad, cryptlen);
 
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index 22f884c..8f94110 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -126,8 +126,7 @@ struct drbg_state {
__u8 *ctr_null_value;   /* CTR mode aligned zero buf */
__u8 *outscratchpadbuf; /* CTR mode output scratchpad */
 __u8 *outscratchpad;   /* CTR mode aligned outbuf */
-   struct completion ctr_completion;   /* CTR mode async handler */
-   int ctr_async_err;  /* CTR mode async error */
+   struct crypto_wait ctr_wait;/* CTR mode async wait obj */
 
bool seeded;/* DRBG fully seeded? */
bool pr;/* Prediction resistance enabled? */
-- 
2.1.4



[dm-devel] [PATCH v5 11/19] fscrypt: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
fscrypt starts several async crypto ops and waits for them to
complete. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 fs/crypto/crypto.c  | 28 
 fs/crypto/fname.c   | 36 ++--
 fs/crypto/fscrypt_private.h | 10 --
 fs/crypto/keyinfo.c | 21 +++--
 4 files changed, 13 insertions(+), 82 deletions(-)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index c7835df..80a3cad 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -126,21 +126,6 @@ struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *inode, gfp_t gfp_flags)
 }
 EXPORT_SYMBOL(fscrypt_get_ctx);
 
-/**
- * page_crypt_complete() - completion callback for page crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void page_crypt_complete(struct crypto_async_request *req, int res)
-{
-   struct fscrypt_completion_result *ecr = req->data;
-
-   if (res == -EINPROGRESS)
-   return;
-   ecr->res = res;
-   complete(&ecr->completion);
-}
-
 int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
   u64 lblk_num, struct page *src_page,
   struct page *dest_page, unsigned int len,
@@ -151,7 +136,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
u8 padding[FS_IV_SIZE - sizeof(__le64)];
} iv;
struct skcipher_request *req = NULL;
-   DECLARE_FS_COMPLETION_RESULT(ecr);
+   DECLARE_CRYPTO_WAIT(wait);
struct scatterlist dst, src;
struct fscrypt_info *ci = inode->i_crypt_info;
struct crypto_skcipher *tfm = ci->ci_ctfm;
@@ -179,7 +164,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 
skcipher_request_set_callback(
req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-   page_crypt_complete, &ecr);
+   crypto_req_done, &wait);
 
 sg_init_table(&dst, 1);
 sg_set_page(&dst, dest_page, len, offs);
@@ -187,14 +172,9 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 sg_set_page(&src, src_page, len, offs);
 skcipher_request_set_crypt(req, &src, &dst, len, &iv);
if (rw == FS_DECRYPT)
-   res = crypto_skcipher_decrypt(req);
+   res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
else
-   res = crypto_skcipher_encrypt(req);
-   if (res == -EINPROGRESS || res == -EBUSY) {
-   BUG_ON(req->base.data != &ecr);
-   wait_for_completion(&ecr.completion);
-   res = ecr.res;
-   }
+   res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res) {
printk_ratelimited(KERN_ERR
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index ad9f814..a80a0d3 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -15,21 +15,6 @@
 #include "fscrypt_private.h"
 
 /**
- * fname_crypt_complete() - completion callback for filename crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void fname_crypt_complete(struct crypto_async_request *req, int res)
-{
-   struct fscrypt_completion_result *ecr = req->data;
-
-   if (res == -EINPROGRESS)
-   return;
-   ecr->res = res;
-   complete(&ecr->completion);
-}
-
-/**
  * fname_encrypt() - encrypt a filename
  *
  * The caller must have allocated sufficient memory for the @oname string.
@@ -40,7 +25,7 @@ static int fname_encrypt(struct inode *inode,
const struct qstr *iname, struct fscrypt_str *oname)
 {
struct skcipher_request *req = NULL;
-   DECLARE_FS_COMPLETION_RESULT(ecr);
+   DECLARE_CRYPTO_WAIT(wait);
struct fscrypt_info *ci = inode->i_crypt_info;
struct crypto_skcipher *tfm = ci->ci_ctfm;
int res = 0;
@@ -76,17 +61,12 @@ static int fname_encrypt(struct inode *inode,
}
skcipher_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-   fname_crypt_complete, &ecr);
+   crypto_req_done, &wait);
sg_init_one(&sg, oname->name, cryptlen);
skcipher_request_set_crypt(req, &sg, &sg, cryptlen, iv);
 
/* Do the encryption */
-   res = crypto_skcipher_encrypt(req);
-   if (res == -EINPROGRESS || res == -EBUSY) {
-   /* Request is being completed asynchronously; wait for it */
-   wait_for_completion(&ecr.completion);
-   res = ecr.res;
-   }
+   res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res < 0) {
printk_ratelimited(KERN_ERR
@@ -110,7 +90,7 @@ static int fname_decrypt(struct inode *inode,

[dm-devel] [PATCH v5 16/19] crypto: talitos: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The talitos driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/talitos.c | 38 +-
 1 file changed, 5 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 79791c6..194a307 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2037,22 +2037,6 @@ static int ahash_import(struct ahash_request *areq, 
const void *in)
return 0;
 }
 
-struct keyhash_result {
-   struct completion completion;
-   int err;
-};
-
-static void keyhash_complete(struct crypto_async_request *req, int err)
-{
-   struct keyhash_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int 
keylen,
   u8 *hash)
 {
@@ -2060,10 +2044,10 @@ static int keyhash(struct crypto_ahash *tfm, const u8 
*key, unsigned int keylen,
 
struct scatterlist sg[1];
struct ahash_request *req;
-   struct keyhash_result hresult;
+   struct crypto_wait wait;
int ret;
 
-   init_completion(&hresult.completion);
+   crypto_init_wait(&wait);
 
req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!req)
@@ -2072,25 +2056,13 @@ static int keyhash(struct crypto_ahash *tfm, const u8 
*key, unsigned int keylen,
/* Keep tfm keylen == 0 during hash of the long key */
ctx->keylen = 0;
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  keyhash_complete, &hresult);
+  crypto_req_done, &wait);
 
sg_init_one(&sg[0], key, keylen);
 
ahash_request_set_crypt(req, sg, hash, keylen);
-   ret = crypto_ahash_digest(req);
-   switch (ret) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   ret = wait_for_completion_interruptible(
-   &hresult.completion);
-   if (!ret)
-   ret = hresult.err;
-   break;
-   default:
-   break;
-   }
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
+
ahash_request_free(req);
 
return ret;
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 12/19] dm: move dm-verity to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
dm-verity starts async crypto ops and waits for them to complete.
Move it over to generic code doing the same.

This also fixes a possible data corruption bug created by the
use of wait_for_completion_interruptible() without dealing
correctly with an interrupt aborting the wait prior to the
async op finishing.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/md/dm-verity-target.c | 81 +++
 drivers/md/dm-verity.h|  5 ---
 2 files changed, 20 insertions(+), 66 deletions(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 79f18d4..8df08a8 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -92,74 +92,33 @@ static sector_t verity_position_at_level(struct dm_verity 
*v, sector_t block,
return block >> (level * v->hash_per_block_bits);
 }
 
-/*
- * Callback function for asynchrnous crypto API completion notification
- */
-static void verity_op_done(struct crypto_async_request *base, int err)
-{
-   struct verity_result *res = (struct verity_result *)base->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
-/*
- * Wait for async crypto API callback
- */
-static inline int verity_complete_op(struct verity_result *res, int ret)
-{
-   switch (ret) {
-   case 0:
-   break;
-
-   case -EINPROGRESS:
-   case -EBUSY:
-   ret = wait_for_completion_interruptible(&res->completion);
-   if (!ret)
-   ret = res->err;
-   reinit_completion(&res->completion);
-   break;
-
-   default:
-   DMERR("verity_wait_hash: crypto op submission failed: %d", ret);
-   }
-
-   if (unlikely(ret < 0))
-   DMERR("verity_wait_hash: crypto op failed: %d", ret);
-
-   return ret;
-}
-
 static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
const u8 *data, size_t len,
-   struct verity_result *res)
+   struct crypto_wait *wait)
 {
struct scatterlist sg;
 
sg_init_one(&sg, data, len);
ahash_request_set_crypt(req, &sg, NULL, len);
 
-   return verity_complete_op(res, crypto_ahash_update(req));
+   return crypto_wait_req(crypto_ahash_update(req), wait);
 }
 
 /*
  * Wrapper for crypto_ahash_init, which handles verity salting.
  */
 static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
-   struct verity_result *res)
+   struct crypto_wait *wait)
 {
int r;
 
ahash_request_set_tfm(req, v->tfm);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
CRYPTO_TFM_REQ_MAY_BACKLOG,
-   verity_op_done, (void *)res);
-   init_completion(&res->completion);
+   crypto_req_done, (void *)wait);
+   crypto_init_wait(wait);
 
-   r = verity_complete_op(res, crypto_ahash_init(req));
+   r = crypto_wait_req(crypto_ahash_init(req), wait);
 
if (unlikely(r < 0)) {
DMERR("crypto_ahash_init failed: %d", r);
@@ -167,18 +126,18 @@ static int verity_hash_init(struct dm_verity *v, struct 
ahash_request *req,
}
 
if (likely(v->salt_size && (v->version >= 1)))
-   r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+   r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
 
return r;
 }
 
 static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
-u8 *digest, struct verity_result *res)
+u8 *digest, struct crypto_wait *wait)
 {
int r;
 
if (unlikely(v->salt_size && (!v->version))) {
-   r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+   r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
 
if (r < 0) {
DMERR("verity_hash_final failed updating salt: %d", r);
@@ -187,7 +146,7 @@ static int verity_hash_final(struct dm_verity *v, struct 
ahash_request *req,
}
 
ahash_request_set_crypt(req, NULL, digest, 0);
-   r = verity_complete_op(res, crypto_ahash_final(req));
+   r = crypto_wait_req(crypto_ahash_final(req), wait);
 out:
return r;
 }
@@ -196,17 +155,17 @@ int verity_hash(struct dm_verity *v, struct ahash_request 
*req,
const u8 *data, size_t len, u8 *digest)
 {
int r;
-   struct verity_result res;
+   struct crypto_wait wait;
 
-   r = verity_hash_init(v, req, &res);
+   r = verity_hash_init(v, req, &wait);
if (unlikely(r < 0))
goto out;
 
-   r = verity_hash_update(v, req, data, len, &res);
+   r = verity_hash_update(v, req, 

[dm-devel] [PATCH v5 18/19] crypto: mediatek: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The mediatek driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/mediatek/mtk-aes.c | 31 +--
 1 file changed, 5 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/mediatek/mtk-aes.c 
b/drivers/crypto/mediatek/mtk-aes.c
index 9e845e8..e2c7c95 100644
--- a/drivers/crypto/mediatek/mtk-aes.c
+++ b/drivers/crypto/mediatek/mtk-aes.c
@@ -137,11 +137,6 @@ struct mtk_aes_gcm_ctx {
struct crypto_skcipher *ctr;
 };
 
-struct mtk_aes_gcm_setkey_result {
-   int err;
-   struct completion completion;
-};
-
 struct mtk_aes_drv {
struct list_head dev_list;
/* Device list lock */
@@ -936,17 +931,6 @@ static int mtk_aes_gcm_crypt(struct aead_request *req, u64 
mode)
&req->base);
 }
 
-static void mtk_gcm_setkey_done(struct crypto_async_request *req, int err)
-{
-   struct mtk_aes_gcm_setkey_result *result = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   result->err = err;
-   complete(&result->completion);
-}
-
 /*
  * Because of the hardware limitation, we need to pre-calculate key(H)
  * for the GHASH operation. The result of the encryption operation
@@ -962,7 +946,7 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
u32 hash[4];
u8 iv[8];
 
-   struct mtk_aes_gcm_setkey_result result;
+   struct crypto_wait wait;
 
struct scatterlist sg[1];
struct skcipher_request req;
@@ -1002,22 +986,17 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
if (!data)
return -ENOMEM;
 
-   init_completion(&data->result.completion);
+   crypto_init_wait(&data->wait);
sg_init_one(data->sg, &data->hash, AES_BLOCK_SIZE);
skcipher_request_set_tfm(&data->req, ctr);
skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- mtk_gcm_setkey_done, &data->result);
+ crypto_req_done, &data->wait);
skcipher_request_set_crypt(&data->req, data->sg, data->sg,
   AES_BLOCK_SIZE, data->iv);
 
-   err = crypto_skcipher_encrypt(&data->req);
-   if (err == -EINPROGRESS || err == -EBUSY) {
-   err = wait_for_completion_interruptible(
-   &data->result.completion);
-   if (!err)
-   err = data->result.err;
-   }
+   err = crypto_wait_req(crypto_skcipher_encrypt(&data->req),
+ &data->wait);
if (err)
goto out;
 
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 17/19] crypto: qce: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The qce driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/qce/sha.c | 30 --
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 47e114a..53227d7 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -349,28 +349,12 @@ static int qce_ahash_digest(struct ahash_request *req)
return qce->async_req_enqueue(tmpl->qce, >base);
 }
 
-struct qce_ahash_result {
-   struct completion completion;
-   int error;
-};
-
-static void qce_digest_complete(struct crypto_async_request *req, int error)
-{
-   struct qce_ahash_result *result = req->data;
-
-   if (error == -EINPROGRESS)
-   return;
-
-   result->error = error;
-   complete(&result->completion);
-}
-
 static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 unsigned int keylen)
 {
unsigned int digestsize = crypto_ahash_digestsize(tfm);
struct qce_sha_ctx *ctx = crypto_tfm_ctx(&tfm->base);
-   struct qce_ahash_result result;
+   struct crypto_wait wait;
struct ahash_request *req;
struct scatterlist sg;
unsigned int blocksize;
@@ -405,9 +389,9 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, 
const u8 *key,
goto err_free_ahash;
}
 
-   init_completion(&result.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  qce_digest_complete, &result);
+  crypto_req_done, &wait);
crypto_ahash_clear_flags(ahash_tfm, ~0);
 
buf = kzalloc(keylen + QCE_MAX_ALIGN_SIZE, GFP_KERNEL);
@@ -420,13 +404,7 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, 
const u8 *key,
sg_init_one(&sg, buf, keylen);
ahash_request_set_crypt(req, &sg, ctx->authkey, keylen);
 
-   ret = crypto_ahash_digest(req);
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   ret = wait_for_completion_interruptible(&result.completion);
-   if (!ret)
-   ret = result.error;
-   }
-
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
if (ret)
crypto_ahash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 14/19] ima: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
ima starts several async crypto ops and waits for their completions.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
Acked-by: Mimi Zohar 
---
 security/integrity/ima/ima_crypto.c | 56 +++--
 1 file changed, 17 insertions(+), 39 deletions(-)

diff --git a/security/integrity/ima/ima_crypto.c 
b/security/integrity/ima/ima_crypto.c
index 802d5d2..0e4db1fe 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -27,11 +27,6 @@
 
 #include "ima.h"
 
-struct ahash_completion {
-   struct completion completion;
-   int err;
-};
-
 /* minimum file size for ahash use */
 static unsigned long ima_ahash_minsize;
 module_param_named(ahash_minsize, ima_ahash_minsize, ulong, 0644);
@@ -196,30 +191,13 @@ static void ima_free_atfm(struct crypto_ahash *tfm)
crypto_free_ahash(tfm);
 }
 
-static void ahash_complete(struct crypto_async_request *req, int err)
+static inline int ahash_wait(int err, struct crypto_wait *wait)
 {
-   struct ahash_completion *res = req->data;
 
-   if (err == -EINPROGRESS)
-   return;
-   res->err = err;
-   complete(&res->completion);
-}
+   err = crypto_wait_req(err, wait);
 
-static int ahash_wait(int err, struct ahash_completion *res)
-{
-   switch (err) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&res->completion);
-   reinit_completion(&res->completion);
-   err = res->err;
-   /* fall through */
-   default:
+   if (err)
pr_crit_ratelimited("ahash calculation failed: err: %d\n", err);
-   }
 
return err;
 }
@@ -233,7 +211,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
int rc, read = 0, rbuf_len, active = 0, ahash_rc = 0;
struct ahash_request *req;
struct scatterlist sg[1];
-   struct ahash_completion res;
+   struct crypto_wait wait;
size_t rbuf_size[2];
 
hash->length = crypto_ahash_digestsize(tfm);
@@ -242,12 +220,12 @@ static int ima_calc_file_hash_atfm(struct file *file,
if (!req)
return -ENOMEM;
 
-   init_completion(&res.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
   CRYPTO_TFM_REQ_MAY_SLEEP,
-  ahash_complete, &res);
+  crypto_req_done, &wait);
 
-   rc = ahash_wait(crypto_ahash_init(req), &res);
+   rc = ahash_wait(crypto_ahash_init(req), &wait);
if (rc)
goto out1;
 
@@ -288,7 +266,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 * read/request, wait for the completion of the
 * previous ahash_update() request.
 */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
if (rc)
goto out3;
}
@@ -304,7 +282,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 * read/request, wait for the completion of the
 * previous ahash_update() request.
 */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
if (rc)
goto out3;
}
@@ -318,7 +296,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
active = !active; /* swap buffers, if we use two */
}
/* wait for the last update request to complete */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
 out3:
if (read)
file->f_mode &= ~FMODE_READ;
@@ -327,7 +305,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 out2:
if (!rc) {
ahash_request_set_crypt(req, NULL, hash->digest, 0);
-   rc = ahash_wait(crypto_ahash_final(req), &res);
+   rc = ahash_wait(crypto_ahash_final(req), &wait);
}
 out1:
ahash_request_free(req);
@@ -527,7 +505,7 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t 
len,
 {
struct ahash_request *req;
struct scatterlist sg;
-   struct ahash_completion res;
+   struct crypto_wait wait;
int rc, ahash_rc = 0;
 
hash->length = crypto_ahash_digestsize(tfm);
@@ -536,12 +514,12 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t 
len,
if (!req)
return -ENOMEM;
 
-   init_completion(&res.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
   CRYPTO_TFM_REQ_MAY_SLEEP,
-  ahash_complete, &res);
+  

[dm-devel] [PATCH v5 13/19] cifs: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
cifs starts an async crypto op and waits for its completion.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
Acked-by: Pavel Shilovsky 
---
 fs/cifs/smb2ops.c | 30 --
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index cfacf2c..16fb041 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1878,22 +1878,6 @@ init_sg(struct smb_rqst *rqst, u8 *sign)
return sg;
 }
 
-struct cifs_crypt_result {
-   int err;
-   struct completion completion;
-};
-
-static void cifs_crypt_complete(struct crypto_async_request *req, int err)
-{
-   struct cifs_crypt_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int
 smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 
*key)
 {
@@ -1934,12 +1918,10 @@ crypt_message(struct TCP_Server_Info *server, struct 
smb_rqst *rqst, int enc)
struct aead_request *req;
char *iv;
unsigned int iv_len;
-   struct cifs_crypt_result result = {0, };
+   DECLARE_CRYPTO_WAIT(wait);
struct crypto_aead *tfm;
unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
 
-   init_completion(&result.completion);
-
rc = smb2_get_enc_key(server, tr_hdr->SessionId, enc, key);
if (rc) {
cifs_dbg(VFS, "%s: Could not get %scryption key\n", __func__,
@@ -1999,14 +1981,10 @@ crypt_message(struct TCP_Server_Info *server, struct 
smb_rqst *rqst, int enc)
aead_request_set_ad(req, assoc_data_len);
 
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- cifs_crypt_complete, &result);
+ crypto_req_done, &wait);
 
-   rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
-
-   if (rc == -EINPROGRESS || rc == -EBUSY) {
-   wait_for_completion(&result.completion);
-   rc = result.err;
-   }
+   rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
+   : crypto_aead_decrypt(req), &wait);
 
if (!rc && enc)
memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 10/19] crypto: move testmgr to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
testmgr starts async crypto ops and waits for them to complete.
Move it over to generic code doing the same.

This also provides a test of the generic crypto async. wait code.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/testmgr.c | 204 ++-
 1 file changed, 66 insertions(+), 138 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 7125ba3..a65b4d5 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -76,11 +76,6 @@ int alg_test(const char *driver, const char *alg, u32 type, 
u32 mask)
 #define ENCRYPT 1
 #define DECRYPT 0
 
-struct tcrypt_result {
-   struct completion completion;
-   int err;
-};
-
 struct aead_test_suite {
struct {
const struct aead_testvec *vecs;
@@ -155,17 +150,6 @@ static void hexdump(unsigned char *buf, unsigned int len)
buf, len, false);
 }
 
-static void tcrypt_complete(struct crypto_async_request *req, int err)
-{
-   struct tcrypt_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int testmgr_alloc_buf(char *buf[XBUFSIZE])
 {
int i;
@@ -193,20 +177,10 @@ static void testmgr_free_buf(char *buf[XBUFSIZE])
free_page((unsigned long)buf[i]);
 }
 
-static int wait_async_op(struct tcrypt_result *tr, int ret)
-{
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   wait_for_completion(&tr->completion);
-   reinit_completion(&tr->completion);
-   ret = tr->err;
-   }
-   return ret;
-}
-
 static int ahash_partial_update(struct ahash_request **preq,
struct crypto_ahash *tfm, const struct hash_testvec *template,
void *hash_buff, int k, int temp, struct scatterlist *sg,
-   const char *algo, char *result, struct tcrypt_result *tresult)
+   const char *algo, char *result, struct crypto_wait *wait)
 {
char *state;
struct ahash_request *req;
@@ -236,7 +210,7 @@ static int ahash_partial_update(struct ahash_request **preq,
}
ahash_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG,
-   tcrypt_complete, tresult);
+   crypto_req_done, wait);
 
memcpy(hash_buff, template->plaintext + temp,
template->tap[k]);
@@ -247,7 +221,7 @@ static int ahash_partial_update(struct ahash_request **preq,
pr_err("alg: hash: Failed to import() for %s\n", algo);
goto out;
}
-   ret = wait_async_op(tresult, crypto_ahash_update(req));
+   ret = crypto_wait_req(crypto_ahash_update(req), wait);
if (ret)
goto out;
*preq = req;
@@ -272,7 +246,7 @@ static int __test_hash(struct crypto_ahash *tfm,
char *result;
char *key;
struct ahash_request *req;
-   struct tcrypt_result tresult;
+   struct crypto_wait wait;
void *hash_buff;
char *xbuf[XBUFSIZE];
int ret = -ENOMEM;
@@ -286,7 +260,7 @@ static int __test_hash(struct crypto_ahash *tfm,
if (testmgr_alloc_buf(xbuf))
goto out_nobuf;
 
-   init_completion(&tresult.completion);
+   crypto_init_wait(&wait);
 
req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!req) {
@@ -295,7 +269,7 @@ static int __test_hash(struct crypto_ahash *tfm,
goto out_noreq;
}
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  tcrypt_complete, &tresult);
+  crypto_req_done, &wait);
 
j = 0;
for (i = 0; i < tcount; i++) {
@@ -335,26 +309,26 @@ static int __test_hash(struct crypto_ahash *tfm,
 
ahash_request_set_crypt(req, sg, result, template[i].psize);
if (use_digest) {
-   ret = wait_async_op(&tresult, crypto_ahash_digest(req));
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
if (ret) {
pr_err("alg: hash: digest failed on test %d "
   "for %s: ret=%d\n", j, algo, -ret);
goto out;
}
} else {
-   ret = wait_async_op(&tresult, crypto_ahash_init(req));
+   ret = crypto_wait_req(crypto_ahash_init(req), &wait);
if (ret) {
pr_err("alg: hash: init failed on test %d "
   "for %s: ret=%d\n", j, algo, -ret);
goto out;
}
-   ret = wait_async_op(&tresult, crypto_ahash_update(req));
+   ret = crypto_wait_req(crypto_ahash_update(req), &wait);
if (ret) {
pr_err("alg: hash: update failed on test %d "
  

[dm-devel] [PATCH v5 09/19] crypto: move gcm to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
gcm starts an async crypto op and waits for it to complete.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/gcm.c | 32 ++--
 1 file changed, 6 insertions(+), 26 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index 3841b5e..fb923a5 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -16,7 +16,6 @@
 #include 
 #include 
 #include "internal.h"
-#include <linux/completion.h>
 #include 
 #include 
 #include 
@@ -78,11 +77,6 @@ struct crypto_gcm_req_priv_ctx {
} u;
 };
 
-struct crypto_gcm_setkey_result {
-   int err;
-   struct completion completion;
-};
-
 static struct {
u8 buf[16];
struct scatterlist sg;
@@ -98,17 +92,6 @@ static inline struct crypto_gcm_req_priv_ctx 
*crypto_gcm_reqctx(
return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
 }
 
-static void crypto_gcm_setkey_done(struct crypto_async_request *req, int err)
-{
-   struct crypto_gcm_setkey_result *result = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   result->err = err;
-   complete(&result->completion);
-}
-
 static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 unsigned int keylen)
 {
@@ -119,7 +102,7 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
be128 hash;
u8 iv[16];
 
-   struct crypto_gcm_setkey_result result;
+   struct crypto_wait wait;
 
struct scatterlist sg[1];
struct skcipher_request req;
@@ -140,21 +123,18 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
if (!data)
return -ENOMEM;
 
-   init_completion(&data->result.completion);
+   crypto_init_wait(&data->wait);
sg_init_one(data->sg, &data->hash, sizeof(data->hash));
skcipher_request_set_tfm(&data->req, ctr);
skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- crypto_gcm_setkey_done,
- &data->result);
+ crypto_req_done,
+ &data->wait);
skcipher_request_set_crypt(&data->req, data->sg, data->sg,
   sizeof(data->hash), data->iv);
 
-   err = crypto_skcipher_encrypt(&data->req);
-   if (err == -EINPROGRESS || err == -EBUSY) {
-   wait_for_completion(&data->result.completion);
-   err = data->result.err;
-   }
+   err = crypto_wait_req(crypto_skcipher_encrypt(&data->req),
+   &data->wait);
 
if (err)
goto out;
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 07/19] crypto: move pub key to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
public_key_verify_signature() starts an async crypto op and
waits for it to complete. Move it over to generic code doing
the same.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/asymmetric_keys/public_key.c | 28 
 1 file changed, 4 insertions(+), 24 deletions(-)

diff --git a/crypto/asymmetric_keys/public_key.c 
b/crypto/asymmetric_keys/public_key.c
index 3cd6e12..d916235 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -57,29 +57,13 @@ static void public_key_destroy(void *payload0, void 
*payload3)
public_key_signature_free(payload3);
 }
 
-struct public_key_completion {
-   struct completion completion;
-   int err;
-};
-
-static void public_key_verify_done(struct crypto_async_request *req, int err)
-{
-   struct public_key_completion *compl = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   compl->err = err;
-   complete(&compl->completion);
-}
-
 /*
  * Verify a signature using a public key.
  */
 int public_key_verify_signature(const struct public_key *pkey,
const struct public_key_signature *sig)
 {
-   struct public_key_completion compl;
+   struct crypto_wait cwait;
struct crypto_akcipher *tfm;
struct akcipher_request *req;
struct scatterlist sig_sg, digest_sg;
@@ -131,20 +115,16 @@ int public_key_verify_signature(const struct public_key 
*pkey,
sg_init_one(&digest_sg, output, outlen);
akcipher_request_set_crypt(req, &sig_sg, &digest_sg, sig->s_size,
   outlen);
-   init_completion(&compl.completion);
+   crypto_init_wait(&cwait);
akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
  CRYPTO_TFM_REQ_MAY_SLEEP,
- public_key_verify_done, &compl);
+ crypto_req_done, &cwait);
 
/* Perform the verification calculation.  This doesn't actually do the
 * verification, but rather calculates the hash expected by the
 * signature and returns that to us.
 */
-   ret = crypto_akcipher_verify(req);
-   if ((ret == -EINPROGRESS) || (ret == -EBUSY)) {
-   wait_for_completion(&compl.completion);
-   ret = compl.err;
-   }
+   ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
if (ret < 0)
goto out_free_output;
 
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 06/19] crypto: move algif to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
algif starts several async crypto ops and waits for their completion.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/af_alg.c | 27 ---
 crypto/algif_aead.c |  8 
 crypto/algif_hash.c | 30 ++
 crypto/algif_skcipher.c |  9 -
 include/crypto/if_alg.h | 15 +--
 5 files changed, 23 insertions(+), 66 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index d6936c0..f8917e7 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -481,33 +481,6 @@ int af_alg_cmsg_send(struct msghdr *msg, struct 
af_alg_control *con)
 }
 EXPORT_SYMBOL_GPL(af_alg_cmsg_send);
 
-int af_alg_wait_for_completion(int err, struct af_alg_completion *completion)
-{
-   switch (err) {
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&completion->completion);
-   reinit_completion(&completion->completion);
-   err = completion->err;
-   break;
-   };
-
-   return err;
-}
-EXPORT_SYMBOL_GPL(af_alg_wait_for_completion);
-
-void af_alg_complete(struct crypto_async_request *req, int err)
-{
-   struct af_alg_completion *completion = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   completion->err = err;
-   complete(&completion->completion);
-}
-EXPORT_SYMBOL_GPL(af_alg_complete);
-
 /**
  * af_alg_alloc_tsgl - allocate the TX SGL
  *
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 48d46e7..abbac8a 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -278,11 +278,11 @@ static int _aead_recvmsg(struct socket *sock, struct 
msghdr *msg,
/* Synchronous operation */
aead_request_set_callback(&areq->cra_u.aead_req,
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- af_alg_complete, &ctx->completion);
-   err = af_alg_wait_for_completion(ctx->enc ?
+ crypto_req_done, &ctx->wait);
+   err = crypto_wait_req(ctx->enc ?
crypto_aead_encrypt(&areq->cra_u.aead_req) :
crypto_aead_decrypt(&areq->cra_u.aead_req),
-   &ctx->completion);
+   &ctx->wait);
}
 
/* AIO operation in progress */
@@ -554,7 +554,7 @@ static int aead_accept_parent_nokey(void *private, struct 
sock *sk)
ctx->merge = 0;
ctx->enc = 0;
ctx->aead_assoclen = 0;
-   af_alg_init_completion(&ctx->completion);
+   crypto_init_wait(&ctx->wait);
 
ask->private = ctx;
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 3b3c154..d2ab8de 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -26,7 +26,7 @@ struct hash_ctx {
 
u8 *result;
 
-   struct af_alg_completion completion;
+   struct crypto_wait wait;
 
unsigned int len;
bool more;
@@ -102,8 +102,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr 
*msg,
if ((msg->msg_flags & MSG_MORE))
hash_free_result(sk, ctx);
 
-   err = af_alg_wait_for_completion(crypto_ahash_init(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_init(&ctx->req), &ctx->wait);
if (err)
goto unlock;
}
@@ -124,8 +123,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr 
*msg,
 
ahash_request_set_crypt(&ctx->req, ctx->sgl.sg, NULL, len);
 
-   err = af_alg_wait_for_completion(crypto_ahash_update(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_update(&ctx->req),
+ &ctx->wait);
af_alg_free_sg(&ctx->sgl);
if (err)
goto unlock;
@@ -143,8 +142,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr 
*msg,
goto unlock;
 
ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0);
-   err = af_alg_wait_for_completion(crypto_ahash_final(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_final(&ctx->req),
+ &ctx->wait);
}
 
 unlock:
@@ -185,7 +184,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct 
page *page,
} else {
if (!ctx->more) {
err = crypto_ahash_init(&ctx->req);
-   err = af_alg_wait_for_completion(err, &ctx->completion);
+   err = crypto_wait_req(err, &ctx->wait);
if (err)
goto unlock;
}
@@ -193,7 +192,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct 
page *page,
err = 

[dm-devel] [PATCH v5 05/19] crypto: introduce crypto wait for async op

2017-08-14 Thread Gilad Ben-Yossef
Invoking a possibly asynchronous crypto op and waiting for its completion
while correctly handling backlog processing is a common task both in the
crypto API implementation and for outside users of it.

This patch adds a generic implementation for doing so in
preparation for using it across the board instead of hand
rolled versions.

Signed-off-by: Gilad Ben-Yossef 
CC: Eric Biggers 
---
 crypto/api.c   | 13 +
 include/linux/crypto.h | 41 +
 2 files changed, 54 insertions(+)

diff --git a/crypto/api.c b/crypto/api.c
index 941cd4c..2a2479d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include <linux/completion.h>
 #include "internal.h"
 
 LIST_HEAD(crypto_alg_list);
@@ -595,5 +596,17 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_alg);
 
+void crypto_req_done(struct crypto_async_request *req, int err)
+{
+   struct crypto_wait *wait = req->data;
+
+   if (err == -EINPROGRESS)
+   return;
+
+   wait->err = err;
+   complete(&wait->completion);
+}
+EXPORT_SYMBOL_GPL(crypto_req_done);
+
 MODULE_DESCRIPTION("Cryptographic core API");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 84da997..bb00186 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * Autoloaded crypto modules should only use a prefixed name to avoid allowing
@@ -468,6 +469,45 @@ struct crypto_alg {
 } CRYPTO_MINALIGN_ATTR;
 
 /*
+ * A helper struct for waiting for completion of async crypto ops
+ */
+struct crypto_wait {
+   struct completion completion;
+   int err;
+};
+
+/*
+ * Macro for declaring a crypto op async wait object on stack
+ */
+#define DECLARE_CRYPTO_WAIT(_wait) \
+   struct crypto_wait _wait = { \
+   COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+/*
+ * Async ops completion helper functions
+ */
+void crypto_req_done(struct crypto_async_request *req, int err);
+
+static inline int crypto_wait_req(int err, struct crypto_wait *wait)
+{
+   switch (err) {
+   case -EINPROGRESS:
+   case -EBUSY:
+   wait_for_completion(&wait->completion);
+   reinit_completion(&wait->completion);
+   err = wait->err;
+   break;
+   }
+
+   return err;
+}
+
+static inline void crypto_init_wait(struct crypto_wait *wait)
+{
+   init_completion(&wait->completion);
+}
+
+/*
  * Algorithm registration interface.
  */
 int crypto_register_alg(struct crypto_alg *alg);
@@ -1604,5 +1644,6 @@ static inline int crypto_comp_decompress(struct crypto_comp *tfm,
src, slen, dst, dlen);
 }
 
+
 #endif /* _LINUX_CRYPTO_H */
 
-- 
2.1.4

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v5 04/19] crypto: marvell/cesa: remove redundant backlog checks on EBUSY

2017-08-14 Thread Gilad Ben-Yossef
Now that the -EBUSY return code only indicates backlog queueing,
we can safely remove the now-redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/marvell/cesa.c | 3 +--
 drivers/crypto/marvell/cesa.h | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 6e7a5c7..269737f 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -183,8 +183,7 @@ int mv_cesa_queue_req(struct crypto_async_request *req,
spin_lock_bh(&engine->lock);
ret = crypto_enqueue_request(&engine->queue, req);
if ((mv_cesa_req_get_type(creq) == CESA_DMA_REQ) &&
-   (ret == -EINPROGRESS ||
-   (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   (ret == -EINPROGRESS || ret == -EBUSY))
mv_cesa_tdma_chain(engine, creq);
spin_unlock_bh(&engine->lock);
 
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index b7872f6..63c8457 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -763,7 +763,7 @@ static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
 * the backlog and will be processed later. There's no need to
 * clean it up.
 */
-   if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+   if (ret == -EBUSY)
return false;
 
/* Request wasn't queued, we need to clean it up */
-- 
2.1.4



[dm-devel] [PATCH v5 03/19] crypto: remove redundant backlog checks on EBUSY

2017-08-14 Thread Gilad Ben-Yossef
Now that the -EBUSY return code only indicates backlog queueing,
we can safely remove the now-redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/ahash.c| 12 +++-
 crypto/cts.c  |  6 ++
 crypto/lrw.c  |  8 ++--
 crypto/rsa-pkcs1pad.c | 16 
 crypto/xts.c  |  8 ++--
 5 files changed, 13 insertions(+), 37 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 826cd7a..d63eeef 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -334,9 +334,7 @@ static int ahash_op_unaligned(struct ahash_request *req,
return err;
 
err = op(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
ahash_restore_req(req, err);
@@ -394,9 +392,7 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
req->base.complete = ahash_def_finup_done2;
 
err = crypto_ahash_reqtfm(req)->final(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
 out:
@@ -432,9 +428,7 @@ static int ahash_def_finup(struct ahash_request *req)
return err;
 
err = tfm->update(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
return ahash_def_finup_finish1(req, err);
diff --git a/crypto/cts.c b/crypto/cts.c
index 243f591..4773c18 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -136,8 +136,7 @@ static void crypto_cts_encrypt_done(struct crypto_async_request *areq, int err)
goto out;
 
err = cts_cbc_encrypt(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return;
 
 out:
@@ -229,8 +228,7 @@ static void crypto_cts_decrypt_done(struct crypto_async_request *areq, int err)
goto out;
 
err = cts_cbc_decrypt(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return;
 
 out:
diff --git a/crypto/lrw.c b/crypto/lrw.c
index a8bfae4..695cea9 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -328,9 +328,7 @@ static int do_encrypt(struct skcipher_request *req, int err)
  crypto_skcipher_encrypt(subreq) ?:
  post_crypt(req);
 
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY &&
-req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
}
 
@@ -380,9 +378,7 @@ static int do_decrypt(struct skcipher_request *req, int err)
  crypto_skcipher_decrypt(subreq) ?:
  post_crypt(req);
 
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY &&
-req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
}
 
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 407c64b..2908f93 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -279,9 +279,7 @@ static int pkcs1pad_encrypt(struct akcipher_request *req)
   req->dst, ctx->key_size - 1, req->dst_len);
 
err = crypto_akcipher_encrypt(&req_ctx->child_req);
-   if (err != -EINPROGRESS &&
-   (err != -EBUSY ||
-!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err != -EINPROGRESS && err != -EBUSY)
return pkcs1pad_encrypt_sign_complete(req, err);
 
return err;
@@ -383,9 +381,7 @@ static int pkcs1pad_decrypt(struct akcipher_request *req)
   ctx->key_size);
 
err = crypto_akcipher_decrypt(&req_ctx->child_req);
-   if (err != -EINPROGRESS &&
-   (err != -EBUSY ||
-!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err != -EINPROGRESS && err != -EBUSY)
return pkcs1pad_decrypt_complete(req, err);
 
return err;
@@ -440,9 +436,7 @@ static int pkcs1pad_sign(struct akcipher_request *req)
   req->dst, ctx->key_size - 1, req->dst_len);
 
err = crypto_akcipher_sign(&req_ctx->child_req);
-   if 

[dm-devel] [PATCH v5 02/19] crypto: ccp: use -EAGAIN for transient busy indication

2017-08-14 Thread Gilad Ben-Yossef
Replace -EBUSY with -EAGAIN when reporting transient busy
indication in the absence of backlog.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/ccp/ccp-crypto-main.c | 8 +++-
 drivers/crypto/ccp/ccp-dev.c | 7 +--
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 35a9de7..403ff0a 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -222,9 +222,10 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
 
/* Check if the cmd can/should be queued */
if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
-   ret = -EBUSY;
-   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
+   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG)) {
+   ret = -EAGAIN;
goto e_lock;
+   }
}
 
/* Look for an entry with the same tfm.  If there is a cmd
@@ -243,9 +244,6 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
ret = ccp_enqueue_cmd(crypto_cmd->cmd);
if (!ccp_crypto_success(ret))
goto e_lock;/* Error, don't queue it */
-   if ((ret == -EBUSY) &&
-   !(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
-   goto e_lock;/* Not backlogging, don't queue it */
}
 
if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 4e029b1..3d637e3 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -292,9 +292,12 @@ int ccp_enqueue_cmd(struct ccp_cmd *cmd)
i = ccp->cmd_q_count;
 
if (ccp->cmd_count >= MAX_CMD_QLEN) {
-   ret = -EBUSY;
-   if (cmd->flags & CCP_CMD_MAY_BACKLOG)
+   if (cmd->flags & CCP_CMD_MAY_BACKLOG) {
+   ret = -EBUSY;
list_add_tail(&cmd->entry, &ccp->backlog);
+   } else {
+   ret = -EAGAIN;
+   }
} else {
ret = -EINPROGRESS;
ccp->cmd_count++;
-- 
2.1.4



[dm-devel] [PATCH v5 00/19] simplify crypto wait for async op

2017-08-14 Thread Gilad Ben-Yossef
Many users of the kernel's async. crypto services follow a pattern of
starting an async. crypto op and then using a completion
to wait for it to end.

This patch set simplifies this common use case in two ways:

First, by separating the return code of the case where a
request is queued to a backlog because the provider is
busy (-EBUSY) from the case where the request has failed
because the provider is busy and backlogging is not enabled
(-EAGAIN).

Next, this change is then built upon to create a generic API
for waiting for an async. crypto operation to complete.

The end result is a smaller code base and an API that is
easier to use and more difficult to get wrong.

The patch set was boot tested on x86_64 and arm64, which
at the very least exercises the crypto users via testmgr and
tcrypt, but I note that I do not have access to some
of the HW whose drivers are modified, nor do I claim I was
able to test all of the corner cases.

The patch set is based upon linux-next release tagged
next-20170811.

Changes from v4:
- Rebase on top of latest algif changes from Stephan
  Mueller.
- Fix typo in ccp patch title.

Changes from v3:
- Instead of changing the return code to indicate
  backlog queueing, change the return code to indicate
  transient busy state, as suggested by Herbert Xu.

Changes from v2:
- Patch title changed from "introduce crypto wait for
  async op" to better reflect the current state.
- Rebase on top of latest linux-next.
- Add a new return code of -EIOCBQUEUED for backlog
  queueing, as suggested by Herbert Xu.
- Transform more users to the new API.
- Update the drbg change to account for new init as
  indicated by Stephan Muller.

Changes from v1:
- Address review comments from Eric Biggers.
- Separated out bug fixes of existing code and rebase
  on top of that patch set.
- Rename 'ecr' to 'wait' in fscrypto code.
- Split patch introducing the new API from the change
  moving over the algif code which it originated from
  to the new API.
- Inline crypto_wait_req().
- Some code indentation fixes.

Gilad Ben-Yossef (19):
  crypto: change transient busy return code to -EAGAIN
  crypto: ccp: use -EAGAIN for transient busy indication
  crypto: remove redundant backlog checks on EBUSY
  crypto: marvell/cesa: remove redundant backlog checks on EBUSY
  crypto: introduce crypto wait for async op
  crypto: move algif to generic async completion
  crypto: move pub key to generic async completion
  crypto: move drbg to generic async completion
  crypto: move gcm to generic async completion
  crypto: move testmgr to generic async completion
  fscrypt: move to generic async completion
  dm: move dm-verity to generic async completion
  cifs: move to generic async completion
  ima: move to generic async completion
  crypto: tcrypt: move to generic async completion
  crypto: talitos: move to generic async completion
  crypto: qce: move to generic async completion
  crypto: mediatek: move to generic async completion
  crypto: adapt api sample to use async. op wait

 Documentation/crypto/api-samples.rst |  52 ++---
 crypto/af_alg.c  |  27 -
 crypto/ahash.c   |  12 +--
 crypto/algapi.c  |   6 +-
 crypto/algif_aead.c  |   8 +-
 crypto/algif_hash.c  |  50 +
 crypto/algif_skcipher.c  |   9 +-
 crypto/api.c |  13 +++
 crypto/asymmetric_keys/public_key.c  |  28 +
 crypto/cryptd.c  |   4 +-
 crypto/cts.c |   6 +-
 crypto/drbg.c|  36 ++-
 crypto/gcm.c |  32 ++
 crypto/lrw.c |   8 +-
 crypto/rsa-pkcs1pad.c|  16 +--
 crypto/tcrypt.c  |  84 +--
 crypto/testmgr.c | 204 ---
 crypto/xts.c |   8 +-
 drivers/crypto/ccp/ccp-crypto-main.c |   8 +-
 drivers/crypto/ccp/ccp-dev.c |   7 +-
 drivers/crypto/marvell/cesa.c|   3 +-
 drivers/crypto/marvell/cesa.h|   2 +-
 drivers/crypto/mediatek/mtk-aes.c|  31 +-
 drivers/crypto/qce/sha.c |  30 +-
 drivers/crypto/talitos.c |  38 +--
 drivers/md/dm-verity-target.c|  81 --
 drivers/md/dm-verity.h   |   5 -
 fs/cifs/smb2ops.c|  30 +-
 fs/crypto/crypto.c   |  28 +
 fs/crypto/fname.c|  36 ++-
 fs/crypto/fscrypt_private.h  |  10 --
 fs/crypto/keyinfo.c  |  21 +---
 include/crypto/drbg.h|   3 +-
 include/crypto/if_alg.h  |  15 +--
 include/linux/crypto.h   |  41 +++
 security/integrity/ima/ima_crypto.c  |  56 +++---
 36 files changed, 311 insertions(+), 737 deletions(-)

-- 
2.1.4



[dm-devel] [PATCH v5 01/19] crypto: change transient busy return code to -EAGAIN

2017-08-14 Thread Gilad Ben-Yossef
The crypto API was using the -EBUSY return value to indicate
both a hard failure to submit a crypto operation into a
transformation provider when the latter was busy and the backlog
mechanism was not enabled as well as a notification that the
operation was queued into the backlog when the backlog mechanism
was enabled.

Having the same return code indicate two very different conditions
depending on a flag is error-prone and requires an extra runtime
check like the following to discern between the cases:

if (err == -EINPROGRESS ||
(err == -EBUSY && (ahash_request_flags(req) &
   CRYPTO_TFM_REQ_MAY_BACKLOG)))

This patch changes the return code used to indicate a crypto op
failed due to the transformation provider being transiently busy
to -EAGAIN.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/algapi.c |  6 --
 crypto/algif_hash.c | 20 +---
 crypto/cryptd.c |  4 +---
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index aa699ff..916bee3 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -897,9 +897,11 @@ int crypto_enqueue_request(struct crypto_queue *queue,
int err = -EINPROGRESS;
 
if (unlikely(queue->qlen >= queue->max_qlen)) {
-   err = -EBUSY;
-   if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+   err = -EAGAIN;
goto out;
+   }
+   err = -EBUSY;
if (queue->backlog == &queue->list)
queue->backlog = &request->list;
}
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 5e92bd2..3b3c154 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -39,6 +39,20 @@ struct algif_hash_tfm {
bool has_key;
 };
 
+/* Previous versions of crypto_* ops used to return -EBUSY
+ * rather than -EAGAIN to indicate being tied up. The
+ * in-kernel API changed but we don't want to break the
+ * user-space API. As only the hash user interface ever
+ * exposed this error to the user, do the translation here.
+ */
+static inline int crypto_user_err(int err)
+{
+   if (err == -EAGAIN)
+   return -EBUSY;
+
+   return err;
+}
+
 static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx)
 {
unsigned ds;
@@ -136,7 +150,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
 unlock:
release_sock(sk);
 
-   return err ?: copied;
+   return err ? crypto_user_err(err) : copied;
 }
 
 static ssize_t hash_sendpage(struct socket *sock, struct page *page,
@@ -188,7 +202,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
 unlock:
release_sock(sk);
 
-   return err ?: size;
+   return err ? crypto_user_err(err) : size;
 }
 
 static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
@@ -236,7 +250,7 @@ static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
hash_free_result(sk, ctx);
release_sock(sk);
 
-   return err ?: len;
+   return err ? crypto_user_err(err) : len;
 }
 
 static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 0508c48..d1dbdce 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -137,16 +137,14 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
int cpu, err;
struct cryptd_cpu_queue *cpu_queue;
atomic_t *refcnt;
-   bool may_backlog;
 
cpu = get_cpu();
cpu_queue = this_cpu_ptr(queue->cpu_queue);
err = crypto_enqueue_request(&cpu_queue->queue, request);
 
refcnt = crypto_tfm_ctx(request->tfm);
-   may_backlog = request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG;
 
-   if (err == -EBUSY && !may_backlog)
+   if (err == -EAGAIN)
goto out_put_cpu;
 
queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
-- 
2.1.4
