[dm-devel] [PATCH v3 1/4] drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS

2023-09-15 Thread Luis Chamberlain
Replace common constants with generic versions.
This produces no functional changes.

Acked-by: Christoph Böhmwalder 
Reviewed-by: Johannes Thumshirn 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/drbd/drbd_bitmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 85ca000a0564..1a1782f55e61 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -1000,7 +1000,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
unsigned int len;
 
first_bm_sect = device->ldev->md.md_offset + device->ldev->md.bm_offset;
-   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << (PAGE_SHIFT-SECTOR_SHIFT));
+   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << PAGE_SECTORS_SHIFT);
 
/* this might happen with very small
 * flexible external meta data device,
@@ -1008,7 +1008,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
last_bm_sect = drbd_md_last_bitmap_sector(device->ldev);
if (first_bm_sect <= on_disk_sector && last_bm_sect >= on_disk_sector) {
sector_t len_sect = last_bm_sect - on_disk_sector + 1;
-   if (len_sect < PAGE_SIZE/SECTOR_SIZE)
+   if (len_sect < PAGE_SECTORS)
len = (unsigned int)len_sect*SECTOR_SIZE;
else
len = PAGE_SIZE;
-- 
2.39.2

--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel


[dm-devel] [PATCH v3 0/4] block: simplify with PAGE_SECTORS_SHIFT

2023-09-15 Thread Luis Chamberlain
Just spinning out these low-hanging fruit patches that I forgot about.

Changes on v3:

 o Add tags for Reviews/acks
 o rebase onto next-20230914
 o Fixes as suggested by Johannes Thumshirn:
   - drop not-needed parens on dm bufio
   - use SECTOR_MASK instead of PAGE_SECTORS - 1 for the zram driver
 o Drop shmem patches as they are already merged / modified

Changes on v2:

 o keep iomap math visibly simple
 o Add tags for Reviews/acks
 o rebase onto next-20230525

Luis Chamberlain (4):
  drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS
  iomap: simplify iomap_init() with PAGE_SECTORS
  dm bufio: simplify by using PAGE_SECTORS_SHIFT
  zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

 drivers/block/drbd/drbd_bitmap.c |  4 ++--
 drivers/block/zram/zram_drv.c| 15 ++-
 drivers/block/zram/zram_drv.h|  2 --
 drivers/md/dm-bufio.c|  4 ++--
 fs/iomap/buffered-io.c   |  2 +-
 5 files changed, 11 insertions(+), 16 deletions(-)

-- 
2.39.2




[dm-devel] [PATCH v3 2/4] iomap: simplify iomap_init() with PAGE_SECTORS

2023-09-15 Thread Luis Chamberlain
Replace common constants with generic versions. This produces no
functional changes.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 fs/iomap/buffered-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ae8673ce08b1..1a16c0e5d190 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1985,7 +1985,7 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
 
 static int __init iomap_init(void)
 {
-   return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+   return bioset_init(&iomap_ioend_bioset, 4 * PAGE_SECTORS,
   offsetof(struct iomap_ioend, io_inline_bio),
   BIOSET_NEED_BVECS);
 }
-- 
2.39.2




[dm-devel] [PATCH v3 3/4] dm bufio: simplify by using PAGE_SECTORS_SHIFT

2023-09-15 Thread Luis Chamberlain
The PAGE_SHIFT - SECTOR_SHIFT constant can be replaced with PAGE_SECTORS_SHIFT,
defined in linux/blk_types.h, which is included by linux/blkdev.h.

This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm-bufio.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 62eb27639c9b..a5b48be93b30 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1152,7 +1152,7 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
gfp_mask & __GFP_NORETRY) {
*data_mode = DATA_MODE_GET_FREE_PAGES;
return (void *)__get_free_pages(gfp_mask,
-   c->sectors_per_block_bits - (PAGE_SHIFT - SECTOR_SHIFT));
+   c->sectors_per_block_bits - PAGE_SECTORS_SHIFT);
}
 
*data_mode = DATA_MODE_VMALLOC;
@@ -1173,7 +1173,7 @@ static void free_buffer_data(struct dm_bufio_client *c,
 
case DATA_MODE_GET_FREE_PAGES:
free_pages((unsigned long)data,
-  c->sectors_per_block_bits - (PAGE_SHIFT - SECTOR_SHIFT));
+  c->sectors_per_block_bits - PAGE_SECTORS_SHIFT);
break;
 
case DATA_MODE_VMALLOC:
-- 
2.39.2




[dm-devel] [PATCH v3 4/4] zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

2023-09-15 Thread Luis Chamberlain
Instead of re-defining the already existing constants use the provided ones:

So replace:

 o SECTORS_PER_PAGE_SHIFT with PAGE_SECTORS_SHIFT
 o SECTORS_PER_PAGE   with PAGE_SECTORS
 o SECTORS_PER_PAGE - 1   with SECTOR_MASK

This produces no functional changes.

Reviewed-by: Sergey Senozhatsky 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/zram/zram_drv.c | 15 ++-
 drivers/block/zram/zram_drv.h |  2 --
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 06673c6ca255..58d36c8574d4 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1834,9 +1834,8 @@ static ssize_t recompress_store(struct device *dev,
 static void zram_bio_discard(struct zram *zram, struct bio *bio)
 {
size_t n = bio->bi_iter.bi_size;
-   u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
-   SECTOR_SHIFT;
+   u32 index = bio->bi_iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (bio->bi_iter.bi_sector & SECTOR_MASK) << SECTOR_SHIFT;
 
/*
 * zram manages data in physical block size units. Because logical block
@@ -1874,9 +1873,8 @@ static void zram_bio_read(struct zram *zram, struct bio *bio)
struct bvec_iter iter = bio->bi_iter;
 
do {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
-   SECTOR_SHIFT;
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & SECTOR_MASK) << SECTOR_SHIFT;
struct bio_vec bv = bio_iter_iovec(bio, iter);
 
bv.bv_len = min_t(u32, bv.bv_len, PAGE_SIZE - offset);
@@ -1905,9 +1903,8 @@ static void zram_bio_write(struct zram *zram, struct bio *bio)
struct bvec_iter iter = bio->bi_iter;
 
do {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
-   SECTOR_SHIFT;
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & SECTOR_MASK) << SECTOR_SHIFT;
struct bio_vec bv = bio_iter_iovec(bio, iter);
 
bv.bv_len = min_t(u32, bv.bv_len, PAGE_SIZE - offset);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index ca7a15bd4845..9f2543af5c76 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -21,8 +21,6 @@
 
 #include "zcomp.h"
 
-#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
-#define SECTORS_PER_PAGE   (1 << SECTORS_PER_PAGE_SHIFT)
 #define ZRAM_LOGICAL_BLOCK_SHIFT 12
 #define ZRAM_LOGICAL_BLOCK_SIZE(1 << ZRAM_LOGICAL_BLOCK_SHIFT)
 #define ZRAM_SECTOR_PER_LOGICAL_BLOCK  \
-- 
2.39.2




Re: [dm-devel] [PATCH v15 01/12] block: Introduce queue limits and sysfs for copy-offload support

2023-09-07 Thread Luis Chamberlain
On Wed, Sep 06, 2023 at 10:08:26PM +0530, Nitesh Shetty wrote:
> Add device limits as sysfs entries,
>   - copy_max_bytes (RW)
>   - copy_max_hw_bytes (RO)
> 
> Above limits help to split the copy payload in block layer.
> copy_max_bytes: maximum total length of copy in single payload.
> copy_max_hw_bytes: Reflects the device supported maximum limit.
> 
> Reviewed-by: Hannes Reinecke 
> Signed-off-by: Nitesh Shetty 
> Signed-off-by: Kanchan Joshi 
> Signed-off-by: Anuj Gupta 

Reviewed-by: Luis Chamberlain 

  Luis




Re: [dm-devel] [PATCH v2 1/5] block: annotate bdev_disk_changed() deprecation with a symbol namespace

2023-05-26 Thread Luis Chamberlain
On Fri, May 26, 2023 at 01:13:14AM -0700, Christoph Hellwig wrote:
> On Fri, May 26, 2023 at 12:33:32AM -0700, Luis Chamberlain wrote:
> > This ensures no other users pop up by mistake easily and provides
> > us a with an easy vehicle to do the same with other routines should
> > we need it later.
> 
> I don't see how this is related to the rest of the seris.

Jeesh, sorry, it is too late here and it was a mistake that this commit
went into the series. I'll go sleep now. I just had this queued
as a reminder that the new annotation for deprecated symbols should be
used in some situations.

Please ignore this patch.

  Luis




[dm-devel] [PATCH v2 0/5] block: simplify with PAGE_SECTORS_SHIFT

2023-05-26 Thread Luis Chamberlain
A number of block drivers have their own incantations of
PAGE_SHIFT - SECTOR_SHIFT. Just simplify and use PAGE_SECTORS_SHIFT
all over.

Based on linux-next next-20230525.

Changes since v1:

 o keep iomap math visibly simple
 o Add tags for Reviews/acks
 o rebase onto next-20230525

Luis Chamberlain (5):
  block: annotate bdev_disk_changed() deprecation with a symbol
namespace
  drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS
  iomap: simplify iomap_init() with PAGE_SECTORS
  dm bufio: simplify by using PAGE_SECTORS_SHIFT
  zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

 block/partitions/core.c  |  6 +-
 drivers/block/drbd/drbd_bitmap.c |  4 ++--
 drivers/block/loop.c |  2 ++
 drivers/block/zram/zram_drv.c| 12 ++--
 drivers/block/zram/zram_drv.h|  2 --
 drivers/md/dm-bufio.c|  4 ++--
 drivers/s390/block/dasd_genhd.c  |  2 ++
 fs/iomap/buffered-io.c   |  2 +-
 8 files changed, 16 insertions(+), 18 deletions(-)

-- 
2.39.2




[dm-devel] [PATCH v2 4/5] dm bufio: simplify by using PAGE_SECTORS_SHIFT

2023-05-26 Thread Luis Chamberlain
The PAGE_SHIFT - SECTOR_SHIFT constant can be replaced with PAGE_SECTORS_SHIFT,
defined in linux/blk_types.h, which is included by linux/blkdev.h.

This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm-bufio.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index eea977662e81..08c4730e1819 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1152,7 +1152,7 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, 
gfp_t gfp_mask,
gfp_mask & __GFP_NORETRY) {
*data_mode = DATA_MODE_GET_FREE_PAGES;
return (void *)__get_free_pages(gfp_mask,
-   c->sectors_per_block_bits - 
(PAGE_SHIFT - SECTOR_SHIFT));
+   c->sectors_per_block_bits - 
(PAGE_SECTORS_SHIFT));
}
 
*data_mode = DATA_MODE_VMALLOC;
@@ -1190,7 +1190,7 @@ static void free_buffer_data(struct dm_bufio_client *c,
 
case DATA_MODE_GET_FREE_PAGES:
free_pages((unsigned long)data,
-  c->sectors_per_block_bits - (PAGE_SHIFT - 
SECTOR_SHIFT));
+  c->sectors_per_block_bits - (PAGE_SECTORS_SHIFT));
break;
 
case DATA_MODE_VMALLOC:
-- 
2.39.2




[dm-devel] [PATCH v2 5/5] zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

2023-05-26 Thread Luis Chamberlain
Instead of re-defining the already existing constants use the provided ones:

So replace:

 o SECTORS_PER_PAGE_SHIFT with PAGE_SECTORS_SHIFT
 o SECTORS_PER_PAGE   with PAGE_SECTORS

This produces no functional changes.

Reviewed-by: Sergey Senozhatsky 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/zram/zram_drv.c | 12 ++--
 drivers/block/zram/zram_drv.h |  2 --
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index f6d90f1ba5cf..5fdeb78ace9a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1834,8 +1834,8 @@ static ssize_t recompress_store(struct device *dev,
 static void zram_bio_discard(struct zram *zram, struct bio *bio)
 {
size_t n = bio->bi_iter.bi_size;
-   u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = bio->bi_iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (bio->bi_iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
/*
@@ -1876,8 +1876,8 @@ static void zram_bio_read(struct zram *zram, struct bio 
*bio)
 
start_time = bio_start_io_acct(bio);
bio_for_each_segment(bv, bio, iter) {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
if (zram_bvec_read(zram, &bv, index, offset, bio) < 0) {
@@ -1903,8 +1903,8 @@ static void zram_bio_write(struct zram *zram, struct bio 
*bio)
 
start_time = bio_start_io_acct(bio);
bio_for_each_segment(bv, bio, iter) {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
if (zram_bvec_write(zram, &bv, index, offset, bio) < 0) {
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index ca7a15bd4845..9f2543af5c76 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -21,8 +21,6 @@
 
 #include "zcomp.h"
 
-#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
-#define SECTORS_PER_PAGE   (1 << SECTORS_PER_PAGE_SHIFT)
 #define ZRAM_LOGICAL_BLOCK_SHIFT 12
 #define ZRAM_LOGICAL_BLOCK_SIZE(1 << ZRAM_LOGICAL_BLOCK_SHIFT)
 #define ZRAM_SECTOR_PER_LOGICAL_BLOCK  \
-- 
2.39.2




[dm-devel] [PATCH v2 2/5] drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS

2023-05-26 Thread Luis Chamberlain
Replace common constants with generic versions.
This produces no functional changes.

Acked-by: Christoph Böhmwalder 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/drbd/drbd_bitmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 6ac8c54b44c7..b556e6634f13 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -1000,7 +1000,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, 
int page_nr) __must_ho
unsigned int len;
 
first_bm_sect = device->ldev->md.md_offset + device->ldev->md.bm_offset;
-   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << 
(PAGE_SHIFT-SECTOR_SHIFT));
+   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << 
PAGE_SECTORS_SHIFT);
 
/* this might happen with very small
 * flexible external meta data device,
@@ -1008,7 +1008,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, 
int page_nr) __must_ho
last_bm_sect = drbd_md_last_bitmap_sector(device->ldev);
if (first_bm_sect <= on_disk_sector && last_bm_sect >= on_disk_sector) {
sector_t len_sect = last_bm_sect - on_disk_sector + 1;
-   if (len_sect < PAGE_SIZE/SECTOR_SIZE)
+   if (len_sect < PAGE_SECTORS)
len = (unsigned int)len_sect*SECTOR_SIZE;
else
len = PAGE_SIZE;
-- 
2.39.2



[dm-devel] [PATCH v2 3/5] iomap: simplify iomap_init() with PAGE_SECTORS

2023-05-26 Thread Luis Chamberlain
Replace common constants with generic versions. This produces no
functional changes.

Signed-off-by: Luis Chamberlain 
---
 fs/iomap/buffered-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 915b448b8554..5641e696fb3f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1831,7 +1831,7 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
 
 static int __init iomap_init(void)
 {
-   return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+   return bioset_init(&iomap_ioend_bioset, 4 * PAGE_SECTORS,
   offsetof(struct iomap_ioend, io_inline_bio),
   BIOSET_NEED_BVECS);
 }
-- 
2.39.2




[dm-devel] [PATCH v2 1/5] block: annotate bdev_disk_changed() deprecation with a symbol namespace

2023-05-26 Thread Luis Chamberlain
This ensures no other users pop up by mistake easily and provides
us a with an easy vehicle to do the same with other routines should
we need it later.

Signed-off-by: Luis Chamberlain 
---
 block/partitions/core.c | 6 +-
 drivers/block/loop.c| 2 ++
 drivers/s390/block/dasd_genhd.c | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/partitions/core.c b/block/partitions/core.c
index 49e0496ff23c..6f18444be4fe 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -690,11 +690,7 @@ int bdev_disk_changed(struct gendisk *disk, bool 
invalidate)
 
return ret;
 }
-/*
- * Only exported for loop and dasd for historic reasons.  Don't use in new
- * code!
- */
-EXPORT_SYMBOL_GPL(bdev_disk_changed);
+EXPORT_SYMBOL_NS_GPL(bdev_disk_changed, BLOCK_DEPRECATED);
 
 void *read_part_sector(struct parsed_partitions *state, sector_t n, Sector *p)
 {
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index bc31bb7072a2..50aa5b4c0c56 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -37,6 +37,8 @@
 #include 
 #include 
 
+MODULE_IMPORT_NS(BLOCK_DEPRECATED);
+
 /* Possible states of device */
 enum {
Lo_unbound,
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 998a961e1704..5ea244aec534 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -25,6 +25,8 @@
 
 #include "dasd_int.h"
 
+MODULE_IMPORT_NS(BLOCK_DEPRECATED);
+
 static unsigned int queue_depth = 32;
 static unsigned int nr_hw_queues = 4;
 
-- 
2.39.2




Re: [dm-devel] [PATCH 3/5] iomap: simplify iomap_init() with PAGE_SECTORS

2023-04-21 Thread Luis Chamberlain
On Fri, Apr 21, 2023 at 04:24:57PM -0600, Jens Axboe wrote:
> On 4/21/23 4:02 PM, Luis Chamberlain wrote:
> > On Fri, Apr 21, 2023 at 09:14:00PM +0100, Matthew Wilcox wrote:
> >> On Fri, Apr 21, 2023 at 12:58:05PM -0700, Luis Chamberlain wrote:
> >>> Just use the PAGE_SECTORS generic define. This produces no functional
> >>> changes. While at it use left shift to simplify this even further.
> >>
> >> How is FOO << 2 simpler than FOO * 4?
> >>
> >>> - return bioset_init(_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> >>> + return bioset_init(_ioend_bioset, PAGE_SECTORS << 2,
> > 
> > We could just do:
> > 
> > 
> > -   return bioset_init(_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> > +   return bioset_init(_ioend_bioset, 4 * PAGE_SECTORS,
> > 
> > The shift just seemed optimal if we're just going to change it.
> 
> It's going to generate the same code, but the multiplication is arguably
> easier to read (or harder to misread).

Then let's stick with the 4 * PAGE_SECTORS. Let me know if you need another
patch.

  Luis



Re: [dm-devel] [PATCH 3/5] iomap: simplify iomap_init() with PAGE_SECTORS

2023-04-21 Thread Luis Chamberlain
On Fri, Apr 21, 2023 at 09:14:00PM +0100, Matthew Wilcox wrote:
> On Fri, Apr 21, 2023 at 12:58:05PM -0700, Luis Chamberlain wrote:
> > Just use the PAGE_SECTORS generic define. This produces no functional
> > changes. While at it use left shift to simplify this even further.
> 
> How is FOO << 2 simpler than FOO * 4?
> 
> > -   return bioset_init(_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> > +   return bioset_init(_ioend_bioset, PAGE_SECTORS << 2,

We could just do:


-   return bioset_init(_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+   return bioset_init(_ioend_bioset, 4 * PAGE_SECTORS,

The shift just seemed optimal if we're just going to change it.

  Luis




[dm-devel] [PATCH 3/5] iomap: simplify iomap_init() with PAGE_SECTORS

2023-04-21 Thread Luis Chamberlain
Just use the PAGE_SECTORS generic define. This produces no functional
changes. While at it use left shift to simplify this even further.

Signed-off-by: Luis Chamberlain 
---
 fs/iomap/buffered-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 063133ec77f4..ba2824f405df 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1819,7 +1819,7 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
 
 static int __init iomap_init(void)
 {
-   return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+   return bioset_init(&iomap_ioend_bioset, PAGE_SECTORS << 2,
   offsetof(struct iomap_ioend, io_inline_bio),
   BIOSET_NEED_BVECS);
 }
-- 
2.39.2




[dm-devel] [PATCH 5/5] zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

2023-04-21 Thread Luis Chamberlain
Instead of re-defining the already existing constants use the provided ones:

So replace:

 o SECTORS_PER_PAGE_SHIFT with PAGE_SECTORS_SHIFT
 o SECTORS_PER_PAGE   with PAGE_SECTORS

This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/block/zram/zram_drv.c | 12 ++--
 drivers/block/zram/zram_drv.h |  2 --
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a84c4268257a..11c9b5a9ac7a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1834,8 +1834,8 @@ static ssize_t recompress_store(struct device *dev,
 static void zram_bio_discard(struct zram *zram, struct bio *bio)
 {
size_t n = bio->bi_iter.bi_size;
-   u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = bio->bi_iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (bio->bi_iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
/*
@@ -1876,8 +1876,8 @@ static void zram_bio_read(struct zram *zram, struct bio 
*bio)
 
start_time = bio_start_io_acct(bio);
bio_for_each_segment(bv, bio, iter) {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
if (zram_bvec_read(zram, &bv, index, offset, bio) < 0) {
@@ -1903,8 +1903,8 @@ static void zram_bio_write(struct zram *zram, struct bio 
*bio)
 
start_time = bio_start_io_acct(bio);
bio_for_each_segment(bv, bio, iter) {
-   u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
-   u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+   u32 index = iter.bi_sector >> PAGE_SECTORS_SHIFT;
+   u32 offset = (iter.bi_sector & (PAGE_SECTORS - 1)) <<
SECTOR_SHIFT;
 
if (zram_bvec_write(zram, &bv, index, offset, bio) < 0) {
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index ca7a15bd4845..9f2543af5c76 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -21,8 +21,6 @@
 
 #include "zcomp.h"
 
-#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
-#define SECTORS_PER_PAGE   (1 << SECTORS_PER_PAGE_SHIFT)
 #define ZRAM_LOGICAL_BLOCK_SHIFT 12
 #define ZRAM_LOGICAL_BLOCK_SIZE(1 << ZRAM_LOGICAL_BLOCK_SHIFT)
 #define ZRAM_SECTOR_PER_LOGICAL_BLOCK  \
-- 
2.39.2




[dm-devel] [PATCH 4/5] dm bufio: simplify by using PAGE_SECTORS_SHIFT

2023-04-21 Thread Luis Chamberlain
The PAGE_SHIFT - SECTOR_SHIFT constant can be replaced with PAGE_SECTORS_SHIFT,
defined in linux/blk_types.h, which is included by linux/blkdev.h.

This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm-bufio.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index eea977662e81..08c4730e1819 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1152,7 +1152,7 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, 
gfp_t gfp_mask,
gfp_mask & __GFP_NORETRY) {
*data_mode = DATA_MODE_GET_FREE_PAGES;
return (void *)__get_free_pages(gfp_mask,
-   c->sectors_per_block_bits - 
(PAGE_SHIFT - SECTOR_SHIFT));
+   c->sectors_per_block_bits - 
(PAGE_SECTORS_SHIFT));
}
 
*data_mode = DATA_MODE_VMALLOC;
@@ -1190,7 +1190,7 @@ static void free_buffer_data(struct dm_bufio_client *c,
 
case DATA_MODE_GET_FREE_PAGES:
free_pages((unsigned long)data,
-  c->sectors_per_block_bits - (PAGE_SHIFT - 
SECTOR_SHIFT));
+  c->sectors_per_block_bits - (PAGE_SECTORS_SHIFT));
break;
 
case DATA_MODE_VMALLOC:
-- 
2.39.2




[dm-devel] [PATCH 1/5] dm integrity: simplify by using PAGE_SECTORS_SHIFT

2023-04-21 Thread Luis Chamberlain
The PAGE_SHIFT - SECTOR_SHIFT constant can be replaced with PAGE_SECTORS_SHIFT,
defined in linux/blk_types.h, which is included by linux/blkdev.h.

This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm-integrity.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 31838b13ea54..a179265970dd 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -752,7 +752,7 @@ static void page_list_location(struct dm_integrity_c *ic, 
unsigned int section,
 
sector = section * ic->journal_section_sectors + offset;
 
-   *pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
+   *pl_index = sector >> (PAGE_SECTORS_SHIFT);
*pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 }
 
@@ -1077,7 +1077,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, 
blk_opf_t opf,
return;
}
 
-   pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
+   pl_index = sector >> (PAGE_SECTORS_SHIFT);
pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
io_req.bi_opf = opf;
@@ -1201,7 +1201,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, 
unsigned int section, u
 
sector = section * ic->journal_section_sectors + JOURNAL_BLOCK_SECTORS 
+ offset;
 
-   pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
+   pl_index = sector >> (PAGE_SECTORS_SHIFT);
pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
io_req.bi_opf = REQ_OP_WRITE;
@@ -3765,7 +3765,7 @@ static int create_journal(struct dm_integrity_c *ic, char 
**error)
ic->commit_ids[3] = cpu_to_le64(0xULL);
 
journal_pages = roundup((__u64)ic->journal_sections * 
ic->journal_section_sectors,
-   PAGE_SIZE >> SECTOR_SHIFT) >> (PAGE_SHIFT - 
SECTOR_SHIFT);
+   PAGE_SIZE >> SECTOR_SHIFT) >> 
(PAGE_SECTORS_SHIFT);
journal_desc_size = journal_pages * sizeof(struct page_list);
if (journal_pages >= totalram_pages() - totalhigh_pages() || 
journal_desc_size > ULONG_MAX) {
*error = "Journal doesn't fit into memory";
@@ -4542,7 +4542,7 @@ static int dm_integrity_ctr(struct dm_target *ti, 
unsigned int argc, char **argv
spin_lock_init(&bbs->bio_queue_lock);
 
sector = i * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT);
-   pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
+   pl_index = sector >> (PAGE_SECTORS_SHIFT);
pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
bbs->bitmap = 
lowmem_page_address(ic->journal[pl_index].page) + pl_offset;
-- 
2.39.2




[dm-devel] [PATCH 0/5] block: simplify with PAGE_SECTORS_SHIFT

2023-04-21 Thread Luis Chamberlain
A number of block drivers have their own incantations of
PAGE_SHIFT - SECTOR_SHIFT. Just simplify and use PAGE_SECTORS_SHIFT
all over.

Based on linux-next next-20230421.

Luis Chamberlain (5):
  dm integrity: simplify by using PAGE_SECTORS_SHIFT
  drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS
  iomap: simplify iomap_init() with PAGE_SECTORS
  dm bufio: simplify by using PAGE_SECTORS_SHIFT
  zram: use generic PAGE_SECTORS and PAGE_SECTORS_SHIFT

 drivers/block/drbd/drbd_bitmap.c |  4 ++--
 drivers/block/zram/zram_drv.c| 12 ++--
 drivers/block/zram/zram_drv.h|  2 --
 drivers/md/dm-bufio.c|  4 ++--
 drivers/md/dm-integrity.c| 10 +-
 fs/iomap/buffered-io.c   |  2 +-
 6 files changed, 16 insertions(+), 18 deletions(-)

-- 
2.39.2




[dm-devel] [PATCH 2/5] drbd: use PAGE_SECTORS_SHIFT and PAGE_SECTORS

2023-04-21 Thread Luis Chamberlain
Replace common constants with generic versions.
This produces no functional changes.

Signed-off-by: Luis Chamberlain 
---
 drivers/block/drbd/drbd_bitmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 6ac8c54b44c7..b556e6634f13 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -1000,7 +1000,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, 
int page_nr) __must_ho
unsigned int len;
 
first_bm_sect = device->ldev->md.md_offset + device->ldev->md.bm_offset;
-   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << 
(PAGE_SHIFT-SECTOR_SHIFT));
+   on_disk_sector = first_bm_sect + (((sector_t)page_nr) << 
PAGE_SECTORS_SHIFT);
 
/* this might happen with very small
 * flexible external meta data device,
@@ -1008,7 +1008,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, 
int page_nr) __must_ho
last_bm_sect = drbd_md_last_bitmap_sector(device->ldev);
if (first_bm_sect <= on_disk_sector && last_bm_sect >= on_disk_sector) {
sector_t len_sect = last_bm_sect - on_disk_sector + 1;
-   if (len_sect < PAGE_SIZE/SECTOR_SIZE)
+   if (len_sect < PAGE_SECTORS)
len = (unsigned int)len_sect*SECTOR_SIZE;
else
len = PAGE_SIZE;
-- 
2.39.2




Re: [dm-devel] [PATCH v7 02/13] block: allow blk-zoned devices to have non-power-of-2 zone size

2022-06-16 Thread Luis Chamberlain
On Thu, Jun 16, 2022 at 12:09:35PM +0200, Pankaj Raghav wrote:
> On 2022-06-15 22:28, Bart Van Assche wrote:
> >> +    if (!is_power_of_2(zone->len) && zone->capacity < zone->len) {
> >> +    pr_warn("%s: Invalid zone capacity for non power of 2
> >> zone size",
> >> +    disk->disk_name);
> >>   return -ENODEV;
> >>   }
> > 
> > The above check seems wrong to me. I don't see why devices that report a
> > capacity that is less than the zone size should be rejected.
> > 
> This was brought up by Damien during previous reviews. The argument was
> that the reason to allow non power-of-2 zoned device is to remove the
> gaps between zone size and zone capacity. Allowing a npo2 zone size with
> a different capacity, even though it is technically possible, it does
> not make any practical sense. That is why this check was introduced.
> Does that answer your question?

Perhaps just add a comment because unless you are involved in the prior
reviews this might not be clear.

  Luis




Re: [dm-devel] [PATCH v5 6/7] null_blk: use zone_size_sects_shift for power of 2 zoned devices

2022-05-23 Thread Luis Chamberlain
On Mon, May 23, 2022 at 06:16:00PM +0200, Pankaj Raghav wrote:
> Instead of doing is_power_of_2 and ilog2 operation for every IO, cache
> the zone_size_sects_shift variable and use it for power of 2 zoned
> devices.
> 
> This variable will be set to zero for non power of 2 zoned devices.
> 
> Suggested-by: Damien Le Moal 
> Signed-off-by: Pankaj Raghav 

Reviewed-by: Luis Chamberlain 

  Luis




Re: [dm-devel] [PATCH v4 00/13] support non power of 2 zoned devices

2022-05-18 Thread Luis Chamberlain
On Thu, May 19, 2022 at 12:08:26PM +0900, Damien Le Moal wrote:
> On 5/18/22 00:34, Theodore Ts'o wrote:
> > On Tue, May 17, 2022 at 10:10:48AM +0200, Christoph Hellwig wrote:
> >> I'm a little surprised about all this activity.
> >>
> >> I thought the conclusion at LSF/MM was that for Linux itself there
> >> is very little benefit in supporting this scheme.  It will massively
> >> fragment the supported base of devices and applications, while only
> >> having the benefit of supporting some Samsung legacy devices.
> > 
> > FWIW,
> > 
> > That wasn't my impression from that LSF/MM session, but once the
> > videos become available, folks can decide for themselves.
> 
> There was no real discussion about zone size constraint on the zone
> storage BoF. Many discussions happened in the hallway track though.

Right, so no clear blockers were mentioned at all during the BoF.

  Luis




Re: [dm-devel] [PATCH v4 00/13] support non power of 2 zoned devices

2022-05-18 Thread Luis Chamberlain
On Tue, May 17, 2022 at 11:34:54AM -0400, Theodore Ts'o wrote:
> On Tue, May 17, 2022 at 10:10:48AM +0200, Christoph Hellwig wrote:
> > I'm a little surprised about all this activity.
> > 
> > I thought the conclusion at LSF/MM was that for Linux itself there
> > is very little benefit in supporting this scheme.  It will massively
> > fragment the supported base of devices and applications, while only
> > having the benefit of supporting some Samsung legacy devices.
> 
> FWIW,
> 
> That wasn't my impression from that LSF/MM session, but once the
> videos become available, folks can decide for themselves.

Agreed. Contrary to conventional storage devices, the zone storage
ecosystem simply has a requirement that zoned drive replacements match
the zone size. That requirement exists for po2 and npo2 alike. The work
in this patch set proves that supporting npo2 was, in the end,
straightforward. As the one putting together the BoF I can say that
there were no sticking points raised against moving forward with this
when the topic came up. So I am very surprised to hear about any other
perceived conclusion.

  Luis




Re: [dm-devel] [PATCH 16/16] dm-zoned: ensure only power of 2 zone sizes are allowed

2022-04-28 Thread Luis Chamberlain
On Fri, Apr 29, 2022 at 06:43:58AM +0900, Damien Le Moal wrote:
> On 4/29/22 02:34, Luis Chamberlain wrote:
> > One step at a time.
> 
> Yes, in general, I agree. But in this case, that will create kernel
> versions that end up having partial support for zoned drives. Not ideal to
> say the least. So if the patches are not that big, I would rather like to
> see everything go into a single release.

This would have delayed the patches more, and I see no rush for npo2 to
use dm-zoned, really. Just as with f2fs, we can take priorities one at a
time.

  Luis




Re: [dm-devel] [PATCH 04/16] block: allow blk-zoned devices to have non-power-of-2 zone size

2022-04-28 Thread Luis Chamberlain
On Thu, Apr 28, 2022 at 08:37:27AM +0900, Damien Le Moal wrote:
> Also, the entire premise of this patch series is that it is hard for
> people to support the unusable sectors between zone capacity and zone end
> for drives with a zone capacity smaller than the zone size.
> 
> Yet, here you do not check that zone capacity == zone size for drives that
> do not have a zone size equal to a power of 2 number of sectors. This
> means that we can still have drives with ZC < ZS AND ZS not equal to a
> power of 2. So from the point of view of your arguments, no gains at all.
> Any thoughts on this ?

You are right, a check should be added on bringup so that if npo2 is
used, zone cap == zone size. That should be added on the next iteration
of this patch.

  Luis




Re: [dm-devel] [PATCH 16/16] dm-zoned: ensure only power of 2 zone sizes are allowed

2022-04-28 Thread Luis Chamberlain
On Thu, Apr 28, 2022 at 08:42:41AM +0900, Damien Le Moal wrote:
> On 4/28/22 01:02, Pankaj Raghav wrote:
> > From: Luis Chamberlain 
> > 
> > Today dm-zoned relies on the assumption that you have a zone size
> > with a power of 2. Even though the block layer today enforces this
> > requirement, these devices do exist and so provide a stop-gap measure
> > to ensure these devices cannot be used by mistake
> > 
> > Signed-off-by: Luis Chamberlain 
> > Signed-off-by: Pankaj Raghav 
> > ---
> >  drivers/md/dm-zone.c | 12 
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
> > index 57daa86c19cf..221e0aa0f1a7 100644
> > --- a/drivers/md/dm-zone.c
> > +++ b/drivers/md/dm-zone.c
> > @@ -231,6 +231,18 @@ static int dm_revalidate_zones(struct mapped_device 
> > *md, struct dm_table *t)
> > struct request_queue *q = md->queue;
> > unsigned int noio_flag;
> > int ret;
> > +   struct block_device *bdev = md->disk->part0;
> > +   sector_t zone_sectors;
> > +   char bname[BDEVNAME_SIZE];
> > +
> > +   zone_sectors = bdev_zone_sectors(bdev);
> > +
> > +   if (!is_power_of_2(zone_sectors)) {
> > +   DMWARN("%s: %s only power of two zone size supported\n",
> > +  dm_device_name(md),
> > +  bdevname(bdev, bname));
> > +   return 1;
> > +   }
> 
> Why ?
> 
> See my previous email about still allowing ZC < ZS for non power of 2 zone
> size drives. dm-zoned can easily support non power of 2 zone size as long
> as ZC == ZS for all zones.

Great, thanks for the heads up.

> The problem with dm-zoned is ZC < ZS *AND* potentially variable ZC per
> zone. That cannot be supported easily (still not impossible, but
> definitely a lot more complex).

I see thanks.

Testing would still be required to ensure this all works well with npo2.
So I'd prefer to do that as a separate effort, even if it is easy. So
for now I think it makes sense to avoid this as this is not yet well
tested.

As with filesystem support, we have even gotten hints that support
for npo2 should be easy, but without proper testing it would not be
prudent to enable it for users yet.

One step at a time.

  Luis




Re: [dm-devel] [PATCH v3 02/10] block: Introduce queue limits for copy-offload support

2022-02-22 Thread Luis Chamberlain
On Thu, Feb 17, 2022 at 06:29:01PM +0530, Nitesh Shetty wrote:
>  Thu, Feb 17, 2022 at 01:07:00AM -0800, Luis Chamberlain wrote:
> > The subject says limits for copy-offload...
> > 
> > On Mon, Feb 14, 2022 at 01:29:52PM +0530, Nitesh Shetty wrote:
> > > Add device limits as sysfs entries,
> > > - copy_offload (RW)
> > > - copy_max_bytes (RW)
> > > - copy_max_hw_bytes (RO)
> > > - copy_max_range_bytes (RW)
> > > - copy_max_range_hw_bytes (RO)
> > > - copy_max_nr_ranges (RW)
> > > - copy_max_nr_ranges_hw (RO)
> > 
> > Some of these seem like generic... and also I see a few more max_hw ones
> > not listed above...
> >
> queue_limits and sysfs entries are named differently.
> All sysfs entries start with the copy_* prefix, which also makes it easy
> to look up all the copy sysfs entries.
> For the queue limits naming, I tried to follow the existing queue limit
> convention (like discard).

My point was that your subject seems to indicate the changes are just
for copy-offload, but you seem to be adding generic queue limits as
well. Is that correct? If so then perhaps the subject should be changed
or the patch split up.

> > > +static ssize_t queue_copy_offload_store(struct request_queue *q,
> > > +const char *page, size_t count)
> > > +{
> > > + unsigned long copy_offload;
> > > + ssize_t ret = queue_var_store(&copy_offload, page, count);
> > > +
> > > + if (ret < 0)
> > > + return ret;
> > > +
> > > + if (copy_offload && !q->limits.max_hw_copy_sectors)
> > > + return -EINVAL;
> > 
> > 
> > If the kernel schedules, copy_offload may still be true and
> > max_hw_copy_sectors may be set to 0. Is that an issue?
> >
> 
> This check ensures that we don't enable offload if the device doesn't
> support offload. I feel it shouldn't be an issue.

My point was this:

        CPU1                                    CPU2
Time
1) if (copy_offload
2)    ---> preemption, so it schedules
3)                                              some other high priority task sets
                                                q->limits.max_hw_copy_sectors to 0
4) && !q->limits.max_hw_copy_sectors)

Can something bad happen if we allow for this?




Re: [dm-devel] [PATCH v3 02/10] block: Introduce queue limits for copy-offload support

2022-02-17 Thread Luis Chamberlain
The subject says limits for copy-offload...

On Mon, Feb 14, 2022 at 01:29:52PM +0530, Nitesh Shetty wrote:
> Add device limits as sysfs entries,
> - copy_offload (RW)
> - copy_max_bytes (RW)
> - copy_max_hw_bytes (RO)
> - copy_max_range_bytes (RW)
> - copy_max_range_hw_bytes (RO)
> - copy_max_nr_ranges (RW)
> - copy_max_nr_ranges_hw (RO)

Some of these seem like generic... and also I see a few more max_hw ones
not listed above...

> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> +/**
> + * blk_queue_max_copy_sectors - set max sectors for a single copy payload
> + * @q:  the request queue for the device
> + * @max_copy_sectors: maximum number of sectors to copy
> + **/
> +void blk_queue_max_copy_sectors(struct request_queue *q,
> + unsigned int max_copy_sectors)
> +{
> + q->limits.max_hw_copy_sectors = max_copy_sectors;
> + q->limits.max_copy_sectors = max_copy_sectors;
> +}
> +EXPORT_SYMBOL(blk_queue_max_copy_sectors);

Please use EXPORT_SYMBOL_GPL() for all new things.

Why is this setting both? The documentation doesn't seem to say.
What's the point?

> +
> +/**
> + * blk_queue_max_copy_range_sectors - set max sectors for a single range, in 
> a copy payload
> + * @q:  the request queue for the device
> + * @max_copy_range_sectors: maximum number of sectors to copy in a single 
> range
> + **/
> +void blk_queue_max_copy_range_sectors(struct request_queue *q,
> + unsigned int max_copy_range_sectors)
> +{
> + q->limits.max_hw_copy_range_sectors = max_copy_range_sectors;
> + q->limits.max_copy_range_sectors = max_copy_range_sectors;
> +}
> +EXPORT_SYMBOL(blk_queue_max_copy_range_sectors);

Same here.

> +/**
> + * blk_queue_max_copy_nr_ranges - set max number of ranges, in a copy payload
> + * @q:  the request queue for the device
> + * @max_copy_nr_ranges: maximum number of ranges
> + **/
> +void blk_queue_max_copy_nr_ranges(struct request_queue *q,
> + unsigned int max_copy_nr_ranges)
> +{
> + q->limits.max_hw_copy_nr_ranges = max_copy_nr_ranges;
> + q->limits.max_copy_nr_ranges = max_copy_nr_ranges;
> +}
> +EXPORT_SYMBOL(blk_queue_max_copy_nr_ranges);

Same.

> +
>  /**
>   * blk_queue_max_write_same_sectors - set max sectors for a single write same
>   * @q:  the request queue for the device
> @@ -541,6 +592,14 @@ int blk_stack_limits(struct queue_limits *t, struct 
> queue_limits *b,
>   t->max_segment_size = min_not_zero(t->max_segment_size,
>  b->max_segment_size);
>  
> + t->max_copy_sectors = min(t->max_copy_sectors, b->max_copy_sectors);
> + t->max_hw_copy_sectors = min(t->max_hw_copy_sectors, 
> b->max_hw_copy_sectors);
> + t->max_copy_range_sectors = min(t->max_copy_range_sectors, 
> b->max_copy_range_sectors);
> + t->max_hw_copy_range_sectors = min(t->max_hw_copy_range_sectors,
> + b->max_hw_copy_range_sectors);
> + t->max_copy_nr_ranges = min(t->max_copy_nr_ranges, 
> b->max_copy_nr_ranges);
> + t->max_hw_copy_nr_ranges = min(t->max_hw_copy_nr_ranges, 
> b->max_hw_copy_nr_ranges);
> +
>   t->misaligned |= b->misaligned;
>  
>   alignment = queue_limit_alignment_offset(b, start);
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 9f32882ceb2f..9ddd07f142d9 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -212,6 +212,129 @@ static ssize_t queue_discard_zeroes_data_show(struct 
> request_queue *q, char *pag
>   return queue_var_show(0, page);
>  }
>  
> +static ssize_t queue_copy_offload_show(struct request_queue *q, char *page)
> +{
> + return queue_var_show(blk_queue_copy(q), page);
> +}
> +
> +static ssize_t queue_copy_offload_store(struct request_queue *q,
> +const char *page, size_t count)
> +{
> + unsigned long copy_offload;
> + ssize_t ret = queue_var_store(&copy_offload, page, count);
> +
> + if (ret < 0)
> + return ret;
> +
> + if (copy_offload && !q->limits.max_hw_copy_sectors)
> + return -EINVAL;


If the kernel schedules, copy_offload may still be true and
max_hw_copy_sectors may be set to 0. Is that an issue?

> +
> + if (copy_offload)
> + blk_queue_flag_set(QUEUE_FLAG_COPY, q);
> + else
> + blk_queue_flag_clear(QUEUE_FLAG_COPY, q);

The copy_offload flag may be set here, but by now the queue flag could have been changed. Is that an issue?

> @@ -597,6 +720,14 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
>  QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
>  QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
>  
> +QUEUE_RW_ENTRY(queue_copy_offload, "copy_offload");
> +QUEUE_RO_ENTRY(queue_copy_max_hw, "copy_max_hw_bytes");
> +QUEUE_RW_ENTRY(queue_copy_max, "copy_max_bytes");
> +QUEUE_RO_ENTRY(queue_copy_range_max_hw, "copy_max_range_hw_bytes");
> +QUEUE_RW_ENTRY(queue_copy_range_max, "copy_max_range_bytes");
> 

Re: [dm-devel] [PATCH v3 01/10] block: make bio_map_kern() non static

2022-02-17 Thread Luis Chamberlain
On Mon, Feb 14, 2022 at 01:29:51PM +0530, Nitesh Shetty wrote:
> From: SelvaKumar S 
> 
> Make bio_map_kern() non static
> 
> Signed-off-by: SelvaKumar S 
> Signed-off-by: Nitesh Shetty 

This patch makes no sense on its own. I'd just merge it with
its first user.

  Luis




Re: [dm-devel] [RFC PATCH 3/3] nvme: add the "debug" host driver

2022-02-03 Thread Luis Chamberlain
On Thu, Feb 03, 2022 at 05:15:34PM +0100, Christoph Hellwig wrote:
> On Thu, Feb 03, 2022 at 08:06:33AM -0800, Luis Chamberlain wrote:
> > On Wed, Feb 02, 2022 at 06:01:13AM +, Adam Manzanares wrote:
> > > BTW I think having the target code be able to implement simple copy 
> > > without 
> > > moving data over the fabric would be a great way of showing off the 
> > > command.
> > 
> > Do you mean this should be implemented instead as a fabrics backend
> > instead because fabrics already instantiates and creates a virtual
> > nvme device? And so this would mean less code?
> 
> It would be a lot less code.  In fact I don't think we need any new code
> at all.  Just using nvme-loop on top of null_blk or brd should be all
> that is needed.

Mikulas,

That begs the question why add this instead of using null_blk with
nvme-loop?

  Luis




Re: [dm-devel] [RFC PATCH 3/3] nvme: add the "debug" host driver

2022-02-03 Thread Luis Chamberlain
On Wed, Feb 02, 2022 at 06:01:13AM +, Adam Manzanares wrote:
> BTW I think having the target code be able to implement simple copy without 
> moving data over the fabric would be a great way of showing off the command.

Do you mean this should be implemented instead as a fabrics backend
instead because fabrics already instantiates and creates a virtual
nvme device? And so this would mean less code?

  Luis




Re: [dm-devel] [RFC PATCH 3/3] nvme: add the "debug" host driver

2022-02-03 Thread Luis Chamberlain
On Wed, Feb 02, 2022 at 08:00:12AM +, Chaitanya Kulkarni wrote:
> Mikulas,
> 
> On 2/1/22 10:33 AM, Mikulas Patocka wrote:
> > External email: Use caution opening links or attachments
> > 
> > 
> > This patch adds a new driver "nvme-debug". It uses memory as a backing
> > store and it is used to test the copy offload functionality.
> > 
> > Signed-off-by: Mikulas Patocka 
> > 
> 
> 
> NVMe controller-specific memory-backed features targeted at testing
> and debugging need to go into QEMU, just like what we have done for
> NVMe ZNS QEMU support, and not into the kernel.
> 
> I don't see any special reason to make copy offload an exception.

One can instantiate SCSI devices with qemu by using fake SCSI devices,
but one can also just use scsi_debug to do the same. I see both efforts
as desirable, so long as someone maintains this.

For instance, blktests uses scsi_debug for simplicity.

In the end you decide what you want to use.

  Luis




Re: [dm-devel] [LSF/MM/BFP ATTEND] [LSF/MM/BFP TOPIC] Storage: Copy Offload

2022-01-31 Thread Luis Chamberlain
> * What we will discuss in the proposed session ?
> ---
> 
> I'd like to propose a session to go over this topic to understand :-
> 
> 1. What are the blockers for Copy Offload implementation ?
> 2. Discussion about having a file system interface.
> 3. Discussion about having right system call for user-space.
> 4. What is the right way to move this work forward ?
> 5. How can we help to contribute and move this work forward ?
> 
> * Required Participants :-
> ---
> 
> I'd like to invite file system, block layer, and device drivers
> developers to:-
> 
> 1. Share their opinion on the topic.
> 2. Share their experience and any other issues with [4].
> 3. Uncover additional details that are missing from this proposal.

Consider me interested in this topic.

  Luis




Re: [dm-devel] [PATCH 1/9] scsi/sd: add error handling support for add_disk()

2021-10-18 Thread Luis Chamberlain
On Sat, Oct 16, 2021 at 10:51:48PM -0400, Martin K. Petersen wrote:
> 
> Luis,
> 
> > We never checked for errors on add_disk() as this function returned
> > void. Now that this is fixed, use the shiny new error handling.
> >
> > As with the error handling for device_add() we follow the same logic
> > and just put the device so that cleanup is done via the
> > scsi_disk_release().
> 
> Acked-by: Martin K. Petersen 

Thanks, would you like Jens to pick this up and the other scsi/sr patch
or are you taking it through your tree?

  Luis




[dm-devel] [PATCH 6/9] m68k/emu/nfblock: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Geert Uytterhoeven 
Acked-by: Geert Uytterhoeven 
Signed-off-by: Luis Chamberlain 
---
 arch/m68k/emu/nfblock.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c
index 9a8394e96388..4de5a6087034 100644
--- a/arch/m68k/emu/nfblock.c
+++ b/arch/m68k/emu/nfblock.c
@@ -100,6 +100,7 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 
bsize)
 {
struct nfhd_device *dev;
int dev_id = id - NFHD_DEV_OFFSET;
+   int err = -ENOMEM;
 
pr_info("nfhd%u: found device with %u blocks (%u bytes)\n", dev_id,
blocks, bsize);
@@ -130,16 +131,20 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 
bsize)
sprintf(dev->disk->disk_name, "nfhd%u", dev_id);
set_capacity(dev->disk, (sector_t)blocks * (bsize / 512));
blk_queue_logical_block_size(dev->disk->queue, bsize);
-   add_disk(dev->disk);
+   err = add_disk(dev->disk);
+   if (err)
+   goto out_cleanup_disk;
 
list_add_tail(&dev->list, &nfhd_list);
 
return 0;
 
+out_cleanup_disk:
+   blk_cleanup_disk(dev->disk);
 free_dev:
kfree(dev);
 out:
-   return -ENOMEM;
+   return err;
 }
 
 static int __init nfhd_init(void)
-- 
2.30.2




[dm-devel] [PATCH 7/9] um/drivers/ubd_kern: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

ubd_disk_register() never returned an error, so just fix
that now and let the caller handle the error condition.

Reviewed-by: Gabriel Krisman Bertazi 
Signed-off-by: Luis Chamberlain 
---
 arch/um/drivers/ubd_kern.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index fefd343412c7..69d2d0049a61 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -855,8 +855,8 @@ static const struct attribute_group *ubd_attr_groups[] = {
NULL,
 };
 
-static void ubd_disk_register(int major, u64 size, int unit,
- struct gendisk *disk)
+static int ubd_disk_register(int major, u64 size, int unit,
+struct gendisk *disk)
 {
disk->major = major;
disk->first_minor = unit << UBD_SHIFT;
@@ -873,7 +873,7 @@ static void ubd_disk_register(int major, u64 size, int unit,
 
disk->private_data = &ubd_devs[unit];
disk->queue = ubd_devs[unit].queue;
-   device_add_disk(&ubd_devs[unit].pdev.dev, disk, ubd_attr_groups);
+   return device_add_disk(&ubd_devs[unit].pdev.dev, disk, ubd_attr_groups);
 }
 
 #define ROUND_BLOCK(n) ((n + (SECTOR_SIZE - 1)) & (-SECTOR_SIZE))
@@ -920,10 +920,15 @@ static int ubd_add(int n, char **error_out)
blk_queue_write_cache(ubd_dev->queue, true, false);
blk_queue_max_segments(ubd_dev->queue, MAX_SG);
blk_queue_segment_boundary(ubd_dev->queue, PAGE_SIZE - 1);
-   ubd_disk_register(UBD_MAJOR, ubd_dev->size, n, disk);
+   err = ubd_disk_register(UBD_MAJOR, ubd_dev->size, n, disk);
+   if (err)
+   goto out_cleanup_disk;
+
ubd_gendisk[n] = disk;
return 0;
 
+out_cleanup_disk:
+   blk_cleanup_disk(disk);
 out_cleanup_tags:
blk_mq_free_tag_set(&ubd_dev->tag_set);
 out:
-- 
2.30.2




[dm-devel] [PATCH 3/9] dm: add add_disk() error handling

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

There are two calls to dm_setup_md_queue() which can fail then,
one on dm_early_create() and we can easily see that the error path
there calls dm_destroy in the error path. The other use case is on
the ioctl table_load case. If that fails userspace needs to call
the DM_DEV_REMOVE_CMD to cleanup the state - similar to any other
failure.

Reviewed-by: Hannes Reinecke 
Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 245fa4153306..6d3265ed37c0 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2086,7 +2086,9 @@ int dm_setup_md_queue(struct mapped_device *md, struct 
dm_table *t)
if (r)
return r;
 
-   add_disk(md->disk);
+   r = add_disk(md->disk);
+   if (r)
+   return r;
 
r = dm_sysfs_init(md);
if (r) {
-- 
2.30.2




[dm-devel] [PATCH 2/9] scsi/sr: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Just put the cdrom kref and have the unwinding be done by
sr_kref_release().

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 115f7ef7a5de..4e5515848fa6 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -728,7 +728,12 @@ static int sr_probe(struct device *dev)
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
sr_revalidate_disk(cd);
-   device_add_disk(&sdev->sdev_gendev, disk, NULL);
+
+   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
+   if (error) {
+   kref_put(&cd->kref, sr_kref_release);
+   goto fail;
+   }
 
sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
-- 
2.30.2




[dm-devel] [PATCH 1/9] scsi/sd: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

As with the error handling for device_add() we follow the same
logic and just put the device so that cleanup is done via the
scsi_disk_release().

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a646d27df681..d69f2e626e76 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3457,7 +3457,13 @@ static int sd_probe(struct device *dev)
pm_runtime_set_autosuspend_delay(dev,
sdp->host->hostt->rpm_autosuspend_delay);
}
-   device_add_disk(dev, gd, NULL);
+
+   error = device_add_disk(dev, gd, NULL);
+   if (error) {
+   put_device(&sdkp->dev);
+   goto out;
+   }
+
if (sdkp->capacity)
sd_dif_config_host(sdkp);
 
-- 
2.30.2




[dm-devel] [PATCH 4/9] bcache: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

This driver doesn't do any unwinding with blk_cleanup_disk()
even on errors after add_disk() and so we follow that
tradition.

Acked-by: Coly Li 
Signed-off-by: Luis Chamberlain 
---
 drivers/md/bcache/super.c | 17 -
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index f2874c77ff79..f0c32cdd6594 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1082,7 +1082,9 @@ int bch_cached_dev_run(struct cached_dev *dc)
closure_sync();
}
 
-   add_disk(d->disk);
+   ret = add_disk(d->disk);
+   if (ret)
+   goto out;
bd_link_disk_holder(dc->bdev, dc->disk.disk);
/*
 * won't show up in the uevent file, use udevadm monitor -e instead
@@ -1534,10 +1536,11 @@ static void flash_dev_flush(struct closure *cl)
 
 static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
 {
+   int err = -ENOMEM;
struct bcache_device *d = kzalloc(sizeof(struct bcache_device),
  GFP_KERNEL);
if (!d)
-   return -ENOMEM;
+   goto err_ret;
 
closure_init(&d->cl, NULL);
set_closure_fn(&d->cl, flash_dev_flush, system_wq);
@@ -1551,9 +1554,12 @@ static int flash_dev_run(struct cache_set *c, struct 
uuid_entry *u)
bcache_device_attach(d, c, u - c->uuids);
bch_sectors_dirty_init(d);
bch_flash_dev_request_init(d);
-   add_disk(d->disk);
+   err = add_disk(d->disk);
+   if (err)
+   goto err;
 
-   if (kobject_add(&d->kobj, &disk_to_dev(d->disk)->kobj, "bcache"))
+   err = kobject_add(&d->kobj, &disk_to_dev(d->disk)->kobj, "bcache");
+   if (err)
goto err;
 
bcache_device_link(d, c, "volume");
@@ -1567,7 +1573,8 @@ static int flash_dev_run(struct cache_set *c, struct 
uuid_entry *u)
return 0;
 err:
kobject_put(&d->kobj);
-   return -ENOMEM;
+err_ret:
+   return err;
 }
 
 static int flash_devs_run(struct cache_set *c)
-- 
2.30.2




[dm-devel] [PATCH 8/9] rnbd: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Acked-by: Jack Wang 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/rnbd/rnbd-clt.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 5864c9b46cb9..3b78dc55a9a2 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1384,8 +1384,10 @@ static void setup_request_queue(struct rnbd_clt_dev *dev)
blk_queue_write_cache(dev->queue, dev->wc, dev->fua);
 }
 
-static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
+static int rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
 {
+   int err;
+
dev->gd->major  = rnbd_client_major;
dev->gd->first_minor= idx << RNBD_PART_BITS;
dev->gd->minors = 1 << RNBD_PART_BITS;
@@ -1410,7 +1412,11 @@ static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev 
*dev, int idx)
 
if (!dev->rotational)
blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
-   add_disk(dev->gd);
+   err = add_disk(dev->gd);
+   if (err)
+   blk_cleanup_disk(dev->gd);
+
+   return err;
 }
 
 static int rnbd_client_setup_device(struct rnbd_clt_dev *dev)
@@ -1426,8 +1432,7 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev 
*dev)
rnbd_init_mq_hw_queues(dev);
 
setup_request_queue(dev);
-   rnbd_clt_setup_gen_disk(dev, idx);
-   return 0;
+   return rnbd_clt_setup_gen_disk(dev, idx);
 }
 
 static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
-- 
2.30.2




[dm-devel] [PATCH 0/9] block: reviewed add_disk() error handling set

2021-10-15 Thread Luis Chamberlain
Jens,

I had last split up patches into 7 groups, but at this point now
most changes are merged except a few more drivers. Instead of creating
a new patch set for each of the 7 groups I'm creating 3 new groups of
patches now:

  * This set, for which we already have an Acked-by or Reviewed-by tag;
it would be nice to get clarification on whether driver maintainers want
these to go through you or want to pick these changes up themselves.

  * A second set will deal with patches which have no reviews done for
them yet 

  * The last set deals with the __register_blkdev() change and the
__must_check change which ensures we don't let in new drivers
which don't deal with error handling.

If you're a maintainer of any of the below patches and wish for them to
go through Jens' tree directly now would be a good time to say so or
you can just pick the patch up yourself.

Luis Chamberlain (9):
  scsi/sd: add error handling support for add_disk()
  scsi/sr: add error handling support for add_disk()
  dm: add add_disk() error handling
  bcache: add error handling support for add_disk()
  xen-blkfront: add error handling support for add_disk()
  m68k/emu/nfblock: add error handling support for add_disk()
  um/drivers/ubd_kern: add error handling support for add_disk()
  rnbd: add error handling support for add_disk()
  mtd: add add_disk() error handling

 arch/m68k/emu/nfblock.c   |  9 +++--
 arch/um/drivers/ubd_kern.c| 13 +
 drivers/block/rnbd/rnbd-clt.c | 13 +
 drivers/block/xen-blkfront.c  |  8 +++-
 drivers/md/bcache/super.c | 17 -
 drivers/md/dm.c   |  4 +++-
 drivers/mtd/mtd_blkdevs.c |  6 +-
 drivers/scsi/sd.c |  8 +++-
 drivers/scsi/sr.c |  7 ++-
 9 files changed, 65 insertions(+), 20 deletions(-)

-- 
2.30.2




[dm-devel] [PATCH 5/9] xen-blkfront: add error handling support for add_disk()

2021-10-15 Thread Luis Chamberlain
We never checked for errors on device_add_disk() as this function
returned void. Now that this is fixed, use the shiny new error
handling. The function xlvbd_alloc_gendisk() normally does the
unwinding on error when allocating the disk and creating the tag
set, but since all that error handling is stuffed inside
xlvbd_alloc_gendisk() we must repeat the tag set freeing here as
well.

We set the info->rq to NULL to ensure blkif_free() doesn't crash
on blk_mq_stop_hw_queues() on device_add_disk() error as the queue
will be long gone by then.

Reviewed-by: Juergen Gross 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/xen-blkfront.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index df0deb927760..8e3983e456f3 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2386,7 +2386,13 @@ static void blkfront_connect(struct blkfront_info *info)
for_each_rinfo(info, rinfo, i)
kick_pending_request_queues(rinfo);
 
-   device_add_disk(&info->xbdev->dev, info->gd, NULL);
+   err = device_add_disk(&info->xbdev->dev, info->gd, NULL);
+   if (err) {
+   blk_cleanup_disk(info->gd);
+   blk_mq_free_tag_set(&info->tag_set);
+   info->rq = NULL;
+   goto fail;
+   }
 
info->is_ready = 1;
return;
-- 
2.30.2




[dm-devel] [PATCH v4 5/6] loop: add error handling support for add_disk()

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Hannes Reinecke 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/loop.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 7bf4686af774..00ee365ed5e0 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2394,13 +2394,19 @@ static int loop_add(int i)
disk->event_flags   = DISK_EVENT_FLAG_UEVENT;
sprintf(disk->disk_name, "loop%d", i);
/* Make this loop device reachable from pathname. */
-   add_disk(disk);
+   err = add_disk(disk);
+   if (err)
+   goto out_cleanup_disk;
+
/* Show this loop device. */
mutex_lock(&loop_ctl_mutex);
lo->idr_visible = true;
mutex_unlock(&loop_ctl_mutex);
+
return i;
 
+out_cleanup_disk:
+   blk_cleanup_disk(disk);
 out_cleanup_tags:
blk_mq_free_tag_set(&lo->tag_set);
 out_free_idr:
-- 
2.30.2




[dm-devel] [PATCH v4 3/6] md: add error handling support for add_disk()

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

We just do the unwinding of what was not done before, and are
sure to unlock prior to bailing.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
Signed-off-by: Song Liu 
---
 drivers/md/md.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 6c0c3d0d905a..6bd5ad3c30b4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5700,7 +5700,11 @@ static int md_alloc(dev_t dev, char *name)
disk->flags |= GENHD_FL_EXT_DEVT;
disk->events |= DISK_EVENT_MEDIA_CHANGE;
mddev->gendisk = disk;
-   add_disk(disk);
+   error = add_disk(disk);
+   if (error) {
+   blk_cleanup_disk(disk);
+   goto abort;
+   }
 
error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
if (error) {
-- 
2.30.2




[dm-devel] [PATCH v4 0/6] block: first batch of add_disk() error handling conversions

2021-09-27 Thread Luis Chamberlain
This is the first set of driver conversions to use the add_disk() error
handling. You can find these changes along with the entire 7th set of
driver conversions on my 20210927-for-axboe-add-disk-error-handling
branch [0].

Changes on this v4 since the last v3:

 - Rebased onto linux-next tag 20210927
 - I drop the patches already merged
 - Fix the scsi/sd, scsi/sr as noted by Ming Lei
 - md: rebased in light of "md: fix a lock order reversal in md_alloc"
 - Adds Reviewed / Acked by tags where appropriate. I didn't keep the
   review by Hannes on the scsi patches as those patches are now
   slightly fixed

[0] 
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20210927-for-axboe-add-disk-error-handling

Luis Chamberlain (6):
  scsi/sd: add error handling support for add_disk()
  scsi/sr: add error handling support for add_disk()
  md: add error handling support for add_disk()
  dm: add add_disk() error handling
  loop: add error handling support for add_disk()
  nbd: add error handling support for add_disk()

 drivers/block/loop.c | 8 +++-
 drivers/block/nbd.c  | 6 +-
 drivers/md/dm.c  | 4 +++-
 drivers/md/md.c  | 6 +-
 drivers/scsi/sd.c| 8 +++-
 drivers/scsi/sr.c| 7 ++-
 6 files changed, 33 insertions(+), 6 deletions(-)

-- 
2.30.2




[dm-devel] [PATCH v4 2/6] scsi/sr: add error handling support for add_disk()

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Just put the cdrom kref and have the unwinding be done by
sr_kref_release().

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 8b17b35283aa..d769057b25a0 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -727,7 +727,12 @@ static int sr_probe(struct device *dev)
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
sr_revalidate_disk(cd);
-   device_add_disk(&sdev->sdev_gendev, disk, NULL);
+
+   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
+   if (error) {
+   kref_put(&cd->kref, sr_kref_release);
+   goto fail;
+   }
 
sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
-- 
2.30.2




[dm-devel] [PATCH v4 4/6] dm: add add_disk() error handling

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

There are two calls to dm_setup_md_queue() which can fail then.
One is in dm_early_create(), and we can easily see that its error
path calls dm_destroy. The other is in the ioctl table_load case.
If that fails, userspace needs to call the DM_DEV_REMOVE_CMD to
clean up the state - similar to any other failure.

Reviewed-by: Hannes Reinecke 
Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index a011d09cb0fa..b83aab8507c2 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2083,7 +2083,9 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
if (r)
return r;
 
-   add_disk(md->disk);
+   r = add_disk(md->disk);
+   if (r)
+   return r;
 
r = dm_sysfs_init(md);
if (r) {
-- 
2.30.2




[dm-devel] [PATCH v4 1/6] scsi/sd: add error handling support for add_disk()

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

As with the error handling for device_add() we follow the same
logic and just put the device so that cleanup is done via the
scsi_disk_release().

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 523bf2fdc253..3a101ad4d16e 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3449,7 +3449,13 @@ static int sd_probe(struct device *dev)
pm_runtime_set_autosuspend_delay(dev,
sdp->host->hostt->rpm_autosuspend_delay);
}
-   device_add_disk(dev, gd, NULL);
+
+   error = device_add_disk(dev, gd, NULL);
+   if (error) {
+   put_device(&sdkp->dev);
+   goto out;
+   }
+
if (sdkp->capacity)
sd_dif_config_host(sdkp);
 
-- 
2.30.2




[dm-devel] [PATCH v4 6/6] nbd: add error handling support for add_disk()

2021-09-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Hannes Reinecke 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/nbd.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5170a630778d..741365295157 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1757,7 +1757,9 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
disk->fops = &nbd_fops;
disk->private_data = nbd;
sprintf(disk->disk_name, "nbd%d", index);
-   add_disk(disk);
+   err = add_disk(disk);
+   if (err)
+   goto out_err_disk;
 
/*
 * Now publish the device.
@@ -1766,6 +1768,8 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
nbd_total_devices++;
return nbd;
 
+out_err_disk:
+   blk_cleanup_disk(disk);
 out_free_idr:
mutex_lock(&nbd_index_mutex);
idr_remove(&nbd_index_idr, index);
-- 
2.30.2




Re: [dm-devel] [PATCH v3 2/8] scsi/sr: add error handling support for add_disk()

2021-09-13 Thread Luis Chamberlain
On Tue, Sep 07, 2021 at 09:37:05AM +0800, Ming Lei wrote:
> On Mon, Aug 30, 2021 at 02:25:32PM -0700, Luis Chamberlain wrote:
> > We never checked for errors on add_disk() as this function
> > returned void. Now that this is fixed, use the shiny new
> > error handling.
> > 
> > Reviewed-by: Christoph Hellwig 
> > Signed-off-by: Luis Chamberlain 
> > ---
> >  drivers/scsi/sr.c | 5 -
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
> > index 2942a4ec9bdd..72fd21844367 100644
> > --- a/drivers/scsi/sr.c
> > +++ b/drivers/scsi/sr.c
> > @@ -779,7 +779,10 @@ static int sr_probe(struct device *dev)
> > dev_set_drvdata(dev, cd);
> > disk->flags |= GENHD_FL_REMOVABLE;
> > sr_revalidate_disk(cd);
> > -   device_add_disk(&sdev->sdev_gendev, disk, NULL);
> > +
> > +   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
> > +   if (error)
> > +   goto fail_minor;
> 
> You don't undo register_cdrom(), maybe you can use
> kref_put(&cd->kref, sr_kref_release); to simplify the error handling.

Works with me, thanks!

  Luis




Re: [dm-devel] [PATCH v3 1/8] scsi/sd: add error handling support for add_disk()

2021-09-13 Thread Luis Chamberlain
On Tue, Sep 07, 2021 at 09:29:07AM +0800, Ming Lei wrote:
> On Mon, Aug 30, 2021 at 02:25:31PM -0700, Luis Chamberlain wrote:
> > We never checked for errors on add_disk() as this function
> > returned void. Now that this is fixed, use the shiny new
> > error handling.
> > 
> > Reviewed-by: Christoph Hellwig 
> > Signed-off-by: Luis Chamberlain 
> > ---
> >  drivers/scsi/sd.c | 6 +-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> > index 610ebba0d66e..8c1273fff23e 100644
> > --- a/drivers/scsi/sd.c
> > +++ b/drivers/scsi/sd.c
> > @@ -3487,7 +3487,11 @@ static int sd_probe(struct device *dev)
> > pm_runtime_set_autosuspend_delay(dev,
> > sdp->host->hostt->rpm_autosuspend_delay);
> > }
> > -   device_add_disk(dev, gd, NULL);
> > +
> > +   error = device_add_disk(dev, gd, NULL);
> > +   if (error)
> > +   goto out_free_index;
> > +
> 
> The error handling is actually wrong, see 
> 
>   
> https://lore.kernel.org/linux-scsi/c93f3010-13c9-e07f-1458-b6b47a270...@acm.org/T/#t
> 
> Maybe you can base on that patch.

Done, thanks!

 Luis




[dm-devel] [PATCH v3 0/8] block: first batch of add_disk() error handling conversions

2021-08-30 Thread Luis Chamberlain
Jens,

I think this first set is ready, but pending review of just two patches:

  * mmc/core/block
  * dm

All other patches have a respective Reviewed-by tag. The above two
patches were integrated back into the series once I understood
Christoph's concerns, and adjusted the patch as such.

This goes rebased onto your for-next as of today. If anyone wants to
explore the pending full set this is up on my linux-next branch
20210830-for-axboe-add-disk-error-handling-next [0].

[0] 
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20210830-for-axboe-add-disk-error-handling-next

Luis Chamberlain (8):
  scsi/sd: add error handling support for add_disk()
  scsi/sr: add error handling support for add_disk()
  nvme: add error handling support for add_disk()
  mmc/core/block: add error handling support for add_disk()
  md: add error handling support for add_disk()
  dm: add add_disk() error handling
  loop: add error handling support for add_disk()
  nbd: add error handling support for add_disk()

 drivers/block/loop.c | 9 -
 drivers/block/nbd.c  | 6 +-
 drivers/md/dm.c  | 4 +++-
 drivers/md/md.c  | 7 ++-
 drivers/mmc/core/block.c | 7 ++-
 drivers/nvme/host/core.c | 9 -
 drivers/scsi/sd.c| 6 +-
 drivers/scsi/sr.c| 5 -
 8 files changed, 45 insertions(+), 8 deletions(-)

-- 
2.30.2




[dm-devel] [PATCH v3 3/8] nvme: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/nvme/host/core.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8679a108f571..687d3be563a3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3763,7 +3763,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
nvme_get_ctrl(ctrl);
 
-   device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+   if (device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups))
+   goto out_cleanup_ns_from_list;
+
if (!nvme_ns_head_multipath(ns->head))
nvme_add_ns_cdev(ns);
 
@@ -3773,6 +3775,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
return;
 
+ out_cleanup_ns_from_list:
+   nvme_put_ctrl(ctrl);
+   down_write(&ctrl->namespaces_rwsem);
+   list_del_init(&ns->list);
+   up_write(&ctrl->namespaces_rwsem);
 out_unlink_ns:
mutex_lock(&ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
-- 
2.30.2




[dm-devel] [PATCH v3 1/8] scsi/sd: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 610ebba0d66e..8c1273fff23e 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3487,7 +3487,11 @@ static int sd_probe(struct device *dev)
pm_runtime_set_autosuspend_delay(dev,
sdp->host->hostt->rpm_autosuspend_delay);
}
-   device_add_disk(dev, gd, NULL);
+
+   error = device_add_disk(dev, gd, NULL);
+   if (error)
+   goto out_free_index;
+
if (sdkp->capacity)
sd_dif_config_host(sdkp);
 
-- 
2.30.2




[dm-devel] [PATCH v3 8/8] nbd: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/nbd.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5170a630778d..741365295157 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1757,7 +1757,9 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
disk->fops = &nbd_fops;
disk->private_data = nbd;
sprintf(disk->disk_name, "nbd%d", index);
-   add_disk(disk);
+   err = add_disk(disk);
+   if (err)
+   goto out_err_disk;
 
/*
 * Now publish the device.
@@ -1766,6 +1768,8 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
nbd_total_devices++;
return nbd;
 
+out_err_disk:
+   blk_cleanup_disk(disk);
 out_free_idr:
mutex_lock(&nbd_index_mutex);
idr_remove(&nbd_index_idr, index);
-- 
2.30.2




[dm-devel] [PATCH v3 5/8] md: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

We just do the unwinding of what was not done before, and are
sure to unlock prior to bailing.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/md/md.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ae8fe54ea358..5c0d3536d7c7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5704,7 +5704,11 @@ static int md_alloc(dev_t dev, char *name)
 * through to md_open, so make sure it doesn't get too far
 */
mutex_lock(&mddev->open_mutex);
-   add_disk(disk);
+   error = add_disk(disk);
+   if (error) {
+   blk_cleanup_disk(disk);
+   goto abort_unlock;
+   }
 
error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
if (error) {
@@ -5718,6 +5722,7 @@ static int md_alloc(dev_t dev, char *name)
if (mddev->kobj.sd &&
sysfs_create_group(&mddev->kobj, &md_bitmap_group))
pr_debug("pointless warning\n");
+ abort_unlock:
mutex_unlock(&mddev->open_mutex);
 abort:
mutex_unlock(&disks_mutex);
-- 
2.30.2




[dm-devel] [PATCH v3 4/8] mmc/core/block: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

The caller only cleans up the disk if we pass it an allocated md,
but on error we return ERR_PTR(ret), and so we must do all
the unwinding ourselves.

Signed-off-by: Luis Chamberlain 
---
 drivers/mmc/core/block.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 6a15fdf6e5f2..9b2856aa6231 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2453,9 +2453,14 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
/* used in ->open, must be set before add_disk: */
if (area_type == MMC_BLK_DATA_AREA_MAIN)
dev_set_drvdata(&card->dev, md);
-   device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
+   ret = device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
+   if (ret)
+   goto err_cleanup_queue;
return md;
 
+ err_cleanup_queue:
+   blk_cleanup_queue(md->disk->queue);
+   blk_mq_free_tag_set(&md->queue.tag_set);
  err_kfree:
kfree(md);
  out:
-- 
2.30.2




[dm-devel] [PATCH v3 6/8] dm: add add_disk() error handling

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

There are two calls to dm_setup_md_queue() which can fail then.
One is in dm_early_create(), and we can easily see that its error
path calls dm_destroy. The other is in the ioctl table_load case.
If that fails, userspace needs to call the DM_DEV_REMOVE_CMD to
clean up the state - similar to any other failure.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 7981b7287628..4e6b9d6ac508 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2077,7 +2077,9 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
if (r)
return r;
 
-   add_disk(md->disk);
+   r = add_disk(md->disk);
+   if (r)
+   return r;
 
r = dm_sysfs_init(md);
if (r) {
-- 
2.30.2




[dm-devel] [PATCH v3 7/8] loop: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/loop.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index fa1c298a8cfb..b8b9e2349e77 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2393,10 +2393,17 @@ static int loop_add(int i)
disk->events= DISK_EVENT_MEDIA_CHANGE;
disk->event_flags   = DISK_EVENT_FLAG_UEVENT;
sprintf(disk->disk_name, "loop%d", i);
-   add_disk(disk);
+
+   err = add_disk(disk);
+   if (err)
+   goto out_cleanup_disk;
+
mutex_unlock(&loop_ctl_mutex);
+
return i;
 
+out_cleanup_disk:
+   blk_cleanup_disk(disk);
 out_cleanup_tags:
blk_mq_free_tag_set(&lo->tag_set);
 out_free_idr:
-- 
2.30.2




[dm-devel] [PATCH v3 2/8] scsi/sr: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 2942a4ec9bdd..72fd21844367 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -779,7 +779,10 @@ static int sr_probe(struct device *dev)
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
sr_revalidate_disk(cd);
-   device_add_disk(&sdev->sdev_gendev, disk, NULL);
+
+   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
+   if (error)
+   goto fail_minor;
 
sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
-- 
2.30.2




Re: [dm-devel] [PATCH 08/10] dm: add add_disk() error handling

2021-08-30 Thread Luis Chamberlain
On Sat, Aug 28, 2021 at 08:35:57AM +0100, Christoph Hellwig wrote:
> On Fri, Aug 27, 2021 at 11:55:14AM -0700, Luis Chamberlain wrote:
> > > I think the add_disk should just return r.  If you look at the
> > > callers they eventualy end up in dm_table_destroy, which does
> > > this cleanup.
> > 
> > I don't see it. What part of dm_table_destroy() does this?
> 
> Sorry, dm_destroy, not dm_table_destroy.  For dm_early_create it's
> trivial as that calls both dm_table_destroy and dm_destroy in the error
> path.  The normal table_load case is a separate ioctl, but if that
> fails userspace needs to call the DM_DEV_REMOVE_CMD to cleanup
> the state - similar to any other failure.

I see, ok sure I'll document this in the commit log as it's not so
obvious.

  Luis




Re: [dm-devel] [PATCH 06/10] mmc/core/block: add error handling support for add_disk()

2021-08-30 Thread Luis Chamberlain
On Sat, Aug 28, 2021 at 08:32:11AM +0100, Christoph Hellwig wrote:
> On Fri, Aug 27, 2021 at 11:42:36AM -0700, Luis Chamberlain wrote:
> > > > if (area_type == MMC_BLK_DATA_AREA_MAIN)
> > > > dev_set_drvdata(&card->dev, md);
> > > > -   device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
> > > > +   ret = device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
> > > > +   if (ret)
> > > > +   goto out;
> > > 
> > > This needs to do a blk_cleanup_queue and also the kfree of md.
> > 
> > If mmc_blk_alloc_parts() fails mmc_blk_remove_req() is called which
> > does both for us?
> 
> Yes, but only for the main gendisk, and those parts already added to
> the list which happens after device_add_disk succeeded.

Ah yes I see that now. Will fix up. The tag also needs to be cleaned up.

  Luis




[dm-devel] [PATCH v2 3/6] nvme: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
---
 drivers/nvme/host/core.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8679a108f571..687d3be563a3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3763,7 +3763,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
nvme_get_ctrl(ctrl);
 
-   device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+   if (device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups))
+   goto out_cleanup_ns_from_list;
+
if (!nvme_ns_head_multipath(ns->head))
nvme_add_ns_cdev(ns);
 
@@ -3773,6 +3775,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
return;
 
+ out_cleanup_ns_from_list:
+   nvme_put_ctrl(ctrl);
+   down_write(&ctrl->namespaces_rwsem);
+   list_del_init(&ns->list);
+   up_write(&ctrl->namespaces_rwsem);
 out_unlink_ns:
mutex_lock(&ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
-- 
2.30.2




[dm-devel] [PATCH v2 2/6] scsi/sr: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 2942a4ec9bdd..72fd21844367 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -779,7 +779,10 @@ static int sr_probe(struct device *dev)
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
sr_revalidate_disk(cd);
-   device_add_disk(&sdev->sdev_gendev, disk, NULL);
+
+   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
+   if (error)
+   goto fail_minor;
 
sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
-- 
2.30.2




[dm-devel] [PATCH v2 6/6] nbd: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/nbd.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5170a630778d..741365295157 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1757,7 +1757,9 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
disk->fops = &nbd_fops;
disk->private_data = nbd;
sprintf(disk->disk_name, "nbd%d", index);
-   add_disk(disk);
+   err = add_disk(disk);
+   if (err)
+   goto out_err_disk;
 
/*
 * Now publish the device.
@@ -1766,6 +1768,8 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
nbd_total_devices++;
return nbd;
 
+out_err_disk:
+   blk_cleanup_disk(disk);
 out_free_idr:
mutex_lock(&nbd_index_mutex);
idr_remove(&nbd_index_idr, index);
-- 
2.30.2




[dm-devel] [PATCH v2 5/6] loop: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/block/loop.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index fa1c298a8cfb..b8b9e2349e77 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2393,10 +2393,17 @@ static int loop_add(int i)
disk->events= DISK_EVENT_MEDIA_CHANGE;
disk->event_flags   = DISK_EVENT_FLAG_UEVENT;
sprintf(disk->disk_name, "loop%d", i);
-   add_disk(disk);
+
+   err = add_disk(disk);
+   if (err)
+   goto out_cleanup_disk;
+
mutex_unlock(&loop_ctl_mutex);
+
return i;
 
+out_cleanup_disk:
+   blk_cleanup_disk(disk);
 out_cleanup_tags:
blk_mq_free_tag_set(&lo->tag_set);
 out_free_idr:
-- 
2.30.2




[dm-devel] [PATCH v2 0/6] block: first batch of add_disk() error handling conversions

2021-08-27 Thread Luis Chamberlain
This v2 drops two of the scsi patches which Christoph pointed out were
not needed. It also fixes the nvme driver change to account for the
missing put of the ctrl, and appends the commits with the respective
reviewed tags. I've also dropped the mmc and dm patches from this
series as they are still pending discussion. I'll roll them into my
next series after discussion is done.

The only patch which goes without a review tag is the nvme driver change.

These changes are rebased on Jens' latest axboe/for-next. The respective
full queue of my changes can be found on my git branch [0].

[0] 
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20210827-for-axboe-add-disk-error-handling-next

Luis Chamberlain (6):
  scsi/sd: add error handling support for add_disk()
  scsi/sr: add error handling support for add_disk()
  nvme: add error handling support for add_disk()
  md: add error handling support for add_disk()
  loop: add error handling support for add_disk()
  nbd: add error handling support for add_disk()

 drivers/block/loop.c | 9 -
 drivers/block/nbd.c  | 6 +-
 drivers/md/md.c  | 7 ++-
 drivers/nvme/host/core.c | 9 -
 drivers/scsi/sd.c| 6 +-
 drivers/scsi/sr.c| 5 -
 6 files changed, 36 insertions(+), 6 deletions(-)

-- 
2.30.2




[dm-devel] [PATCH v2 4/6] md: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

We just do the unwinding of what was not done before, and are
sure to unlock prior to bailing.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/md/md.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ae8fe54ea358..5c0d3536d7c7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5704,7 +5704,11 @@ static int md_alloc(dev_t dev, char *name)
 * through to md_open, so make sure it doesn't get too far
 */
mutex_lock(&mddev->open_mutex);
-   add_disk(disk);
+   error = add_disk(disk);
+   if (error) {
+   blk_cleanup_disk(disk);
+   goto abort_unlock;
+   }
 
error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
if (error) {
@@ -5718,6 +5722,7 @@ static int md_alloc(dev_t dev, char *name)
if (mddev->kobj.sd &&
sysfs_create_group(&mddev->kobj, &md_bitmap_group))
pr_debug("pointless warning\n");
+ abort_unlock:
mutex_unlock(&mddev->open_mutex);
  abort:
mutex_unlock(&disks_mutex);
-- 
2.30.2




[dm-devel] [PATCH v2 1/6] scsi/sd: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 610ebba0d66e..8c1273fff23e 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3487,7 +3487,11 @@ static int sd_probe(struct device *dev)
pm_runtime_set_autosuspend_delay(dev,
sdp->host->hostt->rpm_autosuspend_delay);
}
-   device_add_disk(dev, gd, NULL);
+
+   error = device_add_disk(dev, gd, NULL);
+   if (error)
+   goto out_free_index;
+
if (sdkp->capacity)
sd_dif_config_host(sdkp);
 
-- 
2.30.2




Re: [dm-devel] [PATCH 08/10] dm: add add_disk() error handling

2021-08-27 Thread Luis Chamberlain
On Tue, Aug 24, 2021 at 07:21:30AM +0100, Christoph Hellwig wrote:
> On Mon, Aug 23, 2021 at 01:29:28PM -0700, Luis Chamberlain wrote:
> > -   add_disk(md->disk);
> > +   r = add_disk(md->disk);
> > +   if (r)
> > +   goto out_cleanup_disk;
> >  
> > r = dm_sysfs_init(md);
> > -   if (r) {
> > -   del_gendisk(md->disk);
> > -   return r;
> > -   }
> > +   if (r)
> > +   goto out_del_gendisk;
> > md->type = type;
> > return 0;
> > +
> > +out_cleanup_disk:
> > +   blk_cleanup_disk(md->disk);
> > +out_del_gendisk:
> > +   del_gendisk(md->disk);
> > +   return r;
> 
> I think the add_disk should just return r.  If you look at the
> callers they eventually end up in dm_table_destroy, which does
> this cleanup.

I don't see it. What part of dm_table_destroy() does this?

  Luis




Re: [dm-devel] [PATCH 06/10] mmc/core/block: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
On Tue, Aug 24, 2021 at 07:13:13AM +0100, Christoph Hellwig wrote:
> On Mon, Aug 23, 2021 at 01:29:26PM -0700, Luis Chamberlain wrote:
> > We never checked for errors on add_disk() as this function
> > returned void. Now that this is fixed, use the shiny new
> > error handling.
> > 
> > The caller already cleans up the disk, so all we need to do
> > is pass along the return value.
> > 
> > Signed-off-by: Luis Chamberlain 
> > ---
> >  drivers/mmc/core/block.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> > index 4c11f171e56d..4f12c6d1e1b5 100644
> > --- a/drivers/mmc/core/block.c
> > +++ b/drivers/mmc/core/block.c
> > @@ -2432,7 +2432,9 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> > /* used in ->open, must be set before add_disk: */
> > if (area_type == MMC_BLK_DATA_AREA_MAIN)
> > dev_set_drvdata(&card->dev, md);
> > -   device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
> > +   ret = device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
> > +   if (ret)
> > +   goto out;
> 
> This needs to do a blk_cleanup_queue and also the kfree of md.

If mmc_blk_alloc_parts() fails mmc_blk_remove_req() is called which
does both for us?

 Luis




Re: [dm-devel] [PATCH 05/10] nvme: add error handling support for add_disk()

2021-08-27 Thread Luis Chamberlain
On Tue, Aug 24, 2021 at 07:09:37AM +0100, Christoph Hellwig wrote:
> On Mon, Aug 23, 2021 at 01:29:25PM -0700, Luis Chamberlain wrote:
> > +   rc = device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
> > +   if (rc)
> > +   goto out_cleanup_ns_from_list;
> > +
> 
> Nit: no real need for the rc variable here as we never use the actual
> value.

Alrighty.

> > if (!nvme_ns_head_multipath(ns->head))
> > nvme_add_ns_cdev(ns);
> >  
> > @@ -3785,6 +3789,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
> >  
> > return;
> >  
> > + out_cleanup_ns_from_list:
> > +   down_write(&ctrl->namespaces_rwsem);
> > +   list_del_init(&ns->list);
> > +   up_write(&ctrl->namespaces_rwsem);
> 
> This also needs to do a nvme_put_ctrl.

Fixed.

  Luis




Re: [dm-devel] [PATCH 03/10] scsi/sr: use blk_cleanup_disk() instead of put_disk()

2021-08-27 Thread Luis Chamberlain
On Tue, Aug 24, 2021 at 07:00:27AM +0100, Christoph Hellwig wrote:
> On Mon, Aug 23, 2021 at 01:29:23PM -0700, Luis Chamberlain wrote:
> > The single put_disk() is useful if you know you're not doing
> > a cleanup after add_disk(), but since we want to add support
> > for that, just use the normal form of blk_cleanup_disk() to
> > cleanup the queue and put the disk.
> > 
> > Signed-off-by: Luis Chamberlain 
> 
> .. not needed as the cleanup is done by the core scsi code.

Dropped.

  Luis




Re: [dm-devel] [PATCH 01/10] scsi/sd: use blk_cleanup_queue() instead of put_disk()

2021-08-27 Thread Luis Chamberlain
On Tue, Aug 24, 2021 at 06:52:53AM +0100, Christoph Hellwig wrote:
> On Mon, Aug 23, 2021 at 01:29:21PM -0700, Luis Chamberlain wrote:
> > The single put_disk() is useful if you know you're not doing
> > a cleanup after add_disk(), but since we want to add support
> > for that, just use the normal form of blk_cleanup_disk() to
> > cleanup the queue and put the disk.
> 
> Hmm, I don't think this is correct.  The request_queue is owned by the
> core SCSI code.

Alright, I'll drop this one. For the life of me I can't find the respective
probe call on the scsi layer.

  Luis




[dm-devel] [PATCH 04/10] scsi/sr: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index dc78ad96e6f9..3e0c6f881521 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -779,7 +779,10 @@ static int sr_probe(struct device *dev)
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
sr_revalidate_disk(cd);
-   device_add_disk(&sdev->sdev_gendev, disk, NULL);
+
+   error = device_add_disk(&sdev->sdev_gendev, disk, NULL);
+   if (error)
+   goto fail_minor;
 
sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
-- 
2.30.2




[dm-devel] [PATCH 05/10] nvme: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
---
 drivers/nvme/host/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 68acd33c3856..fe02e95173d8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3720,6 +3720,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
struct gendisk *disk;
struct nvme_id_ns *id;
int node = ctrl->numa_node;
+   int rc;
 
if (nvme_identify_ns(ctrl, nsid, ids, &id))
return;
@@ -3775,7 +3776,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
nvme_get_ctrl(ctrl);
 
-   device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+   rc = device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+   if (rc)
+   goto out_cleanup_ns_from_list;
+
if (!nvme_ns_head_multipath(ns->head))
nvme_add_ns_cdev(ns);
 
@@ -3785,6 +3789,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 
return;
 
+ out_cleanup_ns_from_list:
+   down_write(&ctrl->namespaces_rwsem);
+   list_del_init(&ns->list);
+   up_write(&ctrl->namespaces_rwsem);
  out_unlink_ns:
mutex_lock(&ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
-- 
2.30.2




[dm-devel] [PATCH 09/10] loop: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
Signed-off-by: Christoph Hellwig 
---
 drivers/block/loop.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index fa1c298a8cfb..b8b9e2349e77 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2393,10 +2393,17 @@ static int loop_add(int i)
disk->events= DISK_EVENT_MEDIA_CHANGE;
disk->event_flags   = DISK_EVENT_FLAG_UEVENT;
sprintf(disk->disk_name, "loop%d", i);
-   add_disk(disk);
+
+   err = add_disk(disk);
+   if (err)
+   goto out_cleanup_disk;
+
mutex_unlock(&loop_ctl_mutex);
+
return i;
 
+out_cleanup_disk:
+   blk_cleanup_disk(disk);
 out_cleanup_tags:
blk_mq_free_tag_set(&lo->tag_set);
 out_free_idr:
-- 
2.30.2




[dm-devel] [PATCH 02/10] scsi/sd: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 7d5217905374..7df6f20f28ec 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3487,7 +3487,11 @@ static int sd_probe(struct device *dev)
pm_runtime_set_autosuspend_delay(dev,
sdp->host->hostt->rpm_autosuspend_delay);
}
-   device_add_disk(dev, gd, NULL);
+
+   error = device_add_disk(dev, gd, NULL);
+   if (error)
+   goto out_free_index;
+
if (sdkp->capacity)
sd_dif_config_host(sdkp);
 
-- 
2.30.2




[dm-devel] [PATCH 07/10] md: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

We just do the unwinding of what was not done before, and are
sure to unlock prior to bailing.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/md.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ae8fe54ea358..5c0d3536d7c7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5704,7 +5704,11 @@ static int md_alloc(dev_t dev, char *name)
 * through to md_open, so make sure it doesn't get too far
 */
mutex_lock(&mddev->open_mutex);
-   add_disk(disk);
+   error = add_disk(disk);
+   if (error) {
+   blk_cleanup_disk(disk);
+   goto abort_unlock;
+   }
 
error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
if (error) {
@@ -5718,6 +5722,7 @@ static int md_alloc(dev_t dev, char *name)
if (mddev->kobj.sd &&
sysfs_create_group(&mddev->kobj, &md_bitmap_group))
pr_debug("pointless warning\n");
+ abort_unlock:
mutex_unlock(&mddev->open_mutex);
  abort:
mutex_unlock(&disks_mutex);
-- 
2.30.2




[dm-devel] [PATCH 03/10] scsi/sr: use blk_cleanup_disk() instead of put_disk()

2021-08-23 Thread Luis Chamberlain
The single put_disk() is useful if you know you're not doing
a cleanup after add_disk(), but since we want to add support
for that, just use the normal form of blk_cleanup_disk() to
cleanup the queue and put the disk.

Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index a0df27db4d61..dc78ad96e6f9 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -792,7 +792,7 @@ static int sr_probe(struct device *dev)
clear_bit(minor, sr_index_bits);
spin_unlock(&sr_index_lock);
 fail_put:
-   put_disk(disk);
+   blk_cleanup_disk(disk);
mutex_destroy(&cd->lock);
 fail_free:
kfree(cd);
-- 
2.30.2




[dm-devel] [PATCH 10/10] nbd: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
Signed-off-by: Christoph Hellwig 
---
 drivers/block/nbd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index c38317979f74..95f84c9b31f2 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1730,10 +1730,14 @@ static int nbd_dev_add(int index)
disk->fops = &nbd_fops;
disk->private_data = nbd;
sprintf(disk->disk_name, "nbd%d", index);
-   add_disk(disk);
+   err = add_disk(disk);
+   if (err)
+   goto out_err_disk;
nbd_total_devices++;
return index;
 
+out_err_disk:
+   blk_cleanup_disk(disk);
 out_free_idr:
idr_remove(&nbd_index_idr, index);
 out_free_tags:
-- 
2.30.2




[dm-devel] [PATCH 06/10] mmc/core/block: add error handling support for add_disk()

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

The caller already cleans up the disk, so all we need to do
is pass along the return value.

Signed-off-by: Luis Chamberlain 
---
 drivers/mmc/core/block.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 4c11f171e56d..4f12c6d1e1b5 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2432,7 +2432,9 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
/* used in ->open, must be set before add_disk: */
if (area_type == MMC_BLK_DATA_AREA_MAIN)
dev_set_drvdata(&card->dev, md);
-   device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
+   ret = device_add_disk(md->parent, md->disk, mmc_disk_attr_groups);
+   if (ret)
+   goto out;
return md;
 
  err_kfree:
-- 
2.30.2




[dm-devel] [PATCH 01/10] scsi/sd: use blk_cleanup_queue() instead of put_disk()

2021-08-23 Thread Luis Chamberlain
The single put_disk() is useful if you know you're not doing
a cleanup after add_disk(), but since we want to add support
for that, just use the normal form of blk_cleanup_disk() to
cleanup the queue and put the disk.

Signed-off-by: Luis Chamberlain 
---
 drivers/scsi/sd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 610ebba0d66e..7d5217905374 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3508,7 +3508,7 @@ static int sd_probe(struct device *dev)
  out_free_index:
ida_free(&sd_index_ida, index);
  out_put:
-   put_disk(gd);
+   blk_cleanup_disk(gd);
  out_free:
sd_zbc_release_disk(sdkp);
kfree(sdkp);
-- 
2.30.2




[dm-devel] [PATCH 00/10] block: first batch of add_disk() error handling conversions

2021-08-23 Thread Luis Chamberlain
There's a total of 70 pending patches in my queue which convert
drivers over to use the new add_disk() error handling. Now that
Jens has merged the core components, what is left are all the other
driver conversions. Some of these changes are helpers to make that
easier to do.

I'm going to split the driver conversions into batches, and
this first batch consists of drivers which should be a high
priority to consider.

Should this get merged, I'll chug on with the next batch, and
so on with batches of 10 each, until we finally tackle the
wonderful world of floppy drivers.

I've put together a git tree based on Jens' for-5.15/block branch
which holds all of my pending changes, in case anyone wants to
take a peek.

[0] 
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20210823-for-axboe-add-disk-error-handling-next

Luis Chamberlain (10):
  scsi/sd: use blk_cleanup_queue() instead of put_disk()
  scsi/sd: add error handling support for add_disk()
  scsi/sr: use blk_cleanup_disk() instead of put_disk()
  scsi/sr: add error handling support for add_disk()
  nvme: add error handling support for add_disk()
  mmc/core/block: add error handling support for add_disk()
  md: add error handling support for add_disk()
  dm: add add_disk() error handling
  loop: add error handling support for add_disk()
  nbd: add error handling support for add_disk()

 drivers/block/loop.c     |  9 ++++++++-
 drivers/block/nbd.c      |  6 +++++-
 drivers/md/dm.c          | 16 +++++++++++-----
 drivers/md/md.c          |  7 ++++++-
 drivers/mmc/core/block.c |  4 +++-
 drivers/nvme/host/core.c | 10 +++++++++-
 drivers/scsi/sd.c        |  8 ++++++--
 drivers/scsi/sr.c        |  7 +++++--
 8 files changed, 53 insertions(+), 14 deletions(-)

-- 
2.30.2




[dm-devel] [PATCH 08/10] dm: add add_disk() error handling

2021-08-23 Thread Luis Chamberlain
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

Signed-off-by: Luis Chamberlain 
---
 drivers/md/dm.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 7981b7287628..cd26fde4ab56 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2077,15 +2077,21 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
if (r)
return r;
 
-   add_disk(md->disk);
+   r = add_disk(md->disk);
+   if (r)
+   goto out_cleanup_disk;
 
r = dm_sysfs_init(md);
-   if (r) {
-   del_gendisk(md->disk);
-   return r;
-   }
+   if (r)
+   goto out_del_gendisk;
md->type = type;
return 0;
+
+out_cleanup_disk:
+   blk_cleanup_disk(md->disk);
+out_del_gendisk:
+   del_gendisk(md->disk);
+   return r;
 }
 
 struct mapped_device *dm_get_md(dev_t dev)
-- 
2.30.2




Re: [dm-devel] [PATCH 02/26] block: move the DISK_MAX_PARTS sanity check into __device_add_disk

2021-05-24 Thread Luis Chamberlain
On Fri, May 21, 2021 at 07:50:52AM +0200, Christoph Hellwig wrote:
> Keep this together with the first place that actually looks at
> ->minors and prepare for not passing a minors argument to
> alloc_disk.
> 
> Signed-off-by: Christoph Hellwig 

Reviewed-by: Luis Chamberlain 

  Luis




Re: [dm-devel] [PATCH 05/26] block: add blk_alloc_disk and blk_cleanup_disk APIs

2021-05-24 Thread Luis Chamberlain
On Fri, May 21, 2021 at 07:50:55AM +0200, Christoph Hellwig wrote:
> Add two new APIs to allocate and free a gendisk including the
> request_queue for use with BIO based drivers.  This is to avoid
> boilerplate code in drivers.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>  block/genhd.c         | 35 +++++++++++++++++++++++++++++++++++
>  include/linux/genhd.h | 22 ++++++++++++++++++++++
>  2 files changed, 57 insertions(+)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index e4974af3d729..6d4ce962866d 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -1302,6 +1302,25 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
>  }
>  EXPORT_SYMBOL(__alloc_disk_node);
>  
> +struct gendisk *__blk_alloc_disk(int node)
> +{
> + struct request_queue *q;
> + struct gendisk *disk;
> +
> + q = blk_alloc_queue(node);
> + if (!q)
> + return NULL;
> +
> + disk = __alloc_disk_node(0, node);
> + if (!disk) {
> + blk_cleanup_queue(q);
> + return NULL;
> + }
> + disk->queue = q;
> + return disk;
> +}
> +EXPORT_SYMBOL(__blk_alloc_disk);

It's not obvious to me why using this new API requires you then to
set minors explicitly to 1, and yet here underneath we see the minors
argument passed is 0.

Nor is it clear from the documentation.

  Luis




Re: [dm-devel] [PATCH 01/26] block: refactor device number setup in __device_add_disk

2021-05-24 Thread Luis Chamberlain
On Fri, May 21, 2021 at 07:50:51AM +0200, Christoph Hellwig wrote:
> diff --git a/block/genhd.c b/block/genhd.c
> index 39ca97b0edc6..2c00bc3261d9 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -335,52 +335,22 @@ static int blk_mangle_minor(int minor)

<-- snip -->

> -int blk_alloc_devt(struct block_device *bdev, dev_t *devt)
> +int blk_alloc_ext_minor(void)
>  {
> - struct gendisk *disk = bdev->bd_disk;
>   int idx;
>  
> - /* in consecutive minor range? */
> - if (bdev->bd_partno < disk->minors) {
> - *devt = MKDEV(disk->major, disk->first_minor + bdev->bd_partno);
> - return 0;
> - }
> -

It is not obvious to me why this was part of the add_disk()
path, and ...

> diff --git a/block/partitions/core.c b/block/partitions/core.c
> index dc60ecf46fe6..504297bdc8bf 100644
> --- a/block/partitions/core.c
> +++ b/block/partitions/core.c
> @@ -379,9 +380,15 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
>   pdev->type = &part_type;
>   pdev->parent = ddev;
>  
> - err = blk_alloc_devt(bdev, &devt);
> - if (err)
> - goto out_put;
> + /* in consecutive minor range? */
> + if (bdev->bd_partno < disk->minors) {
> + devt = MKDEV(disk->major, disk->first_minor + bdev->bd_partno);
> + } else {
> + err = blk_alloc_ext_minor();
> + if (err < 0)
> + goto out_put;
> + devt = MKDEV(BLOCK_EXT_MAJOR, err);
> + }
>   pdev->devt = devt;
>  
>   /* delay uevent until 'holders' subdir is created */

... and why we only add this here now.

Other than that, this looks like a super nice cleanup!

Reviewed-by: Luis Chamberlain 

  Luis




Re: [dm-devel] [PATCH 04/26] block: add a flag to make put_disk on partially initialized disks safer

2021-05-24 Thread Luis Chamberlain
On Fri, May 21, 2021 at 07:50:54AM +0200, Christoph Hellwig wrote:
> Add a flag to indicate that __device_add_disk did grab a queue reference
> so that disk_release only drops it if we actually had it.  This sort
> out one of the major pitfals with partially initialized gendisk that
> a lot of drivers did get wrong or still do.
> 
> Signed-off-by: Christoph Hellwig 

Reviewed-by: Luis Chamberlain 

  Luis




Re: [dm-devel] [PATCH 03/26] block: automatically enable GENHD_FL_EXT_DEVT

2021-05-24 Thread Luis Chamberlain
On Fri, May 21, 2021 at 07:50:53AM +0200, Christoph Hellwig wrote:
> Automatically set the GENHD_FL_EXT_DEVT flag for all disks allocated
> without an explicit number of minors.  This is what all new block
> drivers should do, so make sure it is the default without boilerplate
> code.
> 
> Signed-off-by: Christoph Hellwig 

Reviewed-by: Luis Chamberlain 

  Luis
