Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn                                          Storage
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG N
On Wed, Jan 10, 2018 at 12:21 PM, Mike Snitzer wrote:
> On Tue, Jan 09 2018 at 10:46pm -0500,
> Ming Lei wrote:
>
>> Hi Mike,
>>
>> On Wed, Jan 10, 2018 at 10:41 AM, Mike Snitzer wrote:
>> > Since I can remember DM has forced the block layer to allow the
>> > allocation and initialization of the
On Tue, Jan 09 2018 at 9:41pm -0500,
Mike Snitzer wrote:
> device_add_disk() will only call bdi_register_owner() if
> !GENHD_FL_HIDDEN so it follows that bdi_unregister() should be avoided
> for !GENHD_FL_HIDDEN in del_gendisk().
This ^ should be "GENHD_FL_HIDDEN".
Sorry for the cut-n-paste t
On Tue, Jan 09 2018 at 10:46pm -0500,
Ming Lei wrote:
> Hi Mike,
>
> On Wed, Jan 10, 2018 at 10:41 AM, Mike Snitzer wrote:
> > Since I can remember DM has forced the block layer to allow the
> > allocation and initialization of the request_queue to be distinct
> > operations. Reason for this w
Hi Mike,
On Wed, Jan 10, 2018 at 10:41 AM, Mike Snitzer wrote:
> Since I can remember DM has forced the block layer to allow the
> allocation and initialization of the request_queue to be distinct
> operations. Reason for this was block/genhd.c:add_disk() has required
> that the request_queue (a
Fix DM so that it no longer creates a place-holder request_queue that
doesn't reflect the actual way the request_queue will ultimately be used,
only to have to backfill proper queue and queue_limits initialization.
Instead, DM creates a gendisk that initially doesn't have a
request_queue at all. Th
Since I can remember DM has forced the block layer to allow the
allocation and initialization of the request_queue to be distinct
operations. Reason for this was block/genhd.c:add_disk() has required
that the request_queue (and associated bdi) be tied to the gendisk
before add_disk() is called --
device_add_disk() will only call bdi_register_owner() if
!GENHD_FL_HIDDEN so it follows that bdi_unregister() should be avoided
for !GENHD_FL_HIDDEN in del_gendisk().
Found with code inspection. bdi_unregister() won't do much harm if
bdi_register_owner() wasn't used but best to avoid it.
Fixes:
Hi Jens,
Please consider PATCH 1 and 2 for 4.16 inclusion. I've included PATCH
3 to show the DM changes needed in order to make use of these block
changes.
v2: added some del_gendisk() code movement to be symmetric with the
device_add_disk() related code movement (suggested by Bart). Also
On Tue, Jan 09 2018 at 6:41pm -0500,
Mike Snitzer wrote:
> On Tue, Jan 09 2018 at 6:04pm -0500,
> Bart Van Assche wrote:
>
> > On Tue, 2018-01-09 at 17:10 -0500, Mike Snitzer wrote:
> > > diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> > > index 870484eaed1f..0b0dda8e2420 100644
> > > --
On Tue, Jan 09 2018 at 6:04pm -0500,
Bart Van Assche wrote:
> On Tue, 2018-01-09 at 17:10 -0500, Mike Snitzer wrote:
> > diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> > index 870484eaed1f..0b0dda8e2420 100644
> > --- a/block/blk-sysfs.c
> > +++ b/block/blk-sysfs.c
> > @@ -919,8 +919,20 @@
On Tue, 2018-01-09 at 17:10 -0500, Mike Snitzer wrote:
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 870484eaed1f..0b0dda8e2420 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -919,8 +919,20 @@ int blk_register_queue(struct gendisk *disk)
> ret = 0;
> unlock:
>
Hi Jens,
Please consider PATCH 1/2 for 4.16 inclusion. I've included PATCH 2/2
to show the DM changes needed in order to make use of the block
changes.
I think, all things considered, this moves DM's interface with block
core in a better direction for the long-term but I obviously welcome
any cr
Uses common code for determining if an error should be retried on
alternate path.
Acked-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
Signed-off-by: Keith Busch
---
drivers/nvme/host/multipath.c | 14 +-
1 file changed, 1 insertion(+), 13 deletions(-)
diff --git a/drivers/nvme/hos
This patch provides a common decoder for block status path related errors
that may be retried so various entities wishing to consult this do not
have to duplicate this decision.
Acked-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
Signed-off-by: Keith Busch
---
include/linux/blk_types.h | 28 ++
Uses common code for determining if an error should be retried on
alternate path.
Acked-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
Signed-off-by: Keith Busch
---
drivers/md/dm-mpath.c | 19 ++-
1 file changed, 2 insertions(+), 17 deletions(-)
diff --git a/drivers/md/dm-mpat
This removes nvme multipath's specific status decoding to see if failover
is needed, using the generic blk_status_t that was decoded earlier. This
abstraction from the raw NVMe status means all status decoding exists
in one place.
Acked-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
Signed-off-by
Native nvme multipath provided a separate NVMe status decoder,
complicating maintenance as new statuses need to be accounted for. This
was already diverging from the generic nvme status decoder, which has
implications for other components that rely on accurate generic block
errors.
This series uni
This adds more NVMe status code translations to blk_status_t values,
and captures all the current status codes NVMe multipath uses.
Acked-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
Signed-off-by: Keith Busch
---
drivers/nvme/host/core.c | 7 +++
1 file changed, 7 insertions(+)
diff --g
On Tue, Jan 09, 2018 at 10:38:58AM -0700, Keith Busch wrote:
> On Mon, Jan 08, 2018 at 01:57:07AM -0800, Christoph Hellwig wrote:
> > > - if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
> > > - if (nvme_req_needs_failover(req)) {
> > > + blk_status_t status = nvme_error_
On Mon, Jan 08, 2018 at 01:57:07AM -0800, Christoph Hellwig wrote:
> > - if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
> > - if (nvme_req_needs_failover(req)) {
> > + blk_status_t status = nvme_error_status(req);
> > +
> > + if (unlikely(status != BLK_STS_OK &&