[PATCH] lightnvm: remove already calculated nr_chnls

2017-09-17 Thread Rakesh Pandit
Remove the repeated calculation of the number of channels while creating
a target device.

Signed-off-by: Rakesh Pandit 
---

This is also a trivial change I found while investigating/working on
another issue.

 drivers/lightnvm/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 1b8338d..01536b8 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -139,7 +139,6 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
int prev_nr_luns;
int i, j;
 
-   nr_chnls = nr_luns / dev->geo.luns_per_chnl;
nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
 
dev_map = kmalloc(sizeof(struct nvm_dev_map), GFP_KERNEL);
-- 
2.7.4
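
For readers outside the lightnvm code, the round-up the patch preserves can
be reproduced in a minimal userspace sketch (names follow
nvm_create_tgt_dev(); the geometry numbers are invented for illustration):

    #include <stdio.h>

    /* nr_chnls is derived once from the division; nr_chnls_mod decides
     * whether a trailing partial channel bumps the count. The removed
     * line merely repeated the division. */
    int main(void)
    {
            int nr_luns = 12, luns_per_chnl = 8;     /* hypothetical geometry */
            int nr_chnls = nr_luns / luns_per_chnl;  /* computed earlier in the function */
            int nr_chnls_mod = nr_luns % luns_per_chnl;

            nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
            printf("nr_chnls = %d\n", nr_chnls);     /* prints 2: one full + one partial */
            return 0;
    }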



Re: [PATCH 2/5] dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocation failure

2017-09-17 Thread Ming Lei
On Fri, Sep 15, 2017 at 04:06:55PM -0400, Mike Snitzer wrote:
> On Fri, Sep 15 2017 at  1:29pm -0400,
> Bart Van Assche  wrote:
> 
> > On Sat, 2017-09-16 at 00:44 +0800, Ming Lei wrote:
> > > blk-mq will rerun the queue via RESTART after one request completes,
> > > so there is no need to wait a random time before requeuing; we should
> > > trust blk-mq to do it.
> > > 
> > > Signed-off-by: Ming Lei 
> > > ---
> > >  drivers/md/dm-mpath.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
> > > index 96aedaac2c64..f5a1088a6e79 100644
> > > --- a/drivers/md/dm-mpath.c
> > > +++ b/drivers/md/dm-mpath.c
> > > @@ -505,7 +505,7 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
> > >   atomic_inc(&m->pg_init_in_progress);
> > >   activate_or_offline_path(pgpath);
> > >   }
> > > - return DM_MAPIO_DELAY_REQUEUE;
> > > + return DM_MAPIO_REQUEUE;
> > >   }
> > >   clone->bio = clone->biotail = NULL;
> > >   clone->rq_disk = bdev->bd_disk;
> > 
> > So you are reverting the patch below? Thank you very much.
> > 
> > commit 1c23484c355ec360ca2f37914f8a4802c6baeead
> > Author: Bart Van Assche 
> > Date:   Wed Aug 9 11:32:12 2017 -0700
> > 
> > dm mpath: do not lock up a CPU with requeuing activity
> > 
> > When using the block layer in single queue mode, get_request()
> > returns ERR_PTR(-EAGAIN) if the queue is dying and the REQ_NOWAIT
> > flag has been passed to get_request(). Avoid that the kernel
> > reports soft lockup complaints in this case due to continuous
> > requeuing activity.
> > 
> > Fixes: 7083abbbf ("dm mpath: avoid that path removal can trigger an 
> > infinite loop")
> > Cc: sta...@vger.kernel.org
> > Signed-off-by: Bart Van Assche 
> > Tested-by: Laurence Oberman 
> > Reviewed-by: Christoph Hellwig 
> > Signed-off-by: Mike Snitzer 
> 
> The problem is that multipath_clone_and_map() is now treated as common
> code (thanks to both blk-mq and the old .request_fn path now enjoying the
> use of blk_get_request) BUT: Ming, please understand that this code is
> used by the old .request_fn path too.  So it would seem that the use of

Hi Mike,

OK, thanks for pointing this out.

> DM_MAPIO_DELAY_REQUEUE vs DM_MAPIO_REQUEUE needs to be based on dm-sq vs
> dm-mq.

Yeah, I just forgot that dm-mq can't work on an underlying queue that uses
the block legacy path; I also forget the exact reason, :-(
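
For illustration, the split Mike suggests could look like the following
hypothetical helper (not code from the tree; checking q->mq_ops is the
usual way to tell blk-mq from the legacy path in kernels of this era):

    #include <linux/blkdev.h>
    #include <linux/device-mapper.h>

    static int choose_requeue(struct request_queue *q)
    {
            /* dm-mq on blk-mq: SCHED_RESTART reruns the queue after the
             * next completion, so an immediate requeue cannot busy-loop. */
            if (q->mq_ops)
                    return DM_MAPIO_REQUEUE;

            /* Legacy .request_fn path: nothing reruns the queue for us,
             * so back off to avoid the soft lockup Bart's patch addressed. */
            return DM_MAPIO_DELAY_REQUEUE;
    }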

-- 
Ming


Re: [PATCH 2/5] dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocation failure

2017-09-17 Thread Ming Lei
On Fri, Sep 15, 2017 at 05:29:53PM +, Bart Van Assche wrote:
> On Sat, 2017-09-16 at 00:44 +0800, Ming Lei wrote:
> > blk-mq will rerun the queue via RESTART after one request completes,
> > so there is no need to wait a random time before requeuing; we should
> > trust blk-mq to do it.
> > 
> > Signed-off-by: Ming Lei 
> > ---
> >  drivers/md/dm-mpath.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
> > index 96aedaac2c64..f5a1088a6e79 100644
> > --- a/drivers/md/dm-mpath.c
> > +++ b/drivers/md/dm-mpath.c
> > @@ -505,7 +505,7 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
> > atomic_inc(&m->pg_init_in_progress);
> > activate_or_offline_path(pgpath);
> > }
> > -   return DM_MAPIO_DELAY_REQUEUE;
> > +   return DM_MAPIO_REQUEUE;
> > }
> > clone->bio = clone->biotail = NULL;
> > clone->rq_disk = bdev->bd_disk;
> 
> So you are reverting the patch below? Thank you very much.
> 
> commit 1c23484c355ec360ca2f37914f8a4802c6baeead
> Author: Bart Van Assche 
> Date:   Wed Aug 9 11:32:12 2017 -0700
> 
> dm mpath: do not lock up a CPU with requeuing activity
> 
> When using the block layer in single queue mode, get_request()
> returns ERR_PTR(-EAGAIN) if the queue is dying and the REQ_NOWAIT
> flag has been passed to get_request(). Avoid that the kernel
> reports soft lockup complaints in this case due to continuous
> requeuing activity.

What is the continuous requeuing activity? In the case of BLK_STS_RESOURCE,
blk-mq's SCHED_RESTART (see blk_mq_sched_dispatch_requests()) will be
triggered, and the request will be dispatched again once another request
completes.
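
To make that mechanism concrete, here is a hypothetical .queue_rq handler
relying on the restart behaviour (demo_resources_available() and
demo_submit() are stand-ins, not real kernel APIs):

    #include <linux/blk-mq.h>

    /* Returning BLK_STS_RESOURCE makes blk-mq hold the request and mark
     * the hctx with SCHED_RESTART; when an in-flight request completes,
     * the queue is rerun and the held request is dispatched again -- no
     * driver-side delay or random back-off is needed. */
    static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
                                      const struct blk_mq_queue_data *bd)
    {
            struct request *rq = bd->rq;

            if (!demo_resources_available())
                    return BLK_STS_RESOURCE; /* retried after the next completion */

            blk_mq_start_request(rq);
            demo_submit(rq);
            return BLK_STS_OK;
    }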

-- 
Ming


Re: [PATCH] bcache: option for recovery from staled data

2017-09-17 Thread Nix
On 9 Sep 2017, Coly Li spake thusly:

> When bcache does read I/Os, for example in writeback or writethrough mode,
> if a read request on the cache device fails, bcache will try to recover
> the request by reading from the cached device. If the data on the cached
> device is not synced with the cache device, then the requester will get
> staled data.
>
> For a critical storage system like a database, recovery from staled data
> may result in application-level data corruption, which is unacceptable.
> But for some other situations, like a multi-media stream cache, continuous
> service may be more important and it is acceptable to fetch a staled chunk
> of data.
>
> This patch tries to solve the above conflict by adding a sysfs option
>   /sys/block/bcache/bcache/recovery_from_staled_data
> which is cleared (to 0) by default, i.e. disabled. Now people can make
> choices for different situations.

'Staled' is not a word, though perhaps it should be. You probably want
to call it recovery_from_stale_data. But given the description below...

> With this patch, for a failed read request in writeback or writethrough
> mode, recovery of a recoverable read request only happens under one of
> the following conditions,
>  - dc->has_dirty is zero. It means all data on the cache device is synced
>    to the cached device, so the recovered data is up-to-date.
>  - dc->has_dirty is non-zero, and dc->recovery_from_staled_data is set
>    to 1. It means there is dirty data not yet synced to the cached device,
>    but the option recovery_from_staled_data is set, so receiving staled
>    data is explicitly acceptable to the requester.

... this name is also unclear. It sounded to me like it was an option
that recovers *from* stale data (as if the stale data was a problem to
recover from), not an option that uses stale data to *allow* recovery.

Perhaps, instead, something like stale_data_permitted or
allow_stale_data_on_failure would be better?
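
Whatever name wins out, the policy described above reduces to a two-way
check. A minimal sketch, assuming the field names from the patch
description (this is not the actual bcache read-error path; struct
cached_dev lives in bcache's internal bcache.h):

    /* Allow fallback to the cached (backing) device only when the
     * result cannot be stale, or when the user explicitly opted in. */
    static bool recovery_allowed(struct cached_dev *dc)
    {
            if (!atomic_read(&dc->has_dirty))       /* fully synced: data is current */
                    return true;
            return dc->recovery_from_staled_data;   /* user accepts staled data */
    }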

-- 
NULL && (void)