Bart,
On Mon, 2017-09-25 at 22:00 +, Bart Van Assche wrote:
> On Mon, 2017-09-25 at 15:14 +0900, Damien Le Moal wrote:
> > +static inline bool deadline_request_needs_zone_wlock(struct deadline_data
> > *dd,
> > +struct request *rq)
> > +{
> > +
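The hunk above is cut off by the digest. As a hedged illustration of the predicate being reviewed, here is a minimal userspace model: a request needs the per-zone write lock only when the scheduler tracks zones (zones_wlock allocated) and the request is a write. The types and the REQ_READ/REQ_WRITE names here are stand-ins, not the kernel's.

```c
#include <stdbool.h>

/* Simplified stand-ins for the kernel structures in the patch. */
enum req_dir { REQ_READ, REQ_WRITE };

struct deadline_data {
    unsigned long *zones_wlock;  /* NULL when the device is not zoned */
};

struct request {
    enum req_dir dir;
};

/* A request needs the zone write lock only if zone bookkeeping is
 * enabled and the request is a write. */
static inline bool deadline_request_needs_zone_wlock(struct deadline_data *dd,
                                                     struct request *rq)
{
    if (!dd->zones_wlock || rq->dir == REQ_READ)
        return false;
    return true;
}
```

This is a sketch of the dispatch condition only; the real patch operates on `struct request` and the mq-deadline scheduler's private data.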
On Mon, 2017-09-25 at 22:06 +, Bart Van Assche wrote:
> On Mon, 2017-09-25 at 15:14 +0900, Damien Le Moal wrote:
> > - return rq_entry_fifo(dd->fifo_list[data_dir].next);
> > + if (!dd->zones_wlock || data_dir == READ)
> > + return rq_entry_fifo(dd->fifo_list[data_dir].next);
> >
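The hunk above changes FIFO dispatch so that only reads (or schedulers without zone locking) take the FIFO head unconditionally. A hedged userspace sketch of that rule, with the FIFO modelled as an array and `zone_locked()` as a hypothetical stand-in for the zone write-lock bitmap test:

```c
#include <stdbool.h>

struct rq { int zone; };

/* Test a zone's bit in a write-lock bitmap (stand-in for the kernel's
 * test_bit()). A NULL bitmap means no zone locking at all. */
static bool zone_locked(const unsigned long *wlock, int zone)
{
    unsigned int bits = 8 * sizeof(unsigned long);
    if (!wlock)
        return false;
    return (wlock[zone / bits] >> (zone % bits)) & 1UL;
}

/* Return the index of the next dispatchable request, or -1.
 * Reads keep the old behaviour (FIFO head); writes must skip
 * entries whose target zone is write-locked. */
static int next_fifo_request(const struct rq *fifo, int n, bool is_read,
                             const unsigned long *zones_wlock)
{
    if (!zones_wlock || is_read)
        return n ? 0 : -1;
    for (int i = 0; i < n; i++)
        if (!zone_locked(zones_wlock, fifo[i].zone))
            return i;
    return -1;
}
```

The point of the patch is exactly this asymmetry: reads never wait on a zone write lock, while writes to a locked zone are passed over.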
Bart,
On Mon, 2017-09-25 at 21:34 +, Bart Van Assche wrote:
> On Mon, 2017-09-25 at 15:14 +0900, Damien Le Moal wrote:
> > Modify mq-dealine init_queue and exit_queue elevator methods to handle
>
> ^^
> mq-deadline ?
>
> > +static int
On Mon, 2017-09-25 at 21:17 +, Bart Van Assche wrote:
> On Mon, 2017-09-25 at 15:14 +0900, Damien Le Moal wrote:
> > + return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones)
> > + * sizeof(unsigned long),
>
> Does this perhaps fit on one line?
>
> > +/**
> > + *
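The allocation under review sizes a bitmap at one bit per zone, rounded up to whole longs. A small userspace model of that computation, with BITS_TO_LONGS reimplemented locally (in the kernel it comes from the bitops headers, and the allocation would be kzalloc_node()):

```c
#include <stdlib.h>

/* One bit per zone, rounded up to whole unsigned longs. */
#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(nr) (((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* calloc zero-fills, mirroring kzalloc's behaviour. */
static unsigned long *alloc_zone_bitmap(unsigned int nr_zones)
{
    return calloc(BITS_TO_LONGS(nr_zones), sizeof(unsigned long));
}
```

Bart's style point stands regardless: the kernel expression `BITS_TO_LONGS(sdkp->nr_zones) * sizeof(unsigned long)` may well fit on a single line.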
On Sat, Sep 30, 2017 at 02:38:49PM +0800, Joseph Qi wrote:
> From: Joseph Qi
>
> There is a case which will lead to io stall. The case is described as
> follows.
> /test1
> |-subtest1
> /test2
> |-subtest2
> And subtest1 and subtest2 each has 32 queued bios
On Sun, Oct 1, 2017 at 10:23 AM, Coly Li wrote:
> Hi Mike,
>
> Your data set is too small. Normally, the bcache users I talk with use
> bcache for distributed storage clusters or commercial databases; their
> cache device is large and fast. It is possible we see different I/O
>
On 2017/10/2 12:56 AM, Michael Lyle wrote:
> That's strange-- are you doing the same test scenario? How much
> random I/O did you ask for?
>
> My tests took 6-7 minutes to do the 30G of 8k not-repeating I/Os in a
> 30G file (about 9k IOPs for me-- it's actually significantly faster
> but then
That's strange-- are you doing the same test scenario? How much
random I/O did you ask for?
My tests took 6-7 minutes to do the 30G of 8k not-repeating I/Os in a
30G file (about 9k IOPs for me-- it's actually significantly faster
but then starves every few seconds-- not new with these patches).
When under memory-pressure it is possible that the mempool which backs
the 'struct request_queue' will make use of up to BLKDEV_MIN_RQ count
emergency buffers - in case it can't get a regular allocation. These
buffers are preallocated and once they are also used, they are
re-supplied with old
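The paragraph above describes the mempool emergency-buffer contract. A minimal userspace sketch of that behaviour, assuming nothing beyond what is stated: min_nr buffers are preallocated, a failed regular allocation falls back to the reserve, and freed buffers refill the reserve first. This mirrors the kernel's mempool_alloc()/mempool_free() contract, not its code; all names here are stand-ins.

```c
#include <stdlib.h>

struct mempool {
    void **reserve;
    int min_nr, curr_nr;
    size_t size;
};

/* Preallocate min_nr emergency buffers up front. */
static struct mempool *mempool_create_sketch(int min_nr, size_t size)
{
    struct mempool *p = malloc(sizeof(*p));
    if (!p)
        return NULL;
    p->reserve = malloc(min_nr * sizeof(void *));
    p->min_nr = min_nr;
    p->size = size;
    for (p->curr_nr = 0; p->curr_nr < min_nr; p->curr_nr++)
        p->reserve[p->curr_nr] = malloc(size);
    return p;
}

/* Under pressure the regular allocation fails and an emergency
 * buffer is handed out instead. */
static void *mempool_alloc_sketch(struct mempool *p, int pressure)
{
    void *buf = pressure ? NULL : malloc(p->size);
    if (!buf && p->curr_nr > 0)
        buf = p->reserve[--p->curr_nr];
    return buf;
}

/* Freed buffers refill the reserve before going back to the system. */
static void mempool_free_sketch(struct mempool *p, void *buf)
{
    if (p->curr_nr < p->min_nr)
        p->reserve[p->curr_nr++] = buf;
    else
        free(buf);
}
```

Once the reserve is exhausted, further allocations under pressure fail until a buffer is returned, which is exactly the "re-supplied" situation the snippet describes.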
On Sun, Oct 01, 2017 at 04:25:18PM +0300, Rakesh Pandit wrote:
> Not all exported symbols are being used outside core and there were
> some stable entries in lightnvm.h
>
If you can replace 'stable' with 'stale' in both the subject and body
while picking this up, that would be great.
Regards,
If pblk_core_init fails, let's destroy all global caches.
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/pblk-init.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 519e5cf..9f39800
While separating the read and erase mempools in 22da65a1b, pblk_g_rq_cache
was used twice to set aside memory for both erase and read
requests. Because the same kmem cache is used repeatedly, a single call to
kmem_cache_destroy wouldn't deallocate everything. Repeatedly doing
loading and unloading of
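The lifetime problem described above -- one cache backing two users, where a single destroy must not tear it down while the other user still holds it -- can be captured with a simple reference count. This is a hypothetical userspace sketch, not the pblk fix itself:

```c
#include <stdlib.h>

struct shared_cache {
    int refs;
};

/* First user creates the cache; later users take a reference. */
static struct shared_cache *cache_get(struct shared_cache **cp)
{
    if (!*cp) {
        *cp = calloc(1, sizeof(**cp));
        if (!*cp)
            return NULL;
    }
    (*cp)->refs++;
    return *cp;
}

/* Returns 1 when the last reference was dropped and the cache was
 * actually destroyed; 0 while other users still hold it. */
static int cache_put(struct shared_cache **cp)
{
    if (*cp && --(*cp)->refs == 0) {
        free(*cp);
        *cp = NULL;
        return 1;
    }
    return 0;
}
```

With two cache_get() calls (read and erase paths), the first cache_put() leaves the cache intact and only the second destroys it, avoiding both the leak and a double destroy.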
Not all exported symbols are being used outside core and there were
some stale entries in lightnvm.h
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/core.c | 129 +++
include/linux/lightnvm.h | 7 ---
2 files changed, 64
vblk isn't being used anyway, and if we ever have a use case we can
introduce it again. This makes the logic simpler and removes
unnecessary checks.
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/core.c | 29 -
include/linux/lightnvm.h | 2 +-
We already pass the structure pointer so no need to pass the member.
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/pblk-rb.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
index
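The cleanup above -- dropping a parameter that duplicates a member of a struct already being passed -- can be illustrated with a hypothetical before/after pair (the struct and field names here are invented for the example, not pblk's):

```c
/* Invented stand-in for the ring-buffer struct in pblk-rb.c. */
struct pblk_rb_sketch {
    unsigned int nr_entries;
};

/* Before: the struct and one of its members travel together. */
static unsigned int space_before(struct pblk_rb_sketch *rb,
                                 unsigned int nr_entries, unsigned int used)
{
    (void)rb;  /* rb unused: its member was passed separately */
    return nr_entries - used;
}

/* After: the callee reads the member from the struct itself. */
static unsigned int space_after(struct pblk_rb_sketch *rb, unsigned int used)
{
    return rb->nr_entries - used;
}
```

The second form keeps the callee's signature shorter and removes the risk of the caller passing a member that disagrees with the struct it came from.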
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/pblk-core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 1f8aa94..4ffd1d6 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@
For 4.15.
These are the last set of cleanups: mostly error path fixes and a
memory leak fix. Also removes stale symbols from lightnvm.h, and
removes exported symbols which are nowhere used (except locally
in core).
Everything is based on top of:
On Sat, Sep 30, 2017 at 10:06:45AM +0200, Jens Axboe wrote:
> For some reason, the laptop mode IO completion notifier was never wired
> up for blk-mq. Ensure that we trigger the callback appropriately, to arm
> the laptop mode flush timer.
Looks fine:
Reviewed-by: Christoph Hellwig
On Fri, Sep 29, 2017 at 10:21:53PM +0800, Tony Yang wrote:
> Hi, All
>
> Because my environment requirements, the kernel must use 4.8.17,
> I would like to ask, how to use the kernel 4.8.17 nvme multi-path?
> Because I see support for multi-path versions are above 4.13
In that case we