On 09/07/2017 05:54 AM, Christoph Hellwig wrote:
> The job structure is allocated as part of the request, so we should not
> free it in the error path of bsg_prepare_job.
Added for this series, thanks.
--
Jens Axboe
Philipp,
On 9/14/17 21:51, Philipp wrote:
> Dear Damien,
>
> Thank you for your feedback.
>
>> On 14. Sep 2017, at 10:46, Damien Le Moal
>> wrote:
>
> […]
>
>> Writing once to a sector stored on spinning rust will *not* fully
>> erase the previous data. Part of the
On Thu, Sep 14, 2017 at 12:51:24PM -0600, Jens Axboe wrote:
> On 09/14/2017 10:42 AM, Ming Lei wrote:
> > Hi,
> >
> > This patchset avoids allocating a driver tag beforehand for the flush rq
> > when an I/O scheduler is in use; the flush rq is then no longer treated
> > specially w.r.t. get/put of the driver tag, and the code gets
On Thu, Sep 14, 2017 at 05:04:13PM -0700, Omar Sandoval wrote:
> On Sat, Sep 02, 2017 at 11:17:18PM +0800, Ming Lei wrote:
> > This function is introduced for dequeuing request
> > from sw queue so that we can dispatch it in
> > scheduler's way.
> >
> > More importantly, some SCSI devices may set
Christoph,
> bsg-lib now embeds the job structure into the request, and
> req->special can't be used anymore.
Applied to 4.14/scsi-fixes.
--
Martin K. Petersen Oracle Linux Engineering
On Sat, Sep 02, 2017 at 11:17:18PM +0800, Ming Lei wrote:
> This function is introduced for dequeuing request
> from sw queue so that we can dispatch it in
> scheduler's way.
>
> More importantly, some SCSI devices may set
> q->queue_depth, which is a per-request_queue limit,
> and applied on
On Thu, Sep 07, 2017 at 01:54:34PM +0200, Christoph Hellwig wrote:
> Now fully awake, with my fair share of coffee and lunch:
>
>
> Two fixups for the recent bsg-lib fixes, should go into 4.13 stable as
> well.
Jens, can you look at this for Linux 4.14?
From: Shaohua Li
kthread usually runs jobs on behalf of other threads. The jobs should be
charged to the cgroup of the original threads. But the jobs run in a kthread,
where we lose the cgroup context of the original threads. The patch adds a
mechanism to record the cgroup info of the original threads
From: Shaohua Li
Nobody uses the APIs right now.
Acked-by: Tejun Heo
Signed-off-by: Shaohua Li
---
block/bio.c                | 31 ---
include/linux/bio.h        |  2 --
include/linux/blk-cgroup.h | 12
3
From: Shaohua Li
loop block device handles IO in a separate thread. The actual IO
dispatched isn't cloned from the IO the loop device received, so the
dispatched IO loses the cgroup context.
I'm ignoring the buffered IO case for now, which is quite complicated.
Making the loop thread aware
From: Shaohua Li
Hi,
The IO dispatched to the underlying disk by the loop block device isn't cloned
from the original bio, so the IO loses the cgroup information of the original
bio. These IOs escape cgroup control. The patches try to address this issue.
The idea is quite generic, but we
From: Shaohua Li
bio_blkcg is the only API to get cgroup info for a bio right now. If
bio_blkcg finds that the current task is a kthread with an original blkcg
associated, it will use that css instead of associating the bio with the
current task. This makes it possible for a kthread to dispatch bios
On 09/14/2017 10:42 AM, Ming Lei wrote:
> Hi,
>
> This patchset avoids allocating a driver tag beforehand for the flush rq
> when an I/O scheduler is in use; the flush rq is then no longer treated
> specially w.r.t. get/put of the driver tag, and the code gets cleaned up
> a lot; for example, reorder_tags_to_front() is removed, and we needn't
The idea behind it is simple:
1) for the none scheduler, a driver tag has to be borrowed for the flush
rq, otherwise we may run out of tags and cause an IO hang.
get/put of the driver tag is actually a nop, so reordering tags isn't
necessary at all.
2) for a real I/O scheduler, we needn't allocate a driver tag
Now we always preallocate one driver tag before blk_insert_flush(),
and the flush request will be marked as RQF_FLUSH_SEQ once it is in the
flush machinery.
So if RQF_FLUSH_SEQ isn't set, we call blk_insert_flush()
to handle the request; otherwise the flush request is
dispatched to the ->dispatch list
The block flush code needs this function without running the queue,
so introduce the parameter.
Signed-off-by: Ming Lei
---
block/blk-core.c | 2 +-
block/blk-mq.c | 5 +++--
block/blk-mq.h | 2 +-
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c
We need this helper to put the tag for the flush rq, since
we will not share the tag in the flush request sequence
in case of an I/O scheduler.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 12
block/blk-mq.h | 13 +
2 files changed, 13 insertions(+), 12
In the following patch, we will use RQF_FLUSH_SEQ
to decide:
- if the flag isn't set, the flush rq needs to be inserted
via blk_insert_flush()
- otherwise, the flush rq needs to be dispatched directly,
since it is in the flush machinery now.
So we use blk_mq_request_bypass_insert() for requests
of
blk_insert_flush() should only insert the request, since a queue run
always follows it.
For the flush-bypass case, we don't need to run the queue,
since every blk_insert_flush() is followed by one queue run.
Signed-off-by: Ming Lei
---
block/blk-flush.c | 2 +-
1 file changed, 1
Hi,
This patchset avoids allocating a driver tag beforehand for the flush rq
when an I/O scheduler is in use; the flush rq is then no longer treated
specially w.r.t. get/put of the driver tag, and the code gets cleaned up
a lot; for example, reorder_tags_to_front() is removed, and we needn't
worry about request order in the dispatch list for
On 09/14/2017 10:33 AM, Ming Lei wrote:
> On Fri, Sep 15, 2017 at 12:30 AM, Jens Axboe wrote:
>> On 09/14/2017 09:57 AM, Ming Lei wrote:
>>> On Tue, Sep 12, 2017 at 5:27 AM, Jens Axboe wrote:
On 09/11/2017 03:13 PM, Mike Snitzer wrote:
> On Mon, Sep 11
On Fri, Sep 15, 2017 at 12:30 AM, Jens Axboe wrote:
> On 09/14/2017 09:57 AM, Ming Lei wrote:
>> On Tue, Sep 12, 2017 at 5:27 AM, Jens Axboe wrote:
>>> On 09/11/2017 03:13 PM, Mike Snitzer wrote:
On Mon, Sep 11 2017 at 4:51pm -0400,
Jens Axboe
On 09/14/2017 09:57 AM, Ming Lei wrote:
> On Tue, Sep 12, 2017 at 5:27 AM, Jens Axboe wrote:
>> On 09/11/2017 03:13 PM, Mike Snitzer wrote:
>>> On Mon, Sep 11 2017 at 4:51pm -0400,
>>> Jens Axboe wrote:
>>>
On 09/11/2017 10:16 AM, Mike Snitzer wrote:
>
On Thu, Sep 14, 2017 at 01:37:14PM +, Bart Van Assche wrote:
> On Thu, 2017-09-14 at 09:15 +0800, Ming Lei wrote:
> > On Wed, Sep 13, 2017 at 07:07:53PM +, Bart Van Assche wrote:
> > > On Thu, 2017-09-14 at 01:48 +0800, Ming Lei wrote:
> > > > No, that patch only changes
On Tue, Sep 12, 2017 at 5:27 AM, Jens Axboe wrote:
> On 09/11/2017 03:13 PM, Mike Snitzer wrote:
>> On Mon, Sep 11 2017 at 4:51pm -0400,
>> Jens Axboe wrote:
>>
>>> On 09/11/2017 10:16 AM, Mike Snitzer wrote:
Here is v2 that should obviate the need to
On Thu, Sep 14, 2017 at 07:59:43AM -0700, Omar Sandoval wrote:
[snip]
> Honestly I prefer your original patch with a comment on depth += nr. I'd
> be happy with the following incremental patch on top of your original v4
> patch.
Oh, and the change renaming the off parameter to start would be
On Thu, Sep 14, 2017 at 09:56:56AM +0800, Ming Lei wrote:
> On Wed, Sep 13, 2017 at 11:37:20AM -0700, Omar Sandoval wrote:
> > On Mon, Sep 11, 2017 at 12:08:29PM +0800, Ming Lei wrote:
> > > On Sun, Sep 10, 2017 at 10:20:27AM -0700, Omar Sandoval wrote:
> >
> > [snip]
> >
> > > > What I mean is
On Thu, 2017-09-14 at 09:15 +0800, Ming Lei wrote:
> On Wed, Sep 13, 2017 at 07:07:53PM +, Bart Van Assche wrote:
> > On Thu, 2017-09-14 at 01:48 +0800, Ming Lei wrote:
> > > No, that patch only changes blk_insert_cloned_request() which is used
> > > by dm-rq(mpath) only, nothing to do with
Dear Damien,
Thank you for your feedback.
> On 14. Sep 2017, at 10:46, Damien Le Moal wrote:
[…]
> Writing once to a sector stored on spinning rust will *not* fully erase
> the previous data. Part of the signal used for storing that data will
> remain on the track
In pblk, we have a mempool to allocate a generic structure that we
pass along workqueues. This is heavily used in the GC path in order
to have enough inflight reads and fully utilize the GC bandwidth.
However, the current GC path copies data to the host memory and puts it
back into the write
As part of the mempool audit on pblk, remove unnecessary allocation
checks on mempools.
Reported-by: Jens Axboe
Signed-off-by: Javier González
---
drivers/lightnvm/pblk-core.c | 4
drivers/lightnvm/pblk-read.c | 8
pblk holds two sector bitmaps: one to keep track of the mapped sectors
while the line is active, and another to keep track of the invalid
sectors. The latter is kept during the whole life of the line, until it
is recycled. Since we cannot guarantee forward progress for the mempool
in this case,
Since read and erase paths offer different guarantees for inflight I/Os,
separate the mempools to set the right min_nr for each on creation.
Reported-by: Jens Axboe
Signed-off-by: Javier González
---
drivers/lightnvm/pblk-core.c | 12
pblk uses an internal page mempool for allocating pages on internal
bios. The main two users of this memory pool are partial reads (reads
with some sectors in cache and some on media) and padded writes, which
need to add dummy pages to an existing bio already containing valid
data (and with a
As suggested by Jens [1], I audited all mempools in pblk.
This patch series (i) fixes bad mempool allocations that did not
guarantee forward progress and downsizes some overused
mempools, (ii) removes unnecessary checks, and (iii) eliminates some
mempools that were introduced in
Philipp,
On 9/14/17 00:37, Philipp Guendisch wrote:
> This patch adds a software based secure erase option to improve data
> confidentiality. The CONFIG_BLK_DEV_SECURE_ERASE option enables a mount
> flag called 'sw_secure_erase'. When you mount a volume with this flag,
> every discard call is
On Wed, Sep 13, 2017 at 05:37:53PM +0200, Philipp Guendisch wrote:
> This patch adds a software based secure erase option to improve data
> confidentiality. The CONFIG_BLK_DEV_SECURE_ERASE option enables a mount
> flag called 'sw_secure_erase'. When you mount a volume with this flag,
> every
Tested-by: Oleksandr Natalenko
Similarly to CFQ, BFQ has its own write-throttling heuristics, and it
is better not to combine them with further write-throttling
heuristics of a different nature.
So this commit disables write-back throttling for a device if BFQ
is used as
On Wednesday, September 13, 2017 10:25:46 PM IST Christoph Hellwig wrote:
> Does it work fine if you call sb_init_dio_done_wq unconditionally?
>
>
Yes, the issue isn't reproduced when sb_init_dio_done_wq() is
invoked unconditionally.
--
chandan