Hi Jens,
Could you please take a look at the patch below and the issue it is
trying to solve? Please let us know your thoughts on the problem and
the patch.
Regards
Ritesh
On 8/9/2017 6:28 PM, Ritesh Harjani wrote:
In the scenario below, blkio cgroups do not work as per their assigned
On Fri, Aug 11, 2017 at 12:10:38AM +0200, Benjamin Block wrote:
> On Thu, Aug 10, 2017 at 11:32:17AM +0200, Christoph Hellwig wrote:
> > We can't use an on-stack buffer for the sense data, as drivers will
> > dma to it. So we should reuse the SCSI init_rq_fn() for the BSG
> > queues and/or
On Thu, Aug 10, 2017 at 11:35:31AM +0200, Christoph Hellwig wrote:
> On Wed, Aug 09, 2017 at 04:11:19PM +0200, Benjamin Block wrote:
> > Since struct bsg_command is now used in every calling case, we don't
> > need separation of arguments anymore that are contained in the same
> > bsg_command.
> >
On Thu, Aug 10, 2017 at 10:24:56AM +0200, Johannes Thumshirn wrote:
> On Wed, Aug 09, 2017 at 04:11:18PM +0200, Benjamin Block wrote:
> > + return 0 == (bc->hdr.flags & BSG_FLAG_Q_AT_TAIL);
>
> return !(bc->hdr.flags & BSG_FLAG_Q_AT_TAIL); and make the function return
> bool? I have to admit,
The lockdep code had reported the following unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(s_active#228);
                               lock(&bdev->bd_mutex/1);
                               lock(s_active#228);
  lock(&bdev->bd_mutex);

 *** DEADLOCK ***
On Thu 10-08-17 06:49:53, Goldwyn Rodrigues wrote:
> On 08/09/2017 09:17 PM, Jens Axboe wrote:
> > On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote:
> No, from a multi-device point of view, this is inconsistent. I
> have tried making the request bio return -EAGAIN before the split, but
On 08/10/2017 05:49 AM, Goldwyn Rodrigues wrote:
>
>
> On 08/09/2017 09:17 PM, Jens Axboe wrote:
>> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote:
> No, from a multi-device point of view, this is inconsistent. I
> have tried making the request bio return -EAGAIN before the split, but
On 08/10/2017 05:38 AM, Goldwyn Rodrigues wrote:
>
>
> On 08/09/2017 09:18 PM, Jens Axboe wrote:
>> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 25f6a0cb27d3..fae021ebec1b
On 08/10/2017 06:16 AM, Christoph Hellwig wrote:
> On Mon, Aug 07, 2017 at 02:29:47PM -0300, Guilherme G. Piccoli wrote:
>> Thanks for your feedback Hannes, agreed!
>
> And btw, you'll see similar results with the SCSI target or nbd,
> so it's not really nvme specific.
Thanks, I agree - noticed
First: as mentioned in the previous patches I really hate the name
scheme with the _sp and _mp postfixes.
To be clear and understandable we should always name the versions
that iterate over segments *segment* and the ones that iterate over
pages *page*. To make sure we have a clean compile break
On Tue, Aug 08, 2017 at 04:45:20PM +0800, Ming Lei wrote:
> It is enough to check and compute bio->bi_seg_front_size just
> after the 1st segment is found, but current code checks that
> for each bvec, which is inefficient.
>
> This patch follows the way in __blk_recalc_rq_segments()
> for
This looks like another candidate for a standalone prep series.
Looks good, and another candidate for a prep series:
Reviewed-by: Christoph Hellwig
>  	struct bio_vec *bv;
> +	struct bvec_iter_all bia;
>  	int i;
>
> -	bio_for_each_segment_all(bv, bio, i) {
> +	bio_for_each_segment_all_sp(bv, bio, i, bia) {
>  		struct page *page = bv->bv_page;
>  		page_endio(page, op_is_write(bio_op(bio)),
>
On Tue, Aug 08, 2017 at 04:45:18PM +0800, Ming Lei wrote:
> This patch implements singlepage version of the following
> 3 helpers:
> - bvec_iter_offset_sp()
> - bvec_iter_len_sp()
> - bvec_iter_page_sp()
>
> So that one multipage bvec can be split into singlepage
> bvec, and
Please skip adding the _sp names for the single page ones - those
are only used to implement the non-postfixed ones anyway.
The _mp ones should have bio_iter_segment_* names instead.
And while you're at it - I think this code would massively benefit
from turning it into inline functions in a
On 08/09/2017 09:17 PM, Jens Axboe wrote:
> On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote:
No, from a multi-device point of view, this is inconsistent. I
have tried making the request bio return -EAGAIN before the split, but
I shall check again. Where do you see this
On 08/09/2017 09:18 PM, Jens Axboe wrote:
> On 08/08/2017 02:36 PM, Jens Axboe wrote:
>> On 08/08/2017 02:32 PM, Shaohua Li wrote:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 25f6a0cb27d3..fae021ebec1b 100644
--- a/include/linux/blkdev.h
+++
The comments should be added in the patches where semantics change,
not separately.
> +static unsigned int get_bio_pages(struct bio *bio)
> +{
> +	unsigned i;
> +	struct bio_vec *bv;
> +
> +	bio_for_each_segment_all(bv, bio, i)
> +		;
> +
> +	return i;
> +}
s/get_bio_pages/bio_nr_pages/ ?
Also this seems like a useful helper for bio.h
> +	ti->max_io_len = min_t(uint32_t, len,
> +			       (BIO_MAX_PAGES * PAGE_SIZE));
No need for the inner braces. Also all of the above fits nicely
onto a single < 80 char line.
Otherwise this looks fine:
Reviewed-by: Christoph Hellwig
> + * The hacking way of using bvec table as page pointer array is safe
> + * even after multipage bvec is introduced because that space can be
> + * thought as unused by bio_add_page().
I'm not sure what value this comment adds.
Note that once we have multi-page biovecs this code should change
I think all this bcache code needs bigger attention. For one
bio_alloc_pages is only used in bcache, so we should move it in there.
Second the way bio_alloc_pages is currently written looks potentially
dangerous for multi-page biovecs, so we should think about a better
calling convention. The
> +	/*
> +	 * It is safe to truncate the last bvec in the following way
> +	 * even though multipage bvec is supported, but we need to
> +	 * fix the parameters passed to zero_user().
> +	 */
> +	struct bio_vec *bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
A 'we need to fix XXX'
I really don't think that these comments are all that useful.
A big comment near the bi_io_vec field definition explaining the rules
for access would be a lot better.
Never mind, I guess I was wrong, because bio_add_page() is always called
with "offset" = 0 here, so:
	offset == bv->bv_offset + bv->bv_len
is never matched. Hence,
	if (bio->bi_vcnt >= bio->bi_max_vecs)
		return 0;
will trigger the if-break.
So maybe this is after all just a dm-crypt issue on
On Wed, Aug 09, 2017 at 04:11:19PM +0200, Benjamin Block wrote:
> Since struct bsg_command is now used in every calling case, we don't
> need separation of arguments anymore that are contained in the same
> bsg_command.
>
> Signed-off-by: Benjamin Block
> ---
>
On Thu, Aug 10, 2017 at 10:24:56AM +0200, Johannes Thumshirn wrote:
> On Wed, Aug 09, 2017 at 04:11:18PM +0200, Benjamin Block wrote:
> > + return 0 == (bc->hdr.flags & BSG_FLAG_Q_AT_TAIL);
>
> return !(bc->hdr.flags & BSG_FLAG_Q_AT_TAIL); and make the function return
> bool? I have to admit,
Looks fine,
Reviewed-by: Christoph Hellwig
Looks fine,
Reviewed-by: Christoph Hellwig
We can't use an on-stack buffer for the sense data, as drivers will
dma to it. So we should reuse the SCSI init_rq_fn() for the BSG
queues and/or implement the same scheme.
On Wed, Aug 09, 2017 at 11:49:17PM +0200, Paolo Valente wrote:
> > This discrepancy with your results makes a little bit harder for me to
> > understand how to better proceed, as I see no regression. Anyway,
> > since this reader-throttling issue seems relevant, I have investigated
> > it a
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn                                     Storage
jthumsh...@suse.de                           +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham
Looks good,
Reviewed-by: Johannes Thumshirn
On Wed, Aug 09, 2017 at 04:11:18PM +0200, Benjamin Block wrote:
> + return 0 == (bc->hdr.flags & BSG_FLAG_Q_AT_TAIL);
return !(bc->hdr.flags & BSG_FLAG_Q_AT_TAIL); and make the function return
bool? I have to admit, this is the 1st time I have seen the above construct.
Thanks,
On Wed, Aug 09, 2017 at 11:28:06AM -0700, Bart Van Assche wrote:
> The blk_mq_delay_kick_requeue_list() function is used by the device
> mapper and only by the device mapper to rerun the queue and requeue
> list after a delay. This function is called once per request that
> gets requeued. Modify