Normally, sd_read_capacity sets sdp->use_16_for_rw to 1 based on the
disk capacity so that READ16/WRITE16 are used for large drives.
However, for a zoned disk with RC_BASIS set to 0, the capacity reported
through READ_CAPACITY may be very small, leading to use_16_for_rw not being
set and
On Thu, Nov 10, 2016 at 06:23:12PM -0500, Keith Busch wrote:
> On Thu, Nov 10, 2016 at 04:01:31PM -0700, Scott Bauer wrote:
> > On Tue, Nov 01, 2016 at 06:57:05AM -0700, Christoph Hellwig wrote:
> > > blk_execute_rq_nowait is the API to use - blk_mq_insert_request isn't
> > > even exported.
> >
>
On Thu, Nov 10, 2016 at 04:01:31PM -0700, Scott Bauer wrote:
> On Tue, Nov 01, 2016 at 06:57:05AM -0700, Christoph Hellwig wrote:
> > blk_execute_rq_nowait is the API to use - blk_mq_insert_request isn't
> > even exported.
>
> I remember now, after I changed it to use rq_nowait, why we added this
On Tue, Nov 01, 2016 at 06:57:05AM -0700, Christoph Hellwig wrote:
> On Tue, Nov 01, 2016 at 10:18:13AM +0200, Sagi Grimberg wrote:
> > > +
> > > + return nvme_insert_rq(q, req, 1, sec_submit_endio);
> >
> > No need to introduce nvme_insert_rq at all, just call
> > blk_mq_insert_request (other
Thanks for the info. Yes, maybe that change in ordering could explain the
increased Q2D latency then.
I am actually using blk-mq for the 12G SAS. The latencies appear to have been
pretty similar in v4.4.16 but not in v4.8-rc6. It does seem odd that they are
different in v4.8-rc6. I've
On Thu, Nov 10, 2016 at 11:04:38PM +0300, Dan Carpenter wrote:
> This code causes a problem for flush_epd_write_bio() because | has
> higher precedence than ?: so it basically turns into:
>
> (bio)->bi_opf |= REQ_SYNC;
>
> Which is wrong.
It is. And while we're at it, the macro should
This code causes a problem for flush_epd_write_bio() because | has
higher precedence than ?: so it basically turns into:
(bio)->bi_opf |= REQ_SYNC;
Which is wrong.
Fixes: ef295ecf090d ("block: better op and flags encoding")
Signed-off-by: Dan Carpenter
diff
Hi Shaohua,
one of the major issues with Ming Lei's multipage biovec works
is that we can't easily enable the MD RAID code for it. I had
a quick chat on that with Chris and Jens and they suggested talking
to you about it.
It's mostly about the RAID1 and RAID10 code which does a lot of funny
On Wed 09-11-16 12:52:25, Jens Axboe wrote:
> On 11/09/2016 09:09 AM, Jens Axboe wrote:
> >On 11/09/2016 02:01 AM, Jan Kara wrote:
> >>On Tue 08-11-16 08:25:52, Jens Axboe wrote:
> >>>On 11/08/2016 06:30 AM, Jan Kara wrote:
> On Tue 01-11-16 15:08:49, Jens Axboe wrote:
> >For legacy block,
Hi Ming,
any chance you could send out a series with the various bio_add_page
soon-ish? I'd really like to get all the good prep work in for
this merge window, so that we can look at the real multipage-bvec
work for the next one.
On Wed, Nov 09, 2016 at 02:43:58PM -0500, Jeff Moyer wrote:
> But on the issue side, we have different trace actions: Q vs. I. On the
> completion side, we just have C. You'd end up getting two C events for
> each Q, and that may confuse existing utilities (such as blkparse, btt,
> iowatcher,
On Thu, Nov 10, 2016 at 08:31:11AM -0800, Bart Van Assche wrote:
> Have you verified whether or not this change affects the behavior of the
> bcache driver? From commit ad0d9e76a412:
It doesn't, bcache only calls bch_data_verify from a read completion
handler.
> Additionally, since
On Wed, Nov 09, 2016 at 01:43:55AM +, Alana Alexander-Rutledge wrote:
> Hi,
>
> I have been profiling the performance of the NVMe and SAS IO stacks on Linux.
> I used blktrace and blkparse to collect block layer trace points and a
> custom analysis script on the trace points to average out
On 11/10/2016 09:16 AM, Tejun Heo wrote:
Re: [bug report] blkcg: replace blkcg_policy->cpd_size with
->cpd_alloc/free_fn() methods
In-Reply-To: <20161110133426.GA30610@mwanda>
cfq_cpd_alloc() which is the cpd_alloc_fn implementation for cfq was
incorrectly hard coding GFP_KERNEL
Hi,
We ran into a funky issue, where someone doing 256K buffered reads saw
128K requests at the device level. Turns out it is read-ahead capping
the request size, since we use 128K as the default setting. This doesn't
make a lot of sense - if someone is issuing 256K reads, they should see
256K
On 11/10/2016 09:36 AM, Bart Van Assche wrote:
On 11/10/2016 08:04 AM, Hannes Reinecke wrote:
> this really feels like a follow-up to the discussion we've had in Santa
> Fe, but finally I'm able to substantiate it with some numbers.
Hi Jens,
Should I send you the notes I took on Thursday morning
On 11/09/2016 10:38 AM, Christoph Hellwig wrote:
> Since commit 87374179 ("block: add a proper block layer data direction
> encoding") we only OR the new op and flags into bi_opf in bio_set_op_attrs
> instead of clearing the old value. I've not seen any breakage with the
> new behavior, but it seems
Hi all,
this really feels like a follow-up to the discussion we've had in Santa
Fe, but finally I'm able to substantiate it with some numbers.
I've made a patch to enable the megaraid_sas driver for multiqueue.
While this is pretty straightforward (I'll be sending the patchset later
on), the
On 11/10/2016 04:26 AM, Matias Bjørling wrote:
> Hi Jens,
> A small calculation bug sneaked into 4.9, which led to data loss using the rrpc
> FTL.
> Can the fix be picked up for next -rc, or should I mark it for stable after the
> 4.9 release?
We can put it in this release, it's a regression fix for a
[ No idea why it's suddenly complaining about year old code - dan ]
Hello Tejun Heo,
The patch e4a9bde9589f: "blkcg: replace blkcg_policy->cpd_size with
->cpd_alloc/free_fn() methods" from Aug 18, 2015, leads to the
following static checker warning:
block/cfq-iosched.c:1589
[ For some reason, I suddenly started paying attention to these? Maybe
I have been grepping XXX out of my warning messages for the past year?
- dan ]
Hello Dan Williams,
The patch 3ef28e83ab15: "block: generic request_queue reference
counting" from Oct 21, 2015, leads to the following
The ns->lba_shift value is assumed to be the base-2 logarithm of the
LBA size. A previous patch duplicated the lba_shift calculation into
lightnvm. It prematurely also subtracted a 512-byte shift, which is commonly
applied per-command. The 512-byte shift being subtracted twice led to
data loss when
Hi Jens,
A small calculation bug sneaked into 4.9, which led to data loss using the rrpc
FTL.
Can the fix be picked up for next -rc, or should I mark it for stable after the
4.9 release?
-Matias
Matias Bjørling (1):
lightnvm: invalid offset calculation for lba_shift