On 11/23/2017 06:31 PM, Joseph Qi wrote:
> Hi Jens,
> Could you please give your advice for the two patches or pick them up if
> you think they are good?
It looks OK to me, but my preference would be to push this out to
4.16.
--
Jens Axboe
On Thu, Nov 16, 2017 at 02:07:46PM +0100, Michal Hocko wrote:
> On Thu 16-11-17 21:48:05, Byungchul Park wrote:
> > On 11/16/2017 9:02 PM, Michal Hocko wrote:
> > > for each struct page. So you are doubling the size. Who is going to
> > > enable this config option? You are moving this to page_ext
On 11/23/2017 09:54 PM, Jens Axboe wrote:
> On 11/23/2017 07:44 AM, Christoph Hellwig wrote:
>> Sagi Grimberg (3):
>> nvme-fc: check if queue is ready in queue_rq
>> nvme-loop: check if queue is ready in queue_rq
>
> The nvme-loop part looks fine, but why is the nvme-fc part using:
>
Hi Jan,
On 2017/11/21 0:43, Jan Kara wrote:
> Hi Tao!
>
> On Fri 17-11-17 14:51:18, Hou Tao wrote:
>> On 2017/3/13 23:14, Jan Kara wrote:
>>> blkdev_open() may race with gendisk shutdown in two different ways.
>>> Either del_gendisk() has already unhashed block device inode (and thus
>>>
On 11/23/2017 06:37 AM, weiping zhang wrote:
> Hi Jens,
>
> several cleanups for blk-wbt, no functional change, thanks
>
> weiping zhang (5):
> blk-wbt: remove duplicated setting in wbt_init
> blk-wbt: cleanup comments to one line
> blk-sysfs: remove NULL pointer checking in
Hi Jens,
Could you please give your advice for the two patches or pick them up if
you think they are good?
Thanks,
Joseph
On 17/11/21 09:38, Joseph Qi wrote:
> From: Joseph Qi
>
> In mixed read/write workload on SSD, write latency is much lower than
> read. But now
On 11/23/2017 07:44 AM, Christoph Hellwig wrote:
> Sagi Grimberg (3):
> nvme-fc: check if queue is ready in queue_rq
> nvme-loop: check if queue is ready in queue_rq
The nvme-loop part looks fine, but why is the nvme-fc part using:
enum nvme_fc_queue_flags {
On Tue, Nov 21, 2017 at 2:42 PM, Adrian Hunter wrote:
> blk_get_request() can fail, always check the return value.
>
> Fixes: 0493f6fe5bde ("mmc: block: Move boot partition locking into a driver
> op")
> Fixes: 3ecd8cf23f88 ("mmc: block: move multi-ioctl() to use block
On 22/11/17 16:43, Ulf Hansson wrote:
> On 22 November 2017 at 08:40, Adrian Hunter wrote:
>> On 21/11/17 17:39, Ulf Hansson wrote:
>>> On 21 November 2017 at 14:42, Adrian Hunter wrote:
card_busy_detect() has a 10 minute timeout. However
On Tue, Nov 21, 2017 at 2:42 PM, Adrian Hunter wrote:
> The block driver must be resumed if the mmc bus fails to suspend the card.
>
> Signed-off-by: Adrian Hunter
Reviewed-by: Linus Walleij
Also looks like a clear
On 11/23/2017 03:34 PM, Christoph Hellwig wrote:
> FYI, the patch below changes both the irq and block mappings to
> always use the cpu possible map (should be split in two in due time).
>
> I think this is the right way forward. For every normal machine
> those two are the same, but for VMs
Yes it seems to fix the bug.
On 11/23/2017 03:34 PM, Christoph Hellwig wrote:
> FYI, the patch below changes both the irq and block mappings to
> always use the cpu possible map (should be split in two in due time).
>
> I think this is the right way forward. For every normal machine
> those two
Hi Jens,
several cleanups for blk-wbt, no functional change, thanks
weiping zhang (5):
blk-wbt: remove duplicated setting in wbt_init
blk-wbt: cleanup comments to one line
blk-sysfs: remove NULL pointer checking in queue_wb_lat_store
blk-wbt: move wbt_clear_stat to common place in wbt_done
I can't reproduce it in my VM when adding a new CPU. Do you have
any interesting blk-mq setup, like actually using multiple queues? I'll
give that a spin next.
[fullquote deleted]
> What will happen for the CPU hotplug case?
> Wouldn't we route I/O to a disabled CPU with this patch?
Why would we route I/O to a disabled CPU? (We generally route
I/O to devices to start with.) How would including possible
but not present CPUs change anything?
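The mapping being argued about can be modeled outside the kernel. This is a deliberately simplified illustration, not the real blk_mq_map_queues() code: assume a plain round-robin spread of all *possible* CPUs across the hardware queues.

```python
def map_queues(possible_cpus, nr_queues):
    """Toy model of the cpu -> hw queue table: spread every
    *possible* CPU (present or not) round-robin across the
    queues, so a CPU hot-added later already has a valid entry."""
    return {cpu: cpu % nr_queues for cpu in possible_cpus}

# 4 possible CPUs (say only 2 are present) still all get a queue:
mapping = map_queues(range(4), nr_queues=2)
```

Since I/O is only ever submitted from CPUs that are actually running, having not-present CPUs in the table never routes I/O to a disabled CPU; it just keeps the table valid across hotplug.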
Hi Jens,
a couple of nvme fixes for 4.15:
- expand the queue ready fix that we only had for RDMA to also cover FC and
loop by moving it to common code (Sagi)
- fix an array out of bounds in the PCIe HMB code (Minwoo Im)
- two new device quirks (Jeff Lien and Kai-Heng Feng)
- static checkers
On 21 November 2017 at 14:42, Adrian Hunter wrote:
> The block driver must be resumed if the mmc bus fails to suspend the card.
>
> Signed-off-by: Adrian Hunter
Thanks, applied for fixes and added a stable tag (I think v3.19+ is
the first one we
On 21 November 2017 at 14:42, Adrian Hunter wrote:
> The card is not necessarily being removed, but the debugfs files must be
> removed when the driver is removed, otherwise they will continue to exist
> after unbinding the card from the driver. e.g.
>
> # echo
On 21 November 2017 at 14:42, Adrian Hunter wrote:
> blk_get_request() can fail, always check the return value.
>
> Fixes: 0493f6fe5bde ("mmc: block: Move boot partition locking into a driver
> op")
> Fixes: 3ecd8cf23f88 ("mmc: block: move multi-ioctl() to use block
On 21 November 2017 at 14:42, Adrian Hunter wrote:
> Ensure blk_get_request() is paired with blk_put_request().
>
> Fixes: 0493f6fe5bde ("mmc: block: Move boot partition locking into a driver
> op")
> Fixes: 627c3ccfb46a ("mmc: debugfs: Move block debugfs into block
On Tue, Nov 21, 2017 at 2:42 PM, Adrian Hunter wrote:
> Use blk_cleanup_queue() to shutdown the queue when the driver is removed,
> and instead get an extra reference to the queue to prevent the queue being
> freed before the final mmc_blk_put().
>
> Signed-off-by:
wbt_done calls wbt_clear_stat regardless of whether the current stat
was tracked, so move it to a common place.
Signed-off-by: weiping zhang
---
block/blk-wbt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index
Signed-off-by: weiping zhang
---
block/blk-wbt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index cd9a20a..9f4ef9c 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -481,7 +481,7 @@ static inline
rwb->wc and rwb->queue_depth are overwritten by wbt_set_write_cache and
wbt_set_queue_depth, so remove the default settings.
Signed-off-by: weiping zhang
---
block/blk-wbt.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index
wbt_init doesn't leave q->rq_wb set to NULL when it returns 0,
so checking the return value is enough; remove the NULL check.
Signed-off-by: weiping zhang
---
block/blk-sysfs.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/block/blk-sysfs.c
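The invariant the patch description relies on can be sketched as a standalone model. Everything here is a stand-in for illustration: wbt_init_stub() and queue_wb_lat_store_model() are not the real kernel functions, they just model the "success implies rq_wb is set" argument.

```c
#include <stddef.h>

/* Toy model of the invariant argued above: a successful init
 * (return 0) always leaves q->rq_wb non-NULL, so callers only
 * need to check the return value. */

struct queue {
	void *rq_wb;
};

static int dummy_wb;	/* stands in for a real rq_wb allocation */

int wbt_init_stub(struct queue *q)
{
	q->rq_wb = &dummy_wb;	/* the success path always sets rq_wb */
	return 0;
}

/* After the cleanup, the sysfs store path reduces to this shape: */
int queue_wb_lat_store_model(struct queue *q)
{
	int ret = wbt_init_stub(q);

	if (ret)
		return ret;
	/* no extra "if (!q->rq_wb)" check needed here */
	return 0;
}
```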
Signed-off-by: weiping zhang
---
block/blk-wbt.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index edb09e93..0fb65f0 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -729,9 +729,7 @@ int
FYI, the patch below changes both the irq and block mappings to
always use the cpu possible map (should be split in two in due time).
I think this is the right way forward. For every normal machine
those two are the same, but for VMs with maxcpus above their normal
count or some big iron that
On Tue, Nov 21, 2017 at 2:42 PM, Adrian Hunter wrote:
> The card is not necessarily being removed, but the debugfs files must be
> removed when the driver is removed, otherwise they will continue to exist
> after unbinding the card from the driver. e.g.
>
> # echo
On Tue, Nov 21, 2017 at 2:42 PM, Adrian Hunter wrote:
> mmc_cleanup_queue() is not used by a different module. Do not export it.
>
> Signed-off-by: Adrian Hunter
Reviewed-by: Linus Walleij
Yours,
Linus Walleij
Ok, it helps to make sure we're actually doing I/O from the CPU,
I've reproduced it now.
On Thu, Nov 23, 2017 at 07:28:31PM +0100, Christian Borntraeger wrote:
> zfcp on s390.
Ok, so it can't be the interrupt code, but probably is the blk-mq-cpumap.c
changes. Can you try to revert just those for a quick test?
On 11/23/2017 07:32 PM, Christoph Hellwig wrote:
> On Thu, Nov 23, 2017 at 07:28:31PM +0100, Christian Borntraeger wrote:
>> zfcp on s390.
>
> Ok, so it can't be the interrupt code, but probably is the blk-mq-cpumap.c
> changes. Can you try to revert just those for a quick test?
Hmm, I get
zfcp on s390.
On 11/23/2017 07:25 PM, Christoph Hellwig wrote:
> What HBA driver do you use in the host?
>
On 11/23/2017 03:34 PM, Christoph Hellwig wrote:
> FYI, the patch below changes both the irq and block mappings to
> always use the cpu possible map (should be split in two in due time).
>
> I think this is the right way forward. For every normal machine
> those two are the same, but for VMs
What HBA driver do you use in the host?
When we use bio_clone_bioset() to split off the front part of a bio
and chain the two together and submit the remainder to
generic_make_request(), it is important that the newly allocated
bio is used as the head to be processed immediately, and the original
bio gets "bio_advance()"d and sent to
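The split-and-chain pattern being described is commonly written, in kernel code of this era, roughly as the sketch below. This is kernel-internal code shown for illustration only; front_sectors and bs are placeholders for the split point and the bioset in use.

```c
/* Split off the front part, chain the remainder behind it,
 * queue the (advanced) remainder, and keep processing the
 * newly allocated front bio immediately. */
struct bio *split = bio_split(bio, front_sectors, GFP_NOIO, bs);

bio_chain(split, bio);		/* remainder completes after the front */
generic_make_request(bio);	/* queue the advanced remainder */
bio = split;			/* process the front part now */
```

bio_split() performs the bio_advance() on the original bio itself, so the remainder starts exactly where the new front bio ends.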