Hi Paolo,
On 18/4/27 01:27, Paolo Valente wrote:
>
>
>> Il giorno 25 apr 2018, alle ore 14:13, Joseph Qi ha
>> scritto:
>>
>> Hi Paolo,
>>
>
> Hi Joseph
>
>> ...
>> Could you run blktrace as well when testing your case? There are several
>> throtl traces to help
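A minimal sketch of such a trace capture, assuming the standard blktrace/blkparse tools; the helper name and the device argument are placeholders, not taken from the thread:

```shell
#!/bin/sh
# Capture the block trace for a device and keep only the throttle
# ("throtl") message lines emitted by blk-throttle.
trace_throtl() {
    dev="$1"    # e.g. /dev/sdb (placeholder)
    # blktrace streams raw events on stdout; blkparse decodes them to text.
    blktrace -d "$dev" -o - | blkparse -i - | grep throtl
}
```

Run it as root while the workload is active, e.g. `trace_throtl /dev/sdb` (hypothetical device).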
Hi Jianchao,
On 18/4/27 10:09, jianchao.wang wrote:
> Hi Tejun and Joseph
>
> On 04/27/2018 02:32 AM, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>>> +Tejun (I guess he might be interested in the results below)
>>
>> Our experiments didn't work out too well either.
Hi Tejun and Joseph
On 04/27/2018 02:32 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>> +Tejun (I guess he might be interested in the results below)
>
> Our experiments didn't work out too well either. At this point, it
> isn't clear whether io.low will ever leave experimental state. We're
> trying to find a working
On 04/26/2018 11:57 PM, Ming Lei wrote:
> Hi Jianchao,
>
> On Thu, Apr 26, 2018 at 11:07:56PM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> Thanks for your wonderful solution. :)
>>
>> On 04/26/2018 08:39 PM, Ming Lei wrote:
>>> +/*
>>> + * This one is called after queues are quiesced, and no
Any thoughts on this? Can we really drop a reference from a child device
(bsg_class_device) to a parent device (Scsi_Host) while the child device
is still around at fc_bsg_remove time?
If not, please consider a fix with module references. I realized that
the previous version of the fix had a
On Thu, Apr 26, 2018 at 04:52:24PM -0600, Keith Busch wrote:
> This test is for PCI devices in a surprise remove capable slot and tests
> how well the drivers and kernel handle losing the link to that device.
>
> The test finds the PCI Express Capability register of the pci slot a block
> device
This test is for PCI devices in a surprise remove capable slot and tests
how well the drivers and kernel handle losing the link to that device.
The test finds the PCI Express Capability register of the pci slot a block
device is in, then at offset 0x10 (the Link Control Register) writes a 1
to
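The slot lookup described above can be sketched in shell. This is my reading of the description, not the actual test script: the sysfs handling is simplified, and I'm assuming the write at offset 0x10 targets the Link Disable bit (bit 4) of the Link Control register.

```shell
#!/bin/sh
# Given a block device's sysfs node, find the PCI address of the
# bridge (slot) directly above it: the parent directory of the
# device's resolved "device" link.
parent_slot() {
    devpath=$(readlink -f "$1/device")
    basename "$(dirname "$devpath")"
}

# Drop the link from that slot: set bit 4 (Link Disable) of the
# Link Control register, at offset 0x10 from the PCI Express
# capability (assumption based on the description above).
disable_link() {
    setpci -s "$1" CAP_EXP+0x10.w=0x0010:0x0010
}
```

For example, `disable_link "$(parent_slot /sys/block/nvme0n1)"` on a surprise-removal-capable slot (hypothetical device name); requires root and pciutils.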
Tejun Heo wrote:
> (cc'ing Tetsuo and quoting whole message)
>
> Tetsuo, could this be the same problem as the hang in wb_shutdown()
> syzbot reported?
Excuse me, but I'm too unfamiliar to judge it. ;-)
Anyway, since Fabiano has a reproducer, I appreciate trying a patch at
Hello,
On Thu, Apr 19, 2018 at 12:06:09PM +0800, Jiang Biao wrote:
> The initialization of q->root_blkg is currently done outside of the
> queue lock and RCU, so the blkg may be destroyed before the
> initialization completes, which may cause dangling/NULL references. On
> the other hand, the destruction of blkg is
Hello,
On Fri, Apr 20, 2018 at 06:06:01PM +0800, Benlong Zhang wrote:
> One problem with cgwb is how fs should treat metadata bios.
> For example in xfs, the log might be partially stuck in one
> group, leaving threads in other groups waiting for too long.
> Please refer to the linux-xfs
(cc'ing Tetsuo and quoting whole message)
Tetsuo, could this be the same problem as the hang in wb_shutdown()
syzbot reported?
On Wed, Apr 25, 2018 at 05:07:48PM -0300, Fabiano Rosas wrote:
> I'm looking into an issue where removing a virtio disk via sysfs while another
> process is issuing
Hello,
On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
> +Tejun (I guess he might be interested in the results below)
Our experiments didn't work out too well either. At this point, it
isn't clear whether io.low will ever leave experimental state. We're
trying to find a working
Like d88b6d04: "cdrom: information leak in cdrom_ioctl_media_changed()"
There is another cast from unsigned long to int which causes
a bounds check to fail with specially crafted input. The value is
then used as an index in the slot array in cdrom_slot_status().
Signed-off-by: Scott Bauer
On Thu, Apr 26, 2018 at 08:39:56PM +0800, Ming Lei wrote:
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 5d05a04f8e72..1e058deb4718 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1265,6 +1265,20 @@ static enum blk_eh_timer_return
On Thu, Apr 26, 2018 at 11:57:22PM +0800, Ming Lei wrote:
> Hi Jianchao,
>
> On Thu, Apr 26, 2018 at 11:07:56PM +0800, jianchao.wang wrote:
> > Hi Ming
> >
> > Thanks for your wonderful solution. :)
> >
> > On 04/26/2018 08:39 PM, Ming Lei wrote:
> > > +/*
> > > + * This one is called after
Hi Jianchao,
On Thu, Apr 26, 2018 at 11:07:56PM +0800, jianchao.wang wrote:
> Hi Ming
>
> Thanks for your wonderful solution. :)
>
> On 04/26/2018 08:39 PM, Ming Lei wrote:
> > +/*
> > + * This one is called after queues are quiesced, and no in-flight timeout
> > + * and nvme interrupt handling.
Hi Ming
Thanks for your wonderful solution. :)
On 04/26/2018 08:39 PM, Ming Lei wrote:
> +/*
> + * This one is called after queues are quiesced, and no in-flight timeout
> + * and nvme interrupt handling.
> + */
> +static void nvme_pci_cancel_request(struct request *req, void *data,
> +
On 4/26/18 1:21 AM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Hi, Jens,
>
> I added a blktest (block/017) for the inflight counter after you
> mentioned that we should have one and it easily found a bug :) Patch 2
> fixes the bug found by the test, and patch 1 fixes another
When handling an error/timeout, the driver still needs to send commands
to the admin queue, and these commands can time out too, so the EH
handler may never make progress in dealing with this situation.
Actually it doesn't need to handle these admin commands after the
controller is recovered, because all these requests
When one request times out, nvme_timeout() currently handles it in the
following way:
	nvme_dev_disable(dev, false);
	nvme_reset_ctrl(&dev->ctrl);
	return BLK_EH_HANDLED;
which may introduce the following issues:
1) a subsequent timeout on other requests may call nvme_dev_disable()
Hi,
The first patch introduces an EH kthread for handling timeouts; it
simplifies the logic a lot and fixes the failures reported on block/011.
The 2nd one fixes the issue reported by Jianchao, in which an admin
request may time out during EH.
Ming Lei (2):
nvme: pci: simplify timeout handling
nvme: pci: guarantee
From: Omar Sandoval
When the blk-mq inflight implementation was added, /proc/diskstats was
converted to use it, but /sys/block/$dev/inflight was not. Fix it by
adding another helper to count in-flight requests by data direction.
Fixes: f299b7c7a9de ("blk-mq: provide internal
From: Omar Sandoval
Hi, Jens,
I added a blktest (block/017) for the inflight counter after you
mentioned that we should have one and it easily found a bug :) Patch 2
fixes the bug found by the test, and patch 1 fixes another bug I
noticed. Based on Linus' master.
Thanks!
Omar
From: Omar Sandoval
In the legacy block case, we increment the counter right after we
allocate the request, not when the driver handles it. In both the legacy
and blk-mq cases, part_inc_in_flight() is called from
blk_account_io_start() right after we've allocated the request.
On Tue, Apr 24, 2018 at 08:02:56PM -0600, Jens Axboe wrote:
> On 4/24/18 12:16 PM, Christoph Hellwig wrote:
> > ide_toggle_bounce did select various strange block bounce limits, including
> > not bouncing at all as soon as an iommu is present in the system. Given
> > that the dma_map routines now