Hi Jens,
do you think this version could be ok?
Thanks,
Paolo
> On 04 Dec 2017, at 11:42, Paolo Valente
> wrote:
>
> Commit a33801e8b473 ("block, bfq: move debug blkio stats behind
> CONFIG_DEBUG_BLK_CGROUP") introduced two batches of confusing
On Thu, Dec 14, 2017 at 12:07 PM, Theodore Ts'o wrote:
> On Wed, Dec 13, 2017 at 04:13:07PM +0900, Byungchul Park wrote:
>>
>> Therefore, I want to say the fundamental problem
>> comes from classification, not from anything
>> cross-release specific.
>
> You keep saying that it is "just" a
On Wed, Dec 13, 2017 at 7:46 PM, Ingo Molnar wrote:
>
> * Byungchul Park wrote:
>
>> Lockdep works, based on the following:
>>
>> (1) Classifying locks properly
>> (2) Checking the relationship between the classes
>>
>> If (1) is not good or (2)
On Wed, Dec 13, 2017 at 04:13:07PM +0900, Byungchul Park wrote:
>
> Therefore, I want to say the fundamental problem
> comes from classification, not from anything
> cross-release specific.
You keep saying that it is "just" a matter of classification.
However, it is not obvious how to do the classification in
It may cause a race by setting 'nvmeq' in nvme_init_request(), because
.init_request is called while switching the io scheduler, which may happen
while the NVMe device is being reset and its nvme queues are being freed
and created. We don't have any synchronization between the two paths.
This patch removes the
blk_mq_pci_map_queues() may not map a CPU to any hw queue, but the
previous mapping isn't cleared yet and may still point to a stale hw queue
index.
This patch fixes the issue by clearing the mapping table before setting it
up in blk_mq_pci_map_queues().
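A minimal sketch of the idea, assuming the 4.15-era tag-set layout in which
set->mq_map[] is indexed by possible CPU (illustrative only, not the actual
patch):

static void example_clear_mq_map(struct blk_mq_tag_set *set)
{
	unsigned int cpu;

	/* Reset every CPU to hw queue 0 so no entry keeps a stale index. */
	for_each_possible_cpu(cpu)
		set->mq_map[cpu] = 0;
}

With the table cleared up front, a CPU that ends up unmapped simply falls
back to hw queue 0 instead of whatever hw queue index the previous mapping
left behind.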
This patch fixes the following
Dispatch may still be in progress after the queue is frozen, so we have to
quiesce the queue before switching the IO scheduler and updating nr_requests.
Also, when switching io schedulers, blk_mq_run_hw_queue() may still be
called from somewhere (such as from nvme_reset_work()), and the io
scheduler's per-hctx data may
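Roughly, the ordering being described, using the blk-mq helpers of that era
(the teardown/setup in the middle is elided and purely illustrative):

blk_mq_freeze_queue(q);		/* stop new requests from entering the queue */
blk_mq_quiesce_queue(q);	/* wait out dispatch that is already running */

/* ... switch the io scheduler / update nr_requests here ... */

blk_mq_unquiesce_queue(q);
blk_mq_unfreeze_queue(q);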
In both elevator_switch_mq() and blk_mq_update_nr_hw_queues(), sched tags
can be allocated and q->nr_hw_queues is used, so a race is inevitable; for
example, blk_mq_init_sched() may trigger a use-after-free on a hctx that is
freed in blk_mq_realloc_hw_ctxs() when nr_hw_queues is decreased.
This
Hi,
The 1st patch fixes a kernel oops triggered by IOs racing with deletion of
a SCSI device; this issue can be triggered easily with scsi_debug.
The other 5 patches fix Yi Zhang's recent reports about his NVMe stress
tests; most of them are related to switching the io scheduler, NVMe reset,
or updating
It turns out that blk_mq_freeze_queue() isn't stronger[1] than
blk_mq_quiesce_queue(), because dispatch may still be in progress after the
queue is frozen, so in several cases, such as switching the io scheduler
and updating hw queues, we still need to quiesce the queue as a supplement
to freezing it.
As we
After the queue is frozen, dispatch may still happen, for example:
1) requests are submitted from several contexts
2) requests from all these contexts are inserted into the queue, but may be
dispatched to the LLD in one of these paths, while the other paths still
need to move on even if all these requests are completed (that
On 12/14/2017 12:13 AM, Tejun Heo wrote:
> Hello,
>
> On Wed, Dec 13, 2017 at 11:30:48AM +0800, jianchao.wang wrote:
>>> +	} else {
>>> +		srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
>>> +		if (!blk_mark_rq_complete(rq))
>>> +
Hi,
Fedora got a bug report, https://bugzilla.redhat.com/show_bug.cgi?id=1520982,
of a boot failure on Linus' master (full boot log at the bugzilla):
WARNING: CPU: 3 PID: 3486 at block/genhd.c:680 device_add_disk+0x3d9/0x460
Modules linked in: intel_rapl sb_edac x86_pkg_temp_thermal
On Wed, 2017-11-29 at 10:57 +0800, chenxiang (M) wrote:
> I applied this v2 patchset to kernel 4.15-rc1, ran fio on a SATA
> disk, then disabled the disk via the sysfs interface
> (echo 0 > /sys/class/sas_phy/phy-1:0:1/enable), and found the system hung.
> But with the v1 patch, it doesn't have this
On Fri, 2017-12-01 at 16:49 -0200, Mauricio Faria de Oliveira wrote:
> LR [c057c7fc] __blk_run_queue+0x6c/0xb0
> Call Trace:
> [c001fb083970] [c001fb0839e0] 0xc001fb0839e0 (unreliable)
> [c001fb0839a0] [c057ce0c] blk_run_queue+0x4c/0x80
> [c001fb0839d0]
On Tue, Dec 12, 2017 at 05:18:44PM +0800, Ming Lei wrote:
> On Mon, Dec 11, 2017 at 11:57:38PM -0800, Christoph Hellwig wrote:
> > Most of this looks sane, but I'd really like to see it in context
> > of the actual multipage bvec patches. Do you have an updated branch
> > on top of these?
>
> I
Hi, Jianchao.
On Wed, Dec 13, 2017 at 01:07:30PM +0800, jianchao.wang wrote:
> Test ok with NVMe
Awesome, thanks for testing!
--
tejun
Hello,
On Wed, Dec 13, 2017 at 11:30:48AM +0800, jianchao.wang wrote:
> > +	} else {
> > +		srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
> > +		if (!blk_mark_rq_complete(rq))
> > +			__blk_mq_complete_request(rq);
> > +
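The quoted hunk is cut off above; a typical shape for this SRCU-protected
completion path would be something like the following (assumption: the
srcu_read_unlock() pairs with the srcu_read_lock() in the same branch):

} else {
	srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
	if (!blk_mark_rq_complete(rq))
		__blk_mq_complete_request(rq);
	srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
}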
On Wed, 2017-12-13 at 16:13 +0900, Byungchul Park wrote:
> In addition, I want to say that the current level of
> classification is much less than 100%, but since we
> have annotated well to suppress wrong reports from
> rough classifications, it finally does not come into
> view with the original lockdep
* Byungchul Park wrote:
> Lockdep works, based on the following:
>
> (1) Classifying locks properly
> (2) Checking the relationship between the classes
>
> If (1) is not good or (2) is not good, then we
> might get false positives.
>
> For (1), we don't have
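For readers less familiar with lockdep, "classifying" here means assigning
each lock to a lock class; a hedged illustration (all names made up) of how
a caller separates two locks that would otherwise share a class because
they are initialized from the same call site:

static struct lock_class_key example_key_a;
static struct lock_class_key example_key_b;

static void example_init(struct mutex *a, struct mutex *b)
{
	mutex_init(a);
	mutex_init(b);

	/*
	 * Give the two locks distinct classes so lockdep does not merge
	 * their dependency chains and report false positives.
	 */
	lockdep_set_class(a, &example_key_a);
	lockdep_set_class(b, &example_key_b);
}

Whether such keys can realistically be assigned everywhere they are needed
is exactly what this thread is arguing about.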