If blkg_create fails, new_blkg passed as an argument will
be freed by blkg_create, so there is no need to free it again.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/blk-cgroup.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-cgroup.c b/blo
The keep_bio_blkcg feature is off by default; it can be
turned on by passing the "keep_bio_blkcg" argument.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-thin.c b/drivers/md
If keep_bio_blkcg is enabled, assign the io_context and the blkcg of
the current task to the bio before processing it.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 5 +
drivers/md/dm-thin.h | 17 +
2 files changed, 22 insertions(+)
create mode
a limitation on the blkcg of the original IO thread,
so blk-throttle doesn't work well.
To handle this situation, we add a "keep_bio_blkcg" feature
to dm-thin. If the feature is enabled, the original blkcg of the bio
will be saved in thin_map() and used during blk-throttle.
Tao
Whether the keep_bio_blkcg feature is enabled can be checked
via the STATUSTYPE_TABLE or STATUSTYPE_INFO command.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-
If blkg_create fails, new_blkg passed as an argument will
be freed by blkg_create, so there is no need to free it again.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/blk-cgroup.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 8
Hi all,
During our testing of CFQ group scheduling, we found a performance-related
problem.
Rate-capped fio jobs in one CFQ group will degrade the performance of fio jobs in
another CFQ group, even though both CFQ groups have the same blkio.weight.
We launch two fio jobs in different terminals. The
iops delay and
lead to an abnormal I/O schedule delay for the added cfq_group. To fix
it, we just need to revert to the old CFQ_IDLE_DELAY value (HZ / 5)
when iops mode is enabled.
Cc: <sta...@vger.kernel.org> # 4.8+
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/cfq-iosched.c |
s OK to renew the time slice.
2. If there is no queued bio, the time slice must have expired,
so it's OK to renew the time slice.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/blk-throttle.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/block/blk
define two new
macros for the delay of a cfq_group under time-slice mode and IOPs mode.
Fixes: 9a7f38c42c2b92391d9dabaf9f51df7cfe5608e4
Cc: <sta...@vger.kernel.org> # 4.8+
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/cfq-iosched.c | 17 +++--
1 file changed, 15 inse
Hi Jan and list,
When testing the hrtimer version of CFQ, we found a performance degradation
problem which seems to be caused by commit 0b31c10 ("cfq-iosched: Charge at
least 1 jiffie instead of 1 ns").
The following is the test process:
* filesystem and block device
* XFS + /dev/sda
Hi Vivek,
On 2017/3/4 3:53, Vivek Goyal wrote:
> On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:
>
> [..]
>>> Frankly, vdisktime is in fixed-point precision shifted by
>>> CFQ_SERVICE_SHIFT so using CFQ_IDLE_DELAY does not make much sense in any
>>&g
On 2017/3/2 18:29, Jan Kara wrote:
> On Wed 01-03-17 10:07:44, Hou Tao wrote:
>> When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
>> as the delay of cfq_group's vdisktime if there have been other cfq_groups
>> already.
>>
>> When cfq is und
.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/bio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index 5eec5e0..d8ed36f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2072,7 +2072,7 @@ EXPORT_SYMBOL_GPL(bio_associate_current);
The start time of an eligible entity should be less than or equal to
the current virtual time, and an entity in the idle tree has a finish
time greater than the current virtual time.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
block/bfq-iosched.h | 2 +-
block/bfq-wf2q.c | 2 +-
2
On 2017/7/12 17:41, Paolo Valente wrote:
>
>> Il giorno 11 lug 2017, alle ore 15:58, Hou Tao <hout...@huawei.com> ha
>> scritto:
>>
>> There are mq devices (eg., virtio-blk, nbd and loopback) which don't
>> invoke blk_mq_run_hw_queues() after the comple
the remaining requests of the busy bfq queue
will be stalled in the bfq scheduler until a new request arrives.
To fix the scheduler latency problem, we need to check whether or not
all issued requests have completed, and dispatch more requests to the driver
if there is no request in the driver.
Signed-off-by: Hou Tao
Hi Paolo,
I am reading the code of the BFQ scheduler and have a question about the purpose
of the idle rb-tree in bfq_service_tree.
From the comment in the code, the idle rb-tree is used to keep the bfq_queues which
don't have any request and have a finish time greater than the vtime of the
service tree.
Hi Jens,
I didn't find the patch in your linux-block git tree or the vanilla git tree.
Perhaps you have forgotten this CFQ fix?
Regards,
Tao
On 2017/3/9 19:22, Hou Tao wrote:
> On 2017/3/8 22:05, Jan Kara wrote:
>> On Wed 08-03-17 20:16:55, Hou Tao wrote:
>>> When adding a cfq_
Hi Jan,
On 2017/11/21 0:43, Jan Kara wrote:
> Hi Tao!
>
> On Fri 17-11-17 14:51:18, Hou Tao wrote:
>> On 2017/3/13 23:14, Jan Kara wrote:
>>> blkdev_open() may race with gendisk shutdown in two different ways.
>>> Either del_gendisk() has already unha
Hi Jan,
On 2017/3/13 23:14, Jan Kara wrote:
> blkdev_open() may race with gendisk shutdown in two different ways.
> Either del_gendisk() has already unhashed block device inode (and thus
> bd_acquire() will end up creating new block device inode) however
> gen_gendisk() will still return the
r act_mask
in struct blk_user_trace_setup and a new attr file (cgroup_info) under
/sys/block/$dev/trace dir, so BLKTRACESETUP ioctl and sysfs file
can be used to enable cgroup info for selected block devices.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
include/linux/blktrace_api.h
Hi,
On 2018/1/11 16:24, Dan Carpenter wrote:
> Thanks for your report and the patch. I am sending it to the
> linux-block devs since it's already public.
>
> regards,
> dan carpenter
The use-after-free problem is not specific to the loop device; it can also
be reproduced on a scsi device, and
Hi Jens,
Any comments on this patch and the related patch set for blktrace [1] ?
Regards,
Tao
[1]: https://www.spinics.net/lists/linux-btrace/msg00790.html
On 2018/1/11 12:09, Hou Tao wrote:
> Now blktrace supports outputting cgroup info for trace action and
> trace message, however,
wly created bdev inode, we are also guaranteed that following
> get_gendisk() will either return failure (and we fail open) or it
> returns gendisk for the new device and following bdget_disk() will
> return new bdev inode (i.e., blkdev_open() follows the path as if it is
> completely run
- remember this was old device - this was last ref and disk is
> now freed
> }
> disk_unblock_events(disk); -> oops
>
> Fix the problem by making sure we drop reference to disk in
> __blkdev_get() only after we are really done with it.
>
> Reported
file and cgrp_dfl_root
is only valid for cgroup v2.
So fix cgroup_path_from_kernfs_id() to support both cgroup v1 and v2.
Fixes: 69fd5c3 ("blktrace: add an option to allow displaying cgroup path")
Signed-off-by: Hou Tao <hout...@huawei.com>
---
include/linux/cgroup.h | 6 +++---
ke
Hi Jens,
Could you please look at this patch and the related patch set for blktrace [1],
and give some feedback?
Regards,
Tao
[1]: https://www.spinics.net/lists/linux-btrace/msg00790.html
On 2018/1/17 14:10, Hou Tao wrote:
> Hi Jens,
>
> Any comments on this patch and the related