Hi,
On 3/7/2023 10:47 PM, Mike Snitzer wrote:
> On Mon, Mar 06 2023 at 9:12P -0500,
> Hou Tao wrote:
>
>> Hi,
>>
>> On 3/7/2023 3:31 AM, Mike Snitzer wrote:
>>> On Mon, Mar 06 2023 at 8:49P -0500,
>>> Hou Tao wrote:
>>>
Hi,
On 3/7/2023 3:31 AM, Mike Snitzer wrote:
> On Mon, Mar 06 2023 at 8:49P -0500,
> Hou Tao wrote:
>
>> From: Hou Tao
>>
>> When neither no_read_workqueue nor no_write_workqueue is enabled,
>> tasklet_trylock() in crypt_dec_pending() may still return false
From: Hou Tao
When neither no_read_workqueue nor no_write_workqueue is enabled,
tasklet_trylock() in crypt_dec_pending() may still return false due to
an uninitialized state, and dm-crypt will then unnecessarily complete
the io in io_queue instead of in the current context.
Fix it by initializing io
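For illustration, a sketch of the likely shape of the fix, assuming a
per-io flag that crypt_io_init() resets (mainline dm-crypt eventually
took a similar approach; the exact field name here is my assumption):

static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
                          struct bio *bio, sector_t sector)
{
        io->cc = cc;
        io->base_bio = bio;
        io->sector = sector;
        io->error = 0;
        /* start from a known state so crypt_dec_pending() never
         * consults an uninitialized tasklet */
        io->in_tasklet = false;
        atomic_set(&io->io_pending, 0);
}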
Ping? Any comments on this cleanup patch?
On 1/31/2023 9:44 AM, Hou Tao wrote:
> Ping? Any comments on this cleanup patch?
>
> On 1/18/2023 9:16 PM, Hou Tao wrote:
>> Ping?
>>
>> On 12/16/2022 12:23 PM, Hou Tao wrote:
>>> From: Hou Tao
Ping? Any comments on this cleanup patch?
On 1/18/2023 9:16 PM, Hou Tao wrote:
> Ping?
>
> On 12/16/2022 12:23 PM, Hou Tao wrote:
>> From: Hou Tao
>>
>> __hash_remove() removes the hash_cell with _hash_lock held, so acquiring
>> _hash_lock guarantees that a non-NULL hc returned from dm_get_mdptr() has
>> not been removed and that hc->md is still md.
Ping?
On 12/16/2022 12:23 PM, Hou Tao wrote:
> From: Hou Tao
>
> __hash_remove() removes the hash_cell with _hash_lock held, so acquiring
> _hash_lock guarantees that a non-NULL hc returned from dm_get_mdptr() has
> not been removed and that hc->md is still md.
>
> __hash_remove() also acquires dm_hash_cells_mutex before setting mdptr
> to NULL,
From: Hou Tao
__hash_remove() removes the hash_cell with _hash_lock held, so acquiring
_hash_lock guarantees that a non-NULL hc returned from dm_get_mdptr() has
not been removed and that hc->md is still md.
__hash_remove() also acquires dm_hash_cells_mutex before setting mdptr
to NULL,
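A sketch of the invariant being relied on (lock and helper names are
from drivers/md/dm-ioctl.c; the surrounding declarations are elided):

        down_read(&_hash_lock);
        hc = dm_get_mdptr(md);
        /*
         * __hash_remove() runs with _hash_lock held, so a non-NULL hc
         * cannot have been removed underneath us and hc->md == md.
         */
        if (hc && hc->md == md) {
                /* safe to use hc while the lock is held */
        }
        up_read(&_hash_lock);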
  .../0x140
  pool_message+0x218/0x2b0
  target_message+0x251/0x290
  ctl_ioctl+0x1c4/0x4d0
  dm_ctl_ioctl+0xe/0x20
  __x64_sys_ioctl+0x7b/0xb0
  do_syscall_64+0x40/0xb0
  entry_SYSCALL_64_after_hwframe+0x44/0xae
Fix it by only assigning new_root when the removal succeeds.
Signed-off-by: Hou Tao
---
drivers/md/persistent-data/dm-btree-remove.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
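The shape of the fix, as I read it (a sketch, not the verbatim hunk;
do_removal() stands in for the real removal step, while spine,
shadow_root() and exit_shadow_spine() follow dm-btree naming):

        int r = do_removal(&spine);    /* hypothetical removal step */

        /* publish the new root only on success, so a failed removal
         * can no longer hand the caller a stale or half-updated root */
        if (!r)
                *new_root = shadow_root(&spine);
        exit_shadow_spine(&spine);
        return r;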
The unit of max_io_len is sectors, not bytes (spotted during code
review), so fix it.
Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
Signed-off-by: Hou Tao
---
drivers/md/dm-zoned-target.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
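For illustration, the kind of change this implies (my reconstruction,
assuming the usual byte/sector mix-up; ti and dev follow dm-zoned
naming):

-       ti->max_io_len = dev->zone_nr_sectors << 9;     /* bytes: wrong */
+       ti->max_io_len = dev->zone_nr_sectors;          /* sectors */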
Hi Mike,
On 2020/3/4 0:15, Mike Snitzer wrote:
> On Tue, Mar 03 2020 at 3:45am -0500,
> Hou Tao wrote:
>
>> We neither assign congested_fn for request-based blk-mq devices nor
>> implement it correctly. So fix both.
>>
>> Fixes: 4aa9c692e052 ("bdi: separate out congested state into a separate struct")
We neither assign congested_fn for request-based blk-mq devices nor
implement it correctly. So fix both.
Fixes: 4aa9c692e052 ("bdi: separate out congested state into a separate struct")
Signed-off-by: Hou Tao
---
drivers/md/dm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
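A sketch of the hookup being described (field names are from the
pre-5.9 backing_dev_info; this is my reconstruction, not the verbatim
patch):

static void dm_setup_congested_fn(struct mapped_device *md)
{
        /* register the callback for request-based dm as well, not
         * just for bio-based devices */
        md->queue->backing_dev_info->congested_data = md;
        md->queue->backing_dev_info->congested_fn = dm_any_congested;
}

For the request-based case, dm_any_congested() would then presumably
report the device's own backing_dev_info congestion state rather than
iterating over a dm table as the bio-based path does.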
Signed-off-by: Hou Tao
---
drivers/md/persistent-data/dm-btree-remove.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
index 21ea537bd55e..eff04fa23dfa 100644
--- a/drivers/md/persistent-data/dm-btree-remove.c
welcome.
Regards,
Tao
Hou Tao (3):
md-debugfs: add md_debugfs_create_files()
md: export inflight io counters and internal stats in debugfs
raid1: export inflight io counters and internal stats in debugfs
drivers/md/Makefile | 2 +-
drivers/md/md-debugfs.c | 35 ++
├── iostat
├── raid1
│   ├── iostat
│   └── stat
└── stat
Signed-off-by: Hou Tao
---
drivers/md/md.c | 65 +
drivers/md/md.h | 1 +
2 files changed, 66 insertions(+)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9801d540fea1..dceb8fd59b
It will be used by the following patches to create debugfs files
under /sys/kernel/debug/mdX.
Signed-off-by: Hou Tao
---
drivers/md/Makefile | 2 +-
drivers/md/md-debugfs.c | 35 +++
drivers/md/md-debugfs.h | 16
3 files changed, 52 insertions(+), 1 deletion(-)
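The preview does not include the helper's body, so this is only a
guess at its shape (every name below except debugfs_create_file() is
an assumption):

#include <linux/debugfs.h>

struct md_debugfs_file {
        const char *name;
        umode_t mode;
        const struct file_operations *fops;
};

/* create a table of files under one parent directory, handing each
 * file the same private data pointer */
void md_debugfs_create_files(struct dentry *parent, void *data,
                             const struct md_debugfs_file *files,
                             size_t count)
{
        size_t i;

        for (i = 0; i < count; i++)
                debugfs_create_file(files[i].name, files[i].mode,
                                    parent, data, files[i].fops);
}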
Just like the previous patch, which exports debugfs files for md-core,
this patch exports debugfs files for md-raid1 under
/sys/kernel/debug/block/mdX/raid1.
Signed-off-by: Hou Tao
---
drivers/md/raid1.c | 78 ++
drivers/md/raid1.h | 1 +
2 files changed, 79 insertions(+)
er before switching to write mode.
Signed-off-by: Hou Tao
---
drivers/md/dm-thin.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index b900723bbd0f..c6da4afc16cf 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1401,6 +1401,7 @@ static v
;open_count.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 4be8532..97d383b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2709,11 +2709,14 @@ str
a limitation on the blkcg of the original IO thread,
so blk-throttle doesn't work well.
To handle this situation, we add a "keep_bio_blkcg" feature to
dm-thin. If the feature is enabled, the original blkcg of the bio is
saved in thin_map() and used during blk-throttle.
Tao
If keep_bio_blkcg is enabled, assign the io_context and the blkcg of
the current task to the bio before processing it.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 5 +
drivers/md/dm-thin.h | 17 +
2 files changed, 22 insertions(+)
create mode 100644 drivers/md/dm-thin.h
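The description matches the old bio_associate_current() helper, which
tied both current->io_context and the current task's blkcg to a bio,
so a sketch might look like this (the wrapper and its placement are my
assumptions; tc->keep_bio_blkcg is the feature flag from this series):

static int thin_map_keep_blkcg(struct dm_target *ti, struct bio *bio)
{
        struct thin_c *tc = ti->private;

        /* record the submitter's io_context and blkcg so that later
         * throttling on the data device sees the right blkcg */
        if (tc->keep_bio_blkcg)
                bio_associate_current(bio);

        return thin_bio_map(ti, bio);   /* the existing map path */
}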
"keep_bio_blkcg" is used to control whether or not
dm-thin needs to save the original blkcg of bio
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index
The keep_bio_blkcg feature is off by default; it can be
turned on with the "keep_bio_blkcg" argument.
Signed-off-by: Hou Tao <hout...@huawei.com>
---
drivers/md/dm-thin.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
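Presumably the argument is handled next to dm-thin's existing pool
feature strings; a hypothetical helper mirroring how
parse_pool_features() treats features such as "skip_block_zeroing"
(arg_name and pf follow that function's naming):

static bool pool_feature_keep_bio_blkcg(const char *arg_name,
                                        struct pool_features *pf)
{
        if (!strcasecmp(arg_name, "keep_bio_blkcg")) {
                pf->keep_bio_blkcg = true;      /* default stays false */
                return true;
        }
        return false;
}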
On 2017/1/11 3:42, Vivek Goyal wrote:
> On Tue, Jan 10, 2017 at 02:47:02PM +0800, Hou Tao wrote:
>> Hi, all.
>>
>> I am trying to test block-throttle on dm-thin devices. I find that
>> throttling on the dm-thin device works, but it doesn't work for the
>> data device of the dm-thin pool.
Hi, all.
I am trying to test block-throttle on dm-thin devices. I find that
throttling on the dm-thin device works, but it doesn't work for the
data device of the dm-thin pool.
The following is my test case:
#!/bin/sh
dmsetup create pool --table '0 41943040 thin-pool /dev/vdb /dev/vda \