In passthrough mode, dm-cache must defer the submission of a write bio
until invalidation of the corresponding cache block completes, so that a
stale cached copy is never served after the origin has been updated. The
target's map function is therefore required to return DM_MAPIO_SUBMITTED
for such bios, but the current map_bio() falls through and returns
DM_MAPIO_REMAPPED, violating that ordering constraint.

Steps to reproduce:
1. Create a cache device
dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
dmsetup create cdata --table "0 131072 linear /dev/sdc 8192"
dmsetup create corig --table "0 262144 linear /dev/sdc 262144"
dd if=/dev/zero of=/dev/mapper/cmeta bs=4k count=1 oflag=direct
dmsetup create cache --table "0 262144 cache /dev/mapper/cmeta \
/dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 writethrough smq 0"
2. Promote the first data block into the cache
fio --filename=/dev/mapper/cache --name=populate --rw=write --bs=4k \
--direct=1 --size=64k
3. Reload the cache into passthrough mode
dmsetup suspend cache
dmsetup reload cache --table "0 262144 cache /dev/mapper/cmeta \
/dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 passthrough smq 0"
dmsetup resume cache
4. Write to the first data block, and check the I/O ordering using ftrace
echo 1 > /sys/kernel/debug/tracing/events/block/block_bio_queue/enable
echo 1 > /sys/kernel/debug/tracing/events/block/block_bio_complete/enable
echo 1 > /sys/kernel/debug/tracing/events/block/block_rq_complete/enable
fio --filename=/dev/mapper/cache --name=test --rw=write --bs=64k \
--direct=1 --size=64k
5. The ftrace log shows that the write to the cache origin (252:2) and
the metadata writes (252:0) are unsynchronized: the origin write
completes before the metadata commit is even issued.
<snip>
fio-146 [000] ..... 420.139562: block_bio_queue: 252,3 WS 0 + 128 [fio]
fio-146 [000] ..... 420.149395: block_bio_queue: 252,2 WS 0 + 128 [fio]
fio-146 [000] ..... 420.149763: block_bio_queue: 8,32 WS 262144 + 128 [fio]
fio-146 [000] dNh1. 420.151446: block_rq_complete: 8,32 WS () 262144 + 128 be,0,4 [0]
fio-146 [000] dNh1. 420.152731: block_bio_complete: 252,2 WS 0 + 128 [0]
fio-146 [000] dNh1. 420.154229: block_bio_complete: 252,3 WS 0 + 128 [0]
kworker/0:0-9 [000] ..... 420.160530: block_bio_queue: 252,0 W 408 + 8 [kworker/0:0]
kworker/0:0-9 [000] ..... 420.161641: block_bio_queue: 8,32 W 408 + 8 [kworker/0:0]
kworker/0:0-9 [000] ..... 420.162533: block_bio_queue: 252,0 W 416 + 8 [kworker/0:0]
kworker/0:0-9 [000] ..... 420.162821: block_bio_queue: 8,32 W 416 + 8 [kworker/0:0]
<snip>
Fixes: b29d4986d0da ("dm cache: significant rework to leverage dm-bio-prison-v2")
Signed-off-by: Ming-Hung Tsai <[email protected]>
---
drivers/md/dm-cache-target.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index a2e07b124901..ac9901033073 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -1703,6 +1703,7 @@ static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block,
 				bio_drop_shared_lock(cache, bio);
 				atomic_inc(&cache->stats.demotion);
 				invalidate_start(cache, cblock, block, bio);
+				return DM_MAPIO_SUBMITTED;
 			} else
 				remap_to_origin_clear_discard(cache, bio, block);
 		} else {
--
2.49.0