Hi

On Thu, 7 Aug 2025, Zdenek Kabelac wrote:
> > I think that adding a new bit for the read syscalls is not a workable
> > solution. There are so many programs using the read() syscall and
> > teaching them to use this new bit is impossible.
>
> I believe the idea of the test is to physically check the proper leg is
> reporting an error.
>
> So instead of this proposed patch - we should actually enforce reading leg
> with lvchange --writemostly to select the preferred leg among other legs.

Here I'm sending a patch that uses "lvchange --writemostly". It doesn't
use it for raid10, because lvm doesn't support --writemostly for raid10.

> The bigger question is though - that user doesn't have normally a single
> workable leg - as 'reading' is spread across all legs - thus when one leg
> goes away - it could have happened that other legs have also some 'errors'
> spread around...

So the user should either scrub the array periodically or use a raid
configuration that can withstand a two-disk failure (e.g. raid1 with 3
legs, or raid6). A rough sketch of the relevant commands is appended
after the patch.

> Zdenek


From: Mikulas Patocka <mpato...@redhat.com>

The test shell/integrity.sh creates raid arrays, corrupts one of the
legs, then reads the array and verifies that the corruption was
corrected. Finally, it checks that the number of mismatches reported on
the corrupted leg is non-zero.

The problem is that the raid1 implementation may freely choose which leg
to read from. If it chooses to read the non-corrupted leg, the corruption
is not detected, the number of mismatches is not incremented, and the
test reports this as a failure.

Fix this failure by marking the non-corrupted leg as "writemostly", so
that the kernel doesn't read from it (it falls back to it only if it
finds corruption on the other leg).

Signed-off-by: Mikulas Patocka <mpato...@redhat.com>

---
 test/shell/integrity.sh | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Index: lvm2/test/shell/integrity.sh
===================================================================
--- lvm2.orig/test/shell/integrity.sh	2025-08-11 13:34:07.000000000 +0200
+++ lvm2/test/shell/integrity.sh	2025-08-11 13:46:43.000000000 +0200
@@ -135,6 +135,7 @@ lvs -a -o name,segtype,devices,sync_perc
 aux wait_recalc $vg/${lv1}_rimage_0
 aux wait_recalc $vg/${lv1}_rimage_1
 aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
 _test_fs_with_read_repair "$dev1"
 lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
 not grep 0 mismatch
@@ -152,6 +153,8 @@ aux wait_recalc $vg/${lv1}_rimage_0
 aux wait_recalc $vg/${lv1}_rimage_1
 aux wait_recalc $vg/${lv1}_rimage_2
 aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
+lvchange $vg/$lv1 --writemostly "$dev3"
 _test_fs_with_read_repair "$dev1" "$dev2"
 lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
 not grep 0 mismatch
@@ -233,8 +236,6 @@ lvs -o integritymismatches $vg/${lv1}_ri
 lvs -o integritymismatches $vg/${lv1}_rimage_1
 lvs -o integritymismatches $vg/${lv1}_rimage_2
 lvs -o integritymismatches $vg/${lv1}_rimage_3
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
 lvchange -an $vg/$lv1
 lvconvert --raidintegrity n $vg/$lv1
 lvremove $vg/$lv1
@@ -602,6 +603,7 @@ lvs -a -o name,segtype,devices,sync_perc
 aux wait_recalc $vg/${lv1}_rimage_0
 aux wait_recalc $vg/${lv1}_rimage_1
 aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
 _test_fs_with_read_repair "$dev1"
 lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
 not grep 0 mismatch
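
For reference, here is a rough sketch of the commands discussed above.
The volume group, logical volume and device names ("vg", "lv1",
/dev/sdb1) are placeholders, not the ones used by the test:

  # create a raid1 LV with 3 legs, so it can withstand a two-disk failure
  lvcreate --type raid1 -m 2 -L 1G -n lv1 vg

  # mark one leg write-mostly so that reads are served from the other
  # legs; the ":n" suffix clears the flag again
  lvchange --writemostly /dev/sdb1 vg/lv1
  lvchange --writemostly /dev/sdb1:n vg/lv1

  # the affected _rimage_ sub-LV should show 'w' in the 9th lv_attr character
  lvs -a -o name,lv_attr vg

  # scrub the array and report how many mismatches were found
  lvchange --syncaction check vg/lv1
  lvs -o name,raid_sync_action,raid_mismatch_count vg/lv1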