Hi all,

On drbd-9.0.23 (CentOS 8.2), we are facing an inconsistency in node status: although the state of both DRBD devices is UpToDate, there is a nonzero out-of-sync value.
# drbdsetup status --verbose --statistics
r008 node-id:0 role:Primary suspended:no
    write-ordering:flush
  volume:0 minor:8 disk:UpToDate quorum:yes
      size:109757124 read:3065 written:105640 al-writes:27 bm-writes:0
      upper-pending:0 lower-pending:0 al-suspended:no blocked:no
  fs100 node-id:1 connection:Connected role:Secondary congested:no
      ap-in-flight:0 rs-in-flight:0
    volume:0 replication:Established peer-disk:UpToDate
        resync-suspended:no
        received:0 sent:105028 out-of-sync:12 pending:0 unacked:0

I see this issue on several systems running drbd-9.0, and I made a stable reproducer (attached for reference). It repeatedly umounts/mounts the DRBD disk on the primary node while disconnecting/connecting on the secondary node at the same time. When I run this reproducer on drbd-8.9 for comparison, I never see the issue. On the other hand, I see this not only on drbd-9.0.23 but also on the latest 9.1.

Am I doing something wrong, or is there a bug in the out-of-sync calculation and the state transition to UpToDate? Any feedback would be greatly appreciated.
<<attachment: drbd-out-of-sync-mountumount.zip>>
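For readers without access to the attachment, a minimal sketch of the reproducer described above might look like the following. The resource name, device path, mount point, and iteration count are assumptions for illustration, not taken from the actual attached script.

```shell
# Sketch of the reproducer: mount/umount on the primary while the
# secondary disconnects/reconnects. Names below are assumptions.
RES=r008            # assumed DRBD resource name
DEV=/dev/drbd8      # assumed DRBD device
MNT=/mnt/drbd8      # assumed mount point
ITERATIONS=100      # assumed iteration count

# Run on the PRIMARY node: repeatedly mount and unmount the DRBD device.
primary_loop() {
  for i in $(seq "$ITERATIONS"); do
    mount "$DEV" "$MNT"
    umount "$MNT"
  done
}

# Run on the SECONDARY node at the same time: repeatedly drop and
# re-establish the replication connection.
secondary_loop() {
  for i in $(seq "$ITERATIONS"); do
    drbdadm disconnect "$RES"
    drbdadm connect "$RES"
  done
}

# Afterwards, check for leftover out-of-sync blocks despite UpToDate:
#   drbdsetup status --verbose --statistics "$RES" | grep out-of-sync
```

Running the two loops concurrently on their respective nodes should eventually leave a nonzero out-of-sync counter even though both disks report UpToDate.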
_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user