Hi,

I ran into a strange (to me) issue while testing something in LVM - I wanted to 
check that zeroing a Logical Volume would really zero the mapped Physical 
Extents. My setup was simple - I had a single Volume Group (vg0) containing a 
single Physical Volume (/dev/mmcblk0p2) and a few Logical Volumes. Nothing 
fancy.

The test itself was also simple:

1. Write a test pattern to an LV using 'dd'.

2. Check how the LV is mapped with 'pvdisplay -m'.

3. Read back the mapped PEs from the PV's block device with 'dd', checking
that they match the test pattern written to the LV.
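The steps above can be sketched as a small POSIX shell function. The device paths, the pattern file name, and the sector offset of the mapped PE are placeholders of mine; on the real setup the LV is a /dev/vg0/* node, the PV is /dev/mmcblk0p2, and the offset has to be worked out by hand from 'pvdisplay -m' (PE start on the PV plus PE size times the extent number):

```shell
# verify_mapping LV_DEV PV_DEV PE_OFFSET_SECTORS COUNT_SECTORS
# Placeholder names throughout; pattern.bin is a pre-made test pattern.
verify_mapping() {
    lv=$1 pv=$2 off=$3 cnt=$4
    # 1. Write the test pattern to the LV and force it out to the device.
    dd if=pattern.bin of="$lv" bs=512 count="$cnt" conv=fsync 2>/dev/null
    # 2. By hand: 'pvdisplay -m' shows the PE mapping, which gives $off.
    # 3. Read the same sectors back through the PV's block device.
    dd if="$pv" of=readback.bin bs=512 skip="$off" count="$cnt" 2>/dev/null
    # Exit 0 only if the PV readback matches the pattern written to the LV.
    cmp -s pattern.bin readback.bin
}
```

On the hardware, step 1 and step 3 go through different block devices (the LV node vs. /dev/mmcblk0p2), which is exactly where the stale data shows up.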

It seems like Linux's disk cache doesn't like this test. If I follow the steps, 
the readback from the PV's block device returns stale data. But if I empty my 
cache ('echo 3 > /proc/sys/vm/drop_caches') and do the readback again, I can 
see the pattern that was written to the LV.
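For completeness, here is the exact flush sequence I use, plus an untested variant of the readback that should sidestep the cache entirely. The O_DIRECT idea (GNU dd's 'iflag=direct') is an assumption on my part, as is the SKIP_BLOCKS placeholder:

```shell
# Write dirty pages back to the device first, then drop the (clean) caches.
# 'echo 3' drops the pagecache plus dentries and inodes; root is required.
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || echo "drop_caches needs root" >&2

# Untested alternative, assuming the stale data sits in the page cache kept
# for the PV's block device: bypass that cache on the readback with O_DIRECT.
# iflag=direct needs block-aligned offsets and sizes; SKIP_BLOCKS is a
# placeholder for the mapped PE's offset in 4 KiB units.
#   dd if=/dev/mmcblk0p2 bs=4096 skip="$SKIP_BLOCKS" count=1024 iflag=direct > pe.bin
```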

Is this the expected behavior? I don't know much about the Linux block layer, 
but I'd assume that when LVM maps LV -> PV and does a write, Linux would get 
the memo and update (or invalidate) the cached pages for the PV's block device 
accordingly. What I'm seeing suggests the cache is neither updated nor 
invalidated, because I can repeatedly read back stale data until I drop the 
cache.

Here are a few details about my setup:

*   Platform: Zynq 7000 (dual-core ARM Cortex-A9)
*   Linux: 4.0.0 (built from the Xilinx repo)
*   LVM version: 2.02.162(2) (2016-07-28)
*   LVM library version: 1.02.132 (2016-07-28)
*   LVM driver version: 4.30.0
*   Test media: eMMC device

I don't know if I'm misunderstanding how LVM and the Linux page cache are 
supposed to interact, or if this is an actual issue.

Matthew Kipper
_______________________________________________
linux-lvm mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
