On 26. 4. 2017 at 15:37, Gionatan Danti wrote:

On 26/04/2017 13:23, Zdenek Kabelac wrote:

You need to use 'direct' write mode - otherwise you are just witnessing
issues related to 'page-cache' flushing.

Every update of a file means an update of the journal - so you surely can lose
some data in-flight - but every good piece of software flushes before starting
the next transaction - so with correctly working transactional software
no data can be lost.
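(A minimal sketch of that pattern with dd - not from the original mail, and 'journal.new' is just a made-up file name: conv=fsync forces the flush before dd exits, and the exit status tells the caller whether the next transaction may start.)

dd if=journal.new of=/mnt/storage/journal bs=1M oflag=direct conv=fsync \
   && echo "commit flushed - safe to start next transaction" \
   || echo "write/flush failed - stop, last committed state is still intact"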

I used "oflag=sync" for this very reason - to avoid async writes, However, let's retry with "oflat=direct,sync".

This is the thinpool before filling:

[root@blackhole mnt]# lvs
   LV       VG        Attr        LSize Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
   thinpool vg_kvm    twi-aot---  1.00g                 87.66  12.01
   thinvol  vg_kvm    Vwi-aot---  2.00g thinpool        43.83
   root     vg_system -wi-ao---- 50.00g
   swap     vg_system -wi-ao----  7.62g

[root@blackhole storage]# mount | grep thinvol
/dev/mapper/vg_kvm-thinvol on /mnt/storage type ext4 (rw,relatime,seclabel,errors=remount-ro,stripe=32,data=ordered)


Fill the thin volume (note that errors are raised immediately due to --errorwhenfull=y):

[root@blackhole mnt]# dd if=/dev/zero of=/mnt/storage/test.2 bs=1M count=300 oflag=direct,sync
dd: error writing ‘/mnt/storage/test.2’: Input/output error
127+0 records in
126+0 records out
132120576 bytes (132 MB) copied, 14.2165 s, 9.3 MB/s
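(For reference - the --errorwhenfull=y behaviour mentioned above is a pool-level setting; if I recall the option and report field correctly, it can be toggled and inspected like this:)

lvchange --errorwhenfull y vg_kvm/thinpool
lvs -o lv_name,lv_when_full vg_kvm        # should report 'error' instead of 'queue'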

From syslog:

Apr 26 15:26:24 localhost lvm[897]: WARNING: Thin pool vg_kvm-thinpool-tpool data is now 96.84% full.
Apr 26 15:26:27 localhost kernel: device-mapper: thin: 253:4: reached low water mark for data device: sending event.
Apr 26 15:26:27 localhost kernel: device-mapper: thin: 253:4: switching pool to out-of-data-space (error IO) mode
Apr 26 15:26:34 localhost lvm[897]: WARNING: Thin pool vg_kvm-thinpool-tpool data is now 100.00% full.
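(The same state can be queried directly - a quick sketch, field names from memory:)

lvs -a -o lv_name,data_percent,metadata_percent vg_kvm
dmsetup status vg_kvm-thinpool-tpool   # status line shows out_of_data_space and error_if_no_space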

Despite the write errors, the filesystem is not in read-only mode.


But you get a correct 'write' error - so from the application POV the
transaction update/write fails - the app knows the 'data' were lost and should not proceed with the next transaction. So it is in line with 'no data is lost': the filesystem is not damaged and is in a correct (mountable) state.
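(So the usual follow-up would be: check the writer's exit status, grow the pool, and verify the filesystem - a rough sketch, assuming the VG still has free extents:)

dd if=/dev/zero of=/mnt/storage/test.2 bs=1M count=300 oflag=direct,sync \
   || echo "write failed - do not start the next transaction"
lvextend -L +1G vg_kvm/thinpool        # add data space to the pool
umount /mnt/storage
fsck.ext4 -fn /dev/vg_kvm/thinvol      # read-only check - expected to come back clean
mount /dev/mapper/vg_kvm-thinvol /mnt/storage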


Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
