Re: [DRBD-user] drbdadm verify always report oos

2016-05-18 Thread d tbsky
Hi: it is not SSD, it is just two 2TB SATA hard disks. mdadm is checked every week and I don't see any error report under dmesg. There are 20 VMs running on top of it and they seem normal, but I wonder if it will be normal again after so many verify/resync cycles. So I just picked a test VM to try. still

Re: [DRBD-user] drbdadm verify always report oos

2016-05-18 Thread Veit Wahlich
Are you utilising SSDs? Is the kernel log (dmesg) free of errors on the backing devices (including the mdraid members/backing devices)? Did you verify the mdraid array's consistency, and are the array's members in sync? Original message From: d tbsky
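As a rough sketch of how such an mdraid consistency check can be run and inspected (assuming the array is md0; the device name is only a placeholder):

  cat /proc/mdstat                              # member state and resync progress
  echo check > /sys/block/md0/md/sync_action    # start a consistency check of the array
  cat /sys/block/md0/md/mismatch_cnt            # non-zero after the check means the members differ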

Re: [DRBD-user] drbdadm verify always report oos

2016-05-18 Thread d tbsky
Hi: I shut down the VM when I found the strange behavior, so the DRBD resync runs in an idle situation. I tried playing with config options and resynced about 15 times, but still cannot get verify to report 0 oos. I have about 10 resources with this verify problem, but it's strange that some resources
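One setting commonly involved in online verify is the checksum algorithm it uses. A minimal sketch of the relevant part of a DRBD 8.4 resource config (the resource name r0 and the choice of sha1 are only placeholders, not the poster's actual settings):

  resource r0 {
    net {
      verify-alg sha1;            # checksum used by "drbdadm verify"
      # data-integrity-alg sha1;  # optional: also checksum replication traffic on the wire
    }
  }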

Re: [DRBD-user] drbdadm verify always report oos

2016-05-18 Thread Veit Wahlich
Hi, how did you configure the VMs' disk caches? In the case of qemu/KVM/Xen it is essential for consistency to configure the cache as "writethrough"; any other setting is prone to problems due to double writes, unless the OS inside the VM uses write barriers. Although write barriers are the default for
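For qemu/KVM this corresponds to the drive's cache mode. A sketch of what that setting looks like (the device path and image format are assumptions):

  # plain qemu command line
  qemu-system-x86_64 ... -drive file=/dev/drbd0,format=raw,if=virtio,cache=writethrough

  # equivalent libvirt domain XML, inside the <disk> element
  <driver name='qemu' type='raw' cache='writethrough'/>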

[DRBD-user] drbdadm verify always report oos

2016-05-18 Thread d tbsky
Hi: I am using DRBD 8.4.7, which comes from EPEL, under Scientific Linux 7.2. When I run "drbdadm verify res", it reports out-of-sync (oos) blocks. So I disconnect/connect the resource and the oos becomes 0, but when I verify again, it reports oos again. The oos amount is different from the previous run,
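For readers following the thread, the cycle being described is roughly this (res stands for the resource name):

  drbdadm verify res       # online verify; differences show up as the oos: counter
  cat /proc/drbd           # inspect the oos: value for the resource
  drbdadm disconnect res   # disconnect/connect makes the peers resync the blocks marked oos
  drbdadm connect res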

[DRBD-user] DRBD 9 Peak CPU load

2016-05-18 Thread Mats Ramnefors
I am testing DRBD 9 and 8.4 in simple two-node active-passive clusters with NFS. Copying files from a third server to the NFS share using dd, I typically see an average of 20% CPU load (with v9) on the primary during transfer of larger files, testing with 0.5 and 2 GB. At the very end of
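A hypothetical reproduction of that copy test (the mount point, file size and use of direct I/O are assumptions, not the poster's exact command):

  dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=2048 oflag=direct
  # watch CPU on the primary while it runs, e.g. with top or vmstat 1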