I'm seeing something I didn't expect. I was getting very poor results from 
the following command:

time dd if=/dev/zero of=/dev/vg-drbd/test oflag=direct count=512 bs=512

on the order of 5 seconds. Then I added the following to my configuration:

disk {
  no-disk-barrier;
  no-disk-flushes;
  no-md-flushes;
}
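
(In case it helps anyone following along on DRBD 8.4 or newer: the no-* switches were replaced by boolean options there, so the equivalent of the block above would be the following, in 8.4 syntax.)

```
disk {
  disk-barrier no;
  disk-flushes no;
  md-flushes no;
}
```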

And the time dropped to under half a second, which is on par with write 
performance when the secondary is offline.
 
Where is the flush/barrier having an effect? I'm using protocol B, so I assumed 
there would be no flush at all on the secondary in either case, as a write is 
only supposed to have reached the secondary's buffers, not necessarily its 
disk. I still want the data definitely on disk on the primary, though, so the 
flush should be happening there (I'm prepared to lose data in the event of a 
power failure plus simultaneous destruction of the primary's disks).

To summarise:

Primary & secondary online with all flushing/barriers enabled = bad performance
As above, but secondary offline = good performance
Primary & secondary online with all flushing/barriers disabled = good performance
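
For scale, here's a back-of-the-envelope conversion of those totals into per-write latency (using the approximate figures above: 512 writes taking ~5 s in the bad case and ~0.5 s in the good cases):

```python
# Rough per-write latency for the 512 x 512-byte direct writes above.
# The 5.0 s and 0.5 s totals are the approximate timings quoted earlier.
writes = 512

slow_total_s = 5.0    # flushes/barriers enabled, secondary online
fast_total_s = 0.5    # secondary offline, or flushes/barriers disabled

slow_ms = slow_total_s / writes * 1000   # ~9.8 ms per write
fast_ms = fast_total_s / writes * 1000   # ~1.0 ms per write

print(f"per-write latency: {slow_ms:.1f} ms vs {fast_ms:.1f} ms")
```

So the slow case costs roughly an extra disk-seek-sized delay (~9 ms) per write, which is what makes me suspect a flush per request somewhere.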

My details are:
2 x HP DL180s, each with 2 x 2TB 7200RPM SATA disks
Backing store is md (RAID0) on a partition
Protocol B

I've tried all the other configuration tweaks I can think of; the only 
explanation I can come up with is that flushing is somehow having an effect on 
the secondary node?

Can anyone clarify the situation for me?

Thanks

James

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
