On 11/07/11 15:13, Mark Dokter wrote:
On 07/10/2011 07:30 PM, Phil Stoneman wrote:
I've seen a similar thing with small writes, and for me, using
no-disk-barrier and no-disk-flushes solved the small write performance
issue. Hope that helps! :-)

 From [1]:
"Unfortunately device mapper (LVM) might not support barriers."

Although, cat /proc/drbd states
3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
     ns:0 nr:46983440 dw:46983440 dr:0 al:0 bm:2863 lo:0 pe:0 ua:0 ap:0
ep:1 wo:b oos:0

I think the LVM that ships with recent kernels supports barriers, which is why you're seeing this - the wo:b in your /proc/drbd output indicates that the "barrier" write-ordering method is in use.


Furthermore, my 3ware RAID controllers do not have a BBU installed, and I
read that it's not recommended to use no-disk-barrier and
no-disk-flushes in this case. The servers are connected to a quite large
UPS though. Does that suffice?

From what I understand, the main risk of not using barriers is that when DRBD thinks something has been written to disk, it might not actually be. That's not great - but to me, it doesn't seem any worse than using a SATA disk normally with its internal write cache enabled. Anyway, if you have a UPS which is configured to gracefully shut your machines down on power loss, it should have approximately the same effect as having a BBU.
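For what it's worth, "configured to gracefully shut down" can be as simple as the usual UPS monitoring daemon thresholds. A rough sketch, assuming apcupsd with an APC UPS on USB (the values are only illustrative, adjust them for your hardware), in /etc/apcupsd/apcupsd.conf:

    UPSCABLE usb
    UPSTYPE usb
    DEVICE
    # Shut the machine down when the battery hits 10%, when 5 minutes of
    # runtime remain, or after 20 minutes on battery, whichever comes first.
    BATTERYLEVEL 10
    MINUTES 5
    TIMEOUT 1200

apcupsd then runs its shutdown script when any of those thresholds is reached, so the box goes down cleanly before the battery is exhausted.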


Or, to put it another way: I have good and regularly tested backups; the small risk of losing a bit of data on writes is more than offset by the massive speed benefit that no-disk-barrier and no-disk-flushes give me.
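For reference, that's just two options in the disk section of drbd.conf - something like the following, assuming DRBD 8.3-style syntax (the resource name is only an example):

    resource r0 {
      disk {
        no-disk-barrier;    # don't send write barriers to the backing device
        no-disk-flushes;    # don't issue disk flushes either
      }
    }

I believe DRBD 8.4 spells the same thing as "disk-barrier no;" and "disk-flushes no;", so check which version you're running before copying this.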


You can see my previous discussion on this here:

http://article.gmane.org/gmane.comp.linux.drbd/21997
http://article.gmane.org/gmane.comp.linux.drbd/22056

Phil
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
