Lars,

I didn't want to hijack a different thread, so I started a new one.  I'd like 
to better understand why you said in another thread today not to use 
no-disk-drain (quote):

> Will people please STOP using no-disk-drain.  On most hardware, it does
> not provide measurable performance gain, but may risk data integrity
> because of potential violation of write-after-write dependencies!

I also found you said in another older thread:

>Do not use "no-disk-drain", unless you are absolutely sure that nothing 
>in the IO stack below DRBD could possibly reorder writes. 

Unfortunately I'm not that knowledgeable on storage, so this may be an 
ignorant question, but...

The man page for drbd.conf (version 8.3.7) states that "none" is the best 
performance choice when using a storage device with BBWC.  The way I read it, 
none would be the same as setting no-disk-barrier, no-disk-flushes, and 
no-disk-drain together.  I didn't see any mention there of write-ordering 
considerations, only of performance and the volatility of the storage's 
cache.  That led me to use no-disk-drain and to think everything was fine :-) 
(plus I wouldn't know whether something in the stack below DRBD could or 
would reorder writes without a lot of Googling).
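For context, this is roughly what my disk section looks like (a simplified 
sketch of my own setup; the resource name is made up):

```
resource r0 {
  disk {
    # On a RAID controller with BBWC I had assumed these three were
    # safe together (effectively the same as "none"):
    no-disk-barrier;
    no-disk-flushes;
    no-disk-drain;   # the option in question
  }
}
```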

Excerpt:

       no-disk-barrier, no-disk-flushes, no-disk-drain
           DRBD has four implementations to express write-after-write
           dependencies to its backing storage device. DRBD will use the
           first method that is supported by the backing storage device and
           that is not disabled by the user.

           When selecting the method you should not only base your decision
           on the measurable performance. In case your backing storage
           device has a volatile write cache (plain disks, RAID of plain
           disks) you should use one of the first two. In case your backing
           storage device has a battery-backed write cache you may go with
           option 3 or 4. Option 4 will deliver the best performance on
           such devices.

           Unfortunately device mapper (LVM) does not support barriers.

           The letter after "wo:" in /proc/drbd indicates which method is
           currently in use for a device: b, f, d, n. The implementations:

           barrier
               The first requires that the driver of the backing storage
               device support barriers (called 'tagged command queuing' in
               SCSI and 'native command queuing' in SATA speak). The use of
               this method can be disabled by the no-disk-barrier option.

           flush
               The second requires that the backing device support disk
               flushes (called 'force unit access' in drive vendors'
               speak). The use of this method can be disabled using the
               no-disk-flushes option.

           drain
               The third method is simply to let write requests drain
               before write requests of a new reordering domain are issued.
               This was the only implementation before 8.0.9. You can
               prevent the use of this method by using the no-disk-drain
               option.

           none
               The fourth method is to not express write-after-write
               dependencies to the backing store at all.

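Since the excerpt mentions checking the letter after "wo:" in /proc/drbd, 
this is a small sketch of how I check which method is active (the sample 
status line below is fabricated for illustration; on a real node you would 
grep /proc/drbd instead):

```shell
# Pull the write-ordering letter (b, f, d, or n) out of a /proc/drbd
# status line. The sample line is made up; on a live node use:
#   line=$(grep 'wo:' /proc/drbd)
line="    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0"
wo=$(echo "$line" | sed -n 's/.*wo:\([bfdn]\).*/\1/p')
echo "write ordering method: $wo"
```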

Jake 
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
