Hi,
Dan Frincu wrote:
> Hi,
> You have to realize how many abstraction layers you have going on there.
> It goes something like this: disk, software RAID on top of disk, LVM
> physical volumes on top of software RAID, volume group on top of
> physical volume, logical volume on top of volume group, DRBD on top of
> logical volume. Does it sound like a lot?
Well, of course I realize how many abstraction layers there are. But my
tests compare many layers vs. the same many layers + DRBD, and it is
only with that last layer (DRBD) that performance drops so drastically.
So in my opinion it's correct to look for the problem in DRBD here.
> Because it is, and this is
> just one node. Now put into play the secondary node and a network
> connection to get your traffic there, and then go back down the chain to
> the disk for a write. I'm surprised you got up to 80MB/s, but then
> again, the CPU is compensating for a lot here.
> On-point, what protocol are you using for the setup? For instance, if
> you're using protocol C, then it has to wait for a lot of things to
> happen before the write operation is committed; it is the safest
> protocol, but also the slowest, from what I know. Changing the protocol
> _might_ help, but the main thing is the layer overload you've got going on
> there. And to add to that, what happens when a process requires CPU
> time? What about 100 processes?
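For reference, the protocol is set per resource in drbd.conf. A minimal sketch (resource name and device paths are examples, not from this thread; syntax as in DRBD 8.3-style configs):

```
# Hypothetical resource definition -- names/devices are illustrative.
# Protocol A acks a write once it reaches the local disk and the TCP
# send buffer; B waits for the peer's buffer cache; C waits for the
# peer's disk. In standalone mode there is no peer to wait on at all.
resource r0 {
  protocol A;              # A = async, B = semi-sync, C = fully synchronous
  device    /dev/drbd0;
  disk      /dev/vg0/lv_data;
  meta-disk internal;
}
```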
As I mentioned, I tested DRBD in _standalone_ mode - without a network
connection - so I don't need to take any network-related latencies into
account.
The only thing added on top of LVM (and all those other layers) here is
the DRBD layer, which doesn't need to wait on the network and (in my
opinion) should pass requests to the lower layers as quickly as possible.
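The kind of standalone comparison described can be sketched with dd. The real test would target the block devices themselves (e.g. the backing LV and /dev/drbd0 - names assumed, not from this thread); scratch files stand in here so the sketch runs anywhere:

```shell
# Sequential-write throughput, with conv=fdatasync so dd reports a rate
# that includes flushing to stable storage rather than just the page cache.
# Baseline: write through the plain stack (stand-in: a scratch file).
dd if=/dev/zero of=baseline.img bs=1M count=64 conv=fdatasync
# Same write through the standalone DRBD device (stand-in: a scratch file).
dd if=/dev/zero of=drbd.img bs=1M count=64 conv=fdatasync
rm -f baseline.img drbd.img
```

On real block devices, oflag=direct is also worth adding to bypass the page cache entirely; comparing the two rates isolates the overhead of the DRBD layer itself.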
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user