On Tue, Dec 18, 2012 at 10:58 AM, Tom Fernandes <[email protected]> wrote:
> ------------------------------- DRBD -----------------------------------------
> tom@hydra04 [1526]:~$ sudo drbdadm dump
> # /etc/drbd.conf
> common {
>     protocol               C;
>     syncer {
>         rate             150M;
>     }
> }
>
> # resource leela on hydra04: not ignored, not stacked
> resource leela {
>     on hydra04 {
>         device           minor 0;
>         disk             /dev/vg0/leela;
>         address          ipv4 10.0.0.1:7788;
>         meta-disk        internal;
>     }
>     on hydra05 {
>         device           minor 0;
>         disk             /dev/vg0/leela;
>         address          ipv4 10.0.0.2:7788;
>         meta-disk        internal;
>     }
> }

If that configuration is indeed "similar" to the one on the other
cluster (the one where you're apparently writing to DRBD at 200 MB/s),
I'd be duly surprised. In fact, I'd consider it quite unlikely for _any_
DRBD 8.3 cluster to hit that throughput unless you tweaked at least
al-extents, max-buffers and max-epoch-size, and possibly also
sndbuf-size and rcvbuf-size, and set no-disk-flushes and no-md-flushes
(the latter two being safe only if you run on flash or a battery-backed
write cache).
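For reference, a DRBD 8.3 common section with those knobs set might look
roughly like this; the values are illustrative starting points, not tuned
recommendations, so adjust them to your hardware:

    common {
        protocol               C;
        syncer {
            rate               150M;
            al-extents         3389;   # 8.3 default is 127
        }
        net {
            max-buffers        8000;
            max-epoch-size     8000;
            sndbuf-size        512k;
            rcvbuf-size        512k;
        }
        disk {
            no-disk-flushes;   # ONLY with flash or battery-backed cache
            no-md-flushes;     # ditto
        }
    }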

So I'd suggest that you refer back to your "fast" cluster and see if
perhaps you forgot to copy over your /etc/drbd.d/global_common.conf.
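An easy way to check is to compare the effective configuration on both
clusters, for instance (file paths here are just examples):

    # on a node in the "fast" cluster
    sudo drbdadm dump > /tmp/drbd-fast.conf
    # on hydra04
    sudo drbdadm dump > /tmp/drbd-slow.conf
    # copy one file over, then diff them
    diff /tmp/drbd-fast.conf /tmp/drbd-slow.conf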

You may also need to switch your I/O scheduler from cfq to deadline on
your backing devices, if you haven't already done so. And finally, for
a round-robin bonded network link, upping the net.ipv4.tcp_reordering
sysctl to perhaps 30 or so would also be wise.
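Something along these lines should do, with sdX standing in for your
actual backing device:

    # switch the I/O scheduler (takes effect immediately,
    # but does not persist across reboots)
    echo deadline | sudo tee /sys/block/sdX/queue/scheduler

    # raise the TCP reordering threshold now...
    sudo sysctl -w net.ipv4.tcp_reordering=30
    # ...and make it stick across reboots
    echo "net.ipv4.tcp_reordering = 30" | sudo tee -a /etc/sysctl.conf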

Cheers,
Florian

-- 
Need help with High Availability?
http://www.hastexo.com/now