listslut <listslut@...> writes:

>
> I didn't set this cluster up but that information was documented by the
> previous admin here:
>
> http://crunchtools.com/kvm-cluster-with-drbd-gfs2/
>
> It is now to the point where provisioning space for a new VM is a day
> long process with a load level that brings down the server. I did these
> tests this morning. If the formatting gets really bad on send: the
> non-drbd portion of the raid shows 214.79 MB/s writes, and the drbd
> portion of the raid shows 4.65 MB/s to start, dropping off to around
> 1.5 MB/s average.
>
> Iostat test on the non drbd portion of the raid.
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0      622.61         0.03       214.79          0        427
> drbd0           329.65         0.03         1.26          0          2
>
> Iostat test on the drbd portion of the raid.
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0       52.26         0.02         4.65          0          9
> drbd0          1182.41         0.02         4.65          0          9
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0      134.83         0.01         2.15          0          4
> drbd0           449.75         0.01         1.75          0          3
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0      105.00         0.00         1.60          0          3
> drbd0           407.00         0.00         1.59          0          3
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0       90.95         0.00         1.54          0          3
> drbd0           394.97         0.00         1.54          0          3
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> cciss/c0d0      114.93         0.00         1.68          0          3
> drbd0           430.35         0.00         1.68          0          3
>
> Clustering was broken after an upgrade before I got here.  I upgraded
> both systems to the latest RHEL 5 about a month ago.    The drbd was
> compiled locally and is version drbd-8.2.6.
>
> Thank You
> Ken
>

(Re-sending since my prior attempt via gmane doesn't appear to have worked;
apologies if this is duplicated.)

Hi,

I'm seeing similar problems with massive underperformance on writes on my
system. I'm running locally-compiled DRBD [version: 8.3.7 (api:88/proto:86-91),
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917] on RHEL 5. Read performance
is fine, but I'm getting write throughput of about 4 MB/s, with latency around
13 ms (as measured with a script similar to the one in the user's guide).
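For anyone wanting to compare numbers, this is roughly the kind of test I mean,
as a minimal sketch in the spirit of the user's guide scripts. The target below
is a scratch file so it's safe to run anywhere; for the real measurement you'd
point it at the DRBD device (e.g. /dev/drbd0) on a throwaway resource, which
will overwrite data there:

```shell
# Minimal write-throughput/latency sketch (assumes GNU dd on Linux).
# TARGET is a scratch file here; substitute your DRBD device for a real test.
TARGET=$(mktemp /tmp/drbd-bench.XXXXXX)

# Throughput: stream 64 MiB, fsync at the end so buffered writes are counted.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

# Latency: 100 small synchronous writes; elapsed time / 100 ~= per-write latency.
# conv=notrunc so the file written above isn't truncated.
dd if=/dev/zero of="$TARGET" bs=512 count=100 oflag=dsync conv=notrunc
```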

I haven't yet tried tuning various parameters in drbd.conf as described in
the available performance-optimization guides, but it's so slow that I have
to think there's some more fundamental problem at play here than a lack of
tuning (i.e., the defaults shouldn't be *that* bad).
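For concreteness, the kind of drbd.conf tuning those guides describe looks
roughly like the sketch below. The values are illustrative assumptions, not
recommendations, and the barrier/flush options are only safe on controllers
with a battery-backed write cache:

```
# Sketch of DRBD 8.3-style tuning knobs from the performance guides.
# Values are illustrative; verify against your controller and cache setup.
resource r0 {
  net {
    max-buffers     8000;   # more in-flight buffers on the receiving side
    max-epoch-size  8000;
    sndbuf-size     512k;   # larger TCP send buffer for the replication link
  }
  syncer {
    al-extents      3389;   # bigger activity log, fewer metadata updates
  }
  disk {
    no-disk-barrier;        # ONLY with battery-backed write cache (BBWC)
    no-disk-flushes;
  }
}
```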

And for what it's worth, I've ruled out the network link as a possible
bottleneck -- it's giving me 1.97 Gbps throughput in both directions
according to iperf (a bonded pair of back-to-back GbE ports, 9k MTU).
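As a quick back-of-envelope check on that:

```shell
# Convert the iperf figure to MB/s: 1.97 Gbps / 8 bits per byte
# (decimal megabytes, as iperf reports).
awk 'BEGIN { printf "%.0f MB/s\n", 1.97 * 1000 / 8 }'   # prints "246 MB/s"
```

so the link has roughly two orders of magnitude more headroom than the
~4 MB/s of writes I'm actually seeing.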

Anyone have any suggestions or advice?


Thanks,
Zev

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
