Hello!

Adam, thanks for your answer.

I have now used a 100 Mbit switch to test this further, and indeed the performance went down to 12 MB/s.

I am now wondering how to understand the protocol option in DRBD. The
documentation clearly reads:
   Protocol A: write IO is reported as completed if it has reached the local
   disk and the local TCP send buffer.
I would assume this applies to every operation on the disk, both the initial
sync and normal write operations. But it seems DRBD doesn't behave as the
configuration suggests.

Maybe someone can explain this.

BR,
   Jasmin

**********************************************************************

On 12/08/2016 12:22 PM, Adam Goryachev wrote:


On 8/12/16 22:00, Jasmin J. wrote:
Hi!

I am using DRBD 8.4.

I want to limit the syncer speed to approx. 300 Mbit/s.

During the initial sync after creating the disk this worked as expected; the
parameter controlling this was "c-max-rate".
But during normal operation DRBD always seems to sync at the fastest speed
possible.

I also tried protocols A and C and didn't see significant differences in the
network bandwidth used.

This is my currently used setup:

        disk {
                # in units of 0.1 seconds
                # 0.1 s * 5 => 0.5 s
                c-plan-ahead 5;

                # in units of 0.1 seconds
                # 2 seconds max sync delay
                c-delay-target 20;

                # in Units of KiB/s
                # ca. 300MBit/s
                c-max-rate 35M;

                # in Units of KiB/s
                # 32MBit/s
                c-min-rate 4M;

                # will overrule c-delay-target, but seems to be better
                # concerning linearity of the used network bandwidth
                c-fill-target 18M;
        }
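For reference, the unit conversions implied by the comments above can be checked with a bit of shell arithmetic (a sketch; this assumes the "M" suffix means MiB/s, which is what the comments take it to be):

```shell
# c-max-rate 35M: 35 MiB/s expressed in decimal Mbit/s
echo $(( 35 * 1024 * 1024 * 8 / 1000000 ))   # -> 293, i.e. roughly 300 Mbit/s

# c-min-rate 4M: 4 MiB/s expressed in decimal Mbit/s
echo $(( 4 * 1024 * 1024 * 8 / 1000000 ))    # -> 33, i.e. roughly 32 Mbit/s
```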

        net {
                #protocol C;
                max-buffers 8000;
                max-epoch-size 8000;
                sndbuf-size 512k;
                cram-hmac-alg sha1;
                shared-secret "XXXXX";
                verify-alg md5;

                protocol A;
                on-congestion pull-ahead;
                congestion-fill 2G;
                congestion-extents 2000;
        }

I commented out "c-fill-target" -> no change
I also tried this (on both sides):
   drbdadm disk-options --c-plan-ahead=0 --c-max-rate=1M <resource>
 -> no change
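One way to check whether such runtime overrides actually took effect (a sketch; <resource> is the same placeholder as above):

```shell
# show the currently active options for the resource,
# including any runtime overrides set via "drbdadm disk-options"
drbdsetup show <resource>

# revert runtime overrides back to the values from the config file
drbdadm adjust <resource>
```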

I tested the network bandwidth used with the tool
   speedometer -s -r eth0 -t eth0
It always shows 940 Mbit/s on a 1 Gbit Ethernet link, which is the full
bandwidth of the link.

@Linbit:
Is it possible that the driver ignores these settings after the initial sync?

Any ideas?

I'm not an expert, but I am pretty sure that all those values only relate to
the initial sync or a re-sync (i.e., after a period of disconnected
primary/secondary).

To limit the bandwidth consumed while both nodes are online and in sync, you
will either need to limit the data being written to the primary, or limit
(through external means) the bandwidth between the nodes (e.g., Linux kernel
traffic shaping might work).
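Traffic shaping along those lines might look like this (an untested sketch, assuming eth0 is the replication link and 7789 is the DRBD port; adjust both to your setup):

```shell
# create an HTB root qdisc with a default class
tc qdisc add dev eth0 root handle 1: htb default 10

# unrestricted default class for everything else
tc class add dev eth0 parent 1: classid 1:10 htb rate 1000mbit

# class capped at ~300 Mbit/s for DRBD replication traffic
tc class add dev eth0 parent 1: classid 1:20 htb rate 300mbit ceil 300mbit

# steer traffic destined for the DRBD port into the capped class
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 7789 0xffff flowid 1:20
```

Note this only shapes egress on that node; the peer would need the same rule for traffic in the other direction.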

However, limiting the bandwidth will also limit the write performance of your
storage (which may or may not matter to you).

Regards,
Adam
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user

