Hi Oleksiy,

Here is my entire current config:

global { usage-count no; }
common { protocol C; }
resource r0 {
  disk {
    on-io-error detach;
    no-disk-flushes;
    no-disk-barrier;
    c-plan-ahead 0;
    c-fill-target 24M;
    c-min-rate 80M;
    c-max-rate 720M;
  }
  net {
    max-buffers 36k;
    sndbuf-size 1024k;
    rcvbuf-size 2048k;
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.200.1:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.200.2:7788;
    meta-disk internal;
  }
}
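
Is this the right way to confirm those options are actually live on both nodes? (My understanding is that drbdsetup show prints what the kernel module is really using.)

drbdadm dump r0      # the config as drbdadm parses it
drbdsetup show r0    # the options the kernel module is running with
drbdadm adjust r0    # re-apply the file if the two differ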

...and the speed is still dreadfully slow, even though the link can easily do 150 MB/s:

cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:46712 nr:0 dw:0 dr:47528 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:3898265768
    [>....................] sync'ed:  0.1% (3806900/3806944)M
    finish: 3867:19:37 speed: 264 (260) K/sec
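
One thing I noticed: with c-plan-ahead set to 0 the dynamic controller should be disabled, which (if I understand the docs) means my c-fill-target/c-min-rate/c-max-rate lines are ignored and the fixed resync-rate applies instead. Its default is 250K, suspiciously close to the 264 K/sec above. Would setting it explicitly at runtime be worth a try? Something like this, where 110M is just my guess at ~75% of the link:

drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M r0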

Anything else to try before giving up on DRBD?

Thanks,
Adam


On 11/10/18 07:45, Oleksiy Evin wrote:
You may try disabling the dynamic sync rate by setting "c-plan-ahead" to 0 and increasing "max-buffers". That's the only thing that got me a reasonable sync rate over a 100GigE connection.

net {
...
max-buffers 32K;
# max-epoch-size 18K;
}

disk {
...
c-plan-ahead 0;
}
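
If I remember correctly you don't need to restart anything after editing the file; just re-apply it on both nodes:

drbdadm adjust all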

You can find some more info here:
https://serverfault.com/questions/740311/drbd-terrible-sync-performance-on-10gige/740370


//OE

-----Original Message-----
*From*: Adam Weremczuk <[email protected]>
*To*: [email protected]
*Subject*: [DRBD-user] slow sync speed
*Date*: Wed, 10 Oct 2018 14:57:02 +0100

Hi all,
I'm trying out a DRBD Pacemaker HA cluster on Proxmox 5.2.
I have 2 identical servers connected with 2 x 1 Gbps links in bond_mode
balance-rr.
The bond is working fine; I get a transfer rate of 150 MB/s with scp.
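scp can be limited by the ssh cipher, so if it helps I can also measure raw
TCP throughput with iperf over the replication link, e.g.:
iperf -s                  # on node2
iperf -c 192.168.200.2    # on node1, against node2's replication address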
Following this guide:
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/
everything was going smoothly up until:
drbdadm -- --overwrite-data-of-peer primary r0/0
cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
   0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
      ns:10944 nr:0 dw:0 dr:10992 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:3898301536
      [>....................] sync'ed:  0.1% (3806932/3806944)M
      finish: 483:25:13 speed: 2,188 (2,188) K/sec
The transfer rate is horribly slow and at this pace it's going to take
20 days for two 4 TB volumes to sync!
That's almost 15 times slower compared with the guide video (8:30):
https://www.youtube.com/watch?v=WQGi8Nf0kVc
The volumes have been zeroed and contain no live data yet.
My sdb disks are logical drives (hardware RAID) set up as RAID50 with
the defaults:
Strip size: 128 KB
Access policy: RW
Read policy: Normal
Write policy: Write Back with BBU
IO policy: Direct
Drive Cache: Disable
Disable BGI: No
Performance looks good when tested with hdparm:
hdparm -tT /dev/sdb1
/dev/sdb1:
   Timing cached reads:   15056 MB in  1.99 seconds = 7550.46 MB/sec
   Timing buffered disk reads: 2100 MB in  3.00 seconds = 699.81 MB/sec
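hdparm only exercises reads, of course; since the volumes are still empty I
could also run a direct write test against the array (destructive, and only
possible while DRBD doesn't have the device attached), e.g.:
dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct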
It looks like a problem with the default DRBD settings.
Can anybody recommend optimal tweaks specific to my environment?
Regards,
Adam