Hi,

I've got a third setup running with DRBD 9 right now, and I'm surprised that performance in this new version of DRBD is so slow. I see a lot of traffic going on, and then suddenly the traffic slows down. drbdadm status then shows "congested:yes" on the secondary and "blocked:lower" on the primary device. The network is a gigabit network, and with the same servers and DRBD 8.3 I was able to get at least 80 MB/s out of it. While the status looks like this, throughput is near 2 MB/s for around 2-5 minutes, then it speeds up for about 20 seconds and slows down again.
I also tested it with a crosslink connection.

The setup is quite standard and nothing has been tweaked yet.

I couldn't find the bottleneck so far.
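
In case it helps, this is roughly what I was planning to try next in the common net section. The option names are the ones already listed in the comments of my global_common.conf below, but the values are only guesses on my side, nothing I have measured or verified yet:

        net {
                protocol C;
                # untested guesses: larger request buffers and a fixed TCP send buffer for GbE
                max-buffers     8000;
                max-epoch-size  8000;
                sndbuf-size     512k;
        }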

global_common.conf:
global {
        #usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
        # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}

common {
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when choosing your poison.

                # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                wfc-timeout 20;
                degr-wfc-timeout 20;
                # outdated-wfc-timeout wait-after-sb
        }

        options {
                # cpu-mask on-no-data-accessible
        }

        disk {
                # size on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
        }

        net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
                protocol C;
        }
}
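
In case the slow phase is actually the resync and not normal replication, I also noted down a disk section with the dynamic resync controller options that are listed in the comments above. Again, the numbers are only placeholders I have not tried yet:

        disk {
                # untested placeholders for the dynamic resync controller
                c-plan-ahead    20;
                c-fill-target   1M;
                c-max-rate      100M;
                c-min-rate      4M;
        }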


r0.res:
resource r0 {
      net {
              cram-hmac-alg sha1;
              shared-secret "Cwzvaerfyficjdh6";
      }
      volume 0 {
              device    /dev/drbd0;
              disk      /dev/sda3;
              meta-disk internal;
      }
      on node1 {
              node-id   0;
              address   10.0.0.1:7000;
      }
      on node2 {
              node-id   1;
              address   10.0.0.2:7000;
      }
      connection {
              host      node1 port 7000;
              host      node2 port 7000;
              net {
                        protocol C;
              }
      }
}
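
If it helps to reproduce this, I have been watching the state with:

drbdadm status r0

As far as I know, drbdsetup status r0 --verbose --statistics should show more detail (sent/received counters and so on), but I haven't looked into those numbers in depth yet.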


It would be nice if anyone could help me with this issue.

kind regards