I set up a 2-node NFS cluster on DRBD 8.4 across two sites connected by a
10 Gb link. I ran an MQ benchmark and got a result of 100 units, which is
poor. When I disable DRBD synchronization I get 200 units, which is good.
That is twice as fast. Is it normal for DRBD to slow things down that much?
Thank you
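
For context: with DRBD's default protocol C, a write is not acknowledged
until it has reached the peer's disk, so every write pays a full round
trip over the inter-site link; on a latency-bound benchmark a roughly 2x
drop is plausible. A minimal sketch of the options usually examined first
(the resource name r0 and all values here are illustrative, not measured
recommendations):

    resource r0 {
      net {
        protocol C;        # fully synchronous; A or B trade durability for latency
        max-buffers 8000;  # allow more in-flight requests on a fast link
        sndbuf-size 0;     # 0 = let the kernel autotune the TCP send buffer
      }
      disk {
        al-extents 3389;   # larger activity log, fewer metadata updates
      }
    }

Whether protocol A or B is acceptable depends on how much data you can
afford to lose on a site failure; measuring the inter-site round-trip
time (e.g. with ping) gives the latency floor that no tuning can remove.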
___
I think the resync progress in drbd9 is visible in drbdadm status.
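For example, during a resync the output looks roughly like this (the
resource name r0 and peer name centos2 are placeholders):

    # drbdadm status r0
    r0 role:Primary
      disk:UpToDate
      centos2 role:Secondary
        replication:SyncSource peer-disk:Inconsistent done:42.00

drbdsetup events2 --now r0 gives the same information in a one-shot,
machine-readable form.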
2017-07-25 20:22 GMT+02:00 Eric Robinson :
> http://docs.linbit.com/docs/users-guide-9.0/p-work/
>
> 5.2.2. Status information in /proc/drbd
>
> '/proc/drbd' is deprecated. While it won’t be removed in the 8.4
> series, we recommend to switch to other means, like Section 5.2.3,
> “Status information via drbdadm”; or, for monitoring even more
> convenient …
___
Lars Ellenberg :
> On Sun, Jul 16, 2017 at 11:14:12PM +0200, ArekW wrote:
>> Hi,
>> On a 2-node cluster, when I do a failover test I get messages in the
>> logs on the healthy node:
>>
>> crm-fence-peer.sh[32094]: WARNING could not
>> determine my dis
___
Hi,
On a 2-node cluster, when I do a failover test I get messages in the
logs on the healthy node:

Jul 16 22:55:39 centos2 kernel: drbd storage centos1: fence-peer
helper broken, returned 1
Jul 16 22:55:39 centos2 kernel: drbd storage: State change failed:
Refusing to be Primary while peer is not outdated
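
For context: "fence-peer helper broken, returned 1" means DRBD invoked
the fence-peer handler but the script exited with a code DRBD does not
recognize as a valid fencing result, so it refuses the promotion. The
usual Pacemaker hookup for an 8.4 resource looks roughly like this (the
resource name storage matches the logs above; the handler paths are the
ones shipped with drbd-utils):

    resource storage {
      disk {
        fencing resource-only;  # or resource-and-stonith with working STONITH
      }
      handlers {
        # set/clear a Pacemaker location constraint around failover
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }

A WARNING from crm-fence-peer.sh about not being able to determine its
own state usually points at Pacemaker not running on that node, the DRBD
resource not being managed by the cluster, or a hostname mismatch between
DRBD and the CIB.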