On 01/15/2015 08:33 PM, Muhammad Sharfuddin wrote:
> I have to put this two-node active/passive cluster into production very soon, and I have tested that resource migration
> works perfectly when the node running the resource goes down (abruptly/forcefully).
>
> I have always read and heard that the msgwait and watchdog timeouts should be increased when the SBD device is on a multipath disk, but in my case
> I simply created the device via
>     sbd -d /dev/mapper/mpathe create
>
> and I have following resource for sbd
>     primitive sbd_stonith stonith:external/sbd \
>             op monitor interval="3000" timeout="120" start-delay="21" \
>             op start interval="0" timeout="120" \
>             op stop interval="0" timeout="120" \
>             params sbd_device="/dev/mapper/mpathe"
>
> As of now I am quite satisfied, but should I increase the msgwait and watchdog timeouts?
>
> Also, I am using start-delay="21" on the monitor op; should I also use start-delay="11" on the start op?
>
> Please recommend
>
Oh I forgot to mention:

cat /etc/sysconfig/sbd
SBD_DEVICE="/dev/mapper/mpathe"
SBD_OPTS="-W"
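
For reference, the -W in SBD_OPTS tells sbd to use the kernel watchdog. A quick
check that a watchdog device and module are actually present on each node could
look like this (softdog is only an example here; a hardware watchdog module is
preferable where available):

    ls -l /dev/watchdog
    lsmod | grep -E "(wd|dog)"
    modprobe softdog    # only as a fallback if no hardware watchdog exists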

sbd -d /dev/mapper/mpathe dump
==Dumping header on disk /dev/mapper/mpathe
Header version     : 2.1
UUID               : 505dc5b5-5da0-463e-a4fa-1ce55384542a
Number of slots    : 255
Sector size        : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop)     : 1
Timeout (msgwait)  : 10
==Header on disk /dev/mapper/mpathe is dumped
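
The watchdog/msgwait values above are the defaults (5/10). If they do need to be
raised for the multipath device, my understanding is that the procedure would
roughly be the following; the numbers are only illustrative, not values I have
applied:

    # with the cluster stack stopped on both nodes, re-create the header
    # (-1 sets the watchdog timeout, -4 sets msgwait):
    sbd -d /dev/mapper/mpathe -4 120 -1 60 create
    sbd -d /dev/mapper/mpathe dump

    # as I understand it, msgwait should be at least twice the watchdog
    # timeout, and stonith-timeout in the cluster should exceed msgwait, e.g.:
    crm configure property stonith-timeout=144s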

sbd -d /dev/mapper/mpathe list
0       node2 clear
1       node1 clear
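
Both slots read "clear". As an extra sanity check (assuming I have understood
the tooling correctly), a test message can be written through the disk; the sbd
daemon on the target node should simply log that it received it:

    sbd -d /dev/mapper/mpathe message node1 test
    # then check syslog on node1 for the received test message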

--
Regards,

Muhammad Sharfuddin