On April 7, 2020 12:21:50 AM GMT+03:00, Sherrard Burton wrote:
>
>
>On 4/6/20 4:10 PM, Andrei Borzenkov wrote:
>> 06.04.2020 20:57, Sherrard Burton wrote:
>>>
>>>
>>> On 4/6/20 1:20 PM, Sherrard Burton wrote:
>>>>
>>>>
>>>> On 4/6/20 12:35 PM, Andrei Borzenkov wrote:
>>>>> 06.04.2020 17:05, Sherrard Burton wrote:
>>>>>> ...or at least that's what i think is happening :-)
>>>>>>
>>>>>> two-node cluster, plus quorum-only node. testing the behavior when the
>>>>>> active node is gracefully rebooted. all seems well initially. resources
>>>>>> are migrated, come up and function as expected.
>>>>>> but, when the rebooted node starts to come back up, ...
>>>>
>>>> from the quorum node:
>>>> ...
>>>> Apr 05 23:10:17 debug Client :::192.168.250.50:54462 (cluster
>>>> xen-nfs01_xen-nfs02, node_id 1) sent quorum node list.
>>>> Apr 05 23:10:17 debug msg seq num = 6
>>>> Apr 05 ...
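For context: a quorum-only third node in a two-node pacemaker cluster is typically provided by corosync-qdevice on the cluster nodes, with corosync-qnetd running on the third host (the qnetd debug log above matches that setup). A minimal sketch of the quorum section of corosync.conf on the two cluster nodes, assuming qdevice is in use; the host name below is a placeholder:

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: quorum-host   # placeholder for the quorum-only node
            algorithm: ffsplit  # resolves the 50/50 split in a two-node cluster
        }
    }
}
```

With qdevice supplying the tie-breaking vote, the two_node and wait_for_all workarounds are normally not needed.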
Good Day All!
I am trying to set up a new gfs2 cluster on CentOS 7 using the included
HA/pacemaker. Previously we did this on CentOS 6 using cman.
We have a 7-node cluster that we need to share gfs2 iSCSI SAN mounts
between. I set up two test mounts, but when I fence or reboot a node
the mounts do ...
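On CentOS 7, gfs2 under pacemaker needs a cloned dlm resource running on every node before the filesystem can be mounted, with the mount itself managed as a cloned Filesystem resource. A sketch of the usual pcs setup; the resource names, device path, and mount point are placeholders:

```shell
# dlm must be running on a node before gfs2 can be mounted there
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true

# the gfs2 mount, cloned so that every node mounts it
pcs resource create sharedfs ocf:heartbeat:Filesystem \
    device=/dev/disk/by-id/example-lun directory=/mnt/gfs2 fstype=gfs2 \
    options=noatime op monitor interval=10s on-fail=fence \
    clone interleave=true

# order and colocate: dlm first, filesystem only where dlm runs
pcs constraint order start dlm-clone then sharedfs-clone
pcs constraint colocation add sharedfs-clone with dlm-clone
```

Working fencing is a hard requirement here: dlm blocks all lock activity until a failed node has been fenced, which is the usual reason mounts hang cluster-wide after a fence or reboot.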
> Note: When changing parameters the cluster will restart the resources, so
> keep that in mind.
and that's the problem: targetcli supports changes on the fly, but pacemaker
restarts all resources instead of only the one that was changed
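Whether pacemaker can apply a parameter change in place depends on the resource agent, not on targetcli: only parameters the agent declares as reloadable (historically, non-unique parameters on agents that advertise a reload action) are applied without a restart; anything else forces a stop/start. One way to inspect what a given agent declares, where the agent name is only an example:

```shell
# dump the agent metadata; parameters marked unique="1" (or without
# reload support) will trigger a full restart when changed
crm_resource --show-metadata ocf:heartbeat:iSCSITarget
```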
On Saturday, April 4, 2020 7:51:49 AM CEST Strahil Nikolov wrote: