Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Antony Stone
On Thursday 08 April 2021 at 21:24:02, Jason Long wrote:

> Thanks.
> Thus, my cluster uses Node1 when Node2 is down?

Judging from your previous emails, you have a two node cluster.

What else is it going to use?


Antony.

-- 
Anything that improbable is effectively impossible.

 - Murray Gell-Mann, Nobel Prizewinner in Physics

   Please reply to the list;
 please *don't* CC me.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Jason Long
Hello,
I stopped node1 manually as below:

[root@node1 ~]# pcs cluster stop node1
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...
[root@node1 ~]#
[root@node1 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  Could not connect to the CIB: Transport endpoint is not connected
  crm_mon: Error: cluster is not available on this node

And after it, I checked my cluster logs: https://paste.ubuntu.com/p/9KNyt5nB79/
Then, I started node1:

[root@node1 ~]# pcs cluster start node1
node1: Starting Cluster...
[root@node1 ~]#

And checked my cluster logs again: https://paste.ubuntu.com/p/GQFbdhMvV3/
The status of my cluster is:

# pcs status 
Cluster name: mycluster
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.5-10.fc33-ba59be7122) - partition with quorum
  * Last updated: Thu Apr  8 18:33:49 2021
  * Last change:  Thu Apr  8 17:18:49 2021 by root via cibadmin on node1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ node1 node2 ]


Full List of Resources:
  * Resource Group: apache:
    * httpd_fs(ocf::heartbeat:Filesystem): Started node2
    * httpd_vip(ocf::heartbeat:IPaddr2): Started node2
    * httpd_ser(ocf::heartbeat:apache): Started node2


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

But I can't browse to the Apache web server. Why?

Thanks.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Jason Long
Why, when node1 is back, is the web server still on node2? Why hasn't it switched back?






On Thursday, April 8, 2021, 06:49:38 PM GMT+4:30, Ken Gaillot 
 wrote: 





On Thu, 2021-04-08 at 14:14 +, Jason Long wrote:

> Hello,
> I stopped node1 manually as below:
> 
> [root@node1 ~]# pcs cluster stop node1
> node1: Stopping Cluster (pacemaker)...
> node1: Stopping Cluster (corosync)...
> [root@node1 ~]#
> [root@node1 ~]# pcs status
> Error: error running crm_mon, is pacemaker running?
>  Could not connect to the CIB: Transport endpoint is not connected
>  crm_mon: Error: cluster is not available on this node
> 
> And after it, I checked my cluster logs: 
> https://paste.ubuntu.com/p/9KNyt5nB79/
> Then, I started node1:
> 
> [root@node1 ~]# pcs cluster start node1
> node1: Starting Cluster...
> [root@node1 ~]#
> 
> And checked my cluster logs again: 
> https://paste.ubuntu.com/p/GQFbdhMvV3/
> The status of my cluster is:
> 
> # pcs status 
> Cluster name: mycluster
> Cluster Summary:
>  * Stack: corosync
>  * Current DC: node2 (version 2.0.5-10.fc33-ba59be7122) - partition
> with quorum
>  * Last updated: Thu Apr  8 18:33:49 2021
>  * Last change:  Thu Apr  8 17:18:49 2021 by root via cibadmin on
> node1
>  * 2 nodes configured
>  * 3 resource instances configured
> 
> 
> Node List:
>  * Online: [ node1 node2 ]
> 
> 
> Full List of Resources:
>  * Resource Group: apache:
>    * httpd_fs    (ocf::heartbeat:Filesystem):    Started node2
>    * httpd_vip    (ocf::heartbeat:IPaddr2):    Started node2
>    * httpd_ser    (ocf::heartbeat:apache):    Started node2
> 
> 
> Daemon Status:
>  corosync: active/enabled
>  pacemaker: active/enabled
>  pcsd: active/enabled
> 
> But, I can't browse the Apache web server? Why?
> 
> Thanks.


Based on the above status output, the web server is running on node2,
using the IP address specified by the httpd_vip resource. Are you
trying to contact the web server at a name corresponding to that IP?
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Antony Stone
On Thursday 08 April 2021 at 21:33:48, Jason Long wrote:

> Yes, I just wanted to know. In clustering, when a node is down and
> go online again, then the cluster will not use it again until another node
> fails. Am I right?

In general, yes - unless you have specified a location constraint for the resources; 
however, as already discussed, that is unusual and doesn't apply by default.


Antony.

-- 
A user interface is like a joke.
If you have to explain it, it means it doesn't work.

   Please reply to the list;
 please *don't* CC me.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Antony Stone
On Thursday 08 April 2021 at 21:33:48, Jason Long wrote:

> Yes, I just wanted to know. In clustering, when a node is down and
> go online again, then the cluster will not use it again until another node
> fails. Am I right?

Think of it like this:

You can have as many nodes in your cluster as you think you need, and I'm 
going to assume that you only need the resources running on one node at any 
given time.

Cluster management (eg: corosync / pacemaker) will ensure that the resources 
are running on *a* node.

The resources will be moved *away* from that node if they can't run there any 
more, for some reason (the node going down is a good reason).

However, there is almost never any concept of the resources being moved *to* a 
(specific) node.  If they get moved away from one node, then obviously they 
need to be moved to another one, but the move happens because the resources 
have to be moved *away* from the first node, not because the cluster thinks 
they need to be moved *to* the second node.

So, if a node is running its resources quite happily, it doesn't matter what 
happens to all the other nodes (provided quorum remains); the resources will 
stay running on that same node all the time.
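
If you want to make that stay-put behaviour explicit, resource stickiness is the 
usual knob (a sketch only; the exact pcs syntax varies a little between versions):

    pcs resource defaults resource-stickiness=100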


Antony.

-- 
What is brown, lies in the grass, and smokes?
A little chimney...

   Please reply to the list;
 please *don't* CC me.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] [Problem] In RHEL8.4beta, pgsql resource control fails.

2021-04-08 Thread renayama19661014
Hi Ken,
Hi All,

The pgsql resource agent executes crm_mon during its demote and stop operations 
and processes the result.

However, with the Pacemaker included in RHEL 8.4 beta, this crm_mon invocation fails.
 - The problem also occurs on the github master branch 
(c40e18f085fad9ef1d9d79f671ed8a69eb3e753f).

The problem can easily be reproduced as follows.

Step1. Modify the Dummy resource agent to execute crm_mon in its stop operation.


dummy_stop() {
    # Run crm_mon once and log its exit code and output (added for this test)
    mon=$(crm_mon -1)
    ret=$?
    ocf_log info "### YAMAUCHI  crm_mon[${ret}] : ${mon}"
    # Original Dummy stop logic: remove the state file if the resource is running
    dummy_monitor
    if [ $? = $OCF_SUCCESS ]; then
        rm ${OCF_RESKEY_state}
    fi
    return $OCF_SUCCESS
}


Step2. Configure a cluster with two nodes.


[root@rh84-beta01 ~]# crm_mon -rfA1
Cluster Summary:
  * Stack: corosync
  * Current DC: rh84-beta01 (version 2.0.5-8.el8-ba59be7122) - partition with 
quorum
  * Last updated: Thu Apr  8 18:00:52 2021
  * Last change:  Thu Apr  8 18:00:38 2021 by root via cibadmin on rh84-beta01
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ rh84-beta01 rh84-beta02 ]

Full List of Resources:
  * dummy-1     (ocf::heartbeat:Dummy):  Started rh84-beta01

Migration Summary:


Step3. Stop the node where the Dummy resource is running. The resource will 
fail over.

[root@rh84-beta02 ~]# crm_mon -rfA1
Cluster Summary:
  * Stack: corosync
  * Current DC: rh84-beta02 (version 2.0.5-8.el8-ba59be7122) - partition with 
quorum
  * Last updated: Thu Apr  8 18:08:56 2021
  * Last change:  Thu Apr  8 18:05:08 2021 by root via cibadmin on rh84-beta01
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ rh84-beta02 ]
  * OFFLINE: [ rh84-beta01 ]

Full List of Resources:
  * dummy-1     (ocf::heartbeat:Dummy):  Started rh84-beta02


However, if you look at the log, you can see that the execution of crm_mon in 
the stop processing of the Dummy resource has failed.


Apr 08 18:05:17  Dummy(dummy-1)[2631]:    INFO: ### YAMAUCHI  crm_mon[102] 
: Pacemaker daemons shutting down ...
Apr 08 18:05:17 rh84-beta01 pacemaker-execd     [2219] (log_op_output)  notice: 
dummy-1_stop_0[2631] error output [ crm_mon: Error: cluster is not available on 
this node ]


Similarly, pgsql executes crm_mon during demote and stop, so its control fails as well.
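
For illustration only, a hypothetical defensive pattern inside a stop operation 
(not code from the actual pgsql agent, and not proposed here as a fix) that 
checks crm_mon's exit status before trusting its output:

    out=$(crm_mon -1 2>&1)
    rc=$?
    if [ $rc -ne 0 ]; then
        # e.g. rc=102 ("Pacemaker daemons shutting down") on the affected builds
        ocf_log warn "crm_mon unavailable (rc=${rc}): ${out}"
        return $OCF_SUCCESS    # hypothetical choice: skip the check rather than fail the stop
    fi
    # only parse ${out} when crm_mon actually succeeded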

The problem seems to be related to the following fix:
 * Report pacemakerd in state waiting for sbd
  - https://github.com/ClusterLabs/pacemaker/pull/2278

The problem does not occur with the release version of Pacemaker 2.0.5 or the 
Pacemaker included with RHEL8.3.

This issue has a huge impact on users.

Perhaps it also affects the control of other resources that utilize crm_mon.

Please ensure that the release version of RHEL 8.4 includes a Pacemaker build 
that does not have this problem.
 * Distributions other than RHEL may also be affected in future releases.


This content is the same as the following Bugzilla.
 - https://bugs.clusterlabs.org/show_bug.cgi?id=5471


Best Regards,
Hideo Yamauchi.

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Jason Long
Thanks.
Thus, my cluster uses Node1 when Node2 is down?






On Thursday, April 8, 2021, 07:32:14 PM GMT+4:30, Antony Stone 
 wrote: 





On Thursday 08 April 2021 at 16:55:47, Ken Gaillot wrote:

> On Thu, 2021-04-08 at 14:32 +, Jason Long wrote:
> > Why, when node1 is back, then web server still on node2? Why not
> > switched?
> 
> By default, there are no preferences as to where a resource should run.
> The cluster is free to move or leave resources as needed.
> 
> If you want a resource to prefer a particular node, you can use
> location constraints to express that. However there is rarely a need to
> do so; in most clusters, nodes are equally interchangeable.

I would add that it is generally preferable, in fact, to leave a resource 
where it is unless there's a good reason to move it.

Imagine for example that you have three nodes and a resource is running on 
node A.

Node A fails, the resource moves to node C, and node A then comes back again.

If the resource then got moved back to node A just because it had recovered, 
you've now had two transitions of the resource (each of which means *some* 
downtime, however small that may be), whereas if it remains running on node C 
until such time as there's a good reason to move it away, you've only had one 
transition to cope with.


Antony.

-- 
"The future is already here.  It's just not evenly distributed yet."

- William Gibson

                                                  Please reply to the list;
                                                        please *don't* CC me.

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] Re: how to setup single node cluster

2021-04-08 Thread Strahil Nikolov
Maybe booth can take care of it when the node dies and power up the resource in 
the DR site.

Best Regards,
Strahil Nikolov

On Thu, Apr 8, 2021 at 10:28, Ulrich Windl wrote:
>>> Reid Wahl wrote on 08.04.2021 at 08:32 in message:
> On Wed, Apr 7, 2021 at 11:27 PM d tbsky  wrote:
> 
>> Reid Wahl 
>> > I don't think we do require fencing for single-node clusters. (Anyone at
>> Red Hat, feel free to comment.) I vaguely recall an internal mailing list
>> or IRC conversation where we discussed this months ago, but I can't find it
>> now. I've also checked our support policies documentation, and it's not
>> mentioned in the "cluster size" doc or the "fencing" doc.
>>
>>    since the cluster is 100% alive or 100% dead with single node, I
>> think fencing/quorum is not required. I am just curious what is the
>> usage case. since RedHat supports it, it must be useful in real
>> scenario.
>>
> 
> Disaster recovery is the main use case we had in mind. See the RHEL 8.2
> release notes:
>  -
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.2_release_notes/rhel-8-2-0-release#enhancement_high-availability-and-clusters

I wonder: what does disaster recovery look like if a plane crashes into your 
single node of the cluster?
(Without a cluster you would have some off-site backup, and probably some spare 
hardware off-site, too.)
HA in that case is really interesting, as setting up the new hardware has to be 
done manually...

> 
> I thought I also remembered some other use case involving MS SQL, but I
> can't find anything about it so I might be remembering incorrectly.
> 
> 
> -- 
> Regards,
> 
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA




___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Jason Long
Yes, I just wanted to know. In clustering, when a node goes down and comes back 
online, the cluster will not use it again until another node fails. Am I 
right?






On Thursday, April 8, 2021, 11:58:16 PM GMT+4:30, Antony Stone 
 wrote: 





On Thursday 08 April 2021 at 21:24:02, Jason Long wrote:

> Thanks.
> Thus, my cluster uses Node1 when Node2 is down?

Judging from your previous emails, you have a two node cluster.

What else is it going to use?


Antony.

-- 
Anything that improbable is effectively impossible.

- Murray Gell-Mann, Nobel Prizewinner in Physics


                                                  Please reply to the list;
                                                        please *don't* CC me.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread Andrei Borzenkov
On 08.04.2021 09:26, d tbsky wrote:
> Reid Wahl 
>> I don't think we do require fencing for single-node clusters. (Anyone at Red 
>> Hat, feel free to comment.) I vaguely recall an internal mailing list or IRC 
>> conversation where we discussed this months ago, but I can't find it now. 
>> I've also checked our support policies documentation, and it's not mentioned 
>> in the "cluster size" doc or the "fencing" doc.
> 
>since the cluster is 100% alive or 100% dead with single node, I
> think fencing/quorum is not required. I am just curious what is the
> usage case. since RedHat supports it, it must be useful in real
> scenario.


I do not know what "disaster recovery" configuration you have in mind,
but if you intend to use geo clustering, fencing can speed up fail-over,
so it is at least useful.

Even in a single-node cluster, if a resource fails to stop you are stuck:
you cannot actually do anything from that point without manual
intervention. Depending on configuration and requirements, rebooting the
node may be considered an attempt to automatically "reset" the cluster state.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Antw: [EXT] how to setup single node cluster

2021-04-08 Thread Ulrich Windl
>>> d tbsky wrote on 08.04.2021 at 05:52 in message:
> Hi:
> I found RHEL 8.2 support single node cluster now.  but I didn't
> find further document to explain the concept. RHEL 8.2 also support
> "disaster recovery cluster". so I think maybe a single node disaster
> recovery cluster is not bad.
> 
>I think corosync is still necessary under single node cluster. or
> is there other new style of configuration?

IMHO if you want a single-node cluster, and you are not planning to add more 
nodes, you'll be better off using a utility like monit to manage your 
processes...

> 
> thanks for help!
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 




___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread Reid Wahl
On Wed, Apr 7, 2021 at 9:46 PM Strahil Nikolov 
wrote:

> I always though that the setup is the same, just the node count is only
> one.
>
> I guess you need pcs, corosync + pacemaker.
> If RH is going to support it, they will require fencing. Most probably sbd
> or ipmi are the best candidates.
>

I don't think we do require fencing for single-node clusters. (Anyone at
Red Hat, feel free to comment.) I vaguely recall an internal mailing list
or IRC conversation where we discussed this months ago, but I can't find it
now. I've also checked our support policies documentation, and it's not
mentioned in the "cluster size" doc or the "fencing" doc.

The closest thing I can find is the following, from the cluster size doc[1]:
~~~
RHEL 8.2 and later: Support for 1 or more nodes

   - Single node clusters do not support DLM and GFS2 filesystems (as they
   require fencing).

~~~

To me that suggests that fencing isn't required in a single-node cluster.
Maybe sbd could work (I haven't thought it through), but conventional power
fencing (e.g., fence_ipmilan) wouldn't. That's because most conventional
power fencing agents require sending a "power on" signal after the "power
off" is complete.

[1] https://access.redhat.com/articles/3069031


> Best Regards,
> Strahil Nikolov
>
> On Thu, Apr 8, 2021 at 6:52, d tbsky
>  wrote:
> Hi:
> I found RHEL 8.2 support single node cluster now.  but I didn't
> find further document to explain the concept. RHEL 8.2 also support
> "disaster recovery cluster". so I think maybe a single node disaster
> recovery cluster is not bad.
>
>   I think corosync is still necessary under single node cluster. or
> is there other new style of configuration?
>
> thanks for help!
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>


-- 
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] how to setup single node cluster

2021-04-08 Thread d tbsky
Ulrich Windl 
>
> >>> d tbsky  schrieb am 08.04.2021 um 05:52 in Nachricht
> :
> > Hi:
> > I found RHEL 8.2 support single node cluster now.  but I didn't
> > find further document to explain the concept. RHEL 8.2 also support
> > "disaster recovery cluster". so I think maybe a single node disaster
> > recovery cluster is not bad.
> >
> >I think corosync is still necessary under single node cluster. or
> > is there other new style of configuration?
>
> IMHO if you want a single-.node cluster, and you are not planning to add more 
> nodes, you'll be better off using a utility like monit to manage your 
> processes...

Sorry, I didn't mention pacemaker in my previous post. I want a single-node
pacemaker disaster recovery cluster, which can be managed by normal
pacemaker utilities like pcs.
Maybe there are other cases where a single-node pacemaker cluster is
useful; I just don't know of them yet.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread Klaus Wenninger

On 4/8/21 8:16 AM, Reid Wahl wrote:



On Wed, Apr 7, 2021 at 9:46 PM Strahil Nikolov wrote:


I always though that the setup is the same, just the node count is
only one.

I guess you need pcs, corosync + pacemaker.
If RH is going to support it, they will require fencing. Most
probably sbd or ipmi are the best candidates.


I don't think we do require fencing for single-node clusters. (Anyone 
at Red Hat, feel free to comment.) I vaguely recall an internal 
mailing list or IRC conversation where we discussed this months ago, 
but I can't find it now. I've also checked our support policies 
documentation, and it's not mentioned in the "cluster size" doc or the 
"fencing" doc.


The closest thing I can find is the following, from the cluster size 
doc[1]:

~~~
RHEL 8.2 and later: Support for 1 or more nodes

  * Single node clusters do not support DLM and GFS2 filesystems (as
they require fencing).

~~~

To me that suggests that fencing isn't required in a single-node 
cluster. Maybe sbd could work (I haven't thought it through), but 
conventional power fencing (e.g., fence_ipmilan) wouldn't. That's 
because most conventional power fencing agents require sending a 
"power on" signal after the "power off" is complete.

And moreover you have to be alive enough to kick off
conventional power fencing to self-fence ;-)
With sbd the hardware-watchdog should kick in.
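
A rough sketch of what watchdog-based sbd looks like with pcs (the exact options 
depend on your pcs/sbd versions, the cluster normally has to be stopped while 
enabling it, and a working hardware watchdog is assumed):

    pcs stonith sbd enable --watchdog=/dev/watchdog
    pcs property set stonith-watchdog-timeout=10s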

Klaus


[1] https://access.redhat.com/articles/3069031 




Best Regards,
Strahil Nikolov

On Thu, Apr 8, 2021 at 6:52, d tbsky <tbs...@gmail.com> wrote:
Hi:
    I found RHEL 8.2 support single node cluster now.  but I didn't
find further document to explain the concept. RHEL 8.2 also support
"disaster recovery cluster". so I think maybe a single node disaster
recovery cluster is not bad.

  I think corosync is still necessary under single node cluster. or
is there other new style of configuration?

    thanks for help!
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users


ClusterLabs home: https://www.clusterlabs.org/


___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users


ClusterLabs home: https://www.clusterlabs.org/




--
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread d tbsky
Reid Wahl 
> I don't think we do require fencing for single-node clusters. (Anyone at Red 
> Hat, feel free to comment.) I vaguely recall an internal mailing list or IRC 
> conversation where we discussed this months ago, but I can't find it now. 
> I've also checked our support policies documentation, and it's not mentioned 
> in the "cluster size" doc or the "fencing" doc.

   Since the cluster is 100% alive or 100% dead with a single node, I
think fencing/quorum is not required. I am just curious what the
use case is. Since Red Hat supports it, it must be useful in a real
scenario.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Libreswan state machine

2021-04-08 Thread Ryszard Styczynski
Hello,

I'm looking for IPsec state machine implemented in Libreswan. I may guess how 
states are correlated, but having a state machine will give me a final answer.

My current question is: what is the next state after STATE_QUICK_R2? Should the 
IPsec engine wait for rekeying? How long? How many times should it repeat the 
waiting step? Should it go back to STATE_MAIN and delete the SA? When?

I currently see in my system that:
1. STATE_QUICK_R2 may go to STATE_MAIN_R3, delete SA, and reestablish 
connection from Phase 1 - it happens after 15 seconds
2. STATE_QUICK_R2 may go to STATE_QUICK_R1 and process rekeying - it happens 
when peer responds quicker than 15 seconds

How can I understand why sometimes the SA is deleted (which causes a 5-minute 
line drop) and sometimes rekeying is completed? How can I control the time limits?

Thanks,
Ryszard 
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Libreswan state machine

2021-04-08 Thread Ryszard Styczynski
That's right. I just realised it. Apologies for the spam. 

> On 8 Apr 2021, at 09:36, Reid Wahl  wrote:
> 
> Hi, Ryszard.
> 
> I believe you may have mailed the wrong list by mistake, as your question 
> doesn't appear related to the ClusterLabs project. Perhaps someone here might 
> know though.
> 
> On Thursday, April 8, 2021, Ryszard Styczynski  > wrote:
> > Hello,
> >
> > I'm looking for IPsec state machine implemented in Libreswan. I may guess 
> > how states are correlated, but having a state machine will give me a final 
> > answer.
> >
> > My current question is what is a next state after STATE_QUICK_R2? Should 
> > IPsec engine wait for rekeying? How long? How many times should repeat 
> > waiting step? Should go back to STATE_MAIN and delete SA? When?
> >
> > I currently see i my system that:
> > 1. STATE_QUICK_R2 may go to STATE_MAIN_R3, delete SA, and reestablish 
> > connection from Phase 1 - it happens after 15 seconds
> > 2. STATE_QUICK_R2 may go to STATE_QUICK_R1 and process rekeying - it 
> > happens when peer responds quicker than 15 seconds
> >
> > How to understand why sometimes SA is deleted (what causes 5 minutes line 
> > drop), and sometimes rekeying is completed? How to control time limits?
> >
> > Thanks,
> > Ryszard
> > ___
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users 
> > 
> >
> > ClusterLabs home: https://www.clusterlabs.org/ 
> > 
> >
> >
> 
> -- 
> Regards,
> 
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
> 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread Strahil Nikolov
I always thought that the setup is the same, just the node count is only one.

I guess you need pcs, corosync + pacemaker. If RH is going to support it, they 
will require fencing. Most probably sbd or ipmi are the best candidates.

Best Regards,
Strahil Nikolov

On Thu, Apr 8, 2021 at 6:52, d tbsky wrote:
Hi:
    I found RHEL 8.2 support single node cluster now.  but I didn't
find further document to explain the concept. RHEL 8.2 also support
"disaster recovery cluster". so I think maybe a single node disaster
recovery cluster is not bad.

  I think corosync is still necessary under single node cluster. or
is there other new style of configuration?

    thanks for help!
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread d tbsky
Reid Wahl 
> Disaster recovery is the main use case we had in mind. See the RHEL 8.2 
> release notes:
>   - 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.2_release_notes/rhel-8-2-0-release#enhancement_high-availability-and-clusters
>
> I thought I also remembered some other use case involving MS SQL, but I can't 
> find anything about it so I might be remembering incorrectly.

Thanks a lot for the confirmation. According to the discussion above, I
think the setup procedure is similar for a single-node cluster. I should
still use corosync (although it has nowhere to sync to).
I will try that when I have time.
Thanks again for your kind help!
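
For the record, a minimal single-node setup sketch (RHEL 8 / pcs 0.10 style 
commands; the cluster and node names are just examples):

    pcs host auth node1
    pcs cluster setup mycluster node1
    pcs cluster start --all
    pcs property set stonith-enabled=false   # only if you really intend to run without fencing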
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Antw: [EXT] Re: how to setup single node cluster

2021-04-08 Thread Ulrich Windl
>>> Klaus Wenninger wrote on 08.04.2021 at 08:26 in message <01fe6b6e-690a-2ea7-6218-8545f0b7a...@redhat.com>:
> On 4/8/21 8:16 AM, Reid Wahl wrote:
>>
>>
>> On Wed, Apr 7, 2021 at 9:46 PM Strahil Nikolov > > wrote:
>>
>> I always though that the setup is the same, just the node count is
>> only one.
>>
>> I guess you need pcs, corosync + pacemaker.
>> If RH is going to support it, they will require fencing. Most
>> probably sbd or ipmi are the best candidates.
>>
>>
>> I don't think we do require fencing for single-node clusters. (Anyone 
>> at Red Hat, feel free to comment.) I vaguely recall an internal 
>> mailing list or IRC conversation where we discussed this months ago, 
>> but I can't find it now. I've also checked our support policies 
>> documentation, and it's not mentioned in the "cluster size" doc or the 
>> "fencing" doc.
>>
>> The closest thing I can find is the following, from the cluster size 
>> doc[1]:
>> ~~~
>> RHEL 8.2 and later: Support for 1 or more nodes
>>
>>   * Single node clusters do not support DLM and GFS2 filesystems (as
>> they require fencing).
>>
>> ~~~

Actually I think using DLM and a cluster filesystem for just one single node 
would be overkill, BUT it should work (if you plan to extend your 
1-node cluster to more nodes at a later time).
Fencing for a single-node-cluster just means reboot, so that shouldn't really 
be the problem.

>>
>> To me that suggests that fencing isn't required in a single-node 
>> cluster. Maybe sbd could work (I haven't thought it through), but 
>> conventional power fencing (e.g., fence_ipmilan) wouldn't. That's 
>> because most conventional power fencing agents require sending a 
>> "power on" signal after the "power off" is complete.
> And moreover you have to be alive enough to kick off
> conventional power fencing to self-fence ;-)
> With sbd the hardware-watchdog should kick in.
> 
> Klaus
>>
>> [1] https://access.redhat.com/articles/3069031 
>> 
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Thu, Apr 8, 2021 at 6:52, d tbsky
>> mailto:tbs...@gmail.com>> wrote:
>> Hi:
>> I found RHEL 8.2 support single node cluster now.  but I
>> didn't
>> find further document to explain the concept. RHEL 8.2 also
>> support
>> "disaster recovery cluster". so I think maybe a single node
>> disaster
>> recovery cluster is not bad.
>>
>>   I think corosync is still necessary under single node
>> cluster. or
>> is there other new style of configuration?
>>
>> thanks for help!
>> ___
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users 
>> 
>>
>> ClusterLabs home: https://www.clusterlabs.org/ 
>> 
>>
>> ___
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users 
>> 
>>
>> ClusterLabs home: https://www.clusterlabs.org/ 
>> 
>>
>>
>>
>> -- 
>> Regards,
>>
>> Reid Wahl, RHCA
>> Senior Software Maintenance Engineer, Red Hat
>> CEE - Platform Support Delivery - ClusterHA
>>
>> ___
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users 
>>
>> ClusterLabs home: https://www.clusterlabs.org/ 
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 




___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Libreswan state machine

2021-04-08 Thread Reid Wahl
Hi, Ryszard.

I believe you may have mailed the wrong list by mistake, as your question
doesn't appear related to the ClusterLabs project. Perhaps someone here
might know though.

On Thursday, April 8, 2021, Ryszard Styczynski 
wrote:
> Hello,
>
> I'm looking for IPsec state machine implemented in Libreswan. I may guess
how states are correlated, but having a state machine will give me a final
answer.
>
> My current question is what is a next state after STATE_QUICK_R2? Should
IPsec engine wait for rekeying? How long? How many times should repeat
waiting step? Should go back to STATE_MAIN and delete SA? When?
>
> I currently see i my system that:
> 1. STATE_QUICK_R2 may go to STATE_MAIN_R3, delete SA, and reestablish
connection from Phase 1 - it happens after 15 seconds
> 2. STATE_QUICK_R2 may go to STATE_QUICK_R1 and process rekeying - it
happens when peer responds quicker than 15 seconds
>
> How to understand why sometimes SA is deleted (what causes 5 minutes line
drop), and sometimes rekeying is completed? How to control time limits?
>
> Thanks,
> Ryszard
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>
>

-- 
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Libreswan state machine

2021-04-08 Thread Reid Wahl
No worries, best of luck!

On Thursday, April 8, 2021, Ryszard Styczynski 
wrote:
> That's right. I just realised it. Apologies for the spam.
>
> On 8 Apr 2021, at 09:36, Reid Wahl  wrote:
> Hi, Ryszard.
>
> I believe you may have mailed the wrong list by mistake, as your question
doesn't appear related to the ClusterLabs project. Perhaps someone here
might know though.
>
> On Thursday, April 8, 2021, Ryszard Styczynski 
wrote:
>> Hello,
>>
>> I'm looking for IPsec state machine implemented in Libreswan. I may
guess how states are correlated, but having a state machine will give me a
final answer.
>>
>> My current question is what is a next state after STATE_QUICK_R2? Should
IPsec engine wait for rekeying? How long? How many times should repeat
waiting step? Should go back to STATE_MAIN and delete SA? When?
>>
>> I currently see i my system that:
>> 1. STATE_QUICK_R2 may go to STATE_MAIN_R3, delete SA, and reestablish
connection from Phase 1 - it happens after 15 seconds
>> 2. STATE_QUICK_R2 may go to STATE_QUICK_R1 and process rekeying - it
happens when peer responds quicker than 15 seconds
>>
>> How to understand why sometimes SA is deleted (what causes 5 minutes
line drop), and sometimes rekeying is completed? How to control time limits?
>>
>> Thanks,
>> Ryszard
>> ___
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users
>>
>> ClusterLabs home: https://www.clusterlabs.org/
>>
>>
>
> --
> Regards,
>
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
>
>

-- 
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] how to setup single node cluster

2021-04-08 Thread Reid Wahl
On Wed, Apr 7, 2021 at 11:27 PM d tbsky  wrote:

> Reid Wahl 
> > I don't think we do require fencing for single-node clusters. (Anyone at
> Red Hat, feel free to comment.) I vaguely recall an internal mailing list
> or IRC conversation where we discussed this months ago, but I can't find it
> now. I've also checked our support policies documentation, and it's not
> mentioned in the "cluster size" doc or the "fencing" doc.
>
>since the cluster is 100% alive or 100% dead with single node, I
> think fencing/quorum is not required. I am just curious what is the
> usage case. since RedHat supports it, it must be useful in real
> scenario.
>

Disaster recovery is the main use case we had in mind. See the RHEL 8.2
release notes:
  -
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.2_release_notes/rhel-8-2-0-release#enhancement_high-availability-and-clusters

I thought I also remembered some other use case involving MS SQL, but I
can't find anything about it so I might be remembering incorrectly.


-- 
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Antw: [EXT] Re: how to setup single node cluster

2021-04-08 Thread Ulrich Windl
>>> Reid Wahl wrote on 08.04.2021 at 08:32 in message:
> On Wed, Apr 7, 2021 at 11:27 PM d tbsky  wrote:
> 
>> Reid Wahl 
>> > I don't think we do require fencing for single-node clusters. (Anyone at
>> Red Hat, feel free to comment.) I vaguely recall an internal mailing list
>> or IRC conversation where we discussed this months ago, but I can't find it
>> now. I've also checked our support policies documentation, and it's not
>> mentioned in the "cluster size" doc or the "fencing" doc.
>>
>>since the cluster is 100% alive or 100% dead with single node, I
>> think fencing/quorum is not required. I am just curious what is the
>> usage case. since RedHat supports it, it must be useful in real
>> scenario.
>>
> 
> Disaster recovery is the main use case we had in mind. See the RHEL 8.2
> release notes:
>   -
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.2_release_notes/rhel-8-2-0-release#enhancement_high-availability-and-clusters

I wonder: what does disaster recovery look like if a plane crashes into your 
single node of the cluster?
(Without a cluster you would have some off-site backup, and probably some spare 
hardware off-site, too.)
HA in that case is really interesting, as setting up the new hardware has to be 
done manually...

> 
> I thought I also remembered some other use case involving MS SQL, but I
> can't find anything about it so I might be remembering incorrectly.
> 
> 
> -- 
> Regards,
> 
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA




___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Ken Gaillot
On Thu, 2021-04-08 at 14:14 +, Jason Long wrote:
> Hello,
> I stopped node1 manually as below:
> 
> [root@node1 ~]# pcs cluster stop node1
> node1: Stopping Cluster (pacemaker)...
> node1: Stopping Cluster (corosync)...
> [root@node1 ~]#
> [root@node1 ~]# pcs status
> Error: error running crm_mon, is pacemaker running?
>   Could not connect to the CIB: Transport endpoint is not connected
>   crm_mon: Error: cluster is not available on this node
> 
> And after it, I checked my cluster logs: 
> https://paste.ubuntu.com/p/9KNyt5nB79/
> Then, I started node1:
> 
> [root@node1 ~]# pcs cluster start node1
> node1: Starting Cluster...
> [root@node1 ~]#
> 
> And checked my cluster logs again: 
> https://paste.ubuntu.com/p/GQFbdhMvV3/
> The status of my cluster is:
> 
> # pcs status 
> Cluster name: mycluster
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: node2 (version 2.0.5-10.fc33-ba59be7122) - partition
> with quorum
>   * Last updated: Thu Apr  8 18:33:49 2021
>   * Last change:  Thu Apr  8 17:18:49 2021 by root via cibadmin on
> node1
>   * 2 nodes configured
>   * 3 resource instances configured
> 
> 
> Node List:
>   * Online: [ node1 node2 ]
> 
> 
> Full List of Resources:
>   * Resource Group: apache:
> * httpd_fs(ocf::heartbeat:Filesystem): Started node2
> * httpd_vip(ocf::heartbeat:IPaddr2): Started node2
> * httpd_ser(ocf::heartbeat:apache): Started node2
> 
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled
> 
> But, I can't browse the Apache web server? Why?
> 
> Thanks.

Based on the above status output, the web server is running on node2,
using the IP address specified by the httpd_vip resource. Are you
trying to contact the web server at a name corresponding to that IP?
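
A quick way to check (just a sketch; use whatever address you actually configured 
in httpd_vip): look up the VIP, confirm it is up on node2, and try it directly:

    pcs resource config httpd_vip    # "pcs resource show httpd_vip" on older pcs versions
    ip addr show                     # run on node2; the VIP should be listed there
    curl -I http://<the-VIP-address>/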
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Antw: [EXT] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Ulrich Windl
>>> Jason Long wrote on 08.04.2021 at 16:14 in message
<409853881.89954.1617891244...@mail.yahoo.com>:
> Hello,
> I stopped node1 manually as below:
> 
> [root@node1 ~]# pcs cluster stop node1
> node1: Stopping Cluster (pacemaker)...
> node1: Stopping Cluster (corosync)...
> [root@node1 ~]#
> [root@node1 ~]# pcs status
> Error: error running crm_mon, is pacemaker running?
>   Could not connect to the CIB: Transport endpoint is not connected
>   crm_mon: Error: cluster is not available on this node
> 
> And after it, I checked my cluster logs: 
> https://paste.ubuntu.com/p/9KNyt5nB79/ 
> Then, I started node1:
> 
> [root@node1 ~]# pcs cluster start node1
> node1: Starting Cluster...
> [root@node1 ~]#
> 
> And checked my cluster logs again: https://paste.ubuntu.com/p/GQFbdhMvV3/ 
> The status of my cluster is:
> 
> # pcs status 
> Cluster name: mycluster
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: node2 (version 2.0.5-10.fc33-ba59be7122) - partition with 
> quorum
>   * Last updated: Thu Apr  8 18:33:49 2021
>   * Last change:  Thu Apr  8 17:18:49 2021 by root via cibadmin on node1
>   * 2 nodes configured
>   * 3 resource instances configured
> 
> 
> Node List:
>   * Online: [ node1 node2 ]
> 
> 
> Full List of Resources:
>   * Resource Group: apache:
> * httpd_fs(ocf::heartbeat:Filesystem): Started node2
> * httpd_vip(ocf::heartbeat:IPaddr2): Started node2
> * httpd_ser(ocf::heartbeat:apache): Started node2
> 
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled
> 
> But, I can't browse the Apache web server? Why?

Why not start with Apache's error or access logs?
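
For example (these are the typical default log paths on Fedora/RHEL; they may 
differ on your system):

    tail -f /var/log/httpd/error_log /var/log/httpd/access_log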

Also: What does "can't" mean?

> 
> Thanks.
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 




___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Ken Gaillot
On Thu, 2021-04-08 at 14:32 +, Jason Long wrote:
> Why, when node1 is back, then web server still on node2? Why not
> switched?

By default, there are no preferences as to where a resource should run.
The cluster is free to move or leave resources as needed.

If you want a resource to prefer a particular node, you can use
location constraints to express that. However there is rarely a need to
do so; in most clusters, nodes are equally interchangeable.
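
For example, something like this (purely illustrative; your setup does not need 
it) would make the apache group prefer node1 with a score of 50:

    pcs constraint location apache prefers node1=50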


> 
> On Thursday, April 8, 2021, 06:49:38 PM GMT+4:30, Ken Gaillot <
> kgail...@redhat.com> wrote: 
> 
> 
> 
> 
> 
> On Thu, 2021-04-08 at 14:14 +, Jason Long wrote:
> 
> > Hello,
> > I stopped node1 manually as below:
> > 
> > [root@node1 ~]# pcs cluster stop node1
> > node1: Stopping Cluster (pacemaker)...
> > node1: Stopping Cluster (corosync)...
> > [root@node1 ~]#
> > [root@node1 ~]# pcs status
> > Error: error running crm_mon, is pacemaker running?
> >   Could not connect to the CIB: Transport endpoint is not connected
> >   crm_mon: Error: cluster is not available on this node
> > 
> > And after it, I checked my cluster logs: 
> > https://paste.ubuntu.com/p/9KNyt5nB79/
> > Then, I started node1:
> > 
> > [root@node1 ~]# pcs cluster start node1
> > node1: Starting Cluster...
> > [root@node1 ~]#
> > 
> > And checked my cluster logs again: 
> > https://paste.ubuntu.com/p/GQFbdhMvV3/
> > The status of my cluster is:
> > 
> > # pcs status 
> > Cluster name: mycluster
> > Cluster Summary:
> >   * Stack: corosync
> >   * Current DC: node2 (version 2.0.5-10.fc33-ba59be7122) -
> > partition
> > with quorum
> >   * Last updated: Thu Apr  8 18:33:49 2021
> >   * Last change:  Thu Apr  8 17:18:49 2021 by root via cibadmin on
> > node1
> >   * 2 nodes configured
> >   * 3 resource instances configured
> > 
> > 
> > Node List:
> >   * Online: [ node1 node2 ]
> > 
> > 
> > Full List of Resources:
> >   * Resource Group: apache:
> > * httpd_fs(ocf::heartbeat:Filesystem):Started node2
> > * httpd_vip(ocf::heartbeat:IPaddr2):Started node2
> > * httpd_ser(ocf::heartbeat:apache):Started node2
> > 
> > 
> > Daemon Status:
> >   corosync: active/enabled
> >   pacemaker: active/enabled
> >   pcsd: active/enabled
> > 
> > But, I can't browse the Apache web server? Why?
> > 
> > Thanks.
> 
> 
> Based on the above status output, the web server is running on node2,
> using the IP address specified by the httpd_vip resource. Are you
> trying to contact the web server at a name corresponding to that IP?
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why my node1 couldn't back to the clustering chain?

2021-04-08 Thread Antony Stone
On Thursday 08 April 2021 at 16:55:47, Ken Gaillot wrote:

> On Thu, 2021-04-08 at 14:32 +, Jason Long wrote:
> > Why, when node1 is back, then web server still on node2? Why not
> > switched?
> 
> By default, there are no preferences as to where a resource should run.
> The cluster is free to move or leave resources as needed.
> 
> If you want a resource to prefer a particular node, you can use
> location constraints to express that. However there is rarely a need to
> do so; in most clusters, nodes are equally interchangeable.

I would add that it is generally preferable, in fact, to leave a resource 
where it is unless there's a good reason to move it.

Imagine for example that you have three nodes and a resource is running on 
node A.

Node A fails, the resource moves to node C, and node A then comes back again.

If the resource then got moved back to node A just because it had recovered, 
you've now had two transitions of the resource (each of which means *some* 
downtime, however small that may be), whereas if it remains running on node C 
until such time as there's a good reason to move it away, you've only had one 
transition to cope with.


Antony.

-- 
"The future is already here.   It's just not evenly distributed yet."

 - William Gibson

   Please reply to the list;
 please *don't* CC me.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/