[ClusterLabs] FYI: users being automatically unsubscribed from list and/or not getting messages

2020-03-12 Thread Ken Gaillot
Hi all,

TL;DR: if you've been having problems, they will hopefully get better
now.


We have gotten some reports of users being automatically unsubscribed
from this list, or not getting all messages from the list.

The issue is that some mail servers have become more strict about what
mail they send and accept. Some servers cryptographically sign outgoing
messages (using "DKIM") so forgeries can be detected. Some servers
reject incoming messages if there's a signature and it doesn't match.

Many mailing lists, including this one, change some of the mail headers
-- most obviously, the subject line gets "[ClusterLabs]" added to the
front, to make list messages easy to filter. Unfortunately this breaks
DKIM signatures. Thus, if someone sends a DKIM-signed message to this
list, some recipients' servers will reject the message. After a certain
number of rejections, this list's server will automatically unsubscribe
the user.

Luckily, most servers that are configured to reject broken DKIM
signatures are also configured to accept the mail anyway if the sending
domain has proper "SPF" records (a DNS-based mechanism for preventing
address spoofing). We have just added SPF records for clusterlabs.org,
so hopefully the situation will improve for users who have been
affected.
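
For reference, an SPF record is just a TXT record published in the
sending domain's DNS. A typical one looks something like this (purely
illustrative; not necessarily the exact record we published):

    clusterlabs.org.  IN  TXT  "v=spf1 a mx -all"

which tells receiving servers that mail claiming to be from
clusterlabs.org should only come from the domain's own A/MX hosts.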

If anyone continues to have problems after this point, please let us
know (either to this list or directly to me).
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] I want to have some resource monitored and based on that make an action. Is it possible?

2020-03-11 Thread Ken Gaillot
On Wed, 2020-03-11 at 16:08 +0200, Roman Hershkovich wrote:
> Great, thank you very much for explanation. Regarding returning error
> - i did not knew.
> So, basically i can have a service, that will probe for master DB, in
> case of its transfer - service will update /etc/hosts and return
> error, which will be caught by pcs and it will restart whole
> dependent set ? Sounds good.
> But how i can do 2 "main resources" ? I have webserver AND
> db_monitor. In case of failure of webserver - should all start on
> node b, but in case of DB change - only underlying resources ...
> Should i make webserver outside of set? 

If you want the webserver to move to another node after a single
failure (of the webserver itself), set its migration-threshold to 1. If
you want other resources to move with it, colocate them with the
webserver.

The db monitor won't affect that -- if the db monitor fails, anything
ordered after it will restart.
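
In pcs terms that would look something like this ("webserver" and
"other_rsc" are placeholders for your actual resource names):

    pcs resource meta webserver migration-threshold=1
    pcs constraint colocation add other_rsc with webserver INFINITY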

> On Wed, Mar 11, 2020 at 3:57 PM Ken Gaillot 
> wrote:
> > On Wed, 2020-03-11 at 02:27 +0200, Roman Hershkovich wrote:
> > > Yes.
> > > I have only 1 APP active at same time, and so I want this app to
> > be
> > > restarted whenever DB changes. Another one is a "standby" APP,
> > where
> > > all resources are shut.
> > > So i thought about adding some "service" script, which will probe
> > a
> > > DB , and in case if it finds a CHANGE - will trigger pcs to
> > reload a
> > > set of resources, where one of resource would be a systemctl
> > file,
> > > which will continue to run a script, so in case of next change of
> > DB
> > > - it will restart APP set again. Is it sounds reasonable? (i
> > don't
> > > care of errors. I mean - i do, i want to log, but i'm ok to see
> > them)
> > 
> > That sounds fine, but I'd trigger the restart by returning an error
> > code from the db-monitoring script, rather than directly attempt to
> > restart the resources via pcs. If you order the other resources
> > after
> > the db-monitoring script, pacemaker will automatically restart them
> > when the db-monitoring script returns an error.
> > 
> > > In addition - i thought maybe bringing PAF here could be useful -
> > but
> > > this is even more complex ... 
> > 
> > If bringing the db into the cluster is a possibility, that would
> > probably be more reliable, with a quicker response too.
> > 
> > In that case you would simply order the dependent resources after
> > the
> > database master promotion. pcs example: pcs constraint order
> > promote
> > DB-RSC then start DEPENDENT-RSC
> > 
> > > On Tue, Mar 10, 2020 at 10:28 PM Ken Gaillot  > >
> > > wrote:
> > > > On Tue, 2020-03-10 at 21:03 +0200, Roman Hershkovich wrote:
> > > > > DB servers are not in PCS cluster. Basically you say that i
> > need
> > > > to
> > > > > add them to PCS cluster and then start them? but in case if
> > DB1
> > > > fails
> > > > > - DB2 autopromoted and not required start of service again>
> > > > > 
> > > > > Regarding colocation rule - i'm kind of missing logic how it
> > > > works -
> > > > > how i can "colocate" 1 of 2 APP servers to be around a master
> > DB
> > > > ? 
> > > > 
> > > > If I understand correctly, what you want is that both apps are
> > > > restarted if the master changes?
> > > > 
> > > > I'm thinking you'll need a custom OCF agent for the app
> > servers.
> > > > The
> > > > monitor action, in addition to checking the app's status, could
> > > > also
> > > > check which db is master, and return an error if it's changed
> > since
> > > > the
> > > > last monitor. (The start action would have to record the
> > initial
> > > > master.) Pacemaker will restart the app to recover from the
> > error.
> > > > 
> > > > That is a little hacky because you'll have errors in the status
> > > > every
> > > > time the master moves, but maybe that's worth knowing in your
> > > > situation
> > > > anyway.
> > > > 
> > > > > On Tue, Mar 10, 2020 at 8:42 PM Strahil Nikolov <
> > > > > hunter86...@yahoo.com> wrote:
> > > > > > On March 10, 2020 7:31:27 PM GMT+02:00, Roman Hershkovich <
> > > > > > war...@gmail.com> wrote:
> > > > > > >I 

Re: [ClusterLabs] Antw: [EXT] Coming in Pacemaker 2.0.4: dependency on monotonic clock for systemd resources

2020-03-11 Thread Ken Gaillot
On Wed, 2020-03-11 at 08:20 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 10.03.2020 um
> > > > 18:49 in
> 
> Nachricht
> <3098_1583862581_5E67D335_3098_1270_1_91b728456223eea7c8a00516a91ede1
> 8ab094530.c
> m...@redhat.com>:
> > Hi all,
> > 
> > This is not a big deal but I wanted to give a heads‑up for anyone
> > who
> > builds their own pacemaker packages.
> > 
> > With Pacemaker 2.0.4 (first release candidate expected next month),
> > we
> > are finally replacing our calls to the long‑deprecated ftime()
> > system
> > call with the "modern" clock_gettime().
> > 
> > As part of this, building pacemaker with support for systemd‑class
> > resources will now require that the underlying platform supports
> > clock_gettime() with CLOCK_MONOTONIC. Every platform we're aware of
> > that is used for pacemaker does, so this should not be an issue.
> > The
> > configure script will automatically determine whether support is
> > available.
> 
> You only have to take care not to compare CLOCK_MONOTONIC timestamps
> between
> nodes or node restarts. 

Definitely :)

They are used only to calculate action queue and run durations. For
most resource types those are optional (for reporting only), but
systemd resources require them (multiple status checks are usually
necessary to verify a start or stop worked, and we need to check the
remaining timeout each time).
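
For anyone curious, the general pattern is nothing more than taking two
CLOCK_MONOTONIC samples and subtracting; a minimal generic C sketch
(not Pacemaker's actual code) might look like:

    #include <stdio.h>
    #include <time.h>

    /* Milliseconds elapsed between two monotonic timestamps */
    static long elapsed_ms(const struct timespec *start,
                           const struct timespec *now)
    {
        return (now->tv_sec - start->tv_sec) * 1000L
               + (now->tv_nsec - start->tv_nsec) / 1000000L;
    }

    int main(void)
    {
        long timeout_ms = 20000;    /* e.g. a 20s start timeout */
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... perform one status check here ... */
        clock_gettime(CLOCK_MONOTONIC, &now);

        long remaining = timeout_ms - elapsed_ms(&start, &now);
        printf("%ld ms of the timeout remain\n",
               remaining > 0 ? remaining : 0L);
        return 0;
    }

Unlike wall-clock time, the result is unaffected by NTP steps or manual
clock changes, which is exactly what you want for timeouts.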
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] I want to have some resource monitored and based on that make an action. Is it possible?

2020-03-11 Thread Ken Gaillot
On Wed, 2020-03-11 at 02:27 +0200, Roman Hershkovich wrote:
> Yes.
> I have only 1 APP active at same time, and so I want this app to be
> restarted whenever DB changes. Another one is a "standby" APP, where
> all resources are shut.
> So i thought about adding some "service" script, which will probe a
> DB , and in case if it finds a CHANGE - will trigger pcs to reload a
> set of resources, where one of resource would be a systemctl file,
> which will continue to run a script, so in case of next change of DB
> - it will restart APP set again. Is it sounds reasonable? (i don't
> care of errors. I mean - i do, i want to log, but i'm ok to see them)

That sounds fine, but I'd trigger the restart by returning an error
code from the db-monitoring script rather than directly attempting to
restart the resources via pcs. If you order the other resources after
the db-monitoring script, pacemaker will automatically restart them
when the db-monitoring script returns an error.

> In addition - i thought maybe bringing PAF here could be useful - but
> this is even more complex ... 

If bringing the db into the cluster is a possibility, that would
probably be more reliable, with a quicker response too.

In that case you would simply order the dependent resources after the
database master promotion. pcs example: pcs constraint order promote
DB-RSC then start DEPENDENT-RSC
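
To also keep the dependent resources on the same node as the master,
you'd add a colocation along these lines (older pcs syntax; newer
releases spell the role "Promoted"):

    pcs constraint colocation add DEPENDENT-RSC with master DB-RSC INFINITY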

> On Tue, Mar 10, 2020 at 10:28 PM Ken Gaillot 
> wrote:
> > On Tue, 2020-03-10 at 21:03 +0200, Roman Hershkovich wrote:
> > > DB servers are not in PCS cluster. Basically you say that i need
> > to
> > > add them to PCS cluster and then start them? but in case if DB1
> > fails
> > > - DB2 autopromoted and not required start of service again>
> > > 
> > > Regarding colocation rule - i'm kind of missing logic how it
> > works -
> > > how i can "colocate" 1 of 2 APP servers to be around a master DB
> > ? 
> > 
> > If I understand correctly, what you want is that both apps are
> > restarted if the master changes?
> > 
> > I'm thinking you'll need a custom OCF agent for the app servers.
> > The
> > monitor action, in addition to checking the app's status, could
> > also
> > check which db is master, and return an error if it's changed since
> > the
> > last monitor. (The start action would have to record the initial
> > master.) Pacemaker will restart the app to recover from the error.
> > 
> > That is a little hacky because you'll have errors in the status
> > every
> > time the master moves, but maybe that's worth knowing in your
> > situation
> > anyway.
> > 
> > > On Tue, Mar 10, 2020 at 8:42 PM Strahil Nikolov <
> > > hunter86...@yahoo.com> wrote:
> > > > On March 10, 2020 7:31:27 PM GMT+02:00, Roman Hershkovich <
> > > > war...@gmail.com> wrote:
> > > > >I have 2 DB servers (master/slave with replica) and 2 APP
> > servers.
> > > > >2 APP servers managed by pacemaker  (active/passive) , but i
> > want
> > > > also
> > > > >to
> > > > >monitor "which DB is master".  I can't use VIP (which could be
> > > > sticked
> > > > >on
> > > > >master DB) - it is very limited virtual environment.
> > > > >
> > > > >Is it possible to create a rule or some other scenario, so in
> > case
> > > > if
> > > > >master moved - pacemaker will restart APP (app does not
> > support
> > > > >failover) ?
> > > > 
> > > > Hi Roman,
> > > > 
> > > > If you set an order rule that  starts  first the master  and
> > then
> > > > the app, during a failover  the app will be stoped  and once
> > the
> > > > master  is switched  (slave is promoted) the  app will be
> > started
> > > > again.
> > > > 
> > > > Also you can consider  a  colocation rule that all  apps are 
> > > > started  where  the master  DB is running  -  so the lattency
> > will
> > > > be minimal.
> > > > 
> > > > Best Regards,
> > > > Strahil Nikolov
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] I want to have some resource monitored and based on that make an action. Is it possible?

2020-03-10 Thread Ken Gaillot
On Tue, 2020-03-10 at 21:03 +0200, Roman Hershkovich wrote:
> DB servers are not in PCS cluster. Basically you say that i need to
> add them to PCS cluster and then start them? but in case if DB1 fails
> - DB2 autopromoted and not required start of service again>
> 
> Regarding colocation rule - i'm kind of missing logic how it works -
> how i can "colocate" 1 of 2 APP servers to be around a master DB ? 

If I understand correctly, what you want is that both apps are
restarted if the master changes?

I'm thinking you'll need a custom OCF agent for the app servers. The
monitor action, in addition to checking the app's status, could also
check which db is master, and return an error if it's changed since the
last monitor. (The start action would have to record the initial
master.) Pacemaker will restart the app to recover from the error.

That is a little hacky because you'll have errors in the status every
time the master moves, but maybe that's worth knowing in your situation
anyway.
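
Roughly, the monitor logic could look like the sketch below (bash, and
very much a sketch: detect_current_master and the pgrep pattern are
placeholders you'd have to replace, and it assumes the usual
ocf-shellfuncs have been sourced so the OCF_* return codes are
defined):

    app_monitor() {
        # Ordinary liveness check of the app itself (placeholder)
        pgrep -f my-app >/dev/null || return $OCF_NOT_RUNNING

        current_master=$(detect_current_master)   # hypothetical helper
        # The start action would have written this file
        recorded_master=$(cat "${HA_RSCTMP}/my-app.master" 2>/dev/null)

        if [ -n "$recorded_master" ] && \
           [ "$current_master" != "$recorded_master" ]; then
            # Master moved since we started: report an error so
            # Pacemaker restarts the app
            return $OCF_ERR_GENERIC
        fi
        return $OCF_SUCCESS
    }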

> On Tue, Mar 10, 2020 at 8:42 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
> > On March 10, 2020 7:31:27 PM GMT+02:00, Roman Hershkovich <
> > war...@gmail.com> wrote:
> > >I have 2 DB servers (master/slave with replica) and 2 APP servers.
> > >2 APP servers managed by pacemaker  (active/passive) , but i want
> > also
> > >to
> > >monitor "which DB is master".  I can't use VIP (which could be
> > sticked
> > >on
> > >master DB) - it is very limited virtual environment.
> > >
> > >Is it possible to create a rule or some other scenario, so in case
> > if
> > >master moved - pacemaker will restart APP (app does not support
> > >failover) ?
> > 
> > Hi Roman,
> > 
> > If you set an order rule that  starts  first the master  and then
> > the app, during a failover  the app will be stoped  and once the
> > master  is switched  (slave is promoted) the  app will be started
> > again.
> > 
> > Also you can consider  a  colocation rule that all  apps are 
> > started  where  the master  DB is running  -  so the lattency will
> > be minimal.
> > 
> > Best Regards,
> > Strahil Nikolov
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.4: dependency on monotonic clock for systemd resources

2020-03-10 Thread Ken Gaillot
Hi all,

This is not a big deal but I wanted to give a heads-up for anyone who
builds their own pacemaker packages.

With Pacemaker 2.0.4 (first release candidate expected next month), we
are finally replacing our calls to the long-deprecated ftime() system
call with the "modern" clock_gettime().

As part of this, building pacemaker with support for systemd-class
resources will now require that the underlying platform supports
clock_gettime() with CLOCK_MONOTONIC. Every platform we're aware of
that is used for pacemaker does, so this should not be an issue. The
configure script will automatically determine whether support is
available.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: [EXT] Re: Q: rule-based operation pause/freeze?

2020-03-09 Thread Ken Gaillot
On Mon, 2020-03-09 at 11:42 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 06.03.2020 um
> > > > 16:00 in
> 
> Nachricht
> :
> > On Fri, 2020‑03‑06 at 08:19 +0100, Ulrich Windl wrote:
> > > > > > Ondrej  schrieb am 06.03.2020
> > > > > > um
> > > > > > 01:45 in
> > > 
> > > Nachricht
> > > <
> > > 
> 
> 7499_1583455563_5E619D4B_7499_1105_1_2a18c389‑059e‑cf6f‑a840‑
> dec26437fdd1@famer
> > > .cz>:
> > > > On 3/5/20 9:24 PM, Ulrich Windl wrote:
> > > > > Hi!
> > > > > 
> > > > > I'm wondering whether it's possible to pause/freeze specific
> > > > > resource 
> > > > 
> > > > operations through rules.
> > > > > The idea is something like this: If your monitor operation
> > > > > needes
> > > > > (e.g.) 
> > > > 
> > > > some external NFS server, and thst NFS server is known to be
> > > > down,
> > > > it seems
> > > > better to delay the monitor operation until NFS is up again,
> > > > rather
> > > > than 
> > > > forcing a monitor timeout that will most likely be followed by
> > > > a
> > > > stop 
> > > > operation that will also time out, eventually killing the node
> > > > (which has no
> > > > problem itself).
> > > > > 
> > > > > As I guess it's not possible right now, what would be needed
> > > > > to
> > > > > make this 
> > > > 
> > > > work?
> > > > > In case it's possible, how would an example scenario look
> > > > > like?
> > > > > 
> > > > > Regards,
> > > > > Ulrich
> > > > > 
> > > > 
> > > > Hi Ulrich,
> > > > 
> > > > To determine _when_ this state should be enabled and disabled
> > > > would
> > > > be a 
> > > > different story.
> > > 
> > > For the moment let's assume I know it ;‑) ping‑node, maybe.
> > 
> > I believe that limited scenario is possible, but imperfectly.
> > 
> > You could configure an ocf:pacemaker:ping resource to ping the NFS
> > server IP. Then in the dependent resource, configure the recurring
> > monitor logically like this:
> > 
> >   monitor interval=N
> >  meta attributes
> > rule when ping attribute lt 1 or not defined
> > enabled=false
> 
> Assuming my attribute is named "n_up", would the syntax be (sorry,
> I'm not
> fluent with rules):
> "rule when n_up lt1 or not defined"
> 
> From your example it's not quite clear which words are placeholders
> and which
> are reserved words...
> 
> [...]

They're all placeholders :) because the syntax is different in XML, pcs
and crm shell. The XML syntax is described at:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#idm47160816617536

In this case, a separate meta_attributes block for the operation
would contain something like this:
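
(A rough sketch: I'm assuming the ping attribute uses the agent's
default name "pingd", and the id values are made up.)

    <meta_attributes id="rsc-monitor-meta">
      <rule id="rsc-monitor-rule" boolean-op="or" score="INFINITY">
        <expression id="rsc-monitor-ping-low" attribute="pingd"
                    operation="lt" value="1" type="integer"/>
        <expression id="rsc-monitor-ping-undef" attribute="pingd"
                    operation="not_defined"/>
      </rule>
      <nvpair id="rsc-monitor-enabled" name="enabled" value="false"/>
    </meta_attributes>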

See the man pages for pcs and crm shell for their equivalent syntax.
(Or maybe someone more familiar can reply with it.)
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] Re: Q: rule-based operation pause/freeze?

2020-03-06 Thread Ken Gaillot
On Fri, 2020-03-06 at 08:19 +0100, Ulrich Windl wrote:
> > > > Ondrej  schrieb am 06.03.2020 um
> > > > 01:45 in
> 
> Nachricht
> <
> 7499_1583455563_5E619D4B_7499_1105_1_2a18c389-059e-cf6f-a840-dec26437fdd1@famer
> .cz>:
> > On 3/5/20 9:24 PM, Ulrich Windl wrote:
> > > Hi!
> > > 
> > > I'm wondering whether it's possible to pause/freeze specific
> > > resource 
> > 
> > operations through rules.
> > > The idea is something like this: If your monitor operation needes
> > > (e.g.) 
> > 
> > some external NFS server, and thst NFS server is known to be down,
> > it seems
> > better to delay the monitor operation until NFS is up again, rather
> > than 
> > forcing a monitor timeout that will most likely be followed by a
> > stop 
> > operation that will also time out, eventually killing the node
> > (which has no
> > problem itself).
> > > 
> > > As I guess it's not possible right now, what would be needed to
> > > make this 
> > 
> > work?
> > > In case it's possible, how would an example scenario look like?
> > > 
> > > Regards,
> > > Ulrich
> > > 
> > 
> > Hi Ulrich,
> > 
> > To determine _when_ this state should be enabled and disabled would
> > be a 
> > different story.
> 
> For the moment let's assume I know it ;-) ping-node, maybe.

I believe that limited scenario is possible, but imperfectly.

You could configure an ocf:pacemaker:ping resource to ping the NFS
server IP. Then in the dependent resource, configure the recurring
monitor logically like this:

  monitor interval=N
 meta attributes
rule when ping attribute lt 1 or not defined
enabled=false

The node attribute will be updated only once the ping resource's
monitor detects that the IP is gone, so there is a window, between when
the IP actually disappears and when the attribute changes, in which the
problem can still occur. Also, the NFS server could have problems that
do not make the IP unpingable, and those situations would still hit the
issue.
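
For example, the ping resource might be set up along these lines (the
IP and names are placeholders; the agent's default attribute name is
"pingd"):

    pcs resource create ping-nfs ocf:pacemaker:ping \
        host_list=192.0.2.50 dampen=5s op monitor interval=10s
    pcs resource clone ping-nfs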
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] Re: clusterlabs.org upgrade done

2020-03-05 Thread Ken Gaillot
On Wed, 2020-03-04 at 10:44 +0100, Valentin Vidić wrote:
> AFAICT from the reports, the mail I send to the list might not get
> delivered, perhaps this is causing the unsubscribe too:
> 
>   <record>
>     <row>
>       <source_ip>78.46.95.29</source_ip>
>       <count>2</count>
>       <policy_evaluated>
>         <disposition>reject</disposition>
>         <dkim>fail</dkim>
>         <spf>fail</spf>
>       </policy_evaluated>
>     </row>
>     <identifiers>
>       <header_from>valentin-vidic.from.hr</header_from>
>     </identifiers>
>     <auth_results>
>       <dkim>
>         <domain>valentin-vidic.from.hr</domain>
>         <result>permerror</result>
>       </dkim>
>       <spf>
>         <domain>clusterlabs.org</domain>
>         <result>none</result>
>       </spf>
>     </auth_results>
>   </record>
> 
> For DKIM the problem is that list modifies Subject and body so
> the signature is not valid anymore. The list would need to remove
> DKIM headers, change the From field to list address and perhaps
> add DKIM signature of its own. Another options is for the list
> to stop modifying messages:
> https://begriffs.com/posts/2018-09-18-dmarc-mailing-list.html

Hmm, not sure what the best approach is. I think some people like
having the [ClusterLabs] tag in the subject line. If anyone has
suggested config changes for mailman 2, I can take a look.

> For SPF if would be good to add SPF records into DNS for
> clusterlabs.org
> domain.

We definitely should add SPF records. That might help the "not being
delivered" issue, if mail servers are doing a "SPF or DKIM must pass"
test.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] Re: clusterlabs.org upgrade done

2020-03-05 Thread Ken Gaillot
On Wed, 2020-03-04 at 10:05 +0200, Strahil Nikolov wrote:
> Maybe I will be unsubscribed every 10th email instead of every 5th
> one.

Hi Strahil,

What sort of issue are you seeing exactly? Is your account being
unsubscribed from the list automatically, or are you not receiving some
of the emails sent by the list?
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Debian 10 pacemaker - CIB did not pass schema validation

2020-03-05 Thread Ken Gaillot
On Thu, 2020-03-05 at 11:44 +, Bala Mutyam wrote:
> Hi Strahil,
> 
> Apologies for my delay. I've attached the config below.
> 
> Here is the new error:
> 
> crm_verify --verbose --xml-file=/tmp/ansible.yJMg2z.xml
> /tmp/ansible.yJMg2z.xml:28: element primitive: Relax-NG validity
> error : Invalid sequence in interleave
> /tmp/ansible.yJMg2z.xml:28: element primitive: Relax-NG validity
> error : Element primitive failed to validate content
> /tmp/ansible.yJMg2z.xml:28: element clone: Relax-NG validity error :
> Invalid sequence in interleave
> /tmp/ansible.yJMg2z.xml:28: element clone: Relax-NG validity error :
> Element clone failed to validate content
> /tmp/ansible.yJMg2z.xml:19: element primitive: Relax-NG validity
> error : Element resources has extra content: primitive
> (main)  error: CIB did not pass schema validation
> Errors found during check: config not valid

The attached config doesn't have any clone elements, so I'm guessing
it's not the /tmp/ansible.yJMg2z.xml mentioned above? The syntax in
that tmp file is not valid (somewhere in the clone and primitive tags).
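
One way to sidestep hand-built XML entirely is to let pcs create the
resources and generate the CIB for you, e.g. something like this (names
and the IP are placeholders, and it assumes a squid systemd unit
exists):

    pcs resource create vip1 ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s \
        --group proxy_group
    pcs resource create squid_proxy systemd:squid --group proxy_group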

> 
> Thanks
> Bala
> 
> 
> On Mon, Mar 2, 2020 at 5:26 PM Strahil Nikolov  > wrote:
> > On March 2, 2020 1:22:55 PM GMT+02:00, Bala Mutyam <
> > koti.reddy...@gmail.com> wrote:
> > >Hi All,
> > >
> > >I'm trying to setup Pacemaker cluster with 2 VIPs and a group with
> > the
> > >VIPs
> > >and service for squid proxy. But the CIB verification is failing
> > with
> > >below
> > >errors. Could someone help me with this please?
> > >
> > >Errors:
> > >
> > >crm_verify --verbose --xml-file=/tmp/ansible.oGK0ye.xml
> > >/tmp/ansible.oGK0ye.xml:17: element primitive: Relax-NG validity
> > error
> > >:
> > >Invalid sequence in interleave
> > >/tmp/ansible.oGK0ye.xml:17: element primitive: Relax-NG validity
> > error
> > >:
> > >Element primitive failed to validate content
> > >/tmp/ansible.oGK0ye.xml:17: element group: Relax-NG validity error
> > :
> > >Invalid sequence in interleave
> > >/tmp/ansible.oGK0ye.xml:17: element group: Relax-NG validity error
> > :
> > >Element group failed to validate content
> > >/tmp/ansible.oGK0ye.xml:17: element group: Relax-NG validity error
> > :
> > >Element resources has extra content: group
> > >(main)  error: CIB did not pass schema validation
> > >Errors found during check: config not valid
> > 
> > And your config is ?
> > 
> > Best Regards,
> > Strahil Nikolov
> 
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Resource monitors crash, restart, leave core files

2020-03-05 Thread Ken Gaillot
mount Filesystem device="/dev/drbd0"  
> directory="/data" fstype="ext4" ; \
>pcs constraint colocation add mount with drbd-master
> INFINITY  
> with-rsc-role=Master ; \
>pcs constraint order promote drbd-master then mount ; \
>pcs resource create vip ocf:heartbeat:IPaddr2
> ip=192.168.2.73  
> cidr_netmask=24 op monitor interval=30s ; \
>pcs constraint colocation add vip with drbd-master INFINITY  
> with-rsc-role=Master ; \
>pcs constraint order mount then vip ; \
>pcs resource create nfsd nfsserver nfs_shared_infodir=/data ;
> \
>pcs resource create nfscfg exportfs
> clientspec="192.168.2.55"  
> options=rw,no_subtree_check,no_root_squash directory=/data fsid=0 ; \
>pcs constraint colocation add nfsd with vip ; \
>pcs constraint colocation add nfscfg with nfsd ; \
>pcs constraint order vip then nfsd ; \
>pcs constraint order nfsd then nfscfg
> 
> 
> 
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] clusterlabs.org upgrade done

2020-02-29 Thread Ken Gaillot
Hi all,

The clusterlabs.org server OS upgrade is (mostly) done.

Services are back up, with the exception of some cosmetic issues and
the source code continuous integration testing for ClusterLabs github
projects (ci.kronosnet.org). Those will be dealt with at a more
reasonable time :)
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-28 Thread Ken Gaillot
On Fri, 2020-02-28 at 09:37 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 27.02.2020 um
> > > > 23:43 in Nachricht
> 
> <43512a11c2ddffbabeee11cf4cb509e4e5dc98ca.ca...@redhat.com>:
> 
> [...]
> > 
> > > 2. Resources/groups  are stopped  (target-role=stopped)
> > > 3. Node exits the cluster cleanly when no resources are  running
> > > any
> > > more
> > > 4. The node rejoins the cluster  after  the reboot
> > > 5. A  positive (on the rebooted node) & negative (ban on the rest
> > > of
> > > the nodes) constraints  are  created for the marked  in step 1
> > > resources
> > > 6.  target-role is  set back to started and the resources are
> > > back
> > > and running
> > > 7. When each resource group (or standalone resource)  is  back
> > > online
> > > -  the mark in step 1  is removed  and any location
> > > constraints  (cli-ban &  cli-prefer)  are  removed  for the
> > > resource/group.
> > 
> > Exactly, that's effectively what happens.
> 
> May I ask how robust the mechanism will be?
> For example if you do  a "resource restart" there are two target
> roles (each made persistent): stopped and started. If the node
> performing the operation is fenced (we had that a few times). The
> resources may remain "stopped" until started manually again.
> I see a similar issue with this mechanism.

Corner cases were carefully considered with this one. If a node is
fenced, its entire CIB status section is cleared, which will include
shutdown locks. I considered alternative implementations under the
hood, and the main advantage of the one chosen is that setting and
clearing the lock are atomic with recording the action results that
cause them. That eliminates a whole lot of possibilities for the type
of problem you mention. Also, there are multiple backstops to clear
locks if anything is fishy, such as if the node is unclean, the
resource somehow started elsewhere while the lock was in effect, a
locked resource is removed from the configuration while it is down,
etc.

The one area I don't consider mature yet is Pacemaker Remote nodes. I'd
recommend using the feature only in a cluster without them. This is due
mainly to a (documented) limitation that manual lock clearing and
shutdown-lock-limit only work if the remote connection is disabled
after stopping the node, which sort of defeats the "hands off" goal.
But also I think using locks with remote nodes requires more testing.

> 
> [...]
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
On Thu, 2020-02-27 at 22:39 +0300, Andrei Borzenkov wrote:
> 27.02.2020 20:54, Ken Gaillot пишет:
> > On Thu, 2020-02-27 at 18:43 +0100, Jehan-Guillaume de Rorthais
> > wrote:
> > > > > Speaking about shutdown, what is the status of clean shutdown
> > > > > of
> > > > > the
> > > > > cluster handled by Pacemaker? Currently, I advice to stop
> > > > > resources
> > > > > gracefully (eg. using pcs resource disable [...]) before
> > > > > shutting
> > > > > down each
> > > > > nodes either by hand or using some higher level tool (eg. pcs
> > > > > cluster stop
> > > > > --all).  
> > > > 
> > > > I'm not sure why that would be necessary. It should be
> > > > perfectly
> > > > fine
> > > > to stop pacemaker in any order without disabling resources.
> > > 
> > > Because resources might move around during the shutdown sequence.
> > > It
> > > might
> > > not be desirable as some resource migration can be heavy, long,
> > > interfere
> > > with shutdown, etc. I'm pretty sure this has been discussed in
> > > the
> > > past.
> > 
> > Ah, that makes sense, I hadn't thought about that.
> 
> Is not it exactly what shutdown-lock does? It prevents resource
> migration when stopping pacemaker so my expectation is that if we
> stop
> pacemaker on all nodes no resource is moved. Or what am I missing?

shutdown-lock would indeed handle this, if you want the behavior
whenever any node is shut down. However for this purpose, I could see
some users wanting the behavior when shutting down all nodes, but not
when shutting down just one node.

BTW if all nodes shut down, any shutdown locks are cleared.
Practically, this is because they are stored in the CIB status section,
which goes away with the cluster. Logically, I could see arguments for
and against, but this makes sense.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
On Thu, 2020-02-27 at 20:42 +0200, Strahil Nikolov wrote:
> On February 27, 2020 7:00:36 PM GMT+02:00, Ken Gaillot <
> kgail...@redhat.com> wrote:
> > On Thu, 2020-02-27 at 17:28 +0100, Jehan-Guillaume de Rorthais
> > wrote:
> > > On Thu, 27 Feb 2020 09:48:23 -0600
> > > Ken Gaillot  wrote:
> > > 
> > > > On Thu, 2020-02-27 at 15:01 +0100, Jehan-Guillaume de Rorthais
> > > > wrote:
> > > > > On Thu, 27 Feb 2020 12:24:46 +0100
> > > > > "Ulrich Windl"  wrote:
> > > > >   
> > > > > > > > > Jehan-Guillaume de Rorthais  schrieb
> > > > > > > > > am
> > > > > > > > > 27.02.2020 um
> > > > > > 
> > > > > > 11:05 in
> > > > > > Nachricht <20200227110502.3624cb87@firost>:
> > > > > > 
> > > > > > [...]  
> > > > > > > What about something like "lock‑location=bool" and
> > > > > > 
> > > > > > For "lock-location" I would assume the value is a
> > > > > > "location". I
> > > > > > guess you
> > > > > > wanted a "use-lock-location" Boolean value.  
> > > > > 
> > > > > Mh, maybe "lock-current-location" would better reflect what I
> > > > > meant.
> > > > > 
> > > > > The point is to lock the resource on the node currently
> > > > > running
> > > > > it.  
> > > > 
> > > > Though it only applies for a clean node shutdown, so that has
> > > > to be
> > > > in
> > > > the name somewhere. The resource isn't locked during normal
> > > > cluster
> > > > operation (it can move for resource or node failures, load
> > > > rebalancing,
> > > > etc.).
> > > 
> > > Well, I was trying to make the new feature a bit wider than just
> > > the
> > > narrow shutdown feature.
> > > 
> > > Speaking about shutdown, what is the status of clean shutdown of
> > > the
> > > cluster
> > > handled by Pacemaker? Currently, I advice to stop resources
> > > gracefully (eg.
> > > using pcs resource disable [...]) before shutting down each nodes
> > > either by hand
> > > or using some higher level tool (eg. pcs cluster stop --all).
> > 
> > I'm not sure why that would be necessary. It should be perfectly
> > fine
> > to stop pacemaker in any order without disabling resources.
> > 
> > Start-up is actually more of an issue ... if you start corosync and
> > pacemaker on nodes one by one, and you're not quick enough, then
> > once
> > quorum is reached, the cluster will fence all the nodes that
> > haven't
> > yet come up. So on start-up, it makes sense to start corosync on
> > all
> > nodes, which will establish membership and quorum, then start
> > pacemaker
> > on all nodes. Obviously that can't be done within pacemaker so that
> > has
> > to be done manually or by a higher-level tool.
> > 
> > > Shouldn't this feature be discussed in this context as well?
> > > 
> > > [...] 
> > > > > > > it would lock the resource location (unique or clones)
> > > > > > > until
> > > > > > > the
> > > > > > > operator unlock it or the "lock‑location‑timeout" expire.
> > > > > > > No
> > > > > > > matter what
> > > > > > > happen to the resource, maintenance mode or not.
> > > > > > > 
> > > > > > > At a first look, it looks to peer nicely with
> > > > > > > maintenance‑mode
> > > > > > > and avoid resource migration after node reboot.
> > > > 
> > > > Maintenance mode is useful if you're updating the cluster stack
> > > > itself
> > > > -- put in maintenance mode, stop the cluster services (leaving
> > > > the
> > > > managed services still running), update the cluster services,
> > > > start
> > > > the
> > > > cluster services again, take out of maintenance mode.
> > > > 
> > > > This is useful if you're rebooting the node for a kernel update
> > > > (for
> > > > example). Apply the update, reboot the node. The cluster takes
> > 

Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
On Thu, 2020-02-27 at 18:43 +0100, Jehan-Guillaume de Rorthais wrote:
> > > Speaking about shutdown, what is the status of clean shutdown of
> > > the
> > > cluster handled by Pacemaker? Currently, I advice to stop
> > > resources
> > > gracefully (eg. using pcs resource disable [...]) before shutting
> > > down each
> > > nodes either by hand or using some higher level tool (eg. pcs
> > > cluster stop
> > > --all).  
> > 
> > I'm not sure why that would be necessary. It should be perfectly
> > fine
> > to stop pacemaker in any order without disabling resources.
> 
> Because resources might move around during the shutdown sequence. It
> might
> not be desirable as some resource migration can be heavy, long,
> interfere
> with shutdown, etc. I'm pretty sure this has been discussed in the
> past.

Ah, that makes sense, I hadn't thought about that. FYI, there is a
stop-all-resources cluster property that would let you disable
everything in one step.
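
For example (pcs syntax):

    pcs property set stop-all-resources=true
    # ... stop the cluster / do the maintenance ...
    pcs property set stop-all-resources=false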
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] More summit photos

2020-02-27 Thread Ken Gaillot
Hi all,

The ClusterLabs Summit wiki has been updated with a few more photos.
Enjoy ...

http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020#Photos
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
On Thu, 2020-02-27 at 17:28 +0100, Jehan-Guillaume de Rorthais wrote:
> On Thu, 27 Feb 2020 09:48:23 -0600
> Ken Gaillot  wrote:
> 
> > On Thu, 2020-02-27 at 15:01 +0100, Jehan-Guillaume de Rorthais
> > wrote:
> > > On Thu, 27 Feb 2020 12:24:46 +0100
> > > "Ulrich Windl"  wrote:
> > >   
> > > > > > > Jehan-Guillaume de Rorthais  schrieb am
> > > > > > > 27.02.2020 um
> > > > 
> > > > 11:05 in
> > > > Nachricht <20200227110502.3624cb87@firost>:
> > > > 
> > > > [...]  
> > > > > What about something like "lock‑location=bool" and
> > > > 
> > > > For "lock-location" I would assume the value is a "location". I
> > > > guess you
> > > > wanted a "use-lock-location" Boolean value.  
> > > 
> > > Mh, maybe "lock-current-location" would better reflect what I
> > > meant.
> > > 
> > > The point is to lock the resource on the node currently running
> > > it.  
> > 
> > Though it only applies for a clean node shutdown, so that has to be
> > in
> > the name somewhere. The resource isn't locked during normal cluster
> > operation (it can move for resource or node failures, load
> > rebalancing,
> > etc.).
> 
> Well, I was trying to make the new feature a bit wider than just the
> narrow shutdown feature.
> 
> Speaking about shutdown, what is the status of clean shutdown of the
> cluster
> handled by Pacemaker? Currently, I advice to stop resources
> gracefully (eg.
> using pcs resource disable [...]) before shutting down each nodes
> either by hand
> or using some higher level tool (eg. pcs cluster stop --all).

I'm not sure why that would be necessary. It should be perfectly fine
to stop pacemaker in any order without disabling resources.

Start-up is actually more of an issue ... if you start corosync and
pacemaker on nodes one by one, and you're not quick enough, then once
quorum is reached, the cluster will fence all the nodes that haven't
yet come up. So on start-up, it makes sense to start corosync on all
nodes, which will establish membership and quorum, then start pacemaker
on all nodes. Obviously that can't be done within pacemaker so that has
to be done manually or by a higher-level tool.
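
A rough sketch of that start-up sequence from an admin host (node names
and ssh access are assumptions; higher-level tools such as "pcs cluster
start --all" do roughly the same thing for you):

    for node in node1 node2 node3; do
        ssh "$node" systemctl start corosync
    done
    for node in node1 node2 node3; do
        ssh "$node" systemctl start pacemaker
    done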

> Shouldn't this feature be discussed in this context as well?
> 
> [...] 
> > > > > it would lock the resource location (unique or clones) until
> > > > > the
> > > > > operator unlock it or the "lock‑location‑timeout" expire. No
> > > > > matter what
> > > > > happen to the resource, maintenance mode or not.
> > > > > 
> > > > > At a first look, it looks to peer nicely with
> > > > > maintenance‑mode
> > > > > and avoid resource migration after node reboot.
> > 
> > Maintenance mode is useful if you're updating the cluster stack
> > itself
> > -- put in maintenance mode, stop the cluster services (leaving the
> > managed services still running), update the cluster services, start
> > the
> > cluster services again, take out of maintenance mode.
> > 
> > This is useful if you're rebooting the node for a kernel update
> > (for
> > example). Apply the update, reboot the node. The cluster takes care
> > of
> > everything else for you (stop the services before shutting down and
> > do
> > not recover them until the node comes back).
> 
> I'm a bit lost. If resource doesn't move during maintenance mode,
> could you detail a scenario where we should ban it explicitly from
> other node to
> secure its current location when getting out of maintenance? Isn't it

Sorry, I was unclear -- I was contrasting maintenance mode with
shutdown locks.

You wouldn't need a ban with maintenance mode. However maintenance mode
leaves any active resources running. That means the node shouldn't be
rebooted in maintenance mode, because those resources will not be
cleanly stopped.

With shutdown locks, the active resources are cleanly stopped. That
does require a ban of some sort because otherwise the resources will be
recovered on another node.

> excessive
> precaution? Is it just to avoid is to move somewhere else when
> exiting
> maintenance-mode? If the resource has a preferred node, I suppose the
> location
> constraint should take care of this, isn't it?

Having a preferred node doesn't prevent the resource from starting
elsewhere if the preferred node is down (or in standby, or otherwise
ineligible to run the resource). Even a +INFINITY constraint allows the
resource to run elsewhere in that case.

[ClusterLabs] *** Correction *** clusterlabs.org/corosync.org/kronosnet.org planned outage this Saturday 2020-02-29

2020-02-27 Thread Ken Gaillot
We've rescheduled the window for the OS upgrade to this Saturday, Feb.
29, 2020, from roughly 09:00 UTC to 18:00 UTC.

This will result in outages of the clusterlabs.org website, bugzilla,
and wiki. The mailing lists will also be unavailable, but mail gateways
will generally retry sent messages so there shouldn't be any missed messages.

This server also hosts some corosync.org and kronosnet.org services,
which will experience outages as well.

And no the server isn't HA. :) That would be nice but even in that case
there would be some downtime for a major OS upgrade since database
tables etc. will need upgrading.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
On Thu, 2020-02-27 at 08:12 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 26.02.2020 um
> > > > 16:41 in Nachricht
> 
> <2257e2a1e5fd88ae2b915b8241a8e8c9e150b95b.ca...@redhat.com>:
> 
> [...]
> > I considered a per-resource and/or per-node setting, but the target
> > audience is someone who wants things as simple as possible. A per-
> > node
> 
> Actually, while it may seem simple, it adds quite a lot of additional
> complexity, and I'm still not convinced that this is really needed.
> 
> [...]
> 
> Regards,
> Ulrich

I think that was the reaction of just about everyone (including myself)
the first time they heard about it :)

The main justification is that other HA software offers the capability,
so this removes an obstacle to those users switching to pacemaker.

However the fact that it's a blocking point for users who might
otherwise switch points out that it does have real-world value.

It might be a narrow use case, but it's one that involves scale, which
is something we're always striving to better support. If an
organization has hundreds or thousands of clusters, yet those still are
just a small fraction of the total servers being administered at the
organization, expertise becomes a major limiting factor. In such a case
you don't want to waste your cluster admins' time on late-night routine
OS updates.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-27 Thread Ken Gaillot
thwhile.

> > But you want the resources to be down while the node boots, right?
> > How can
> > that concept be "married with" the concept of high availablility?
> 
> The point here is to avoid moving resources during planed
> maintenance/downtime
> as it would require longer maintenance duration (thus longer
> downtime) than a
> simple reboot with no resource migration.
> 
> Even a resource in HA can have planed maintenance :)

Right. I jokingly call this feature "medium availability" but really it
is just another way to set a planned maintenance window.

> > "We have a HA cluster and HA resources, but when we boot a node
> > those
> > HA-resources will be down while the node boots." How is that
> > different from
> > not having a HA cluster, or taking those resources temporarily away
> > from the
> > HA cluster? (That was my intitial objection: Why not simply ignore
> > resource
> > failures for some time?)

HA recovery is still done for resource failures and node failures, just
not clean node shutdowns. A clean node shutdown is one where the node
notifies the DC that it wants to leave the cluster (which is what
happens in the background when you stop cluster services on a node).

Also, all other cluster resource management features being used, like
utilization attributes, placement strategies, node health attributes,
time-based rules, etc., are all still in effect.

> Unless I'm wrong, maintenance mode does not secure the current
> location of
> resources after reboots.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

[ClusterLabs] FYI clusterlabs.org planned outage this Friday 2020-02-28

2020-02-26 Thread Ken Gaillot
Hi all,

We will be upgrading the OS on clusterlabs.org this Friday, Feb. 28,
2020, sometime after 18:00 UTC.

This will result in outages of the clusterlabs.org website, bugzilla,
and wiki. The mailing lists will also be unavailable, but mail gateways
will generally retry sent messages so there shouldn't be any missed
messages.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 06:52 +0200, Strahil Nikolov wrote:
> On February 26, 2020 12:30:24 AM GMT+02:00, Ken Gaillot <
> kgail...@redhat.com> wrote:
> > Hi all,
> > 
> > We are a couple of months away from starting the release cycle for
> > Pacemaker 2.0.4. I'll highlight some new features between now and
> > then.
> > 
> > First we have shutdown locks. This is a narrow use case that I
> > don't
> > expect a lot of interest in, but it helps give pacemaker feature
> > parity
> > with proprietary HA systems, which can help users feel more
> > comfortable
> > switching to pacemaker and open source.
> > 
> > The use case is a large organization with few cluster experts and
> > many
> > junior system administrators who reboot hosts for OS updates during
> > planned maintenance windows, without any knowledge of what the host
> > does. The cluster runs services that have a preferred node and take
> > a
> > very long time to start.
> > 
> > In this scenario, pacemaker's default behavior of moving the
> > service to
> > a failover node when the node shuts down, and moving it back when
> > the
> > node comes back up, results in needless downtime compared to just
> > leaving the service down for the few minutes needed for a reboot.
> > 
> > The goal could be accomplished with existing pacemaker features.
> > Maintenance mode wouldn't work because the node is being rebooted.
> > But
> > you could figure out what resources are active on the node, and use
> > a
> > location constraint with a rule to ban them on all other nodes
> > before
> > shutting down. That's a lot of work for something the cluster can
> > figure out automatically.
> > 
> > Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
> > defaulting to false to keep the current behavior. If shutdown-lock
> > is
> > set to true, any resources active on a node when it is cleanly shut
> > down will be "locked" to the node (kept down rather than recovered
> > elsewhere). Once the node comes back up and rejoins the cluster,
> > they
> > will be "unlocked" (free to move again if circumstances warrant).
> > 
> > An additional cluster property, shutdown-lock-limit, allows you to
> > set
> > a timeout for the locks so that if the node doesn't come back
> > within
> > that time, the resources are free to be recovered elsewhere. This
> > defaults to no limit.
> > 
> > If you decide while the node is down that you need the resource to
> > be
> > recovered, you can manually clear a lock with "crm_resource --
> > refresh"
> > specifying both --node and --resource.
> > 
> > There are some limitations using shutdown locks with Pacemaker
> > Remote
> > nodes, so I'd avoid that with the upcoming release, though it is
> > possible.
> 
> Hi Ken,
> 
> Can it be 'shutdown-lock-timeout' instead of 'shutdown-lock-limit' ?

I thought about that, but I wanted to be clear that this is a maximum
bound. "timeout" could be a little ambiguous as to whether it is a
maximum or how long a lock will always last. On the other hand, "limit"
does not make it obvious that the value is a time duration. I could see
it going either way.

> Also, I think that the default value could be something more
> reasonable - like 30min. Usually 30min are OK if you don't patch the
> firmware and 180min are the maximum if you do patch the firmware.

The primary goal is to ease the transition from other HA software,
which doesn't even offer the equivalent of shutdown-lock-limit, so I
wanted the default to match that behavior. Also "usually" is a mine
field :)

> The use case is odd. I have been in the same situation, and our
> solution was to train the team (internally) instead of using such
> feature.

Right, this is designed for situations where that isn't feasible :)

Though even with trained staff, this does make it easier, since you
don't have to figure out yourself what's active on the node.

> The interesting part will be the behaviour of the local cluster
> stack, when updates  happen. The risk is high for the node to be
> fenced due to unresponsiveness (during the update) or if
> corosync/pacemaker  use an old function changed in the libs.

That is a risk, but presumably one that a user transitioning from
another product would already be familiar with.

> Best Regards,
> Strahil Nikolov
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Q: pseudo actions load_stopped_*, all_stopped

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 11:58 +0100, Ulrich Windl wrote:
> Hi!
> 
> I'm wondering what the pseudo actions in output of crm simulate are:
> "load_stopped" and "stopped_all". Are these some
> synchronization points? see them between (monitor, stop) and (start,
> monitor)
> 
> Regards,
> Ulrich

Exactly, they typically exist as points other actions can be internally
ordered against.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 10:33 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 25.02.2020 um
> > > > 23:30 in
> 
> Nachricht
> <29058_1582669837_5E55A00B_29058_3341_1_f8e8426d0c2cf098f88fb6330e8a8
> 0586f03043a
> ca...@redhat.com>:
> > Hi all,
> > 
> > We are a couple of months away from starting the release cycle for
> > Pacemaker 2.0.4. I'll highlight some new features between now and
> > then.
> > 
> > First we have shutdown locks. This is a narrow use case that I
> > don't
> > expect a lot of interest in, but it helps give pacemaker feature
> > parity
> > with proprietary HA systems, which can help users feel more
> > comfortable
> > switching to pacemaker and open source.
> > 
> > The use case is a large organization with few cluster experts and
> > many
> > junior system administrators who reboot hosts for OS updates during
> > planned maintenance windows, without any knowledge of what the host
> > does. The cluster runs services that have a preferred node and take
> > a
> > very long time to start.
> > 
> > In this scenario, pacemaker's default behavior of moving the
> > service to
> > a failover node when the node shuts down, and moving it back when
> > the
> > node comes back up, results in needless downtime compared to just
> > leaving the service down for the few minutes needed for a reboot.
> > 
> > The goal could be accomplished with existing pacemaker features.
> > Maintenance mode wouldn't work because the node is being rebooted.
> > But
> > you could figure out what resources are active on the node, and use
> > a
> > location constraint with a rule to ban them on all other nodes
> > before
> > shutting down. That's a lot of work for something the cluster can
> > figure out automatically.
> > 
> > Pacemaker 2.0.4 will offer a new cluster property, shutdown‑lock,
> > defaulting to false to keep the current behavior. If shutdown‑lock
> > is
> > set to true, any resources active on a node when it is cleanly shut
> > down will be "locked" to the node (kept down rather than recovered
> > elsewhere). Once the node comes back up and rejoins the cluster,
> > they
> > will be "unlocked" (free to move again if circumstances warrant).
> 
> I'm not very happy with the wording: What about a per-resource
> feature
> "tolerate-downtime" that specifies how long this resource may be down
> without
> causing actions from the cluster. I think it would be more useful
> than some
> global setting. Maybe complement that per-resource feature with a
> per-node
> feature using the same name.

I considered a per-resource and/or per-node setting, but the target
audience is someone who wants things as simple as possible. A per-node
setting would mean that newly added nodes don't have it by default,
which could be easily overlooked. (As an aside, I would someday like to
see a "node defaults" section that would provide default values for
node attributes. That could potentially replace several current
cluster-wide options. But it's a low priority.)

I didn't mention this in the announcements, but certain resource types
are excluded:

Stonith resources and Pacemaker Remote connection resources are never
locked. That makes sense because they are more a sort of internal
pseudo-resource than an actual end-user service. Stonith resources are
just monitors of the fence device, and a connection resource starts a
(remote) node rather than a service.

Also, with the current implementation, clone and bundle instances are
not locked. This would only matter for unique clones, and
clones/bundles with clone-max/replicas set below the total number of
nodes. If this becomes a high demand, we could add it in the future.
Similarly for the master role of promotable clones.

Given those limitations, I think a per-resource option would have more
potential to be confusing than helpful. But, it should be relatively
simple to extend this as a per-resource option, with the global option
as a backward-compatible default, if the demand arises.

> I think it's very important to specify and document that mode
> comparing it to
> maintenance mode.

The proposed documentation is in the master branch if you want to proof
it and make suggestions. If you have the prerequisites installed you
can run "make -C doc" and view it locally, otherwise you can browse the
source (search for "shutdown-lock"):

https://github.com/ClusterLabs/pacemaker/blob/master/doc/Pacemaker_Explained/en-US/Ch-Options.txt

There is currently no explicit comparison with maintenance-mode because
maintenance-mode still behaves according to its document

Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 14:45 +0900, Ondrej wrote:
> Hi Ken,
> 
> On 2/26/20 7:30 AM, Ken Gaillot wrote:
> > The use case is a large organization with few cluster experts and
> > many
> > junior system administrators who reboot hosts for OS updates during
> > planned maintenance windows, without any knowledge of what the host
> > does. The cluster runs services that have a preferred node and take
> > a
> > very long time to start.
> > 
> > In this scenario, pacemaker's default behavior of moving the
> > service to
> > a failover node when the node shuts down, and moving it back when
> > the
> > node comes back up, results in needless downtime compared to just
> > leaving the service down for the few minutes needed for a reboot.
> 
> 1. Do I understand correctly that this scenario applies when the system
> gracefully reboots (the pacemaker service is stopped by the system
> shutting down), and also when users, for example, manually stop the
> cluster but don't reboot the node - something like `pcs cluster stop`?

Exactly. The idea is the user wants HA for node or resource failures,
but not clean cluster stops.

> > If you decide while the node is down that you need the resource to
> > be
> > recovered, you can manually clear a lock with "crm_resource --
> > refresh"
> > specifying both --node and --resource.
> 
> 2. I'm interested in how the situation will look in the 'crm_mon'
> output or in 'crm_simulate'. Will there be some indication of why the
> resources are not moving, like 'blocked-shutdown-lock', or will they
> just appear as not moving (Stopped)?

Yes, resources will be shown as "Stopped (LOCKED)".

> Will this look different from a situation where, for example, the
> resource is just not allowed by a constraint to run on other nodes?

Only in logs and cluster status; internally it is implemented as
implicit constraints banning the resources from every other node.

Another point I should clarify is that the lock/constraint remains in
place until the node rejoins the cluster *and* the resource starts
again on that node. That ensures that the node is preferred even if
stickiness was the only thing holding the resource to the node
previously.

However once the resource starts on the node, the lock/constraint is
lifted, and the resource could theoretically immediately move to
another node. An example would be if there were no stickiness and new
resources were added to the configuration while the node was down, so
load balancing calculations end up different. Another would be if a
time-based rule kicked in while the node was down. However this feature
is only expected or likely to be used in a cluster where there are
preferred nodes, enforced by stickiness and/or location constraints, so
it shouldn't be significant in practice.

Special care was taken in a number of corner cases:

* If the resource's start on the rejoined node fails, the lock is lifted.

* If the node is fenced (e.g. manually via stonith_admin) while it is
down, the lock is lifted.

* If the resource somehow started on another node while the node was
down (which shouldn't be possible, but just as a fail-safe), the lock
is ignored when the node rejoins.

* Maintenance mode, unmanaged resources, etc., work the same with
shutdown locks as they would with any other constraint.

> Thanks for heads up
> 
> --
> Ondrej Famera
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-25 Thread Ken Gaillot
Hi all,

We are a couple of months away from starting the release cycle for
Pacemaker 2.0.4. I'll highlight some new features between now and then.

First we have shutdown locks. This is a narrow use case that I don't
expect a lot of interest in, but it helps give pacemaker feature parity
with proprietary HA systems, which can help users feel more comfortable
switching to pacemaker and open source.

The use case is a large organization with few cluster experts and many
junior system administrators who reboot hosts for OS updates during
planned maintenance windows, without any knowledge of what the host
does. The cluster runs services that have a preferred node and take a
very long time to start.

In this scenario, pacemaker's default behavior of moving the service to
a failover node when the node shuts down, and moving it back when the
node comes back up, results in needless downtime compared to just
leaving the service down for the few minutes needed for a reboot.

The goal could be accomplished with existing pacemaker features.
Maintenance mode wouldn't work because the node is being rebooted. But
you could figure out what resources are active on the node, and use a
location constraint with a rule to ban them on all other nodes before
shutting down. That's a lot of work for something the cluster can
figure out automatically.
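
For illustration, the manual approach would look something like this
with pcs (resource and node names here are made up; you would repeat it
for every resource active on the node and remove the constraints again
after the reboot):

pcs constraint location my-db rule score=-INFINITY "#uname" ne node1
# ... reboot node1 ...
pcs constraint remove <id shown by "pcs constraint --full">

The shutdown-lock feature described below does the equivalent
automatically.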

Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
defaulting to false to keep the current behavior. If shutdown-lock is
set to true, any resources active on a node when it is cleanly shut
down will be "locked" to the node (kept down rather than recovered
elsewhere). Once the node comes back up and rejoins the cluster, they
will be "unlocked" (free to move again if circumstances warrant).

An additional cluster property, shutdown-lock-limit, allows you to set
a timeout for the locks so that if the node doesn't come back within
that time, the resources are free to be recovered elsewhere. This
defaults to no limit.
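
For example (pcs syntax assumed; crm_attribute should work just as
well):

pcs property set shutdown-lock=true
pcs property set shutdown-lock-limit=30min

The 30min here is just an arbitrary example value.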

If you decide while the node is down that you need the resource to be
recovered, you can manually clear a lock with "crm_resource --refresh"
specifying both --node and --resource.
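
For example, with hypothetical resource and node names:

crm_resource --refresh --resource my-db --node node1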

There are some limitations using shutdown locks with Pacemaker Remote
nodes, so I'd avoid that with the upcoming release, though it is
possible.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Fedora 31 - systemd based resources don't start

2020-02-17 Thread Ken Gaillot
On Mon, 2020-02-17 at 17:35 +, Maverick wrote:
> 
> Hi,
> 
> When i start my cluster, most of my systemd resources won't start:
> 
> Failed Resource Actions:
>   * apache_stop_0 on boss1 'OCF_TIMEOUT' (198): call=82,
> status='Timed Out', exitreason='', last-rc-change='1970-01-01
> 01:00:54 +01:00', queued=29ms, exec=197799ms
>   * openvpn_stop_0 on boss1 'OCF_TIMEOUT' (198): call=61,
> status='Timed Out', exitreason='', last-rc-change='1970-01-01
> 01:00:54 +01:00', queued=1805ms, exec=198841ms

These show that attempts to stop failed, rather than start.

> 
> So every time I reboot my node, I need to start the resources manually
> using systemd, for example:
> 
> systemctl start apache
> 
> and then pcs resource cleanup
> 
> Resources configuration:
> 
> Clone: apache-clone
>   Meta Attrs: maintenance=false
>   Resource: apache (class=systemd type=httpd)
>Meta Attrs: maintenance=false
>Operations: monitor interval=60 timeout=100 (apache-monitor-
> interval-60)
>start interval=0s timeout=100 (apache-start-interval-
> 0s)
>stop interval=0s timeout=100 (apache-stop-interval-0s)
> 
> 
> 
> Resource: openvpn (class=systemd type=openvpn-server@01-server)
>Meta Attrs: maintenance=false
>Operations: monitor interval=60 timeout=100 (openvpn-monitor-
> interval-60)
>start interval=0s timeout=100 (openvpn-start-interval-
> 0s)
>stop interval=0s timeout=100 (openvpn-stop-interval-
> 0s)
> 
> 
> 
> Btw, if I try a debug-start / debug-stop, the mentioned resources
> start and stop OK.

Based on that, my first guess would be SELinux. Check the SELinux logs
for denials.

Also, make sure your systemd services are not enabled in systemd itself
(e.g. via systemctl enable). Clustered systemd services should be
managed by the cluster only.
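
For example, to check both of those (service names taken from your
configuration above; ausearch comes from the audit package):

ausearch -m avc -ts recent                          # recent SELinux denials
systemctl is-enabled httpd openvpn-server@01-server
systemctl disable httpd openvpn-server@01-server    # if they were enabled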
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Question about IPaddr2 & redis heartbeat, do they coordinate?

2020-02-14 Thread Ken Gaillot
On Fri, 2020-02-14 at 12:09 +0900, steven prothero wrote:
> Greetings,
> 
> Hello, I am new to pacemaker and learning much from the online
> documentation, searching on google, etc but have a specific question
> I
> am not sure about...
> 
> My setup is two servers and the focus is redis.
> 
> We use the heartbeat/IPaddr2 to update the VIP as needed, and the
> redis heartbeat to manage who is redis master...  Everything seems to
> work nicely.
> 
> The boss requested I change the system to have Redis work first, if
> trouble switch & shutdown etc and then the VIP will switch after. How
> would one heartbeat wait for the other to finish? Is that a bad/good
> idea?
> 
> As I am learning this all, from what I understand I think the two
> heartbeats are running in parallel and not coordinating together.
> Perhaps the VIP is switched quickly while the redis switch takes
> longer due to it negotiating a bit & dumping the database to disk and
> then shutting down..?
> 
> I researched the "ordering" but I think that is more about what
> services load in first so doesn't help me with this situation.
> 
> Appreciate very much any thoughts about this.
> 
> Cheers
> 
> Steve

Hi,

You most likely want to put redis and the IP in a group. That is the
same as setting both an ordering constraint (to make sure one waits for
the other to be successfully started before starting itself) and a
colocation constraint (to make sure they both start on the same node).

Whether the IP is first or last depends on how redis binds to addresses
and how you want the service to behave. If redis binds to the wildcard
address, then it doesn't matter whether the IP starts before or after,
so you can choose based on whether you want the IP to be functional
only if redis is also functional, or you want the IP to be usable for
other purposes even if redis is down. If redis binds to the specific
IP, then the IP must be first.
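
As a sketch with made-up resource names, assuming redis binds to the
specific IP (group members start in the listed order and stop in
reverse):

pcs resource group add redis-group vip redis

If redis binds to the wildcard address, either order works; list redis
first if you want the IP up only while redis is working.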
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] ClusterLabs Summit 2020: slides and photos

2020-02-11 Thread Ken Gaillot
Hi all,

ClusterLabs Summit 2020 was a wonderful experience. It was nice to see
old friends and meet new ones. For both those who attended and those
who were unable to attend, I've posted slides and photos for most of
the talks on the summit wiki page:

http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020

If anyone wants more in-depth information about the talks, I'm sure the
speakers will be happy to give more details here if asked.

Thanks to everyone involved, including all the speakers and attendees!
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Saving secret locally

2020-01-22 Thread Ken Gaillot
On Wed, 2020-01-22 at 08:01 +0200, Strahil Nikolov wrote:
> On January 21, 2020 9:23:35 PM GMT+02:00, Ken Gaillot <
> kgail...@redhat.com> wrote:
> > On Tue, 2020-01-21 at 06:29 +0200, Strahil Nikolov wrote:
> > > On January 20, 2020 5:57:18 PM GMT+02:00, Ken Gaillot <
> > > kgail...@redhat.com> wrote:
> > > > On Sat, 2020-01-18 at 20:54 +, Strahil Nikolov wrote:
> > > > > Hello Community,
> > > > > 
> > > > > 
> > > > > I have been using pacemaker in the last 2 years on SUSE who
> > > > > use
> > > > > crmsh
> > > > > and now I struggle to recall some of the knowledge I had.
> > > > > Cluster
> > > > > is
> > > > > RHEL 7.7 on oVirt/RHV .
> > > > > 
> > > > > 
> > > > > Can someone tell me the pcs command that matches to this one,
> > > > > as
> > > > > I
> > > > > don't want the password for the fencing user in the CIB :
> > > > > 
> > > > > 
> > > > > crm resource secret  set  
> > > > > 
> > > > > 
> > > > > I've been searching in the pcs --help and on 
> > > > > 
> > > > 
> > > > 
> > 
> > https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md
> > > > > , but it seems it's not there or I can't find it.
> > > > > 
> > > > > Thanks in advance.
> > > > > 
> > > > > 
> > > > > Best Regards,
> > > > > Strahil Nikolov
> > > > 
> > > > Not only does pcs not have an equivalent, but CIB secrets
> > > > aren't
> > > > even
> > > > enabled in RHEL (it's a compile-time option). I'm not aware of
> > > > any
> > > > particular reason; it probably goes back to when the feature
> > > > was
> > > > experimental. Feel free to file a bug with Red Hat asking for
> > > > it to
> > > > be
> > > > enabled.
> > > 
> > > Hi Ken,
> > > 
> > > 
> > > Thanks for your reply.
> > > I will open a bug for RHEL 8 - my guess is that it also lacks
> > > that
> > > feature, right ?
> > > 
> > > Best Regards,
> > > Strahil Nikolov
> > 
> > Correct
> 
> Bug opened at:
> https://bugzilla.redhat.com/show_bug.cgi?id=1793860

Thanks!

> Should I open issue in github ?

No, it's a RHEL-only issue.

> Best Regards,
> Strahil Nikolov
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Saving secret locally

2020-01-21 Thread Ken Gaillot
On Tue, 2020-01-21 at 06:29 +0200, Strahil Nikolov wrote:
> On January 20, 2020 5:57:18 PM GMT+02:00, Ken Gaillot <
> kgail...@redhat.com> wrote:
> > On Sat, 2020-01-18 at 20:54 +, Strahil Nikolov wrote:
> > > Hello Community,
> > > 
> > > 
> > > I have been using pacemaker in the last 2 years on SUSE who use
> > > crmsh
> > > and now I struggle to recall some of the knowledge I had. Cluster
> > > is
> > > RHEL 7.7 on oVirt/RHV .
> > > 
> > > 
> > > Can someone tell me the pcs command that matches to this one, as
> > > I
> > > don't want the password for the fencing user in the CIB :
> > > 
> > > 
> > > crm resource secret  set  
> > > 
> > > 
> > > I've been searching in the pcs --help and on 
> > > 
> > 
> > https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md
> > > , but it seems it's not there or I can't find it.
> > > 
> > > Thanks in advance.
> > > 
> > > 
> > > Best Regards,
> > > Strahil Nikolov
> > 
> > Not only does pcs not have an equivalent, but CIB secrets aren't
> > even
> > enabled in RHEL (it's a compile-time option). I'm not aware of any
> > particular reason; it probably goes back to when the feature was
> > experimental. Feel free to file a bug with Red Hat asking for it to
> > be
> > enabled.
> 
> Hi Ken,
> 
> 
> Thanks for your reply.
> I will open a bug for RHEL 8 - my guess is that it also lacks that
> feature, right ?
> 
> Best Regards,
> Strahil Nikolov

Correct
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] pcs stonith fence - Error: unable to fence

2020-01-20 Thread Ken Gaillot
On Sat, 2020-01-18 at 22:20 +, Strahil Nikolov wrote:
> Sorry for the spam.
> I figured out that I forgot to specify the domain for 'drbd1', and
> that is why it reacted like that.
> The strange thing is that pcs allows me to fence a node that is not
> in the cluster :)
> 
> Do you think this behaviour is a bug?
> If so, I can open an issue upstream.
> 
> 
> Best Regards,
> Strahil Nikolov

Leaving pcs out of the picture for a moment, from pacemaker's view the
stonith_admin command is just passing along what the user requested,
and the fencing daemon determines whether it's a valid request or not
and fails the request appropriately. So technically it's not a bug.

However I see two possible areas of improvement:

- The status display should show not just that the request failed, but
why. There is a project already planned to show why fencing was
initiated, so this would be a good addition to that. It's just a matter
of having developer time to do it.

- Since pcs is at a higher level than stonith_admin, it could
require "--force" if a given node isn't in the cluster configuration.
Feel free to file an upstream request for that.


> On Sunday, January 19, 2020 at 00:01:11 GMT+2, Strahil Nikolov <
> hunter86...@yahoo.com> wrote: 
> 
> 
> 
> 
> 
> Hi All,
> 
> 
> I am building a test cluster with fence_rhevm stonith agent on RHEL
> 7.7 and oVirt 4.3.
> When I fenced drbd3 from drbd1 using 'pcs stonith fence drbd3' - the
> fence action was successfull.
> 
> So then I decided to test the fencing the opposite way and it
> partially failed.
> 
> 
> 1. in oVirt the machine was powered off and then powered on properly
> - so the communication with the engine is OK
> 2. the command on drbd3 to fence drbd1 got stuck and then reported a
> failure even though the VM was reset.
> 
> 
> 
> Now 'pcs status' is reporting the following:
> Failed Fencing Actions:
> * reboot of drbd1 failed: delegate=drbd3.localdomain,
> client=stonith_admin.1706, origin=drbd3.localdomain,
>last-failed='Sat Jan 18 23:18:24 2020'
> 
> 
> 
> 
> My stonith is configured as follows:
> Stonith Devices: 
> Resource: ovirt_FENCE (class=stonith type=fence_rhevm) 
>  Attributes: ipaddr=engine.localdomain login=fencerdrbd@internal
> passwd=I_have_replaced_that 
> pcmk_host_map=drbd1.localdomain:drbd1;drbd2.localdomain:drbd2;drbd3.localdomain:drbd3
> power_wait=3 ssl=1 ssl_secure=1 
>  Operations: monitor interval=60s (ovirt_FENCE-monitor-interval-60s) 
> Fencing Levels:
> 
> 
> 
> Do I need to add some other settings to the fence_rhevm stonith agent
> ?
> 
> 
> Manually running the status command from drbd2/drbd3 is OK:
> 
> 
> [root@drbd3 ~]# fence_rhevm -o status --ssl --ssl-secure -a
> engine.localdomain --username='fencerdrbd@internal'  
> --password=I_have_replaced_that -n drbd1 
> Status: ON
> 
> I'm attaching the logs from the drbd2 (DC) and drbd3.
> 
> 
> Thanks in advance for your suggestions.
> 
> 
> Best Regards,
> Strahil Nikolov
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Saving secret locally

2020-01-20 Thread Ken Gaillot
On Sat, 2020-01-18 at 20:54 +, Strahil Nikolov wrote:
> Hello Community,
> 
> 
> I have been using pacemaker in the last 2 years on SUSE who use crmsh
> and now I struggle to recall some of the knowledge I had. Cluster is
> RHEL 7.7 on oVirt/RHV .
> 
> 
> Can someone tell me the pcs command that matches to this one, as I
> don't want the password for the fencing user in the CIB :
> 
> 
> crm resource secret  set  
> 
> 
> I've been searching in the pcs --help and on 
> https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md
> , but it seems it's not there or I can't find it.
> 
> Thanks in advance.
> 
> 
> Best Regards,
> Strahil Nikolov

Not only does pcs not have an equivalent, but CIB secrets aren't even
enabled in RHEL (it's a compile-time option). I'm not aware of any
particular reason; it probably goes back to when the feature was
experimental. Feel free to file a bug with Red Hat asking for it to be
enabled.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] corosync log are getting updated in message logs.

2020-01-17 Thread Ken Gaillot
On Fri, 2020-01-17 at 12:32 +, S Sathish S wrote:
> Hi All,
>  
> The version of pacemaker rpm's being used is:
>  
> corosync-2.4.4 -->  https://github.com/corosync/corosync/tree/v2.4.4
> pacemaker-2.0.2 --> 
> https://github.com/ClusterLabs/pacemaker/tree/Pacemaker-2.0.2
>  
> We see that the pacemaker-related warnings and messages are written
> to corosync.log as expected, but they are also additionally being
> printed in /var/log/messages
>  
> The corosync.conf file logging enabled as follows:
>  
> logging {
> to_logfile: yes
> logfile: /var/log/cluster/corosync.log
> to_syslog: no
> }
>  
> Though we have set to_syslog: no, /var/log/messages is still
> getting updated.
>  
> How can we suppress the corosync-related information from being
> duplicated in the messages log?
>  
> Let us know if any additional information required from our end.
>  
> Regards,
> S Sathish S 

See /etc/sysconfig/pacemaker or the equivalent on your OS (/etc/default
or wherever). You can set PCMK_logfacility=none, or raise
PCMK_logpriority to see only the most important messages in the system
log.
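
For example, in /etc/sysconfig/pacemaker (or your OS's equivalent):

# keep pacemaker messages out of syslog entirely
PCMK_logfacility=none

# or keep syslog but only for warnings and worse
# PCMK_logpriority=warning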
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Guest nodes in a pacemaker cluster

2020-01-14 Thread Ken Gaillot
On Tue, 2020-01-14 at 09:22 +0530, Prasad Nagaraj wrote:
> Hi - I have a 3 node master - slave - slave MySQL cluster setup using
> corosync\pacemaker stack.
> 
> Now I want to introduce 4 more slaves to the configuration. However,
> I do not want these to be part of the quorum or participate in DC
> election etc. Could someone guide me on an recommended approach to do
> this ?
> 
> Thanks!
> Prasad. 

Pacemaker Remote would be perfect for that:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Remote/index.html
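
As a rough sketch (hostname made up; see the document above for the
full setup, including the /etc/pacemaker/authkey that must be shared
with the cluster nodes):

# on the new slave host
systemctl enable --now pacemaker_remote

# on any existing cluster node
pcs resource create slave4 ocf:pacemaker:remote server=slave4.example.com

The remote node can then run resources but never votes on quorum or
becomes DC.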
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] pacemaker-controld getting respawned

2020-01-06 Thread Ken Gaillot
17]: notice: Recurring
> action XXX_vmc0621:267 (XXX_vmc0621_monitor_1) incomplete at
> shutdown
> Dec 30 10:02:37 vmc0621 pacemaker-controld[7517]: error: 12 resources
> were active at shutdown
> Dec 30 10:02:37 vmc0621 pacemaker-controld[7517]: notice:
> Disconnected from the executor
> Dec 30 10:02:37 vmc0621 pacemaker-controld[7517]: notice:
> Disconnected from Corosync
> Dec 30 10:02:37 vmc0621 pacemaker-controld[7517]: notice:
> Disconnected from the CIB manager
> Dec 30 10:02:37 vmc0621 pacemaker-controld[7517]: error: Could not
> recover from internal error
> Dec 30 10:02:37 vmc0621 pacemakerd[3048]: error: pacemaker-
> controld[7517] exited with status 1 (Error occurred)
> Dec 30 10:02:37 vmc0621 pacemakerd[3048]: notice: Respawning failed
> child process: pacemaker-controld
>  
> Please let us know if any further logs required from our end.
>  
> Thanks and Regards,
> S Sathish S
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Support for xt_cluster

2019-12-19 Thread Ken Gaillot
On Thu, 2019-12-19 at 15:01 +, Marcus Vinicius wrote:
> Hi, 
> 
> As far as I know, CLUSTERIP has been deprecated for some time. Recent
> distributions don't have this module in their repositories at all
> (Red Hat 8).
> 
> It seems Pacemaker still uses CLUSTERIP to clone an IP address.
> 
> For this reason, I have the following error on an Active/Active
> cluster VIP: 
> 
> Cenario: 
> 
> CentOS 8
> Pacemaker 2.0.1
> Kernel 4.18.0
> Iptables 1.8.2
> 
> # pcs resource create ClusterIP ocf:heartbeat:IPaddr2
> ip=172.18.14.100 nic=ens160 cidr_netmask=24 op monitor interval=2s
> # pcs resource clone ClusterIP
> # pcs status
> ...
> Failed Resource Actions:
> * ClusterIP_start_0 on pcsnode1 'unknown error' (1): call=40,
> status=complete, exitreason='iptables failed',
> last-rc-change='Thu Dec 19 12:30:40 2019', queued=0ms, exec=172ms
> 
> Logs: 
> 
> Dec 19 12:32:54 pcsnode1 IPaddr2(ClusterIP)[10245]: ERROR: iptables
> failed
> Dec 19 12:32:54 pcsnode1 pacemaker-execd[1436]: notice:
> ClusterIP_start_0:10245:stderr [ iptables v1.8.2 (nf_tables): chain
> name not allowed to start with `-' ]
> Dec 19 12:32:54 pcsnode1 pacemaker-execd[1436]: notice:
> ClusterIP_start_0:10245:stderr [  ]
> Dec 19 12:32:54 pcsnode1 pacemaker-execd[1436]: notice:
> ClusterIP_start_0:10245:stderr [ Try `iptables -h' or 'iptables --
> help' for more information. ]
> Dec 19 12:32:54 pcsnode1 pacemaker-execd[1436]: notice:
> ClusterIP_start_0:10245:stderr [ ocf-exit-reason:iptables failed ]
> Dec 19 12:32:54 pcsnode1 pacemaker-controld[1439]: notice: Result of
> start operation for ClusterIP on pcsnode1: 1 (unknown error)
> 
> Anyone can simulate the module problem, outside Pacemaker, with this
> command: 
> 
> Perfectly good for CentOS 7 installation with ipt_CLUSTERIP.ko: 
> 
> # iptables -A INPUT -d 172.18.14.100/32 -i ens192 -j CLUSTERIP --new
> --hashmode sourceip-sourceport --clustermac 43:0A:1F:80:58:36 --
> total-nodes 2 --local-node 2 --hash-init 0
> 
> No good for a default CentOS 8 installation: 
> 
> # iptables -A INPUT -d 172.18.14.100/32 -i ens192 -j CLUSTERIP --new
> --hashmode sourceip-sourceport --clustermac 43:0A:1F:80:58:36 --
> total-nodes 2 --local-node 2 --hash-init 0
> iptables v1.8.2 (nf_tables): chain name not allowed to start with `-'
> 
> Try `iptables -h' or 'iptables --help' for more information.
> 
> 
> Is there any intention to abandon CLUSTERIP

yes

>  in favor of xt_cluster.ko? 

no

:)

A recent thread about this:
https://lists.clusterlabs.org/pipermail/users/2019-December/026663.html

resulted in a change to allow IPaddr2 clones to continue working on
newer systems if "iptables-legacy" is available:
https://github.com/ClusterLabs/resource-agents/pull/1439

tl;dr Cloned IPaddr2 is supported only on platforms that support
CLUSTERIP, and can be considered deprecated since CLUSTERIP itself is
deprecated. A pull request with an xt_cluster implementation would be
very welcome, as it's a low priority for available developers.
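
For anyone interested in attempting that, the kernel's cluster match is
used roughly like this according to the iptables-extensions man page
(interface and address taken from your example; untested here, and
additional multicast-MAC/arptables setup is required, as the man page
describes):

iptables -t mangle -A PREROUTING -i ens192 -d 172.18.14.100/32 \
  -m cluster --cluster-total-nodes 2 --cluster-local-node 1 \
  --cluster-hash-seed 0xdeadbeef -j MARK --set-mark 0xffff
iptables -t mangle -A PREROUTING -i ens192 -d 172.18.14.100/32 \
  -m mark ! --mark 0xffff -j DROP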

> Thanks a lot!
> 
> 
> Att,
> 
> Marcus Vinícius
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Corosync/Pacemaker bug methinks! (Was: pacemaker won't start because duplicate node but can't remove dupe node because pacemaker won't start)

2019-12-19 Thread Ken Gaillot
On Thu, 2019-12-19 at 02:38 -0800, JC wrote:
> Hi Ken,
> 
> I took a little time away from the problem. Getting back to it now. I
> found that the corosync logs were not only in journalctl but also in
> /var/log/syslog. I think the logs in syslog are more interesting,
> though I haven’t actually done a thorough comparison. Nevertheless,
> I’m pasting what the logs in syslog say and am hoping there’s more
> interesting data here. The time signatures match perfectly here, too.



> Dec 18 23:44:21 region-ctrl-2 corosync[2946]:   [TOTEM ] A new
> membership (192.168.99.225:120) was formed. Members joined:
> 1084777441

Well that at least confirms that the ID is coming from corosync.



> # cat /etc/corosync/corosync.conf 
> totem {
> version: 2
> cluster_name: maas-cluster
> token: 3000
> token_retransmits_before_loss_const: 10
> clear_node_high_bit: yes
> crypto_cipher: none
> crypto_hash: none
> 
> interface {
> ringnumber: 0
> bindnetaddr: 192.168.99.0
> mcastport: 5405

Hmm, multicast? I bet your problems will go away if you switch to udpu.

> ttl: 1
> }
> }
> 
> logging {
> fileline: off
> to_stderr: no
> to_logfile: yes
> to_syslog: yes
> syslog_facility: daemon
> debug: on
> timestamp: on
> 
> logger_subsys {
> subsys: QUORUM
> debug: on
> }
> }
> 
> quorum {
> provider: corosync_votequorum
> expected_votes: 3
> two_node: 1
> }
> 
> nodelist {
> node {
> ring0_addr: postgres-sb
> nodeid: 3
> }
> 
> node {
> ring0_addr: region-ctrl-1
> nodeid: 1
> }
> }

I know you've tried various things with the config, so I'm not sure
what happened when, but with only those two nodes listed explicitly and
multicast configured, it does make sense that the local node (which
isn't listed) would join with an auto-generated ID.

I would list all nodes explicitly and switch to udpu transport.
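
Concretely, something like this (the nodeid and address for
region-ctrl-2 are assumptions -- use whatever that node's ring address
really is):

totem {
    # keep your existing totem settings here
    transport: udpu
}

nodelist {
    node {
        ring0_addr: postgres-sb
        nodeid: 3
    }

    node {
        ring0_addr: region-ctrl-1
        nodeid: 1
    }

    node {
        ring0_addr: region-ctrl-2
        nodeid: 2
    }
}

With udpu and a complete nodelist, the interface/bindnetaddr section
should no longer be needed.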
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Corosync/Pacemaker bug methinks! (Was: pacemaker won't start because duplicate node but can't remove dupe node because pacemaker won't start)

2019-12-18 Thread Ken Gaillot
Changed active directory to
> /var/lib/pacemaker/cores
> attrd: info: get_cluster_type:  Verifying cluster type:
> 'corosync'
> attrd: info: get_cluster_type:  Assuming an active 'corosync'
> cluster
> info: crm_log_init: Changed active directory to
> /var/lib/pacemaker/cores
> attrd:   notice: crm_cluster_connect:   Connecting to cluster
> infrastructure: corosync
>   cib: info: get_cluster_type:  Verifying cluster type:
> 'corosync'
>   cib: info: get_cluster_type:  Assuming an active 'corosync'
> cluster
> info: get_cluster_type: Verifying cluster type: 'corosync'
> info: get_cluster_type: Assuming an active 'corosync' cluster
>   notice: crm_cluster_connect:  Connecting to cluster infrastructure:
> corosync
> attrd: info: corosync_node_name:Unable to get node
> name for nodeid 1084777441
>   cib: info: validate_with_relaxng: Creating RNG parser
> context
>  crmd: info: crm_log_init:  Changed active directory to
> /var/lib/pacemaker/cores
>  crmd: info: get_cluster_type:  Verifying cluster type:
> 'corosync'
>  crmd: info: get_cluster_type:  Assuming an active 'corosync'
> cluster
>  crmd: info: do_log:Input I_STARTUP received in state
> S_STARTING from crmd_init
> attrd:   notice: get_node_name: Could not obtain a node name
> for corosync nodeid 1084777441
> attrd: info: crm_get_peer:  Created entry af5c62c9-21c5-
> 4428-9504-ea72a92de7eb/0x560870420e90 for node (null)/1084777441 (1
> total)
> attrd: info: crm_get_peer:  Node 1084777441 has uuid
> 1084777441
> attrd: info: crm_update_peer_proc:  cluster_connect_cpg:
> Node (null)[1084777441] - corosync-cpg is now online
> attrd:   notice: crm_update_peer_state_iter:Node (null)
> state is now member | nodeid=1084777441 previous=unknown
> source=crm_update_peer_proc
> attrd: info: init_cs_connection_once:   Connection to
> 'corosync': established
> info: corosync_node_name:   Unable to get node name for nodeid
> 1084777441
>   notice: get_node_name:Could not obtain a node name for
> corosync nodeid 1084777441
> info: crm_get_peer: Created entry 5bcb51ae-0015-4652-b036-
> b92cf4f1d990/0x55f583634700 for node (null)/1084777441 (1 total)
> info: crm_get_peer: Node 1084777441 has uuid 1084777441
> info: crm_update_peer_proc: cluster_connect_cpg: Node
> (null)[1084777441] - corosync-cpg is now online
>   notice: crm_update_peer_state_iter:   Node (null) state is now
> member | nodeid=1084777441 previous=unknown
> source=crm_update_peer_proc
> attrd: info: corosync_node_name:Unable to get node
> name for nodeid 1084777441
> attrd:   notice: get_node_name: Defaulting to uname -n for
> the local corosync node name
> attrd: info: crm_get_peer:  Node 1084777441 is now known
> as region-ctrl-2
> info: corosync_node_name:   Unable to get node name for nodeid
> 1084777441
>   notice: get_node_name:Defaulting to uname -n for the local
> corosync node name
> info: init_cs_connection_once:  Connection to 'corosync':
> established
> info: corosync_node_name:   Unable to get node name for nodeid
> 1084777441
>   notice: get_node_name:Defaulting to uname -n for the local
> corosync node name
> info: crm_get_peer: Node 1084777441 is now known as region-ctrl-2
>   cib:   notice: crm_cluster_connect:   Connecting to cluster
> infrastructure: corosync
>   cib: info: corosync_node_name:Unable to get node
> name for nodeid 1084777441
>   cib:   notice: get_node_name: Could not obtain a node name
> for corosync nodeid 1084777441
>   cib: info: crm_get_peer:  Created entry a6ced2c1-9d51-
> 445d-9411-2fb19deab861/0x55848365a150 for node (null)/1084777441 (1
> total)
>   cib: info: crm_get_peer:  Node 1084777441 has uuid
> 1084777441
>   cib: info: crm_update_peer_proc:  cluster_connect_cpg:
> Node (null)[1084777441] - corosync-cpg is now online
>   cib:   notice: crm_update_peer_state_iter:Node (null)
> state is now member | nodeid=1084777441 previous=unknown
> source=crm_update_peer_proc
>   cib: info: init_cs_connection_once:   Connection to
> 'corosync': established
>   cib: info: corosync_node_name:Unable to get node
> name for nodeid 1084777441
>   cib:   notice: get_node_name: Defaulting to uname -n for
> the local corosync node name
>   cib: info: crm_get_peer:  Node 1084777441 is now known
> as region-ctrl-2
>   cib: info: qb_ipcs_us_publish:server name: cib_ro
>

[ClusterLabs] Last call for Summit attendees!

2019-12-10 Thread Ken Gaillot
Hi everybody,

We're finalizing meeting room arrangements and hotel discounts for the
Feb. 5-6 ClusterLabs Summit in Brno, and we don't have much space for
additional attendees.

I currently have attendees from Alteeve, Canonical, IBM, Linbit, NTT,
Proxmox, Red Hat, and SUSE, as well as one additional maybe who already
contacted me.

If you're thinking of going and aren't on that list, please let me know
as soon as possible.

I should have hotel details soon.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] 2020 Summit is right around the corner!

2019-12-02 Thread Ken Gaillot
The 2020 ClusterLabs summit is only two months away! Details are
available at:

http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020

So far we have responses from Alteeve, Canonical, IBM MQ, NTT, Proxmox,
Red Hat, and SUSE. If anyone else thinks they might attend, please
reply here or email me privately so we can firm up the head count and
finalize planning.

Thanks,
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Concept of a Shared ipaddress/resource for generic applicatons

2019-12-02 Thread Ken Gaillot
On Sat, 2019-11-30 at 18:58 +0300, Andrei Borzenkov wrote:
> 29.11.2019 17:46, Jan Pokorný wrote:
> > "Clone" feature for IPAddr2 is actually sort of an overloading that
> > agent with an alternative functionality -- trivial low-level load
> > balancing.  You can ignore that if you don't need any such.
> > 
> 
> I would say IPaddr2 in clone mode does something similar to
> SharedAddress.

Just a side note about something that came up recently:

IPaddr2 cloning utilizes the iptables "clusterip" feature, which has
been deprecated in the Linux kernel since 2015. IPaddr2 cloning
therefore must be considered deprecated as well. (Using it for a single
floating IP is still fully supported.)

IPaddr2 could be modified to use a newer iptables capability called
"xt_cluster", but someone would have to volunteer to do that as it's
not a priority.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Final Pacemaker 2.0.3 release now available

2019-11-27 Thread Ken Gaillot
On Mon, 2019-11-25 at 23:02 -0500, Digimer wrote:
> Congrats!
> 
> Can I ask, when might fencing become required? Is that still in the
> works, or has it been shelved?
> 
> digimer

tl;dr shelved

The original plan for 2.0.0 was to get rid of the stonith-enabled flag,
but still allow disabling stonith via "requires=quorum" in
rsc_defaults.

Certain resources, such as stonith devices themselves or simple nagios
checks, can be immediately started elsewhere even if their original
node needs to be fenced. requires=quorum was designed for such
resources. Setting that for all resources would make fencing largely
irrelevant.

That was shelved when I realized the code would have to be considerably
more complicated to go that route. Also, someone could theoretically
want "requires=quorum" for all resources while still wanting nodes to
be fenced if they are lost.

> On 2019-11-25 9:32 p.m., Ken Gaillot wrote:
> > Hi all,
> > 
> > The final release of Pacemaker version 2.0.3 is now available at:
> > 
> > https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3
> > 
> > Highlights include:
> > 
> > * A dynamic cluster recheck interval (you don't have to care about
> > changing cluster-recheck-interval when using failure-timeout or
> > most
> > rules)
> > 
> > * Pacemaker Remote options for security hardening (listen address
> > and
> > TLS priorities)
> > 
> > * crm_mon supports the --output-as/--output-to options, has some
> > tweaks
> > to text and HTML output that will hopefully make it easier to read,
> > has
> > a correct count of disabled and blocked resources, and supports an
> > option to set a stylesheet for HTML output
> > 
> > * A new fence-reaction cluster option controls whether the local
> > node
> > stops pacemaker or panics the local host when notified of its own
> > fencing (which can happen with fabric fencing agents such as
> > fence_scsi)
> > 
> > * Documentation improvements include a new chapter about ACLs
> > (replacing an outdated text file) in "Pacemaker Explained" and
> > another
> > one about the command-line tools in "Pacemaker Administration":
> > 
> > https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#idm47160746093920
> > 
> > https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Administration/index.html#idm47051359032720
> > 
> > As usual, there were bug fixes and log message improvements as
> > well.
> > Most significantly, a regression introduced in 2.0.2 that
> > effectively
> > disabled concurrent-fencing has been fixed, and an invalid
> > transition
> > (blocking all further resource actions) has been fixed when both a
> > guest node or bundle and the host running it needs to be fenced,
> > but
> > can't (due to quorum loss, for example).
> > 
> > For more details about changes in this release, see:
> > 
> > https://github.com/ClusterLabs/pacemaker/blob/2.0/ChangeLog
> > 
> > Many thanks to all contributors of source code to this release,
> > including Aleksei Burlakov, Chris Lumens, Gao,Yan, Hideo Yamauchi,
> > Jan
> > Pokorný, John Eckersberg, Kazunori INOUE, Ken Gaillot, Klaus
> > Wenninger,
> > Konstantin Kharlamov, Munenari, Roger Zhou, S. Schuberth, Tomas
> > Jelinek, and Yuusuke Iida.
> > 
> > Version 1.1.22, with selected backports from this release, will
> > also be
> > released soon.
> > 
> 
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] 5 minute repeat warnings

2019-11-26 Thread Ken Gaillot
On Tue, 2019-11-26 at 08:31 +, BASDEN, ALASTAIR G. wrote:
> Hi,
> 
> In our /var/log/messages, we are getting messages repeated every 5 
> minutes:
> Nov 26 08:22:02 c6mds1 crmd[26950]:  notice: State transition S_IDLE
> -> 
> S_POLICY_ENGINE
> Nov 26 08:22:02 c6mds1 pengine[26949]:  notice: On loss of CCM
> Quorum: 
> Ignore
> Nov 26 08:22:02 c6mds1 pengine[26949]:  notice: Calculated
> transition 
> 1397, saving inputs in /var/lib/pacemaker/pengine/pe-input-184.bz2
> Nov 26 08:22:02 c6mds1 crmd[26950]:  notice: Transition 1397
> (Complete=0, 
> Pending=0, Fired=0, Skipped=0, Incomplete=0, 
> Source=/var/lib/pacemaker/pengine/pe-input-184.bz2): Complete
> Nov 26 08:22:02 c6mds1 crmd[26950]:  notice: State transition 
> S_TRANSITION_ENGINE -> S_IDLE
> 
> 
> Can anyone advise what this is about and how to stop the
> messages?  pcs 
> resource cleanup doesn't help.
> 
> We're on centos7.6.
> 
> Thanks,
> Alastair.

Hi,

Those are routine messages indicating that the cluster is rechecking
whether anything needs to be done. The frequency is controlled by the
cluster-recheck-interval property.

In the version you have, any value for failure-timeout or time-based
rules is not guaranteed to be checked more often than the recheck
interval. The recheck interval also serves as a fail-safe against
certain types of policy engine bugs (those that do not schedule all
needed actions in a single transition).
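
If the five-minute cadence itself bothers you, you can lengthen it
(keeping the above caveats in mind), for example:

pcs property set cluster-recheck-interval=15min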

In 2.0.3 (released yesterday), the recheck interval is calculated
dynamically for failure-timeout and rules, so it has less importance.

More generally, log messages at "notice" level are just informational
and do not indicate anything wrong. Problems are logged at "warning",
"error", or "critical" level.

I could see an argument for lowering the "calculated"/"complete"
messages to "info" level (which doesn't go into the system log) when no
actions are needed. The state transition messages should stay at
notice, though.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Q: start/stop/order/colocate resources based on an attribute value

2019-11-26 Thread Ken Gaillot
On Tue, 2019-11-26 at 08:40 +0100, Ulrich Windl wrote:
> Hi!
> 
> I'm thinking about some mechanism that involves remote probing of
> some resource. As a result of such probing an attribute value would
> be set (like 0|1, maybe even multiple values).
> Is it possible to start a resource after the attribute is set to 1
> (and stop it when the attribute changes to 0)?
> Is it possible to wait for start until the attribute is 1?

Yes, rules support checking node attribute values for location
constraints. Whatever higher-level tool you use likely has simplified
syntax, but the XML is (under "Location Rules Based on Other Node
Properties"):

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#_using_rules_to_determine_resource_location

Basically you use attrd_updater or crm_attribute to set the node
attribute, and configure a location constraint containing a rule, which
has a score and contains a rule expression specifying the desired
condition. In this case you'd probably want a -INFINITY score where the
attribute is not 1.
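
A minimal sketch of that combination (attribute and resource names are
made up; pcs syntax assumed):

# set by your remote probe: 1 = healthy, 0 = not
attrd_updater --name probe-ok --update 1

# keep my-rsc off any node where probe-ok is missing or not 1
pcs constraint location my-rsc rule score=-INFINITY \
  not_defined probe-ok or probe-ok ne 1

Note that attrd_updater sets a transient attribute, which is cleared
when the node leaves the cluster; use crm_attribute with a "forever"
lifetime if you want it to persist.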


> Is it possible to suspend monitoring (or to ignore the results of it)
> if the attribute is 0? (That would be to avoid "false" monitoring
> errors)

Maybe, but that sounds iffy. With the above rule, the cluster would
stop the resource, so it wouldn't matter.

But if you didn't want to stop the resource, just suspend monitoring,
rules might work inside an operation definition. It works for the
resource options themselves:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#_using_rules_to_control_resource_options

and I suspect the same approach would work inside an op. Usually
operation options (like on-fail and timeout) are specified directly in
the op tag, but it is allowed to specify them in a meta_attributes
block, where you could try a rule. In this case you would use the rule
to set enabled=true/false for the op.

I suspect timing concerns may come into play with either approach.
There will be some time between when the condition occurs and your
remote monitor detects it and sets the node attribute. In that time a
monitor could run on the resource.

> When on multiple nodes is it possible to colocate a resource where
> the attribute is 1?

The location rule mentioned above would apply to all nodes that meet
the condition.

> Obviously such things have to be done using rules, but I haven't
> found any examples going in such direction.
> 
> Regards,
> Ulrich
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Final Pacemaker 2.0.3 release now available

2019-11-25 Thread Ken Gaillot
Hi all,

The final release of Pacemaker version 2.0.3 is now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3

Highlights include:

* A dynamic cluster recheck interval (you don't have to care about
changing cluster-recheck-interval when using failure-timeout or most
rules)

* Pacemaker Remote options for security hardening (listen address and
TLS priorities)

* crm_mon supports the --output-as/--output-to options, has some tweaks
to text and HTML output that will hopefully make it easier to read, has
a correct count of disabled and blocked resources, and supports an
option to set a stylesheet for HTML output

* A new fence-reaction cluster option controls whether the local node
stops pacemaker or panics the local host when notified of its own
fencing (which can happen with fabric fencing agents such as
fence_scsi)

* Documentation improvements include a new chapter about ACLs
(replacing an outdated text file) in "Pacemaker Explained" and another
one about the command-line tools in "Pacemaker Administration":

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#idm47160746093920

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Administration/index.html#idm47051359032720

As usual, there were bug fixes and log message improvements as well.
Most significantly, a regression introduced in 2.0.2 that effectively
disabled concurrent-fencing has been fixed, and an invalid transition
(blocking all further resource actions) has been fixed when both a
guest node or bundle and the host running it needs to be fenced, but
can't (due to quorum loss, for example).

For more details about changes in this release, see:

https://github.com/ClusterLabs/pacemaker/blob/2.0/ChangeLog

Many thanks to all contributors of source code to this release,
including Aleksei Burlakov, Chris Lumens, Gao,Yan, Hideo Yamauchi, Jan
Pokorný, John Eckersberg, Kazunori INOUE, Ken Gaillot, Klaus Wenninger,
Konstantin Kharlamov, Munenari, Roger Zhou, S. Schuberth, Tomas
Jelinek, and Yuusuke Iida.

Version 1.1.22, with selected backports from this release, will also be
released soon.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Maximum number of nodes support in single cluster

2019-11-22 Thread Ken Gaillot
On Fri, 2019-11-22 at 07:32 +, S Sathish S wrote:
> Hi Team,
>  
> In ClusterLabs, for the pacemaker and corosync versions below, what is
> the maximum number of cluster nodes supported? Will it support 120
> nodes in a single cluster?
>  
> corosync-2.4.4 -->  https://github.com/corosync/corosync/tree/v2.4.4
> pacemaker-2.0.2 --> 
> https://github.com/ClusterLabs/pacemaker/tree/Pacemaker-2.0.2
>  
> If we have 120 nodes in a single cluster, let us know if there is any
> impact or any cluster configuration parameter we need to consider;
> please suggest.
>  
> Also, let us know if there is any pre-defined recommended limit on the
> number of cluster nodes.
>  
> Thanks and Regards,
> S Sathish

At this time, nowhere near 120 full cluster nodes are supported.
There's no official limit by the upstream projects, because so much
depends on hardware and applications, but commercial entities often
limit support to 16 or 32 nodes. Going above 16 will likely require
high-end hardware and careful tuning of corosync parameters.

However pacemaker does support lightweight nodes via Pacemaker Remote:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Remote/

The scalability of Pacemaker Remote isn't well known. Some users have
reported problems with as few as 40 remote nodes, while others have
gotten above 100 without problems.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-18 Thread Ken Gaillot
On Mon, 2019-11-18 at 16:06 +, Diego Akechi wrote:
> Hi Everyone,
> 
> Sorry for the late response here.
> 
> From SUSE, we are still collecting the final list of attendees, but
> we
> already have 6 people confirmed, and most probably we will have
> around
> 10 people going.
> 
> I would like to propose two sessions about some of our current work:
> 
> 
> 1. Cluster monitoring capabilities based on the ha_cluster_exporter,
> Prometheus and Grafana
> 
> 2. Cluster deployment automation based on Salt.

Great, looking forward to it!

> If there is not enough time, we can shrink them into one slot.

I'm planning on 1 hour per talk on average (about 40-45 minutes
speaking plus 10-15 minutes Q&A and a few minutes between talks). If
you'd prefer more or less let me know, but you can plan on that.

> 
> On 15/10/2019 23:42, Ken Gaillot wrote:
> > I'm happy to announce that we have a date and location for the next
> > ClusterLabs Summit: Wednesday, Feb. 5, and Thursday, Feb. 6, 2020,
> > in
> > Brno, Czechia. This year's host is Red Hat.
> > 
> > Details will be given on this wiki page as they become available:
> > 
> >   http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020
> > 
> > We are still in the early stages of organizing, and need your
> > input.
> > 
> > Most importantly, we need a good idea of how many people will
> > attend,
> > to ensure we have an appropriate conference room and amenities. The
> > wiki page has a section where you can say how many people from your
> > organization expect to attend. We don't need a firm commitment or
> > an
> > immediate response, just let us know once you have a rough idea.
> > 
> > We also invite you to propose a talk, whether it's a talk you want
> > to
> > give or something you are interested in hearing more about. The
> > wiki
> > page has a section for that, too. Anything related to open-source
> > clustering is welcome: new features and plans for the cluster
> > software projects, how-to's and case histories for integrating
> > specific services into a cluster, utilizing specific
> > stonith/networking/etc. technologies in a cluster, tips for
> > administering a cluster, and so forth.
> > 
> > I'm excited about the chance for developers and users to meet in
> > person. Past summits have been helpful for shaping the direction of
> > the
> > projects and strengthening the community. I look forward to seeing
> > many
> > of you there!
> > 
> 
> -- 
> Diego V. Akechi 
> Engineering Manager HA Extension & SLES for SAP
> SUSE Software Solutions Germany GmbH
> Tel: +49-911-74053-373; Fax: +49-911-7417755;  https://www.suse.com/
> Maxfeldstr. 5, D-90409 Nürnberg
> HRB 247165 (AG München)
> Managing Director: Felix Imendörffer
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Re: Pacemaker 2.0.3-rc3 now available

2019-11-18 Thread Ken Gaillot
On Fri, 2019-11-15 at 14:35 +0100, Jehan-Guillaume de Rorthais wrote:
> On Thu, 14 Nov 2019 11:09:57 -0600
> Ken Gaillot  wrote:
> 
> > On Thu, 2019-11-14 at 15:22 +0100, Ulrich Windl wrote:
> > > > > > Jehan-Guillaume de Rorthais  schrieb am
> > > > > > 14.11.2019 um  
> > > 
> > > 15:17 in
> > > Nachricht <20191114151719.6cbf4e38@firost>:  
> > > > On Wed, 13 Nov 2019 17:30:31 ‑0600
> > > > Ken Gaillot  wrote:
> > > > ...  
> > > > > A longstanding pain point in the logs has been improved.
> > > > > Whenever
> > > > > the
> > > > > scheduler processes resource history, it logs a warning for
> > > > > any
> > > > > failures it finds, regardless of whether they are new or old,
> > > > > which can
> > > > > confuse anyone reading the logs. Now, the log will contain
> > > > > the
> > > > > time of
> > > > > the failure, so it's obvious whether you're seeing the same
> > > > > event
> > > > > or
> > > > > not. The log will also contain the exit reason if one was
> > > > > provided by
> > > > > the resource agent, for easier troubleshooting.  
> > > > 
> > > > I've been hurt by this in the past and I was wondering what was
> > > > the
> > > > point of
> > > > warning again and again in the logs for past failures during
> > > > scheduling? 
> > > > What does this information bring to the administrator?
> > 
> > The controller will log an event just once, when it happens.
> > 
> > The scheduler, on the other hand, uses the entire recorded resource
> > history to determine the current resource state. Old failures (that
> > haven't been cleaned) must be taken into account.
> 
> OK, I wasn't aware of this. If you have a few minutes, I would be
> interested to
> know why the full history is needed and not just find the latest
> entry from
> there. Or maybe there's some comments in the source code that already
> cover this question?

The full *recorded* history consists of the most recent operation that
affects the state (like start/stop/promote/demote), the most recent
failed operation, and the most recent results of any recurring
monitors.

For example there may be a failed monitor, but whether the resource is
considered failed or not would depend on whether there was a more
recent successful stop or start. Even if the failed monitor has been
superseded, it needs to stay in the history for display purposes until
the user has cleaned it up.

> > Every run of the scheduler is completely independent, so it doesn't
> > know about any earlier runs or what they logged. Think of it like
> > Frosty the Snowman saying "Happy Birthday!" every time his hat is
> > put
> > on.
> 
> I don't have this ref :)

I figured not everybody would, but it was too fun to pass up :)

The snowman comes to life every time his magic hat is put on, but to
him each time feels like he's being born for the first time, so he says
"Happy Birthday!"

https://www.youtube.com/watch?v=1PbWTEYoN8o


> > As far as each run is concerned, it is the first time it's seen the
> > history. This is what allows the DC role to move from node to node,
> > and
> > the scheduler to be run as a simulation using a saved CIB file.
> > 
> > We could change the wording further if necessary. The previous
> > version
> > would log something like:
> > 
> > warning: Processing failed monitor of my-rsc on node1: not running
> > 
> > and this latest change will log it like:
> > 
> > warning: Unexpected result (not running: No process state file
> > found)
> > was recorded for monitor of my-rsc on node1 at Nov 12 19:19:02 2019
> 
> /result/state/ ?

It's the result of a resource agent action, so it could be for example
a timeout or a permissions issue.

> > I wanted to be explicit about the message being about processing
> > resource history that may or may not be the first time it's been
> > processed and logged, but everything I came up with seemed too long
> > for
> > a log line. Another possibility might be something like:
> > 
> > warning: Using my-rsc history to determine its current state on
> > node1:
> > Unexpected result (not running: No process state file found) was
> > recorded for monitor at Nov 12 19:19:02 2019
> 
> I better like the first one.
> 
> However, it feels like implementation details exposed to the world,
> isn't it? How useful is

Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-18 Thread Ken Gaillot
Great! I've added you to the list.

On Fri, 2019-11-15 at 09:50 +, John Colgrave wrote:
> We are planning for two people from the IBM MQ development team to
> attend. 
> 
> Regards,
> 
> John Colgrave
> 
> Disaster Recovery and High Availability Architect
> IBM MQ
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with
> number 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6 3AU
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Pacemaker 2.0.3-rc3 now available

2019-11-14 Thread Ken Gaillot
On Thu, 2019-11-14 at 15:22 +0100, Ulrich Windl wrote:
> > > > Jehan-Guillaume de Rorthais  schrieb am
> > > > 14.11.2019 um
> 
> 15:17 in
> Nachricht <20191114151719.6cbf4e38@firost>:
> > On Wed, 13 Nov 2019 17:30:31 ‑0600
> > Ken Gaillot  wrote:
> > ...
> > > A longstanding pain point in the logs has been improved. Whenever
> > > the
> > > scheduler processes resource history, it logs a warning for any
> > > failures it finds, regardless of whether they are new or old,
> > > which can
> > > confuse anyone reading the logs. Now, the log will contain the
> > > time of
> > > the failure, so it's obvious whether you're seeing the same event
> > > or
> > > not. The log will also contain the exit reason if one was
> > > provided by
> > > the resource agent, for easier troubleshooting.
> > 
> > I've been hurt by this in the past and I was wondering what was the
> > point
> 
> of
> > warning again and again in the logs for past failures during
> > scheduling? 
> > What does this information bring to the administrator?

The controller will log an event just once, when it happens.

The scheduler, on the other hand, uses the entire recorded resource
history to determine the current resource state. Old failures (that
haven't been cleaned) must be taken into account.

Every run of the scheduler is completely independent, so it doesn't
know about any earlier runs or what they logged. Think of it like
Frosty the Snowman saying "Happy Birthday!" every time his hat is put
on. As far as each run is concerned, it is the first time it's seen the
history. This is what allows the DC role to move from node to node, and
the scheduler to be run as a simulation using a saved CIB file.

We could change the wording further if necessary. The previous version
would log something like:

warning: Processing failed monitor of my-rsc on node1: not running

and this latest change will log it like:

warning: Unexpected result (not running: No process state file found)
was recorded for monitor of my-rsc on node1 at Nov 12 19:19:02 2019

I wanted to be explicit about the message being about processing
resource history that may or may not be the first time it's been
processed and logged, but everything I came up with seemed too long for
a log line. Another possibility might be something like:

warning: Using my-rsc history to determine its current state on node1:
Unexpected result (not running: No process state file found) was
recorded for monitor at Nov 12 19:19:02 2019


> > In my humble opinion, any entry in the log file should be about
> > something
> > happening by the time the message appears. And it should appear
> > only once,
> > not
> > repeated again and again for no (apparent) reason. At least, most
> > of the
> > time. Do I miss something?
> > 
> > I'm sure these historical failure warnings raised by the scheduler
> > have
> 
> been
> > already raised in the past by either the lrm or crm process in most
> > of the
> > cases, haven't they?
> > 
> > Unless I'm not aware of something else, the scheduler might warn
> > about 
> > current
> > unexpected status of a resource, not all of them in the past.
> > 
> > Could you shed some lights on this mystery from the user point of
> > view?
> 
> Hi!
> 
> I can agree that the current pacemaker of SLES12 logs so much while
> virtually
> doing nothing that it's very hard to find out when pacemaker actually
> does
> something. And if it does something, it seems it's announcing the
> same thing at
> least three times before actually really doing anything.
> 
> Regards,
> Ulrich

Part of the difficulty arises from Pacemaker's design of using multiple
independent daemons to handle different aspects of cluster management.
A single failure event might get logged by the executor (lrmd),
controller (crmd), and scheduler (pengine), but in different contexts (
potentially on different nodes).

Improving the logs is a major focus of new releases, and we're always
looking for specific suggestions as to which messages need the most
attention. There's been a lot of progress between 1.1.14 and 2.0.3, but
it takes a while for that to land in distributions.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Pacemaker 2.0.3-rc3 now available

2019-11-14 Thread Ken Gaillot
On Thu, 2019-11-14 at 14:54 +0100, Jan Pokorný wrote:
> On 13/11/19 17:30 -0600, Ken Gaillot wrote:
> > This fixes some more minor regressions in crm_mon introduced in
> > rc1.
> > Additionally, after feedback from this list, the new output format
> > options were shortened. The help is now:
> > 
> > Output Options:
> >   --output-as=FORMAT   Specify output format as one of: console
> > (default), html, text, xml
> >   --output-to=DEST Specify file name for output (or "-" for
> > stdout)
> >   --html-cgi   Add CGI headers (requires --output-
> > as=html)
> >   --html-stylesheet=URI    Link to an external stylesheet (requires
> > --output-as=html)
> >   --html-title=TITLE   Specify a page title (requires --output-
> > as=html)
> >   --text-fancy Use more highly formatted output
> > (requires --output-as=text)
> 
> Wearing the random user's shoes, this leaves more questions behind
> than it answers:
> 
> * what's the difference between "console" and "text", is any of these
>   considered more stable in time than the other?

Console is crm_mon's interactive curses interface.

> 
>   - do I understand it correctly that the password, when needed for
> remote CIB access, will only be prompted for "console"?
> 
> * will --text-fancy work with --output-as=console?
>   the wording seems to suggest it won't
> 
> Doubtless explorability seems to be put on the backburner in
> favour of straightforward wiring front-end to convoluted logic
> in the command's back-end rather than going backwards, from
> simple-to-understand behaviours down to the logic itself.
> 
> E.g., it may be easier to follow when there is
> a documented equivalence of "--output-as=console" and
> "--output-as=text --text-password-prompt-allowed",
> assuming a new text-output specific switch that is
> not enabled by default otherwise.
> 
> > A longstanding display issue in crm_mon has been fixed. The
> > disabled
> > and blocked resources count was previously incorrect. The new,
> > accurate
> > count changes the text from "resources" to "resource instances",
> > because individual instances of a cloned or bundled resource can be
> > blocked. For example, if you have one regular resource, and a
> > cloned
> > resource running on three nodes, it would count as 4 resource
> > instances.
> > 
> > A longstanding pain point in the logs has been improved. Whenever
> > the
> > scheduler processes resource history, it logs a warning for any
> > failures it finds, regardless of whether they are new or old, which
> > can
> > confuse anyone reading the logs. Now, the log will contain the time
> > of
> > the failure, so it's obvious whether you're seeing the same event
> > or
> > not.
> 
> Just curious, how sensitive is this to time shifts, e.g. timezone
> related?  If it is (human/machine can be unable to match the same
> event
> reported back then and now in a straightforward way, for say time
> zone
> transition in between), considering some sort of rather unique
> identifier would be a more systemic approach for an event matching
> in an invariant manner, but then would we need some notion of
> monotonous cluster-wide sequence ordering?
> 
> > The log will also contain the exit reason if one was provided by
> > the
> > resource agent, for easier troubleshooting.
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

[ClusterLabs] Pacemaker 2.0.3-rc3 now available

2019-11-13 Thread Ken Gaillot
The third, and possibly final, release candidate for Pacemaker 2.0.3 is
now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc3

If there are no serious issues found in this release, I will release it
as the final 2.0.3 in another week or so.

This fixes some more minor regressions in crm_mon introduced in rc1.
Additionally, after feedback from this list, the new output format
options were shortened. The help is now:

Output Options:
  --output-as=FORMAT   Specify output format as one of: console (default), 
html, text, xml
  --output-to=DEST Specify file name for output (or "-" for stdout)
  --html-cgi   Add CGI headers (requires --output-as=html)
  --html-stylesheet=URI    Link to an external stylesheet (requires 
--output-as=html)
  --html-title=TITLE   Specify a page title (requires --output-as=html)
  --text-fancy Use more highly formatted output (requires 
--output-as=text)
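For example (a rough, untested sketch; the output path and page title are
placeholders):

  crm_mon -1 --output-as=html --output-to=/var/www/html/cluster.html --html-title="Cluster status"
  crm_mon -1 --output-as=xml --output-to=-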

A longstanding display issue in crm_mon has been fixed. The disabled
and blocked resources count was previously incorrect. The new, accurate
count changes the text from "resources" to "resource instances",
because individual instances of a cloned or bundled resource can be
blocked. For example, if you have one regular resource, and a cloned
resource running on three nodes, it would count as 4 resource
instances.

A longstanding pain point in the logs has been improved. Whenever the
scheduler processes resource history, it logs a warning for any
failures it finds, regardless of whether they are new or old, which can
confuse anyone reading the logs. Now, the log will contain the time of
the failure, so it's obvious whether you're seeing the same event or
not. The log will also contain the exit reason if one was provided by
the resource agent, for easier troubleshooting.

Everyone is encouraged to download, compile and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] fence agent configuration (was: fencing on iscsi device not working)

2019-11-07 Thread Ken Gaillot
On Thu, 2019-11-07 at 10:16 +0100, wf...@niif.hu wrote:
> Ken Gaillot  writes:
> 
> > I edited it so that the default and description are combined:
> > 
> > How to determine which machines are controlled by the device.
> > Allowed
> > values:
> > 
> > * +static-list:+ check the +pcmk_host_list+ or +pcmk_host_map+
> > attribute (this is the default if either one of those is set)
> > 
> > * +dynamic-list:+ query the device via the "list" command (this is
> > otherwise the default if the fence device supports the list action)
> > 
> > * +status:+ query the device via the "status" command (this is
> > otherwise the default if the fence device supports the status
> > action)
> > 
> > * +none:+ assume every device can fence every machine (this is
> > otherwise the default)
> 
> Before getting to typography: how can the status command help
> determining which machines are controlled by the device?

Status is not the same for fence agents as for resource agents. The
fence agents "monitor" action is what is used for recurring monitors.

For fence agents, "list" and "status" are two different common commands
for determining targets. "list" outputs a list of all possible targets.
"status" takes a node name and returns whether it is a possible target.
Fence agents are not required to support either one.
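As a rough illustration only (the agent and its connection parameters here
are made up, and not every agent supports every action), you can usually
exercise these actions from a shell to see what a given agent offers:

  fence_apc --ip=apc.example.com --username=admin --password=secret --action=list
  fence_apc --ip=apc.example.com --username=admin --password=secret --action=status --plug=node1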

> Ignoring that, the explanation could be simplified by stating upfront
> that pcmk_host_check is ignored if pcmk_host_list or pcmk_host_map is
> set, and pcmk_host_list is ignored if pcmk_host_map is set (maybe
> start
> issuing a warning now and forbid these outright later).  Heck, if not
> for status, pcmk_host_check could be dropped altogether by doing:
> 
> 1. use pcmk_host_map if defined
> 2. use pcmk_host_list if defined
> 3. query the device if it supports the list command
> 4. assume that the device is universal

That's what the default is for, so most people can ignore it :)

The option allows for more complicated scenarios. For example a user
could force "none" even if the device supports list and/or status, in
case those are supported but unreliable or slow. Or a user could force
"status" but still supply pcmk_host_map to send the status command a
different name than the node name.

FYI, pcmk_host_map and pcmk_host_list can be used together. Anything in
pcmk_host_list but not pcmk_host_map will behave as if it were in
pcmk_host_map mapping to the same name. This could be convenient for
example if all the cluster nodes are fenced by their own name (and thus
can be put in pcmk_host_list), while some remote nodes must be fenced
under a different hostname (and thus must be in pcmk_host_map).
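For illustration only (this is a hypothetical sketch; the agent, addresses,
and node names are made up):

  pcs stonith create ilo-fence fence_ipmilan ip=ilo.example.com \
      username=admin password=secret \
      pcmk_host_list="node1 node2" pcmk_host_map="remote1:remote1-ilo"

Here node1 and node2 would be fenced under their own names, while remote1
would be fenced as remote1-ilo.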

> 
> It should also be clarified whether Pacemaker passes the port or
> the
> nodename attribute to the fence agent in the above cases.
> https://github.com/ClusterLabs/fence-agents/blob/master/doc/FenceAgentAPI.md
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: fencing on iscsi device not working

2019-11-07 Thread Ken Gaillot
On Thu, 2019-11-07 at 07:43 +, Roger Zhou wrote:
> 
> On 11/7/19 1:55 AM, Andrei Borzenkov wrote:
> > 06.11.2019 18:55, Ken Gaillot writes:
> > > On Wed, 2019-11-06 at 08:04 +0100, Ulrich Windl wrote:
> > > > > > > Ken Gaillot  wrote on 05.11.2019
> > > > > > > at
> > > > > > > 16:05 in
> > > > 
> > > > message
> > > > :
> > > > > Coincidentally, the documentation for the pcmk_host_check
> > > > > default
> > > > > was
> > > > > recently updated for the upcoming 2.0.3 release. Once the
> > > > > release
> > > > > is
> > > > > out, the online documentation will be regenerated, but here
> > > > > is the
> > > > > text:
> > > > > 
> > > > > Default
> > > > > ‑‑‑
> > > > > static‑list if either pcmk_host_list or pcmk_host_map is set,
> > > > > otherwise
> > > > > dynamic‑list if the fence device supports the list action,
> > > > > otherwise
> > > > > status if the fence device supports the status action,
> > > > > otherwise
> > > > > none
> > > > 
> > > > I'd make that an itemized list with four items. I think it
> > > > would be
> > > > easier to
> > > > understand.
> > > 
> > > Good idea; I edited it so that the default and description are
> > > combined:
> > > 
> > > How to determine which machines are controlled by the device.
> > > Allowed
> > > values:
> > > 
> > > * +static-list:+ check the +pcmk_host_list+ or +pcmk_host_map+
> > > attribute (this is the default if either one of those is set)
> > > 
> > > * +dynamic-list:+ query the device via the "list" command (this
> > > is
> > > otherwise the default if the fence device supports the list
> > > action)
> > > 
> > 
> > Oops, now it became even more ambiguous. What if both
> > pcmk_host_list is
> > set *and* device supports "list" (or "status") command? Previous
> > variant
> > at least was explicit about precedence.
> > 
> > "Otherwise" above is hard to attribute correctly. I really like
> > previous
> > version more.
> 
> +1
> 
> plus 2 cents:
> 
> I feel confused between Default and Assigned value if we combine them
> in 
> the description as above. I prefer to keep them separate.
> 
> I guess Ken might want to keep the Pacemaker_Explained DOC more readable
> at 
> the end of the day, i.e. to avoid too many words in the Default column
> [1]. 
> For that, maybe we can do it differently, like the mockup [2].
> 
> [1] 
> https://github.com/ClusterLabs/pacemaker/blob/d863971b7e0c56fbe6cc12815348e8e39b2e25c4/doc/Pacemaker_Explained/en-US/Ch-Fencing.txt#L182
> 
> [2]
> 
> > pcmk_host_check
> > string
> > +NOTE+
> 
> a|How to determine which machines are controlled by the device.
> 
> * +NOTE:+
>   The default value is static-list if either +pcmk_host_list+ or 
> +pcmk_host_map+ is set,
>   otherwise dynamic-list if the fence device supports the list
> action,
>   otherwise status if the fence device supports the status action,
>   otherwise none.
> 
>   Allowed values:
> 
> * +dynamic-list:+ query the device via the "list" command
> * +static-list:+ check the +pcmk_host_list+ or +pcmk_host_map+
> attribute
> * +status:+ query the device via the "status" command
> * +none:+ assume every device can fence every machine

Elaborating on that approach, how about:

Default: "The value appropriate to other configuration options and
device capabilities (see note below)"

Description unchanged

Note: "The default value for +pcmk_host_check+ is +static-list+ if
either +pcmk_host_list+ or +pcmk_host_map+ is configured. If neither of
those are configured, the default is +dynamic-list+ if the fence device
supports the list action, or +status+ if the fence device supports the
status action but not the list action. If none of those conditions
apply, the default is +none+."

> 
> Cheers,
> Roger
> 
> > 
> > > * +status:+ query the device via the "status" command (this is
> > > otherwise the default if the fence device supports the status
> > > action)
> > > 
> > > * +none:+ assume every device can fence every machine (this is
> > > otherwise the default)
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] a resource being all -infinity

2019-11-06 Thread Ken Gaillot
tion score on dc-dcb-corosync-
> haproxy-09: -INFINITY
> group_color: HAProxyGroup_test2_43232 allocation score on dc-dcb-
> corosync-haproxy-07: 0
> group_color: HAProxyGroup_test2_43232 allocation score on dc-dcb-
> corosync-haproxy-08: 0
> group_color: HAProxyGroup_test2_43232 allocation score on dc-dcb-
> corosync-haproxy-09: 0
> group_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> group_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> group_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> group_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> group_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> group_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> native_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> native_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> native_color: HAProxyVIP_43232 allocation score on dc-dcb-corosync-
> haproxy-09: -INFINITY
> native_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-07: -INFINITY
> native_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> native_color: HAProxy_43232 allocation score on dc-dcb-corosync-
> haproxy-09: -INFINITY
> group_color: HAProxyGroup_test3_43233 allocation score on dc-dcb-
> corosync-haproxy-07: 0
> group_color: HAProxyGroup_test3_43233 allocation score on dc-dcb-
> corosync-haproxy-08: 0
> group_color: HAProxyGroup_test3_43233 allocation score on dc-dcb-
> corosync-haproxy-09: 0
> group_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> group_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> group_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> group_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> group_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> group_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> native_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> native_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> native_color: HAProxyVIP_43233 allocation score on dc-dcb-corosync-
> haproxy-09: -INFINITY
> native_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-07: -INFINITY
> native_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> native_color: HAProxy_43233 allocation score on dc-dcb-corosync-
> haproxy-09: -INFINITY
> group_color: HAProxyGroup_test4_43234 allocation score on dc-dcb-
> corosync-haproxy-07: 0
> group_color: HAProxyGroup_test4_43234 allocation score on dc-dcb-
> corosync-haproxy-08: 0
> group_color: HAProxyGroup_test4_43234 allocation score on dc-dcb-
> corosync-haproxy-09: 0
> group_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-07: -INFINITY
> group_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-08: -INFINITY
> group_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> group_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-07: 0
> group_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-08: 0
> group_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> native_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-07: -INFINITY
> native_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-08: -INFINITY
> native_color: HAProxyVIP_43234 allocation score on dc-dcb-corosync-
> haproxy-09: 0
> native_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-07: -INFINITY
> native_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-08: -INFINITY
> native_color: HAProxy_43234 allocation score on dc-dcb-corosync-
> haproxy-09: -INFINITY
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-06 Thread Ken Gaillot
On Wed, 2019-11-06 at 08:06 +0100, Michele Baldessari wrote:
> On Tue, Nov 05, 2019 at 02:21:29PM -0300, Rafael David Tinoco wrote:
> > On 04/11/2019 23:07, Ken Gaillot wrote:
> > > Hi all,
> > > 
> > > A reminder: We are still interested in ideas for talks, and rough
> > > estimates of potential attendees. "Maybe" is perfectly fine at
> > > this
> > > stage. It will let us negotiate hotel rates and firm up the
> > > location
> > > details.
> 
> Heya Ken,
> 
> I will try and be there. If there is time, I would like a slot where
> we
> present a bit our current pcmk usage inside openstack, our
> experiences,
> what works well, what we struggle with etc.
> 
> cheers,
> Michele

That would be great.

> 
> > Hello. This is Rafael, from Canonical. I'm currently in charge of
> > the HA
> > stack in Ubuntu (and helping debian-ha-maintainers), especially now
> > for 20.04
> > LTS release.
> > 
> > I wonder if I could participate in the event and, possibly, have a
> > slot to
> > share our experience and the work being done, especially related to
> > testing
> > in other architectures, like arm64 and s390x.
> > 
> > I also hope this opportunity can make us closer to upstream, so we
> > can start
> > contributing more w/ patches and fixes.
> > 
> > Thank you!
> > 
> > > 
> > > On Tue, 2019-10-15 at 16:42 -0500, Ken Gaillot wrote:
> > > > I'm happy to announce that we have a date and location for the
> > > > next
> > > > ClusterLabs Summit: Wednesday, Feb. 5, and Thursday, Feb. 6,
> > > > 2020, in
> > > > Brno, Czechia. This year's host is Red Hat.
> > > > 
> > > > Details will be given on this wiki page as they become
> > > > available:
> > > > 
> > > >http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020
> > > > 
> > > > We are still in the early stages of organizing, and need your
> > > > input.
> > > > 
> > > > Most importantly, we need a good idea of how many people will
> > > > attend,
> > > > to ensure we have an appropriate conference room and amenities.
> > > > The
> > > > wiki page has a section where you can say how many people from
> > > > your
> > > > organization expect to attend. We don't need a firm commitment
> > > > or an
> > > > immediate response, just let us know once you have a rough
> > > > idea.
> > > > 
> > > > We also invite you to propose a talk, whether it's a talk you
> > > > want to
> > > > give or something you are interested in hearing more about. The
> > > > wiki
> > > > page has a section for that, too. Anything related to open-
> > > > source
> > > > clustering is welcome: new features and plans for the cluster
> > > > software projects, how-to's and case histories for integrating
> > > > specific services into a cluster, utilizing specific
> > > > stonith/networking/etc. technologies in a cluster, tips for
> > > > administering a cluster, and so forth.
> > > > 
> > > > I'm excited about the chance for developers and users to meet
> > > > in
> > > > person. Past summits have been helpful for shaping the
> > > > direction of
> > > > the
> > > > projects and strengthening the community. I look forward to
> > > > seeing
> > > > many
> > > > of you there!
> > 
> > -- 
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users
> > 
> > ClusterLabs home: https://www.clusterlabs.org/
> > 
> 
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: fencing on iscsi device not working

2019-11-06 Thread Ken Gaillot
On Wed, 2019-11-06 at 08:04 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  wrote on 05.11.2019 at
> > > > 16:05 in
> 
> message
> :
> > Coincidentally, the documentation for the pcmk_host_check default
> > was
> > recently updated for the upcoming 2.0.3 release. Once the release
> > is
> > out, the online documentation will be regenerated, but here is the
> > text:
> > 
> > Default
> > ‑‑‑
> > static‑list if either pcmk_host_list or pcmk_host_map is set,
> > otherwise
> > dynamic‑list if the fence device supports the list action,
> > otherwise
> > status if the fence device supports the status action, otherwise
> > none
> 
> I'd make that an itemized list with four items. I think it would be
> easier to
> understand.

Good idea; I edited it so that the default and description are
combined:

How to determine which machines are controlled by the device. Allowed
values:

* +static-list:+ check the +pcmk_host_list+ or +pcmk_host_map+
attribute (this is the default if either one of those is set)

* +dynamic-list:+ query the device via the "list" command (this is
otherwise the default if the fence device supports the list action)

* +status:+ query the device via the "status" command (this is
otherwise the default if the fence device supports the status action)

* +none:+ assume every device can fence every machine (this is
otherwise the default)


> > 
> > On Tue, 2019‑11‑05 at 11:01 +0100, wf...@niif.hu wrote:
> > > Roger Zhou  writes:
> > > 
> > > > On 11/3/19 12:56 AM, wf...@niif.hu wrote:
> > > > 
> > > > > Andrei Borzenkov  writes:
> > > > > 
> > > > > > According to documentation, pcmk_host_list is used only if
> > > > > > pcmk_host_check=static‑list which is not default, by
> > > > > > default
> > > > > > pacemaker
> > > > > > queries agent for nodes it can fence and fence_scsi does
> > > > > > not
> > > > > > return
> > > > > > anything.
> > > > > 
> > > > > The documentation is somewhat vague here.  The note about
> > > > > pcmk_host_list
> > > > > says: "optional unless pcmk_host_check is static‑list".  It
> > > > > does
> > > > > not
> > > > > state how pcmk_host_list is used if pcmk_host_check is the
> > > > > default
> > > > > dynamic‑list, 
> > > > 
> > > > The confusion might be because of "the language barrier".
> > > > 
> > > > My interpretation is like this:
> > > > 
> > > > 1. pcmk_host_list is used only if pcmk_host_check is
> > > > static‑list.
> > > > 
> > > > 2. pcmk_host_check's default is dynamic‑list.
> > > > That means, by default pcmk_host_list is not used at all.
> > > 
> > > But this interpretation does not align with reality:
> > > 
> > > > > but I successfully use such setups with Pacemaker 1.1.16
> > > > > with fence_ipmilan.
> > > 
> > > (I mean I don't set pcmk_host_check on my fence_ipmilan
> > > resources,
> > > only
> > > pcmk_host_list, and they work.)
> > > 
> > > Unless:
> > > 
> > > > > the behavior is different in 2.0.1 (the version in Debian
> > > > > buster).
> > > 
> > > That's why I asked:
> > > 
> > > > > Ram, what happens if you set pcmk_host_check to static‑list?
> > > 
> > > Of course the developers are most welcome to chime in with their
> > > intentions and changes concerning this, I haven't got the time to
> > > dig
> > > into the core right now.  Tough I'm very much interested for my
> > > own
> > > sake
> > > as well, because I'm about to bring up a buster cluster with very
> > > similar config.
> > 
> > ‑‑ 
> > Ken Gaillot 
> > 
> > ___
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users 
> > 
> > ClusterLabs home: https://www.clusterlabs.org/ 
> 
> 
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Re: Announcing ClusterLabs Summit 2020

2019-11-06 Thread Ken Gaillot
This topic sounds promising. Maybe we could do a round table where 3 or
4 people give 15-minute presentations about their technique?

Jehan-Guillaume, Damien, Ulrich, would you possibly be interested in
participating? I realize it's early to make any firm commitments, but
we could start considering the possibilities.

On Wed, 2019-11-06 at 08:22 +0100, Ulrich Windl wrote:
> > > > Valentin Vidic  wrote on
> > > > 05.11.2019 at
> 
> 20:35
> in message <20191105193555.gg27...@valentin-vidic.from.hr>:
> > On Mon, Nov 04, 2019 at 08:07:51PM ‑0600, Ken Gaillot wrote:
> > > A reminder: We are still interested in ideas for talks, and rough
> > > estimates of potential attendees. "Maybe" is perfectly fine at
> > > this
> > > stage. It will let us negotiate hotel rates and firm up the
> > > location
> > > details.
> > 
> > Not sure if I would be able to attend but I would be interested to
> > know if there is some framework for release testing resource
> > agents?
> > Something along the lines:
> > 
> > ‑ bring up 3 VMs
> > ‑ configure a cluster using ansible for service X
> > ‑ destroy node2
> > ‑ wait some time
> > ‑ check if the service is still available
> 
> Nothing like that, but I wrote this wrapper to do some pre-release
> testing
> for my RAs:
> The first parameter is the RA name (required)
> If followed by "debug" the script is run by "bash -x"
> If followed by "manual" the following parameter is the action to test
> If no parameters follow ocf-tester is used
> 
> The actual parameters are written to files named "ocf/${BASE}-
> test*.params"
> ($BASE is the RA name, and it's expected that the testing RA (not
> inmstalled
> yet) lives in sub-directory ocf/). The parameter files by themselves
> contain
> lines like "name=value" like this example, and the tests are
> performed in
> "shell order":
> 
> dest="www/80"
> source="localhost/0"
> tag="HA"
> mask="I"
> logging="F"
> log_format="TL"
> options="K:120"
> 
> And finally the script (local commit 39030162, just for reference.
> You'll have
> to replace "xola" with the proper prefix to use RAs installed
> already):
> > cat tester
> 
> #!/bin/sh
> # wrapper script to test OCF RA
> if [ $# -lt 1 ]; then
> echo "$0: missing base" >&2
> exit 1
> fi
> BASE="$1"; shift
> for ra in "ocf/$BASE" "/usr/lib/ocf/resource.d/xola/$BASE"
> do
> if [ -e "$ra" ]; then
> RA="$ra"
> break
> fi
> done
> INSTANCE="$BASE"
> PARAM_FILES="ocf/${BASE}-test*.params"
> if [ X"$RA" = X ]; then
> echo "$0: RA $BASE not found" >&2
> exit 1
> fi
> case "$1" in
> debug)
> DEBUG="bash -x"
> MODE=$1
> shift
> ;;
> manual)
> MODE=$1
> shift
> ;;
> *)
> MODE=AUTO
> esac
> echo "$0: Using $INSTANCE ($RA) in $MODE mode"
> for PARAM_FILE in $PARAM_FILES
> do
> echo "$0: Using parameter file $PARAM_FILE"
> if [ $MODE != AUTO ]; then
> for action
> do
> eval OCF_ROOT=/usr/lib/ocf
> OCF_RESOURCE_INSTANCE="$INSTANCE" \
> $(sed -ne 's/^\([^#=]\+=.\+\)$/OCF_RESKEY_\1/p'
> "$PARAM_FILE")
> \
> $DEBUG $RA "$action"
> echo "$0: Exit status of $action is $?"
> done
> else
> if [ $# -eq 0 ]; then
> eval /usr/sbin/ocf-tester -n "$INSTANCE" \
> $(sed -ne 's/^\([^#=]\+=.\+\)$/-o \1/p'
> "$PARAM_FILE") \
> $RA
> echo "$0: Exit status is $?"
> else
> echo "$0: Extra parameters: $@" >&2
> fi
> fi
> echo "$0: Parameter file $PARAM_FILE done"
> done
> ###
> 
> Regards,
> Ulrich
> 
> > 
> > ‑‑ 
> > Valentin
> > ___
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users 
> > 
> > ClusterLabs home: https://www.clusterlabs.org/ 
> 
> 
> 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-05 Thread Ken Gaillot
On Tue, 2019-11-05 at 14:21 -0300, Rafael David Tinoco wrote:
> On 04/11/2019 23:07, Ken Gaillot wrote:
> > Hi all,
> > 
> > A reminder: We are still interested in ideas for talks, and rough
> > estimates of potential attendees. "Maybe" is perfectly fine at this
> > stage. It will let us negotiate hotel rates and firm up the
> > location
> > details.
> 
> Hello. This is Rafael, from Canonical. I'm currently in charge of the
> HA 
> stack in Ubuntu (and helping debian-ha-maintainers), especially now
> for 
> 20.04 LTS release.
> 
> I wonder if I could participate in the event and, possibly, have a
> slot 
> to share our experience and the work being done, especially related
> to 
> testing in other architectures, like arm64 and s390x.

Most definitely! I'll put you down for a speaking slot. (There's plenty
of time to change plans if necessary.)

> I also hope this opportunity can make us closer to upstream, so we
> can 
> start contributing more w/ patches and fixes.

The summit is a great way to get started. Making personal connections
really helps everyone understand each other's contexts.

> Thank you!
> 
> > 
> > On Tue, 2019-10-15 at 16:42 -0500, Ken Gaillot wrote:
> > > I'm happy to announce that we have a date and location for the
> > > next
> > > ClusterLabs Summit: Wednesday, Feb. 5, and Thursday, Feb. 6,
> > > 2020, in
> > > Brno, Czechia. This year's host is Red Hat.
> > > 
> > > Details will be given on this wiki page as they become available:
> > > 
> > >http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020
> > > 
> > > We are still in the early stages of organizing, and need your
> > > input.
> > > 
> > > Most importantly, we need a good idea of how many people will
> > > attend,
> > > to ensure we have an appropriate conference room and amenities.
> > > The
> > > wiki page has a section where you can say how many people from
> > > your
> > > organization expect to attend. We don't need a firm commitment or
> > > an
> > > immediate response, just let us know once you have a rough idea.
> > > 
> > > We also invite you to propose a talk, whether it's a talk you
> > > want to
> > > give or something you are interested in hearing more about. The
> > > wiki
> > > page has a section for that, too. Anything related to open-source
> > > clustering is welcome: new features and plans for the cluster
> > > software projects, how-to's and case histories for integrating
> > > specific services into a cluster, utilizing specific
> > > stonith/networking/etc. technologies in a cluster, tips for
> > > administering a cluster, and so forth.
> > > 
> > > I'm excited about the chance for developers and users to meet in
> > > person. Past summits have been helpful for shaping the direction
> > > of
> > > the
> > > projects and strengthening the community. I look forward to
> > > seeing
> > > many
> > > of you there!
> 
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-05 Thread Ken Gaillot
On Tue, 2019-11-05 at 00:39 -0500, Digimer wrote:
> On 2019-11-04 9:07 p.m., Ken Gaillot wrote:
> > Hi all,
> > 
> > A reminder: We are still interested in ideas for talks, and rough
> > estimates of potential attendees. "Maybe" is perfectly fine at this
> > stage. It will let us negotiate hotel rates and firm up the
> > location
> > details.
> 
> I will be there. I would like to talk about Anvil! M3, our RHEL8 +
> Pacemaker 2 + knet/corosync3 + DRBD 9 VM cluster stack. If time
> allows
> for a second slot, I'd be interested in talking about ScanCore AI, an
> artificial intelligence initiative Alteeve is embarking on with a
> local
> university (which is likely 2~3 years away, so I can leave it for the
> next summit in a couple years if we fill up the speaking slots).

Awesome, sounds interesting! Let's plan on two slots, I don't think
there will be a problem with that.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] fencing on iscsi device not working

2019-11-05 Thread Ken Gaillot
Coincidentally, the documentation for the pcmk_host_check default was
recently updated for the upcoming 2.0.3 release. Once the release is
out, the online documentation will be regenerated, but here is the
text:

Default
---
static-list if either pcmk_host_list or pcmk_host_map is set, otherwise
dynamic-list if the fence device supports the list action, otherwise
status if the fence device supports the status action, otherwise none

On Tue, 2019-11-05 at 11:01 +0100, wf...@niif.hu wrote:
> Roger Zhou  writes:
> 
> > On 11/3/19 12:56 AM, wf...@niif.hu wrote:
> > 
> > > Andrei Borzenkov  writes:
> > > 
> > > > According to documentation, pcmk_host_list is used only if
> > > > pcmk_host_check=static-list which is not default, by default
> > > > pacemaker
> > > > queries agent for nodes it can fence and fence_scsi does not
> > > > return
> > > > anything.
> > > 
> > > The documentation is somewhat vague here.  The note about
> > > pcmk_host_list
> > > says: "optional unless pcmk_host_check is static-list".  It does
> > > not
> > > state how pcmk_host_list is used if pcmk_host_check is the
> > > default
> > > dynamic-list, 
> > 
> > The confusion might be because of "the language barrier".
> > 
> > My interpretation is like this:
> > 
> > 1. pcmk_host_list is used only if pcmk_host_check is static-list.
> > 
> > 2. pcmk_host_check's default is dynamic-list.
> > That means, by default pcmk_host_list is not used at all.
> 
> But this interpretation does not align with reality:
> 
> > > but I successfully use such setups with Pacemaker 1.1.16
> > > with fence_ipmilan.
> 
> (I mean I don't set pcmk_host_check on my fence_ipmilan resources,
> only
> pcmk_host_list, and they work.)
> 
> Unless:
> 
> > > the behavior is different in 2.0.1 (the version in Debian
> > > buster).
> 
> That's why I asked:
> 
> > > Ram, what happens if you set pcmk_host_check to static-list?
> 
> Of course the developers are most welcome to chime in with their
> intentions and changes concerning this, I haven't got the time to dig
> into the core right now.  Tough I'm very much interested for my own
> sake
> as well, because I'm about to bring up a buster cluster with very
> similar config.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Announcing ClusterLabs Summit 2020

2019-11-04 Thread Ken Gaillot
Hi all,

A reminder: We are still interested in ideas for talks, and rough
estimates of potential attendees. "Maybe" is perfectly fine at this
stage. It will let us negotiate hotel rates and firm up the location
details.

On Tue, 2019-10-15 at 16:42 -0500, Ken Gaillot wrote:
> I'm happy to announce that we have a date and location for the next
> ClusterLabs Summit: Wednesday, Feb. 5, and Thursday, Feb. 6, 2020, in
> Brno, Czechia. This year's host is Red Hat.
> 
> Details will be given on this wiki page as they become available:
> 
>   http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020
> 
> We are still in the early stages of organizing, and need your input.
> 
> Most importantly, we need a good idea of how many people will attend,
> to ensure we have an appropriate conference room and amenities. The
> wiki page has a section where you can say how many people from your
> organization expect to attend. We don't need a firm commitment or an
> immediate response, just let us know once you have a rough idea.
> 
> We also invite you to propose a talk, whether it's a talk you want to
> give or something you are interested in hearing more about. The wiki
> page has a section for that, too. Anything related to open-source
> clustering is welcome: new features and plans for the cluster
> software projects, how-to's and case histories for integrating
> specific services into a cluster, utilizing specific
> stonith/networking/etc. technologies in a cluster, tips for
> administering a cluster, and so forth.
> 
> I'm excited about the chance for developers and users to meet in
> person. Past summits have been helpful for shaping the direction of
> the
> projects and strengthening the community. I look forward to seeing
> many
> of you there!
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Pacemaker 2.0.3-rc2 now available

2019-10-31 Thread Ken Gaillot
The second release candidate for Pacemaker 2.0.3 is now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc2

This has minor bug fixes and documentation improvements compared to
rc1, especially in crm_mon. Two recent suggestions from this mailing
list were implemented: crm_mon's --interval option now takes a wide
range of formats and is properly documented in the help and man page;
and the "Watchdog will be used" log message now mentions stonith-
watchdog-timeout. There is also a significant bug fix for clusters with
guest nodes or container bundles. For details, please see the change
log:

https://github.com/ClusterLabs/pacemaker/blob/Pacemaker-2.0.3-rc2/ChangeLog

My goal is to have the final release out in a few weeks.

Everyone is encouraged to download, compile and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

A 1.1.22-rc2 version with selected backports from this release will
also be released soon.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] fencing on iscsi device not working

2019-10-30 Thread Ken Gaillot
-02a5-4f9a-b999-806413a3da12: No such device (-19) 
> Oct 30 12:22:34 duke pacemaker-controld  [1703]
> (tengine_stonith_callback)  notice: Stonith operation 26 for duke
> failed (No such device): aborting transition. 
> Oct 30 12:22:34 duke pacemaker-controld  [1703]
> (tengine_stonith_callback)  warning: No devices found in cluster
> to fence duke, giving up 
> Oct 30 12:22:34 duke pacemaker-controld  [1703]
> (abort_transition_graph)info: Transition 69 aborted: Stonith
> failed | source=abort_for_stonith_failure:776 complete=false 
> Oct 30 12:22:34 duke pacemaker-controld  [1703]
> (tengine_stonith_notify)error: Unfencing of duke by 
> failed: No such device (-19) 
> Oct 30 12:22:34 duke pacemaker-controld  [1703] (run_graph) notice:
> Transition 69 (Complete=2, Pending=0, Fired=0, Skipped=0,
> Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-28.bz2):
> Complete 
> Oct 30 12:22:34 duke pacemaker-controld  [1703] (do_log) info: Input
> I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd 
> Oct 30 12:22:34 duke pacemaker-controld  [1703]
> (do_state_transition)   notice: State transition S_TRANSITION_ENGINE
> -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL
> origin=notify_crmd 
> Oct 30 12:22:37 duke pacemaker-based [1698]
> (cib_process_ping)  info: Reporting our current digest to duke:
> 2eb5c8ee7e7df17c5737befc7d93de76 for 0.37.6 (0x55a5ab900f70 0) 
> 
> ### 
> 
> Here is my corosync config for your reference, 
> 
> # Please read the corosync.conf.5 manual page 
> totem { 
> version: 2 
> cluster_name: debian 
> token: 3000 
> token_retransmits_before_loss_const: 10 
> transport: udpu 
> interface { 
> ringnumber: 0 
> bindnetaddr: 130.237.191.255 
> } 
> } 
> logging { 
> fileline: off 
> to_stderr: no 
> to_logfile: yes 
> logfile: /var/log/corosync/corosync.log 
> to_syslog: yes 
> debug: off 
> timestamp: on 
> logger_subsys { 
> subsys: QUORUM 
> debug: off 
> } 
> } 
> 
> quorum { 
> provider: corosync_votequorum 
> two_node: 1 
> } 
> 
> nodelist { 
> node { 
> name: duke 
> nodeid: 1 
> ring0_addr: XX 
> } 
> node { 
> name: miles 
> nodeid: 2 
> ring0_addr: XX 
> } 
> } 
> ### 
> 
> I am completely out of ideas in terms of what to do, and I would
> appreciate any help. Let me know if you guys need more details. 
> 
> Thanks in advance! 
> Ram 
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] SLES12 SP4: update_cib_stonith_devices_v2 nonsense "Watchdog will be used via SBD if fencing is required"

2019-10-23 Thread Ken Gaillot
On Wed, 2019-10-23 at 15:53 +0300, Andrei Borzenkov wrote:
> 23.10.2019 13:35, Ulrich Windl writes:
> > Hi!
> > 
> > In SLES12 SP4 I'm kind of annoyed due to repeating messages
> > "unpack_config:Watchdog will be used via SBD if fencing is
> > required".

It is annoying, but it's a side effect of pacemaker's atomicity.
Whenever pacemaker's scheduler runs, it bases its actions on a snapshot
of the cluster at that moment. Each such run is completely independent
of all other runs, so in effect, it sees everything in the
configuration as "new". Any configuration-related messages thus get
logged every time the scheduler runs.

We did introduce a squelch on this in 1.1.18 / 2.0.0. For certain
messages, we can mark them as "log once". Those will only be logged
once during the lifetime of the scheduler daemon (i.e. once per cluster
restart). We use this almost exclusively for deprecation warnings,
because it's better to have annoying repetition than missing
information that might be relevant to a problem investigation.

> > While examining another problem, I found this sequence:
> > * Some unrelated resource was moved (migrated)
> > * stonith-ng: info: update_cib_stonith_devices_v2:Updating
> > device list from the cib: create constraints
> > 
> > (at that point I'm expecting that there was NO update related to
> > SBD devices)

That one annoys me, too. I hope to get rid of it one day but there's
just too much higher priority work to do. It's a similar situation to
the scheduler: whenever the CIB changes at all, the fencer updates its
view of the stonith device list, and logs such a message regardless of
whether anything actually changed or not.

> > * stonith-ng: info: cib_devices_update:   Updating devices
> > to version 2.35.0
> > * stonith-ng:   notice: unpack_config:Watchdog will be used via
> > SBD if fencing is required
> > * cib: info: cib_file_write_with_digest:   Wrote version
> > 2.35.0 of the CIB to disk (digest:
> > 8e3625f4ef74b6fe6c6429757023b7e9)
> > 
> > I mean: I know that SBD will use the watchdog for fencing if
> > everything else fails, but why is this message logged so many
> > times?
> > 
> 
> This message is also misleading. Pacemaker will actually use watchdog
> self-fencing only if stonith-watchdog-timeout is not zero. And zero
> is
> default.

Good point, I'll update the message accordingly.
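
(For anyone following along, a minimal sketch of enabling watchdog
self-fencing, assuming SBD is already configured on the nodes and the value
is only an example:

  pcs property set stonith-watchdog-timeout=10s

With the default of 0, watchdog self-fencing is not used, as noted above.)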
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Re: Antw: Coming in Pacemaker 2.0.3: crm_mon output changes

2019-10-21 Thread Ken Gaillot
On Wed, 2019-10-16 at 08:08 +0200, Ulrich Windl wrote:

> > > > Why not replace "--web-cgi" with "--output-format=cgi"?
> > 
> > CGI output is identical to HTML, with just a header change, so it
> > was
> > logically more of an option to the HTML implementation rather than
> > a
> > separate one.
> > 
> > With the new approach, each format type may define any additional
> > options to modify its behavior, and all tools automatically inherit
> > those options. These will be grouped together in the help/man page.
> > For
> > example, the HTML option help is:
> > 
> > Output Options (html):
> >   --output-cgi  Add text needed to use output
> > in a CGI 
> > program
> >   --output-meta-refresh=SECONDS How often to refresh
> 
> God bless the long options, but considering that the only thing that
> is
> refreshed in crm_mon's output is... well, the output... why not just
> have
> --refresh or --refresh-interval.

One of the goals is to have options that are consistent across all
tools. We came up with the "--output-" prefix to make it easy to avoid
conflicts with existing/future tool options.

However, I think you're right that it's confusing. I'm thinking that
instead, we can reserve each of the format types as an option prefix.
For example, for html it would become:

Output Options (html):
  --html-cgi Add text needed to use output in a CGI program
  --html-stylesheet=URI  Link to an external CSS stylesheet
  --html-title=TITLE Page title

which I think is a little shorter and more intuitive. There are a few
existing --xml-* options we'd have to work around but I don't think
that's a problem.

Does that make more sense?

BTW we decided to get rid of --output-meta-refresh altogether, and just
continue using the existing --interval option for that purpose.

> Also it wouldn't be too hard (if there's any demand) to allow suffixes like
> 's' for seconds, 'm' for minutes, and most likely more do not make
> sense for a
> refresh interval.

Actually it already does, it's just not in the help description. We'll
update the help.
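For example, something like the following should be accepted (the exact unit
spellings may vary between versions):

  crm_mon --interval=30s
  crm_mon --interval=2min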


> > > > When called with ‑‑as‑xml, crm_mon's XML output will be
> > > > identical
> > > > to
> > > > previous versions. When called with the new ‑‑output‑as=xml
> > > > option,
> > > > it
> > > > will be slightly different: the outmost element will be a
> > > >  > > > result> element, which will be consistent across all tools. The
> > > > old
> > > > XML
> > > 
> > > Why not as simple "status" element? "-result" doesn't really add
> > > anything
> > > useful.
> > 
> > We wanted the design to allow for future flexibility in how users
> > ask
> > pacemaker to do something. The XML output would be the same whether
> > the
> > request came from a command-line tool, GUI, C API client
> > application,
> > REST API client, or any other future interface. The idea is that
> >  might be a response to a .
> 
> But most likely any response will be a kind of result, so why have
> "result"
> explicitly? Also as it's all about pacemaker, why have "pacemaker" in
> it?
> (Remember how easy it was to get rid of "heartbeat"? ;-))
> So my argument for "status" simply is that the data describes the
> status.

The idea is that if the output is saved to a file, someone looking at
that file later could easily figure out where it came from, even
without any other context.

> > All of the format options start with "--output-" so we can reserve
> > those option names across all tools.
> 
> Do you actually have a big matrix of all options available across the
> tools?
> I'd like to see!

Me too. :) Not yet, we just grep for a new option name we're thinking
of using. That's why we went with the "--output-" prefix, it was easy
to make them unique. :)
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

[ClusterLabs] Pacemaker 2.0.3-rc1 now available

2019-10-18 Thread Ken Gaillot
Hi all,

I am happy to announce that source code for the first release candidate
for Pacemaker version 2.0.3 is now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc1

Highlights previously discussed on this list include a dynamic cluster
recheck interval (you don't have to care about cluster-recheck-interval 
for failure-timeout or most rules now), new Pacemaker Remote options
for security hardening (listen address and TLS priorities), and Year
2038 compatibility.
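As a quick illustration of the first point (the resource name and value are
placeholders, and this assumes pcs is in use):

  pcs resource meta my-rsc failure-timeout=600s

With 2.0.3, the failure should now expire close to the configured timeout
without any tuning of cluster-recheck-interval.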

Also, crm_mon now supports the new --output-as/--output-to options, and
has some tweaks to the text and HTML output that will hopefully make it
easier to read.

A couple of changes that haven't been mentioned yet:

* A new fence-reaction cluster option controls whether the local node
will stop pacemaker or panic the local host if notified of its own
fencing. This generally happens with fabric fencing (e.g. fence_scsi)
when the host and networking are still functional. The default, "stop",
is the previous behavior. The new option of "panic" makes more sense
for a node that's been fenced, so it may become the default in a future
release, but we are not doing so at this time for backward
compatibility. Therefore, if you prefer the "stop" behavior (for
example, to avoid losing logs when fenced), it is recommended to
specify it explicitly.
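A rough sketch of pinning the current behavior explicitly (assuming pcs; an
older pcs that does not yet know the option may need --force):

  pcs property set fence-reaction=stop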

* We discovered that the ocf:pacemaker:pingd agent, a legacy alias for
ocf:pacemaker:ping, has actually been broken since 1.1.3 (!). Rather
than fix it, we are formally deprecating it, and will remove it in a
future release.

As usual, there were many bug fixes and log message improvements as
well. For more details about changes in this release, please see the
change log:

https://github.com/ClusterLabs/pacemaker/blob/2.0/ChangeLog

Everyone is encouraged to download, compile and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all contributors of source code to this release,
including Aleksei Burlakov, Chris Lumens, Gao,Yan, Hideo Yamauchi, Jan
Pokorný, John Eckersberg, Kazunori INOUE, Ken Gaillot, Klaus Wenninger,
Konstantin Kharlamov, Munenari, Roger Zhou, S. Schuberth, Tomas
Jelinek, and Yuusuke Iida.

1.1.22-rc1, with selected backports from this release, will also be
released soon.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Apache doesn't start under corosync with systemd

2019-10-17 Thread Ken Gaillot
On Wed, 2019-10-16 at 13:33 +, Reynolds, John F - San Mateo, CA -
Contractor wrote:
> > mailto:kgail...@redhat.com] 
> > Sent: Monday, October 14, 2019 12:02 PM
> > 
> > If you have SELinux enabled, check for denials. The cluster
> > processes have a different SELinux context than systemd, so
> > policies might not be set up correctly.
> > --
> > Ken Gaillot 
> 
> Alas, SELinux is not in use.
> 
> 
> I am thinking that the apache OCF module is not starting up apache
> with the modules that it needs.  
> 
>  Again, startup with 'systemctl start apache' brings up the http
> daemons, so we know that the Apache configuration is clean.  
> 
> But  if I enable trace and run the ocf script by hand:
> 
> export OCF_TRACE_RA=1
> /usr/lib/ocf/resource.d/heartbeat/apache start ; echo $?
> 
> Part of the output is Apache syntax errors that aren't flagged in the
> regular startup:
> 
> + 14:57:10: ocf_run:443: ocf_log err 'AH00526: Syntax error on line
> 22 of /etc/apache2/vhosts.d/aqvslookup.conf: Invalid command
> '\''Order'\'', perhaps misspelled or defined by a module not included
> in the server configuration '
> 
> The 'Allow' and ' AuthLDAPURL' commands are also flagged as invalid.
> 
> The /etc/sysconfig/apache2 module parameter includes the relevant
> modules:
> 
> APACHE_MODULES="actions alias auth_basic authn_file authz_host
> authz_groupfile authz_core authz_user autoindex cgi dir env expires
> include log_config mime negotiation setenvif ssl socache_shmcb
> userdir reqtimeout authn_core php5 rewrite ldap authnz_ldap status
> access_compat"
> 
> 
> Why are they invoked properly from systemctl but not from ocf?
> 
> John Reynolds 

OCF doesn't know anything about /etc/sysconfig; anything there will
have to be specified in the actual apache configuration.

Alternatively, pacemaker can manage apache via systemd (using
"systemd:httpd" as the agent instead of "ocf:heartbeat:apache"). But in
that case the monitor will just check whether the process is running
rather than check the status URL.
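
For comparison, rough untested sketches of both approaches (resource names,
paths, and the status URL are examples only; on SLES the systemd unit is
apache2 rather than httpd):

  # OCF agent: the recurring monitor checks the status URL
  pcs resource create WebSite ocf:heartbeat:apache \
      configfile=/etc/apache2/httpd.conf \
      statusurl="http://localhost/server-status" \
      op monitor interval=30s

  # systemd unit: the monitor only checks that the unit is active
  pcs resource create WebSite systemd:apache2 op monitor interval=30s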
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-17 Thread Ken Gaillot
> > > > Node SRVDRSW03: standby
> > > > Online: [ SRVDRSW01 ]
> > > > 
> > > > Full list of resources:
> > > > 
> > > >   ClusterIP  (ocf::heartbeat:IPaddr2):   Started
> > > > SRVDRSW01
> > > >   CouchIP(ocf::heartbeat:IPaddr2):   Started
> > > > SRVDRSW01
> > > >   FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
> > > >   ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
> > > > SRVDRSW01
> > > > 
> > > > crm_simulate -sL returns the follwoing
> > > > 
> > > > ---cut---
> > > > 
> > > > native_color: CouchIP allocation score on SRVDRSW01: 0
> > > > native_color: CouchIP allocation score on SRVDRSW02: 0
> > > > native_color: CouchIP allocation score on SRVDRSW03: 0
> > > > 
> > > > ---cut---
> > > > Why is that? I have explicitly assigned -INFINITY to CouchIP
> > > > resource
> > > > related to node SRVDRSW01 (as stated by pcs constraint:
> > > > Disabled on:
> > > > SRVRDSW01 (score:-INFINITY) ).
> > > > What am I missing or doing wrong?
> > > 
> > > I am not that deep into these relationships, proper design
> > > documentation with guided examples is non-existent[*].
> > > 
> > > But it occurs to me that the situation might be the inverse of
> > > what's
> > > been confusing for typical opt-out clusters:
> > > 
> > > https://lists.clusterlabs.org/pipermail/users/2017-April/005463.html
> > > 
> > > Have you tried avoiding:
> > > 
> > > >Resource: CouchIP
> > > >  Disabled on: SRVRDSW01 (score:-INFINITY)
> > 
> > Yes, I already tried that, but I did it again nevertheless since I
> > am a 
> > newbie. I deleted the whole set of resources and commented out the 
> > constraint from the creation script.
> > The cluster was running, then I put all the nodes in standby and
> > brought
> > SRVRDSW01 back. The CouchIP resource has been bound to the
> > "forbidden" 
> > node.
> > crm_simulate -sL still gives a score of 0 to the three nodes when
> > it 
> > should be something like -INFINITY 100 and 200 respectively.
> > 
> > Just to make the whole thing more confusing: I noticed that
> > SRVRDSW03, 
> > that is (implicitly) not allowed to get the ClusterIP resource is
> > marked 
> >   (correctly) as -INFINITY from crm_simulate. So the opt in 
> > configuration would seem to be correct, but for the CouchIP
> > resource 
> > that is no special or different from the ClusterIP resource.
> > 
> > I am really disoriented.
> > > 
> 
> Just another bit of information: I put the whole set in stand by
> then 
> brought back SRVRDSW03... surprise surprise the ClusterIP resource
> has 
> been bound to it even if it shouldn't.
> 
> What's wrong?
> 
> > > altogether, since when such explicit edge would be missing,
> > > implicit
> > > "cannot" (for opt-in cluster) would apply anyway, and perhaps
> > > even
> > > reliably, then?
> > > 
> > > 
> > > [*] as far as I know, except for
> > >  
> > > https://wiki.clusterlabs.org/w/images/a/ae/Ordering_Explained_-_White.pdf
> > >  
> > > https://wiki.clusterlabs.org/w/images/8/8a/Colocation_Explained_-_White.pdf
> > >  
> > > 
> > >  probably not revised for giving a truthful model in all
> > > details 
> > > for years,
> > >  and not demonstrating the effect of symmetry requested or
> > > avoided 
> > > within
> > >  the cluster on those, amongst others
> > >  (shameless plug: there will be such coverage for upcoming
> > > group 
> > > based
> > >  access control addition [those facilities haven't been
> > > terminated in
> > >  back-end so far] as a first foray in this area -- also the
> > > correct
> > >  understanding is rather important here)
> > > 
> > > 
> > > ___
> > > Manage your subscription:
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > > 
> > > ClusterLabs home: https://www.clusterlabs.org/
> > > 
> > 
> > Thank you for your reply.
> 
> 
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

[ClusterLabs] Announcing ClusterLabs Summit 2020

2019-10-15 Thread Ken Gaillot
I'm happy to announce that we have a date and location for the next
ClusterLabs Summit: Wednesday, Feb. 5, and Thursday, Feb. 6, 2020, in
Brno, Czechia. This year's host is Red Hat.

Details will be given on this wiki page as they become available:

  http://plan.alteeve.ca/index.php/HA_Cluster_Summit_2020

We are still in the early stages of organizing, and need your input.

Most importantly, we need a good idea of how many people will attend,
to ensure we have an appropriate conference room and amenities. The
wiki page has a section where you can say how many people from your
organization expect to attend. We don't need a firm commitment or an
immediate response, just let us know once you have a rough idea.

We also invite you to propose a talk, whether it's a talk you want to
give or something you are interested in hearing more about. The wiki
page has a section for that, too. Anything related to open-source
clustering is welcome: new features and plans for the cluster software 
projects, how-to's and case histories for integrating specific services into a 
cluster, utilizing specific stonith/networking/etc. technologies in a cluster, 
tips for administering a cluster, and so forth.

I'm excited about the chance for developers and users to meet in
person. Past summits have been helpful for shaping the direction of the
projects and strengthening the community. I look forward to seeing many
of you there!
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] DLM, cLVM, GFS2 and OCFS2 managed by systemd instead of crm ?

2019-10-15 Thread Ken Gaillot
On Tue, 2019-10-15 at 21:35 +0200, Lentes, Bernd wrote:
> Hi,
> 
> i'm a big fan of simple solutions (KISS).
> Currently i have DLM, cLVM, GFS2 and OCFS2 managed by pacemaker.
> They all are fundamental prerequisites for my resources (Virtual
> Domains).
> To configure them i used clones and groups.
> Why not having them managed by systemd to make the cluster setup more
> overseeable ?
> 
> Is there a strong reason that pacemaker cares about them ?
> 
> Bernd 

Either approach is reasonable. The advantages of keeping them in
pacemaker are:

- Service-aware recurring monitor (if OCF)

- If one of those components fails, pacemaker will know to try to
recover everything in the group from that point, and if necessary,
fence the node and recover the virtual domain elsewhere (if they're in
systemd, pacemaker will only know that the virtual domain has failed,
and likely keep trying to restart it fruitlessly)

- Convenience of things like putting a node in standby mode, and
checking resource status on all nodes with one command

If you do move them to systemd, be sure to use the resource-agents-deps 
target to ensure they're started before pacemaker and stopped after
pacemaker.
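
For example, a systemd drop-in along these lines (a sketch; the unit
name is illustrative and varies by distribution) ties such a service to
the cluster's startup/shutdown ordering:

  # /etc/systemd/system/resource-agents-deps.target.d/local.conf
  [Unit]
  Requires=dlm.service
  After=dlm.service

followed by "systemctl daemon-reload".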
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Coming in Pacemaker 2.0.3: crm_mon output changes

2019-10-15 Thread Ken Gaillot
On Tue, 2019-10-15 at 08:42 +0200, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 15.10.2019 um
> > > > 00:47 in
> 
> Nachricht
> :
> > Hi all,
> > 
> > With Pacemaker 2.0.2, we introduced a new experimental option for
> > XML
> > output from stonith_admin. This was the test case for a new output
> > model for Pacemaker tools. I'm happy to say this has been extended
> > to
> > crm_mon and will be considered stable as of 2.0.3.
> > 
> > crm_mon has always supported text, curses, HTML, and XML output,
> > and
> > that doesn't change. However the command‑line options for those
> > have
> > been deprecated and replaced with new forms:
> > 
> > Old:                  New:
> > --as-xml              --output-as=xml
> > --as-html=FILE        --output-as=html --output-to=FILE
> > --web-cgi             --output-as=html --output-cgi
> > --disable-ncurses     --output-as=text
> 
> I'd prefer "--output-format|output-fmt|format" over "--format-as",
> because I
> think it's more clear.

That's a good question, what it should be called. We chose --output-as
in 2.0.2 for stonith_admin, so that has some weight now. I think the
main reason was to keep it shorter for typing (there's no single-letter 
equivalent at the moment because there's not an obvious choice
available across all tools).

I'm open to changing it if there's a lot of demand for it, otherwise
I'd rather keep it compatible with 2.0.2.

> Accordingly I'd prefer "--output-file" over "--output-to".

I like the more generic meaning of this one. The new approach also
supports --output-to="-" for stdout (which is the default). One could
imagine adding some other non-file capability in the future.

> Why not replace "--weg-cgi" with "--output-format=cgi"?

CGI output is identical to HTML, with just a header change, so it was
logically more of an option to the HTML implementation rather than a
separate one.

With the new approach, each format type may define any additional
options to modify its behavior, and all tools automatically inherit
those options. These will be grouped together in the help/man page. For
example, the HTML option help is:

Output Options (html):
  --output-cgi                  Add text needed to use output in a CGI program
  --output-meta-refresh=SECONDS How often to refresh
  --output-stylesheet-link=URI  Link to an external CSS stylesheet
  --output-title=TITLE  Page title

> > 
> > The new --output-as and --output-to options are identical to
> > stonith_admin's, and will eventually be supported by all Pacemaker
> > tools. Each tool may support a different set of formats; for
> > example,
> > stonith_admin supports text and xml.
> > 
> > When called with --as-xml, crm_mon's XML output will be identical
> > to previous versions. When called with the new --output-as=xml
> > option, it will be slightly different: the outmost element will be
> > a <pacemaker-result> element, which will be consistent across all
> > tools. The old XML
> 
> Why not as simple "status" element? "-result" doesn't really add
> anything
> useful.

We wanted the design to allow for future flexibility in how users ask
pacemaker to do something. The XML output would be the same whether the
request came from a command-line tool, GUI, C API client application,
REST API client, or any other future interface. The idea is that
a <pacemaker-result> might be the response to a corresponding request element.

<pacemaker-result> was also introduced with stonith_admin in 2.0.2, so
that carries some weight, but it was announced as experimental at the
time, so I'm open to changing the syntax if there's a clear preference
for an alternative.

> > schema remains documented in crm_mon.rng; the new XML schema will
> > be
> > documented in an api-result.rng schema that will encompass all
> > tools'
> > XML output.
> > 
> > Beyond those interface changes, the text output displayed by
> > crm_mon
> > has been tweaked slightly. It is more organized with list headings
> > and
> > bullets. Hopefully you will find this easier to read. We welcome
> > any
> > feedback or suggestions for improvement.
> 
> Older versions use a mixture of space for indent, and '*' and '+' as
> bullets.
> A unified approach would be either to use some amount of spaces or
> one TAB for
> indenting consistently, maybe combined with a set of bullet
> characters for the
> individual levels of indent.
> 
> So "--bullet-list='*+'" and "--indent-string=..." could set such.

That's an interesting suggestion.

The new approach does use a consistent style, with 2-space indents (to
fit as much a

[ClusterLabs] Coming in Pacemaker 2.0.3: crm_mon output changes

2019-10-14 Thread Ken Gaillot
Hi all,

With Pacemaker 2.0.2, we introduced a new experimental option for XML
output from stonith_admin. This was the test case for a new output
model for Pacemaker tools. I'm happy to say this has been extended to
crm_mon and will be considered stable as of 2.0.3.

crm_mon has always supported text, curses, HTML, and XML output, and
that doesn't change. However the command-line options for those have
been deprecated and replaced with new forms:

Old:                  New:
--as-xml              --output-as=xml
--as-html=FILE        --output-as=html --output-to=FILE
--web-cgi             --output-as=html --output-cgi
--disable-ncurses     --output-as=text

The new --output-as and --output-to options are identical to
stonith_admin's, and will eventually be supported by all Pacemaker
tools. Each tool may support a different set of formats; for example,
stonith_admin supports text and xml.

When called with --as-xml, crm_mon's XML output will be identical to
previous versions. When called with the new --output-as=xml option, it
will be slightly different: the outmost element will be a
<pacemaker-result> element, which will be consistent across all tools.
The old XML
schema remains documented in crm_mon.rng; the new XML schema will be
documented in an api-result.rng schema that will encompass all tools'
XML output.

Beyond those interface changes, the text output displayed by crm_mon
has been tweaked slightly. It is more organized with list headings and
bullets. Hopefully you will find this easier to read. We welcome any
feedback or suggestions for improvement.

The HTML output gains a new feature as well: it uses CSS throughout,
rather than ancient HTML formatting, and you can provide a custom
stylesheet (via --output-stylesheet-link=URL) to control how the page
looks.

If you are a heavy user of crm_mon, we encourage you to test the new
release (expected later this week) and let us know what you like and
don't like. You don't have to upgrade a whole cluster to test crm_mon;
you can install the new release on a test machine, copy the CIB from
your cluster to it, and run it like: CIB_file=/path/to/copied/cib.xml
crm_mon . That won't work with curses output, but you can test
text, HTML, and XML that way.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Apache doesn't start under corosync with systemd

2019-10-14 Thread Ken Gaillot
On Fri, 2019-10-11 at 17:15 +, Reynolds, John F - San Mateo, CA -
Contractor wrote:
> >  If pacemaker is managing a resource, the service should not be
> > enabled to start on boot (regardless of init or systemd). Pacemaker
> > will start and stop the service as needed according to the cluster
> > configuration.
> 
> Apache startup is disabled in systemctl, and there is no apache
> script in /etc/init.d
> 
> > Additionally, your pacemaker configuration is using the apache OCF
> > script, so the cluster won't use /etc/init.d/apache2 at all (it
> > invokes the httpd binary directly).
> > 
> > Keep in mind that the httpd monitor action requires the status
> > module to be enabled -- I assume that's already in place.
> 
> Yes, that is enabled, according to apache2ctl -M.
> 
> 
> The resource configuration is
> 
> Primitive  ncoa_apache apache \
>   Params configfile="/etc/apache2/httpd.conf"\
>   Op monitor internval=40s timeout=60s\
>   Meta target-role=Started
> 
> When I start the resource, crm status shows it in 'starting' mode,
> but never gets to 'Started'.
> 
> There is one process running "/bin/sh
> /usr/lib/ocf/resources.d/heartbeat/apache start"  but the httpd
> processes never come up.  What's worse, with that process running,
> the cluster resource can't migrate; I have to kill it before the
> cluster will finish cleanup and start  on the new node.  'crm
> resource cleanup ncoa_apache' hangs, as well.
> 
> Apache starts up just fine from the systemctl command, so it's not
> the Apache config that's broken.
> 
> Suggestions?
> 
> John Reynolds SMUnix

If you have SELinux enabled, check for denials. The cluster processes
have a different SELinux context than systemd, so policies might not be
set up correctly.
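
For example (assuming auditd is running on the node):

  ausearch -m avc -ts recent

or grep for "denied" in /var/log/audit/audit.log.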
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.3: of interest to packagers and users who compile their own builds

2019-10-11 Thread Ken Gaillot
Hi all,

The following build-related changes arriving in Pacemaker 2.0.3 are
minor and unlikely to affect anyone, but may be of interest to
distribution packagers and users who compile pacemaker themselves.

The configure script now has options to override the default values of
pretty much everything in a pacemaker install that requires root
privileges. These include (shown with their usual values):

  --with-runstatedir   /run or /var/run
  --with-systemdsystemunitdir  /usr/lib/systemd/system
  --with-ocfdir/usr/lib/ocf
  --with-daemon-user   hacluster
  --with-daemon-group  haclient

Changing these may result in non-functional binaries unless all other
relevant software has been built with the same values, but it allows
non-root sandbox builds of pacemaker for testing purposes.
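
As a sketch, a sandbox build using those overrides might look like this
(the prefix, user, and group values are illustrative):

  ./autogen.sh
  ./configure --prefix=$HOME/pcmk-sandbox \
      --with-runstatedir=$HOME/pcmk-sandbox/run \
      --with-ocfdir=$HOME/pcmk-sandbox/lib/ocf \
      --with-daemon-user=$(id -un) \
      --with-daemon-group=$(id -gn)
  make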

The configure script options --with-pkgname and --with-pkg-name have
long been unused. They are now officially deprecated and will be
removed in a future version of pacemaker.

The basic process for building pacemaker has been "./autogen.sh;
./configure; make". Now, if you don't need to change any defaults, you
can just run "make" in a clean source tree and it will run autogen.sh
and/or configure if needed.

"make export", "make dist", "make distcheck", and VPATH builds (i.e.
different source and build trees) should now all work as intended, and
pacemaker should build correctly in a source distribution (as opposed
to a git checkout).

The concurrent-fencing cluster property currently defaults to false. We
plan to change the default to true in a future version of pacemaker
(whenever the next minor version bump will be). Anyone who wants the
new default earlier can set "-DDEFAULT_CONCURRENT_FENCING_TRUE" in
CPPFLAGS before building.

When building RPM packages with "make rpm", pacemaker previously put
the RPM sources, spec file, and source rpm in the top-level build
directory, letting everything else (such as binary rpms) use the system
defaults (typically beneath the user's home directory). That will
remain the default behavior, but the new option "make RPMDEST=subtree
rpm" will put the RPM sources in the top-level build directory and
everything else in a dedicated "rpm" subdirectory of the build tree.
This keeps everything self-contained, which may be useful in certain
environments.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Why is node fenced ?

2019-10-10 Thread Ken Gaillot
On Thu, 2019-10-10 at 17:22 +0200, Lentes, Bernd wrote:
> HI,
> 
> i have a two node cluster running on SLES 12 SP4.
> I did some testing on it.
> I put one into standby (ha-idg-2), the other (ha-idg-1) got fenced a
> few minutes later because i made a mistake.
> ha-idg-2 was DC. ha-idg-1 made a fresh boot and i started
> corosync/pacemaker on it.
> It seems ha-idg-1 didn't find the DC after starting cluster and some
> sec later elected itself  to the DC, 
> afterwards fenced ha-idg-2.

For some reason, corosync on the two nodes was not able to communicate
with each other.

This type of situation is why corosync's two_node option normally
includes wait_for_all.
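
For reference, that corresponds to a corosync.conf quorum section like
this (wait_for_all defaults to 1 whenever two_node is set, and keeps a
freshly booted node from claiming quorum -- and fencing its peer --
before it has seen the other node at least once):

  quorum {
      provider: corosync_votequorum
      two_node: 1
      # wait_for_all: 1 is implied by two_node unless explicitly disabled
  }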

> 
> Oct 09 18:04:43 [9550] ha-idg-1 corosync notice  [MAIN  ] Corosync
> Cluster Engine ('2.3.6'): started and ready to provide service.
> Oct 09 18:04:43 [9550] ha-idg-1 corosync info[MAIN  ] Corosync
> built-in features: debug testagents augeas systemd pie relro bindnow
> Oct 09 18:04:43 [9550] ha-idg-1 corosync notice  [TOTEM ]
> Initializing transport (UDP/IP Multicast).
> Oct 09 18:04:43 [9550] ha-idg-1 corosync notice  [TOTEM ]
> Initializing transmit/receive security (NSS) crypto: aes256 hash:
> sha1
> Oct 09 18:04:43 [9550] ha-idg-1 corosync notice  [TOTEM ] The network
> interface [192.168.100.10] is now up.
> 
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped
> (2ms)
> Oct 09 18:05:06 [9565] ha-idg-1   crmd:  warning: do_log:   Input
> I_DC_TIMEOUT received in state S_PENDING from crm_timer_popped
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> do_state_transition:  State transition S_PENDING -> S_ELECTION |
> input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> election_check:   election-DC won by local node
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info: do_log:   Input
> I_ELECTION_DC received in state S_ELECTION from election_win_cb
> Oct 09 18:05:06 [9565] ha-idg-1   crmd:   notice:
> do_state_transition:  State transition S_ELECTION ->
> S_INTEGRATION | input=I_ELECTION_DC cause=C_FSA_INTERNAL
> origin=election_win_cb
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> do_te_control:Registering TE UUID: f302e1d4-a1aa-4a3e-b9dd-
> 71bd17047f82
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> set_graph_functions:  Setting custom graph functions
> Oct 09 18:05:06 [9565] ha-idg-1   crmd: info:
> do_dc_takeover:   Taking over DC status for this partition
> 
> Oct 09 18:05:07 [9564] ha-idg-1pengine:  warning:
> stage6:   Scheduling Node ha-idg-2 for STONITH
> Oct 09 18:05:07 [9564] ha-idg-1pengine:   notice:
> LogNodeActions:* Fence (Off) ha-idg-2 'node is unclean'
> 
> Is my understanding correct ?

Yes

> In the log of ha-idg-2 i don't find anything for this period:
> 
> Oct 09 17:58:46 [12504] ha-idg-2 stonith-ng: info:
> cib_device_update:   Device fence_ilo_ha-idg-2 has been disabled
> on ha-idg-2: score=-1
> Oct 09 17:58:51 [12503] ha-idg-2cib: info:
> cib_process_ping:Reporting our current digest to ha-idg-2:
> 59c4cfb14defeafbeb3417e42cd9 for 2.9506.36 (0x242b110 0)
> 
> Oct 09 18:00:42 [12508] ha-idg-2   crmd: info:
> throttle_send_command:   New throttle mode: 0001 (was )
> Oct 09 18:01:12 [12508] ha-idg-2   crmd: info:
> throttle_check_thresholds:   Moderate CPU load detected:
> 32.220001
> Oct 09 18:01:12 [12508] ha-idg-2   crmd: info:
> throttle_send_command:   New throttle mode: 0010 (was 0001)
> Oct 09 18:01:42 [12508] ha-idg-2   crmd: info:
> throttle_send_command:   New throttle mode: 0001 (was 0010)
> Oct 09 18:02:42 [12508] ha-idg-2   crmd: info:
> throttle_send_command:   New throttle mode:  (was 0001)
> 
> ha-idg-2 is fenced and after a reboot i started corosync/pacmeaker on
> it again:
> 
> Oct 09 18:29:05 [11795] ha-idg-2 corosync notice  [MAIN  ] Corosync
> Cluster Engine ('2.3.6'): started and ready to provide service.
> Oct 09 18:29:05 [11795] ha-idg-2 corosync info[MAIN  ] Corosync
> built-in features: debug testagents augeas systemd pie relro bindnow
> Oct 09 18:29:05 [11795] ha-idg-2 corosync notice  [TOTEM ]
> Initializing transport (UDP/IP Multicast).
> Oct 09 18:29:05 [11795] ha-idg-2 corosync notice  [TOTEM ]
> Initializing transmit/receive security (NSS) crypto: aes256 hash:
> sha1
> 
> What is the meaning of the lines with the throttle ?

Those messages could definitely be improved. The particular mode values
indicate no significant CPU load (), low load (0001), medium
(0010), high (0100), or extreme (1000).

I wouldn't

Re: [ClusterLabs] change of the configuration of a resource which is part of a clone

2019-10-10 Thread Ken Gaillot
On Wed, 2019-10-09 at 16:53 +0200, Lentes, Bernd wrote:
> Hi,
> 
> i finally managed to find out how i can simulate configuration
> changes and see their results before committing them.
> OMG. That makes live much more relaxed. I need to change the
> configuration of a resource which is part of a group, the group is 
> running as a clone on all nodes.
> Unfortunately the resource is a prerequisite for several other
> resources. The other resources will restart when i commit
> the changes which i definitely want to avoid.
> What can i do ?
> I have a two node cluster on SLES 12 SP4, with pacemaker-
> 1.1.19+20181105.ccd6b5b10-3.13.1.x86_64 and corosync-2.3.6-
> 9.13.1.x86_64.
> 
> Bernd

I believe it would work to unmanage the other resources, change the
configuration, wait for the changed resource to restart, then re-manage 
the remaining resources.
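
With crmsh, the sequence would look roughly like this (a sketch;
"dependent_rsc" stands in for each resource ordered after the one being
changed):

  crm resource unmanage dependent_rsc
  crm configure edit        # change the prerequisite resource here
  # wait for the changed resource to restart, then
  crm resource manage dependent_rsc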
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-10 Thread Ken Gaillot
On Wed, 2019-10-09 at 20:10 +0200, Kadlecsik József wrote:
> On Wed, 9 Oct 2019, Ken Gaillot wrote:
> 
> > > One of the nodes has got a failure ("watchdog: BUG: soft lockup
> > > - 
> > > CPU#7 stuck for 23s"), which resulted that the node could
> > > process 
> > > traffic on the backend interface but not on the fronted one. Thus
> > > the 
> > > services became unavailable but the cluster thought the node is
> > > all 
> > > right and did not stonith it.
> > > 
> > > How could we protect the cluster against such failures?
> > 
> > See the ocf:heartbeat:ethmonitor agent (to monitor the interface
> > itself) 
> > and/or the ocf:pacemaker:ping agent (to monitor reachability of
> > some IP 
> > such as a gateway)
> 
> This looks really promising, thank you! Does the cluster regard it as
> a 
> failure when a ocf:heartbeat:ethmonitor agent clone on a node does
> not 
> run? :-)

If you configure it typically, so that it runs on all nodes, then a
start failure on any node will be recorded in the cluster status. To
get other resources to move off such a node, you would colocate them
with the ethmonitor resource.
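
One common way to wire that up in crm syntax (a sketch; the interface
and the "g_guests" group are placeholders, and the location rule uses
the ethmonitor-<interface> node attribute the agent maintains, which
has the same effect as colocating with the clone):

  primitive p_ethmon ocf:heartbeat:ethmonitor \
      params interface=eth0 \
      op monitor interval=10s timeout=60s
  clone cl_ethmon p_ethmon
  location l_need_eth0 g_guests \
      rule -inf: not_defined ethmonitor-eth0 or ethmonitor-eth0 eq 0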

> 
> Best regards,
> Jozsef
> --
> E-mail : kadlecsik.joz...@wigner.mta.hu
> PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
> Address: Wigner Research Centre for Physics
>  H-1525 Budapest 114, POB. 49, Hungary
> __
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Ken Gaillot
On Wed, 2019-10-09 at 09:58 +0200, Kadlecsik József wrote:
> Hello,
> 
> The nodes in our cluster have got backend and frontend interfaces:
> the 
> former ones are for the storage and cluster (corosync) traffic and
> the 
> latter ones are for the public services of KVM guests only.
> 
> One of the nodes has got a failure ("watchdog: BUG: soft lockup -
> CPU#7 
> stuck for 23s"), which resulted that the node could process traffic
> on the 
> backend interface but not on the fronted one. Thus the services
> became 
> unavailable but the cluster thought the node is all right and did
> not 
> stonith it. 
> 
> How could we protect the cluster against such failures?

See the ocf:heartbeat:ethmonitor agent (to monitor the interface
itself) and/or the ocf:pacemaker:ping agent (to monitor reachability of
some IP such as a gateway)

> 
> We could configure a second corosync ring, but that would be a
> redundancy 
> ring only.
> 
> We could setup a second, independent corosync configuration for a
> second 
> pacemaker just with stonith agents. Is it enough to specify the
> cluster 
> name in the corosync config to pair pacemaker to corosync? What about
> the 
> pairing of pacemaker to this corosync instance, how can we tell
> pacemaker 
> to connect to this corosync instance?
> 
> Which is the best way to solve the problem? 
> 
> Best regards,
> Jozsef
> --
> E-mail : kadlecsik.joz...@wigner.mta.hu
> PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
> Address: Wigner Research Centre for Physics
>  H-1525 Budapest 114, POB. 49, Hungary
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] [ClusterLabs Developers] FYI: looks like there are DNS glitches with clusterlabs.org subdomains

2019-10-09 Thread Ken Gaillot
Due to a mix-up, all of clusterlabs.org is currently without DNS
service. :-(

List mail may continue to work for a while as mail servers rely on DNS
caches, so hopefully this reaches most of our subscribers.

No estimate yet for when it will be recovered.

On Wed, 2019-10-09 at 11:06 +0200, Jan Pokorný wrote:
> Neither bugs.c.o nor lists.c.o work for me ATM.
> Either it resolves by itself, or Ken will intervene, I believe.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Unable to resource due to nvpair[@name="target-role"]: No such device or address

2019-10-07 Thread Ken Gaillot
On Mon, 2019-10-07 at 13:34 +, S Sathish S wrote:
> Hi Team,
>  
> I have two below query , we have been using Rhel 6.5 OS Version with
> below clusterlab source code compiled.
>  
> corosync-1.4.10
> pacemaker-1.1.10
> pcs-0.9.90
> resource-agents-3.9.2

Ouch, that's really old. It should still work, but not many people here
will have experience with it.
 
> Query 1 : we have added below resource group as required later we are
> trying to start the resource group , but unable to perform it .
>But while executing RA file with start option ,
> required service is started but pacemaker unable to recognized it
> started .

Are you passing any arguments on the command line when starting the
agent directly? The cluster configuration below doesn't have any, so
that would be the first thing I'd consider.
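
For comparison, the cluster runs the agent with no command-line
arguments at all; any parameters would be passed as OCF_RESKEY_*
environment variables. A fair manual test therefore looks roughly like
this (a sketch, assuming the standard OCF root):

  OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/provider/MANAGER_RA start;   echo $?
  OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/provider/MANAGER_RA monitor; echo $?

Both the start and a subsequent monitor must return 0 for the cluster
to consider the resource running.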

>  
> # pcs resource show MANAGER
> Resource: MANAGER (class=ocf provider=provider type=MANAGER_RA)
>   Meta Attrs: priority=100 failure-timeout=120s migration-threshold=5
>   Operations: monitor on-fail=restart interval=10s timeout=120s
> (MANAGER-monitor-interval-10s)
>   start on-fail=restart interval=0s timeout=120s
> (MANAGER-start-timeout-120s-on-fail-restart)
>   stop interval=0s timeout=120s (MANAGER-stop-timeout-
> 120s)
>  
> Starting the below resource
> #pcs resource enable MANAGER
>  
> Below are error we are getting in corosync.log file ,Please suggest
> what will be RCA for below issue.
>  
> cib: info: crm_client_new:   Connecting 0x819e00 for uid=0 gid=0
> pid=18508 id=e5fdaf69-390b-447d-b407-6420ac45148f
> cib: info: cib_process_request:  Completed cib_query
> operation for section 'all': OK (rc=0, origin=local/crm_resource/2,
> version=0.89.1)
> cib: info: cib_process_request:  Completed cib_query
> operation for section //cib/configuration/resources//*[@id="MANAGER
> "]/meta_attributes//nvpair[@name="target-role"]: No such device or
> address (rc=-6, origin=local/crm_resource/3, version=0.89.1)
> cib: info: crm_client_destroy:   Destroying 0 events

"info" level messages aren't errors. You might find /var/log/messages
more helpful in most cases.

There will be two nodes of interest. At any given time, one of the
nodes serves as "DC" -- this node's logs will have "pengine:" entries
showing any actions that are needed (such as starting or stopping a
resource). Then the node that actually runs the resource will have any
logs from the resource agent.

Additionally the "pcs status" command will show if there were any
resource failures.

> Query 2 : stack we are using classic openais (with plugin) , In that
> start the pacemaker service by default “update-origin” parameter in
> cib.xml update as hostname which pull from get_node_name function
> (uname -n)  instead we need to configure IPADDRESS of the hostname ,
> Is it possible ? we have requirement to perform the same.
>  
>  
> Thanks and Regards,
> S Sathish S

I'm not familiar with what classic openais supported. At the very least
you might consider switching from the plugin to CMAN, which was better
supported on RHEL 6.

At least with corosync 2, I believe it is possible to configure IP
addresses as node names when setting up the cluster, but I'm not sure
there's a good reason to do so. "update-origin" is just a comment
indicating which node made the most recent configuration change, and
isn't used for anything.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Apache doesn't start under corosync with systemd

2019-10-07 Thread Ken Gaillot
On Fri, 2019-10-04 at 14:10 +, Reynolds, John F - San Mateo, CA -
Contractor wrote:
> Good morning.
>  
> I’ve just upgraded a two-node active-passive cluster from SLES11 to
> SLES12.  This means that I’ve gone from /etc/init.d scripts to
> systemd services.
>  
> On the SLES11 server, this worked:
>  
> <primitive id="ncoa_apache" class="ocf" provider="heartbeat" type="apache">
>   <instance_attributes id="ncoa_apache-instance_attributes">
>     <nvpair name="configfile" value="/etc/apache2/httpd.conf"
>             id="ncoa_apache-instance_attributes-configfile"/>
>   </instance_attributes>
>   <operations>
>     <op name="monitor" interval="40s"
>         id="ncoa_apache-monitor-40s"/>
>   </operations>
> </primitive>
>  
> I had to tweak /etc/init.d/apache2 to make sure it only started on
> the active node, but that’s OK.

If pacemaker is managing a resource, the service should not be enabled
to start on boot (regardless of init or systemd). Pacemaker will start
and stop the service as needed according to the cluster configuration.

Additionally, your pacemaker configuration is using the apache OCF
script, so the cluster won't use /etc/init.d/apache2 at all (it invokes
the httpd binary directly).

Keep in mind that the httpd monitor action requires the status module
to be enabled -- I assume that's already in place.
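
That is, the agent's monitor expects something like this to be
reachable at http://localhost/server-status (a sketch for Apache 2.4):

  <IfModule mod_status.c>
      <Location /server-status>
          SetHandler server-status
          Require local
      </Location>
  </IfModule>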

>  
> On the SLES12 server, the resource is the same:
>  
> <primitive id="ncoa_apache" class="ocf" provider="heartbeat" type="apache">
>   <instance_attributes id="ncoa_apache-instance_attributes">
>     <nvpair name="configfile" value="/etc/apache2/httpd.conf"
>             id="ncoa_apache-instance_attributes-configfile"/>
>   </instance_attributes>
>   <operations>
>     <op name="monitor" interval="40s"
>         id="ncoa_apache-monitor-40s"/>
>   </operations>
> </primitive>
>  
> and the cluster believes the resource is started:
>  
>  
> eagnmnmep19c1:/var/lib/pacemaker/cib # crm status
> Stack: corosync
> Current DC: eagnmnmep19c0 (version 1.1.16-4.8-77ea74d) - partition
> with quorum
> Last updated: Fri Oct  4 09:02:52 2019
> Last change: Thu Oct  3 10:55:03 2019 by root via crm_resource on
> eagnmnmep19c0
>  
> 2 nodes configured
> 16 resources configured
>  
> Online: [ eagnmnmep19c0 eagnmnmep19c1 ]
>  
> Full list of resources:
>  
> Resource Group: grp_ncoa
>   (edited out for brevity)
>  ncoa_a05shared (ocf::heartbeat:Filesystem):Started
> eagnmnmep19c1
>  IP_56.201.217.146  (ocf::heartbeat:IPaddr2):   Started
> eagnmnmep19c1
>  ncoa_apache(ocf::heartbeat:apache):Started
> eagnmnmep19c1
>  
> eagnmnmep19c1:/var/lib/pacemaker/cib #
>  
>  
> But the httpd daemons aren’t started.  I can start them by hand, but
> that’s not what I need.
>  
> I have gone through the ClusterLabs and SLES docs for setting up
> apache resources, and through this list’s archive; haven’t found my
> answer.   I’m missing something in corosync, apache, or systemd.
>  Please advise.
>  
>  
> John Reynolds, Contractor
> San Mateo Unix
>  
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Ocassionally IPaddr2 resource fails to start

2019-10-07 Thread Ken Gaillot
On Mon, 2019-10-07 at 14:40 +0300, Donat Zenichev wrote:
> Hello and thank you for your answer!
> 
> So should I just disable "monitor" options at all? In my case  I'd
> better delete the whole "op" row:
> "op monitor interval=20 timeout=60 on-fail=restart"
> 
> am I correct?

Personally I wouldn't delete the monitor -- at most, I'd configure it
with on-fail=ignore. That way you can still see failures in the cluster
status, even if the cluster doesn't react to them.

If this always happens when the VM is being snapshotted, you can put
the cluster in maintenance mode (or even unmanage just the IP resource)
while the snapshotting is happening. I don't know of any reason why
snapshotting would affect only an IP, though.
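
For example, in crm syntax to match the configuration below (a sketch):

  crm resource unmanage IPSHARED    # or: crm configure property maintenance-mode=true
  # ... take the snapshot ...
  crm resource manage IPSHARED      # or: crm configure property maintenance-mode=false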

Most resource agents send some logs to the system log. If that doesn't
give any clue, you could set OCF_TRACE_RA=1 in the pacemaker
environment to get tons more logs from resource agents.

> 
> On Mon, Oct 7, 2019 at 2:36 PM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> > Hi!
> > 
> > I can't remember the exact reason, but probably it was exactly that
> > what made us remove any monitor operation from IPaddr2 (back in
> > 2011). So far no problems doing so ;-)
> > 
> > 
> > Regards,
> > Ulrich
> > P.S.: Of cource it would be nice if the real issue could be found
> > and fixed.
> > 
> > >>> Donat Zenichev  schrieb am 20.09.2019
> > um 14:43 in
> > Nachricht
> >  > >:
> > > Hi there!
> > > 
> > > I've got a tricky case, when my IpAddr2 resource fails to start
> > with
> > > literally no-reason:
> > > "IPSHARED_monitor_2 on my-master-1 'not running' (7):
> > call=11,
> > > status=complete, exitreason='',
> > >last-rc-change='Wed Sep 4 06:08:07 2019', queued=0ms,
> > exec=0ms"
> > > 
> > > Resource IpAddr2 managed to fix itself and continued to work
> > properly
> > > further after that.
> > > 
> > > What I've done after, was setting 'Failure-timeout=900' seconds
> > for my
> > > IpAddr2 resource, to prevent working of
> > > the resource on a node where it fails. I also set the
> > > 'migration-threshold=2' so IpAddr2 can fail only 2 times, and
> > goes to a
> > > Slave side after that. Meanwhile Master gets banned for 900
> > seconds.
> > > 
> > > After 900 seconds cluster tries to start IpAddr2 again at Master,
> > in case
> > > it's ok, fail counter gets cleared.
> > > That's how I avoid appearing of the error I mentioned above.
> > > 
> > > I tried to get so hard, why this can happen, but still no idea on
> > the
> > > count. Any clue how to find a reason?
> > > And another question, can snap-shoting of VM machines have any
> > impact on
> > > such?
> > > 
> > > And my configurations:
> > > ---
> > > node 01: my-master-1
> > > node 02: my-master-2
> > > 
> > > primitive IPSHARED IPaddr2 \
> > > params ip=10.10.10.5 nic=eth0 cidr_netmask=24 \
> > > meta migration-threshold=2 failure-timeout=900 target-
> > role=Started \
> > > op monitor interval=20 timeout=60 on-fail=restart
> > > 
> > > location PREFER_MASTER IPSHARED 100: my-master-1
> > > 
> > > property cib-bootstrap-options: \
> > > have-watchdog=false \
> > > dc-version=1.1.18-2b07d5c5a9 \
> > > cluster-infrastructure=corosync \
> > > cluster-name=wall \
> > > cluster-recheck-interval=5s \
> > > start-failure-is-fatal=false \
> > > stonith-enabled=false \
> > > no-quorum-policy=ignore \
> > > last-lrm-refresh=1554982967
> > > ---
> > > 
> > > Thanks in advance!
> > > 
> > > -- 
> > > -- 
> > > BR, Donat Zenichev
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.3: Pacemaker Remote hardening

2019-10-03 Thread Ken Gaillot
Hi all,

Currently, the Pacemaker Remote server always binds to the wildcard IP
address, and always uses the same TLS cipher priority list (which can
be configured at compile-time, and in some cases use the system-wide
policy).

Some users want to restrict these for security hardening purposes.

The upcoming Pacemaker 2.0.3 will support two new environment variables
(in /etc/sysconfig/pacemaker, /etc/default/pacemaker, or wherever your
distro keeps such things):


# If the Pacemaker Remote service is run on the local node, it will listen
# for connections on this address. The value may be a resolvable hostname or an
# IPv4 or IPv6 numeric address. When resolving names or using the default
# wildcard address (i.e. listen on all available addresses), IPv6 will be
# preferred if available. When listening on an IPv6 address, IPv4 clients will
# be supported (via IPv4-mapped IPv6 addresses).
# PCMK_remote_address="192.0.2.1"

# Use these GnuTLS cipher priorities for TLS connections. See:
#
#   https://gnutls.org/manual/html_node/Priority-Strings.html
#
# Pacemaker will append ":+ANON-DH" for remote CIB access (when enabled) and
# ":+DHE-PSK:+PSK" for Pacemaker Remote connections, as they are required for
# the respective functionality.
# PCMK_tls_priorities="NORMAL"


In addition, bundles gain a new capability, since there's no equivalent
of that file inside a container. You can already pass environment
variables to a container via the bundle's "options" property, but those
must be identical on all hosts. Now, if you mount a file from the host
as /etc/pacemaker/pcmk-init.env inside the container (via the existing
"storage-mapping" property), Pacemaker Remote inside the container will
parse that file for NAME=VALUE pairs and set them as environment
variables.

This allows you to set not only PCMK_remote_address, but other
Pacemaker environment variables such as PCMK_debug, to a different
value for the container to use on each host.
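
For example, a bundle's storage section could map a per-host file like
this (a sketch; the id and source path are illustrative):

  <storage-mapping id="pcmk-init-env"
      source-dir="/etc/pacemaker/my-bundle.env"
      target-dir="/etc/pacemaker/pcmk-init.env"/>

with the host-side file containing plain NAME=VALUE lines such as:

  PCMK_remote_address=192.0.2.1
  PCMK_debug=yes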

The first release candidate is expected in a couple of weeks.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Pacemaker: pgsql

2019-09-27 Thread Ken Gaillot
On Fri, 2019-09-27 at 19:03 +0530, Shital A wrote:
> 
> 
> On Tue, 24 Sep 2019, 22:20 Shital A, 
> wrote:
> > Hello,
> > 
> > We have setup active-passive cluster using streaming replication on
> > Rhel7.5. We are testing pacemaker for automated failover.
> > We are seeing below issues with the setup :
> > 
> > 1. When a failover is triggered when data is being added to the
> > primary by killing primary (killall -9 postgres), the standby
> > doesnt come up in sync.
> > On pacemaker, the crm_mon -Afr shows standby in disconnected and
> > HS:alone state.
> > 
> > On postgres, we see below error:
> > 
> > < 2019-09-20 17:07:46.266 IST > LOG:  entering standby mode
> > < 2019-09-20 17:07:46.267 IST > LOG:  database system was not
> > properly shut down; automatic recovery in progress
> > < 2019-09-20 17:07:46.270 IST > LOG:  redo starts at 1/680A2188
> > < 2019-09-20 17:07:46.370 IST > LOG:  consistent recovery state
> > reached at 1/6879D9F8
> > < 2019-09-20 17:07:46.370 IST > LOG:  database system is ready to
> > accept read only connections
> > cp: cannot stat
> > '/var/lib/pgsql/9.6/data/archivedir/000100010068': No
> > such file or directory
> > < 2019-09-20 17:07:46.751 IST > LOG:  statement: select
> > pg_is_in_recovery()
> > < 2019-09-20 17:07:46.782 IST > LOG:  statement: show
> > synchronous_standby_names
> > < 2019-09-20 17:07:50.993 IST > LOG:  statement: select
> > pg_is_in_recovery()
> > < 2019-09-20 17:07:53.395 IST > LOG:  started streaming WAL from
> > primary at 1/6800 on timeline 1
> > < 2019-09-20 17:07:53.436 IST > LOG:  invalid contrecord length
> > 2662 at 1/6879D9F8
> > < 2019-09-20 17:07:53.438 IST > FATAL:  terminating walreceiver
> > process due to administrator command
> > cp: cannot stat
> > '/var/lib/pgsql/9.6/data/archivedir/0002.history': No such file
> > or directory
> > cp: cannot stat
> > '/var/lib/pgsql/9.6/data/archivedir/000100010068': No
> > such file or directory
> > 
> > When we try to restart postgres on the standby, using pg_ctl
> > restart, the standby start syncing.
> > 
> > 
> > 2. After standby syncs using pg_ctl restart as mentioned above, we
> > found out that 1-2 records are missing on the standby.
> > 
> > Need help to check:
> > 1. why the standby starts in disconnect, HS:alone state? 
> > 
> > f you have faced this issue/have knowledge, please let us know.
> > 
> > Thanks.
> 
> 
> Hello,
> 
> I didn't  receive any reply on this issue.wondering whether there are
> no opinions or whether pacemaker with pgsql is not recommended?.
> 
> 
> Thanks! 

Hi,

There are quite a few pacemaker+pgsql users active on this list, but
they may not have time to respond at the moment. Most are using the PAF
agent rather than the pgsql agent (see 
https://github.com/ClusterLabs/PAF ).
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.3: You can (mostly) forget cluster-recheck-interval exists

2019-09-24 Thread Ken Gaillot
Hi all,

Currently, if you configure a failure-timeout, a reconnect_interval for
an ocf:pacemaker:remote resource, or a rule with a date_expression,
Pacemaker doesn't guarantee checking them more often than the cluster-
recheck-interval (which defaults to 15 minutes).

Often, users either forget to change cluster-recheck-interval, in which
case the action may occur well after intended, or set cluster-recheck-
interval very low, which is inefficient.

With the upcoming Pacemaker 2.0.3, Pacemaker will dynamically compute
the recheck interval based on the configuration. If there's a failure
that will expire 9 minutes from now, Pacemaker will recheck in 9
minutes.

The lone exception is for date expressions using the cron-like
"date_spec" format, due to the difficulty of determining the next
recheck time in that case -- cluster-recheck-interval is still your
friend if you use date_spec.

Pacemaker will continue to check at least as often as cluster-recheck-
interval, both to evaluate date_spec entries and as a fail-safe in case
of certain types of scheduler bugs.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] stonith-ng - performing action 'monitor' timed out with signal 15

2019-09-16 Thread Ken Gaillot
On Tue, 2019-09-03 at 10:09 +0200, Marco Marino wrote:
> Hi, I have a problem with fencing on a two node cluster. It seems
> that randomly the cluster cannot complete monitor operation for fence
> devices. In log I see:
> crmd[8206]:   error: Result of monitor operation for fence-node2 on
> ld2.mydomain.it: Timed Out
> As attachment there is 
> - /var/log/messages for node1 (only the important part)
> - /var/log/messages for node2 (only the important part) <-- Problem
> starts here
> - pcs status
> - pcs stonith show (for both fence devices)
> 
> I think it could be a timeout problem, so how can I see timeout value
> for monitor operation in stonith devices?
> Please, someone can help me with this problem?
> Furthermore, how can I fix the state of fence devices without
> downtime?
> 
> Thank you

How to investigate depends on whether this is an occasional monitor
failure, or happens every time the device start is attempted. From the
status you attached, I'm guessing it's at start.

In that case, my next step (since you've already verified ipmitool
works directly) would be to run the fence agent manually using the same
arguments used in the cluster configuration.

Check the man page for the fence agent, looking at the section for
"Stdin Parameters". These are what's used in the cluster configuration,
so make a note of what values you've configured. Then run the fence
agent like this:

echo -e "action=status\nPARAMETER=VALUE\nPARAMETER=VALUE\n..." | /path/to/agent

where PARAMETER=VALUE entries are what you have configured in the
cluster. If the problem isn't obvious from that, you can try adding a
debug_file parameter.
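
For instance, if these are fence_ipmilan devices, the manual test would
look something like this (a sketch; use the parameter names shown by
"pcs stonith show --full" for your devices -- older fence-agents
releases spell them ipaddr/login/passwd rather than
ip/username/password):

  echo -e "action=status\nip=192.0.2.10\nusername=admin\npassword=secret\nlanplus=1" \
      | /usr/sbin/fence_ipmilan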
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] op stop timeout update causes monitor op to fail?

2019-09-11 Thread Ken Gaillot
On Tue, 2019-09-10 at 09:54 +0200, Dennis Jacobfeuerborn wrote:
> Hi,
> I just updated the timeout for the stop operation on an nfs cluster
> and
> while the timeout was update the status suddenly showed this:
> 
> Failed Actions:
> * nfsserver_monitor_1 on nfs1aqs1 'unknown error' (1): call=41,
> status=Timed Out, exitreason='none',
> last-rc-change='Tue Aug 13 14:14:28 2019', queued=0ms, exec=0ms

Are you sure it wasn't already showing that? The timestamp of that
error is Aug 13, while the logs show the timeout update happening Sep
10.

Old errors will keep showing up in status until you manually clean them
up (with crm_resource --cleanup or a higher-level tool equivalent), or
any configured failure-timeout is reached.
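
For example, using the resource and node names from the failed action
above:

  crm_resource --cleanup --resource nfsserver --node nfs1aqs1
  # or equivalently:
  pcs resource cleanup nfsserver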

In any case, the log excerpt shows that nothing went wrong during the
time it covers. There were no actions scheduled in that transition in
response to the timeout change (which is as expected).

> 
> The command used:
> pcs resource update nfsserver op stop timeout=30s
> 
> I can't imagine that this is expected to happen. Is there another way
> to
> update the timeout that doesn't cause this?
> 
> I attached the log of the transition.
> 
> Regards,
>   Dennis
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Corosync main process was not scheduled for 2889.8477 ms (threshold is 800.0000 ms), though it runs with realtime priority and there was not much load on the node

2019-09-09 Thread Ken Gaillot
On Mon, 2019-09-09 at 14:21 +0200, wf...@niif.hu wrote:
> Andrei Borzenkov  writes:
> 
> > 04.09.2019 0:27, wf...@niif.hu пишет:
> > 
> > > Jeevan Patnaik  writes:
> > > 
> > > > [16187] node1 corosyncwarning [MAIN  ] Corosync main process
> > > > was not
> > > > scheduled for 2889.8477 ms (threshold is 800. ms). Consider
> > > > token
> > > > timeout increase.
> > > > [...]
> > > > 2. How to fix this? We have not much load on the nodes, the
> > > > corosync is
> > > > already running with RT priority.
> > > 
> > > Does your corosync daemon use a watchdog device?  (See in the
> > > startup
> > > logs.)  Watchdog interaction can be *slow*.
> > 
> > Can you elaborate? This is the first time I see that corosync has
> > anything to do with watchdog. How exactly corosync interacts with
> > watchdog? Where in corosync configuration watchdog device is
> > defined?
> 
> Inside the resources directive you can specify a watchdog_device, 

Side comment: corosync's built-in watchdog handling is an older
alternative to sbd, the watchdog manager that pacemaker uses. You'd use
one or the other.

If you're running pacemaker on top of corosync, you'd probably want sbd
since pacemaker can use it for more situations than just cluster
membership loss.

> which
> Corosync will "pet" from its main loop.  From corosync.conf(5):
> 
> > In a cluster with properly configured power fencing a watchdog
> > provides no additional value.  On the other hand, slow watchdog
> > communication may incur multi-second delays in the Corosync main
> > loop,
> > potentially breaking down membership.  IPMI watchdogs are
> > particularly
> > notorious in this regard: read about kipmid_max_busy_us in IPMI.txt
> > in
> > the Linux kernel documentation.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Why is last-lrm-refresh part of the CIB config?

2019-09-09 Thread Ken Gaillot
On Mon, 2019-09-09 at 11:06 +0200, Ulrich Windl wrote:
> Hi!
> 
> In recent pacemaker I see that last-lrm-refresh is included in the
> CIB config (crm_config/cluster_property_set), so CIBs are "different"
> when they are actually the same.
> 
> Example diff:
> -   name="last-lrm-refresh" value="1566194010"/>
> +   name="last-lrm-refresh" value="1567945827"/>
> 
> I don't see a reason for having that. Can someone explain?
> 
> Regards,
> Ulrich

New transitions (re-calculation of cluster status) are triggered by
changes in the CIB. last-lrm-refresh isn't really special in any way,
it's just a value that can be changed arbitrarily to trigger a new
transition when nothing "real" is changing.

I'm not sure what would actually be setting it these days; its use has
almost vanished in recent code. I think it was used more commonly for
clean-ups in the past.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Q: The effect of using "default" attribute in RA metadata

2019-09-05 Thread Ken Gaillot
On Thu, 2019-09-05 at 09:31 +0200, Ulrich Windl wrote:
> > > > Tomas Jelinek  schrieb am 05.09.2019 um
> > > > 09:22 in
> 
> Nachricht
> <651630f8-b871-e4c1-68d8-e6a42dd29...@redhat.com>:
> > Dne 03. 09. 19 v 11:27 Ulrich Windl napsal(a):
> > > Hi!
> > > 
> > > Reading the RA API metadata specification, there is a "default"
> > > attribute 
> > 
> > for "parameter".
> > > I wonder what the effect of specifying a default is: Is it
> > > purely 
> > 
> > documentation (and the RA has to take care it uses the same default
> > value as
> > in the metadata), or will the configuration tools actually use that
> > value if
> > the user did not specify a parameter value?
> > 
> > Pcs doesn't use the default values. If you don't specify a value
> > for an 
> > option, pcs simply doesn't put that option into the CIB leaving it
> > to 
> > the RA to figure out a default value. This has a benefit of always 
> > following the default even if it changes. There is no plan to
> > change the 
> > behavior.
> 
> I see. However changing a default value (that way) can cause
> unexpected
> surprises at the user's end.
> When copying the default to the actual resource configuration at the
> time when
> it was configured could prevent unexpected surprises (and the values
> being used
> are somewhat "documented") in the configuration.
> I agree that it's no longer obvious then whether those default values
> were set
> explicitly or implicitly,
> 
> > 
> > Copying default values to the CIB has at least two disadvantages:
> > 1) If the default in a RA ever changes, the change would have no
> > effect 
> > ‑ a value in the CIB would still be set to the previous default.
> > To 
> > configure it to follow the defaults, one would have to remove the
> > option 
> > value afterwards or a new option to pcs commands to control the
> > behavior 
> > would have to be added.
> 
> Agreed.
> 
> > 2) When a value is the same as its default it would be unclear if
> > the 
> > intention is to follow the default or the user set a value which is
> > the 
> > same as the default by coincidence.
> 
> Agreed.
> 
> Are there any plans to decorate the DTD or RNG with comments some
> day? I think
> that would be the perfect place to describe the meanings.

The standard has its own repo:

https://github.com/ClusterLabs/OCF-spec

The ra/next directory is where we're putting proposed changes (ra-
api.rng is the RNG). Once accepted for the upcoming 1.1 standard, the
changes are copied to the ra/1.1 directory, and at some point, 1.1 will
be officially adopted as the current standard.

So, pull requests are welcome :)

I have an outstanding PR that unfortunately I had to put on the back
burner but should be the last big set of changes for 1.1:

https://github.com/ClusterLabs/OCF-spec/pull/21/files

> 
> Regards,
> Ulrich
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Antw: Re: Q: Recommened directory for RA auxillary files?

2019-09-05 Thread Ken Gaillot
On Thu, 2019-09-05 at 07:57 +0200, Ulrich Windl wrote:
> > > > Ken Gaillot  schrieb am 04.09.2019 um
> > > > 16:26 in
> 
> Nachricht
> <2634f19382b90736bdfb80b9c84997111479d337.ca...@redhat.com>:
> > On Wed, 2019‑09‑04 at 10:07 +0200, Jehan‑Guillaume de Rorthais
> > wrote:
> > > On Tue, 03 Sep 2019 09:35:39 ‑0500
> > > Ken Gaillot  wrote:
> > > 
> > > > On Mon, 2019‑09‑02 at 15:23 +0200, Ulrich Windl wrote:
> > > > > Hi!
> > > > > 
> > > > > Are there any recommendations where to place (fixed content)
> > > > > files an
> > > > > RA uses?
> > > > > Usually my RAs use a separate XML file for the metadata, just
> > > > > to
> > > > > allow editing it in XML mode automatically.
> > > > > Traditionally I put the file in the same directory as the RA
> > > > > itself
> > > > > (like "cat $0.xml" for meta‑data).
> > > > > Are there any expectations that every file in the RA
> > > > > directory is
> > > > > an
> > > > > RA?
> > > > > (Currently I'm extending an RA, and I'd like to provide some
> > > > > additional user‑modifiable template file, and I wonder which
> > > > > path
> > > > > to
> > > > > use)
> > > > > 
> > > > > Regards,
> > > > > Ulrich  
> > > > 
> > > > I believe most (maybe even all modern?) deployments have both
> > > > lib
> > > > and
> > > > resource.d under /usr/lib/ocf. If you have a custom provider
> > > > for
> > > > the RA
> > > > under resource.d, it would make sense to use the same pattern
> > > > under
> > > > lib.
> > > 
> > > Shouldn't it be $OCF_FUNCTIONS_DIR?
> > 
> > Good point ‑‑ if the RA is using ocf‑shellfuncs, yes. $OCF_ROOT/lib
> > should be safe if the RA doesn't use ocf‑shellfuncs.
> > 
> > It's a weird situation; the OCF standard actually specifies
> > /usr/ocf,
> > but everyone implemented /usr/lib/ocf. I do plan to add a configure
> > option for it in pacemaker, but it shouldn't be changed unless you
> > can
> > make the same change in every other cluster component that needs
> > it.
> 
> The thing with $OCF_ROOT is: If $OCF_ROOT already contains "/lib", it
> looks
> off to add another "/lib".

It does look weird, but that's the convention in use today.

I hope we eventually get to the point where the .../lib and
.../resource.d locations are configure-time options, and distros can
choose whatever's consistent with their usual policies. For those that
follow the FHS, it might be something like /usr/lib/ocf or
/usr/share/ocf, and /usr/libexec/ocf.

However all cluster components installed on a host must be configured
with the same locations, so that will require careful coordination. It's
easier to just keep using the current ones :)

> To me it looks as if it's time for an $OCF_LIB (which would be
> $OCF_ROOT if
> the latter is /usr/lib/ocf already, otherwise $OCF_ROOT/lib).
> Personally I
> think the /usr/<package> predates the
> [/usr][/share]/lib/<package>.
> 
> > 
> > > Could this be generalized to RA for their
> > > own lib or permanent dependencies files?
> > 
> > The OCF standard specifies only the resource.d subdirectory, and
> > doesn't comment on adding others. lib/heartbeat is a common choice
> > for
> > the resource‑agents package shell includes (an older approach was
> > to
> > put them as dot files in resource.d/heartbeat, and there are often
> > symlinks at those locations for backward compatibility).
> > 
> > Since "heartbeat" is a resource agent provider name, and the
> > standard
> > specifies that agents go under resource.d/<provider>, it does make
> > sense that lib/<provider> would be where RA files would go.
> 
> I wonder when we will be able to retire "heartbeat" ;-) If it's
> supposed to be
> of "vendor" type, maybe replace it with "clusterlabs" at some time...

Definitely, that's been the plan for a while; it's just another change
that will require coordination across multiple components.

The hope is that we can at some point wrap up the OCF 1.1 standard, and
then move forward some of the bigger changes. It's just hard to
prioritize that kind of work when there's a backlog of important stuff.

> 
> Regards,
> Ulrich
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Q: Recommended directory for RA auxiliary files?

2019-09-04 Thread Ken Gaillot
On Wed, 2019-09-04 at 10:07 +0200, Jehan-Guillaume de Rorthais wrote:
> On Tue, 03 Sep 2019 09:35:39 -0500
> Ken Gaillot  wrote:
> 
> > On Mon, 2019-09-02 at 15:23 +0200, Ulrich Windl wrote:
> > > Hi!
> > > 
> > > Are there any recommendations where to place (fixed content)
> > > files an
> > > RA uses?
> > > Usually my RAs use a separate XML file for the metadata, just to
> > > allow editing it in XML mode automatically.
> > > Traditionally I put the file in the same directory as the RA
> > > itself
> > > (like "cat $0.xml" for meta-data).
> > > Are there any expectations that every file in the RA directory is
> > > an
> > > RA?
> > > (Currently I'm extending an RA, and I'd like to provide some
> > > additional user-modifiable template file, and I wonder which path
> > > to
> > > use)
> > > 
> > > Regards,
> > > Ulrich  
> > 
> > I believe most (maybe even all modern?) deployments have both lib
> > and
> > resource.d under /usr/lib/ocf. If you have a custom provider for
> > the RA
> > under resource.d, it would make sense to use the same pattern under
> > lib.
> 
> Shouldn't it be $OCF_FUNCTIONS_DIR?

Good point -- if the RA is using ocf-shellfuncs, yes. $OCF_ROOT/lib
should be safe if the RA doesn't use ocf-shellfuncs.
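
For instance, a sketch of the usual defensive pattern (not verbatim
from any shipped agent):

    # fall back to the conventional locations when the variables are unset
    : "${OCF_ROOT:=/usr/lib/ocf}"
    : "${OCF_FUNCTIONS_DIR:=${OCF_ROOT}/lib/heartbeat}"
    # only source the shell helpers if they are actually installed
    if [ -r "${OCF_FUNCTIONS_DIR}/ocf-shellfuncs" ]; then
        . "${OCF_FUNCTIONS_DIR}/ocf-shellfuncs"
    fi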

It's a weird situation; the OCF standard actually specifies /usr/ocf,
but everyone implemented /usr/lib/ocf. I do plan to add a configure
option for it in pacemaker, but it shouldn't be changed unless you can
make the same change in every other cluster component that needs it.

> Could this be generalized to RA for their
> own lib or permanent dependencies files?

The OCF standard specifies only the resource.d subdirectory, and
doesn't comment on adding others. lib/heartbeat is a common choice for
the resource-agents package shell includes (an older approach was to
put them as dot files in resource.d/heartbeat, and there are often
symlinks at those locations for backward compatibility).

Since "heartbeat" is a resource agent provider name, and the standard
specifies that agents go under resource.d/<provider>, it does make
sense that lib/<provider> would be where RA files would go.
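
As a sketch of that layout (the provider "myprovider", the agent
"myagent", and the file names are made-up, purely for illustration):

    # agent:            $OCF_ROOT/resource.d/myprovider/myagent
    # auxiliary files:  $OCF_ROOT/lib/myprovider/
    AUXDIR="${OCF_ROOT:-/usr/lib/ocf}/lib/myprovider"

    meta_data() {
        # fixed-content metadata shipped alongside the agent's other files
        cat "${AUXDIR}/myagent.metadata.xml"
    }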
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: Re: Q: Recommended directory for RA auxiliary files?

2019-09-04 Thread Ken Gaillot
On Wed, 2019-09-04 at 10:09 +0200, Jehan-Guillaume de Rorthais wrote:
> On Wed, 04 Sep 2019 07:54:50 +0200
> "Ulrich Windl"  wrote:
> 
> > > > > Ken Gaillot  schrieb am 03.09.2019 um
> > > > > 16:35 in  
> > 
> > Nachricht
> > <979978d5a488aabd9ed4a941ff4eac60c271c84d.ca...@redhat.com>:
> > > On Mon, 2019-09-02 at 15:23 +0200, Ulrich Windl wrote:
> > > > Hi!
> > > > 
> > > > Are there any recommendations where to place (fixed content)
> > > > files an
> > > > RA uses?
> > > > Usually my RAs use a separate XML file for the metadata, just
> > > > to
> > > > allow editing it in XML mode automatically.
> > > > Traditionally I put the file in the same directory as the RA
> > > > itself
> > > > (like "cat $0.xml" for meta‑data).
> > > > Are there any expectations that every file in the RA directory
> > > > is an
> > > > RA?
> > > > (Currently I'm extending an RA, and I'd like to provide some
> > > > additional user-modifiable template file, and I wonder which
> > > > path to
> > > > use)
> > > > 
> > > > Regards,
> > > > Ulrich  
> > > 
> > > I believe most (maybe even all modern?) deployments have both lib
> > > and
> > > resource.d under /usr/lib/ocf. If you have a custom provider for
> > > the RA
> > > under resource.d, it would make sense to use the same pattern
> > > under
> > > lib.  
> > 
> > So what concrete path are you suggesting?
> > /usr/lib//?
> 
> I would bet on /usr/lib/ocf/lib/<provider>/ ?

That was what I had in mind. It parallels the existing lib/heartbeat layout.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] Q: Recommended directory for RA auxiliary files?

2019-09-03 Thread Ken Gaillot
On Mon, 2019-09-02 at 15:23 +0200, Ulrich Windl wrote:
> Hi!
> 
> Are there any recommendations where to place (fixed content) files an
> RA uses?
> Usually my RAs use a separate XML file for the metadata, just to
> allow editing it in XML mode automatically.
> Traditionally I put the file in the same directory as the RA itself
> (like "cat $0.xml" for meta-data).
> Are there any expectations that every file in the RA directory is an
> RA?
> (Currently I'm extending an RA, and I'd like to provide some
> additional user-modifiable template file, and I wonder which path to
> use)
> 
> Regards,
> Ulrich

I believe most (maybe even all modern?) deployments have both lib and
resource.d under /usr/lib/ocf. If you have a custom provider for the RA
under resource.d, it would make sense to use the same pattern under
lib.

If you want to follow the FHS, you might consider /usr/share if you're
installing via custom packages, /usr/local/share if you're just
installing locally, or /srv in either case.
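
For example, a sketch of how an RA could pick up such a file (the
"template" parameter, "myagent", and the paths are made-up, not
anything shipped in resource-agents):

    # OCF passes resource parameters to the agent as OCF_RESKEY_<name>
    : "${OCF_RESKEY_template:=/usr/share/myagent/template.conf}"

    start() {
        # render the user-modifiable template into the runtime location
        mkdir -p /run/myagent
        cp "${OCF_RESKEY_template}" /run/myagent/myagent.conf
    }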
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] New status reporting for starting/stopping resources in 1.1.19-8.el7

2019-09-03 Thread Ken Gaillot
On Sat, 2019-08-31 at 03:39 +, Chris Walker wrote:
> Hello,
> The 1.1.19-8 EL7 version of Pacemaker contains a commit ‘Feature:
> crmd: default record-pending to TRUE’ that is not in the ClusterLabs
> Github repo.  This commit changes the reporting for resources that
> are in the process of starting and stopping for (at least) crm_mon
> and crm_resource
>
> crm_mon
>   Resources that are in the process of starting:
>     Old reporting: Stopped
>     New reporting: Starting
>   Resources that are in the process of stopping:
>     Old reporting: Started
>     New reporting: Stopping
>  
> crm_resource -r <rsc> -W
>   Resources that are in the process of starting:
>     Old reporting: resource <rsc> is NOT running
>     New reporting: resource <rsc> is running on: <node>
>   Resources that are in the process of stopping:
>     Old reporting: resource <rsc> is running on: <node>
>     New reporting: resource <rsc> is NOT running
>  
> The change to crm_mon is helpful and accurately reflects the current
> state of the resource, but the new reporting from crm_resource seems
> somewhat misleading.  Was this the intended reporting?  Regardless, 

Interesting, I never looked at how crm_resource showed pending actions.
That should definitely be improved.

The record-pending option itself has been around forever, and so has
this behavior when it is set to true. The only difference is that it
now defaults to true.
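
If the old behavior is preferred, record-pending can be turned back
off; something along these lines should work, though the exact pcs
syntax varies a bit between versions:

    # set it as an operation default for the whole cluster
    pcs resource op defaults record-pending=false

    # or with the lower-level tool
    crm_attribute --type op_defaults --name record-pending --update false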

> the fact that this commit is not in the upstream ClusterLab repo
> makes me wonder whether this will be the default status reporting
> going forward (I will try the 2.0 branch soon).

It indeed was changed in the 2.0.0 release. RHEL 7 backported the
change from there.

>  
> Thanks,
> Chris
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] node name issues (Could not obtain a node name for corosync nodeid 739512332)

2019-08-22 Thread Ken Gaillot
739512331
> crmd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512331
> crmd: info: pcmk_quorum_notification:Obtaining name for
> new node 739512331
> crmd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512331
> crmd:   notice: get_node_name:   Could not obtain a node name for
> corosync nodeid 739512331
> crmd:   notice: crm_update_peer_state_iter:  Node (null) state is
> now member | nodeid=739512331 previous=unknown
> source=pcmk_quorum_notification
> crmd:   notice: crm_update_peer_state_iter:  Node h12 state is
> now member | nodeid=739512332 previous=unknown
> source=pcmk_quorum_notification
> crmd: info: peer_update_callback:Cluster node h12 is now
> member (was in unknown state)
> crmd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512332
> crmd:   notice: get_node_name:   Defaulting to uname -n for the local
> corosync node name
> ...
> 
> ???
> 
> attrd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512332
> attrd:   notice: get_node_name:   Defaulting to uname -n for the
> local corosync node name
> attrd: info: main:CIB connection active
> ...
> stonith-ng:   notice: get_node_name:   Could not obtain a node name
> for corosync nodeid 739512331
> stonith-ng: info: crm_get_peer:Created entry 956e8bf0-5634-
> 4535-aa72-cdd6cf319d5b/0x1d04440 for node (null)/739512331 (2 total)
> stonith-ng: info: crm_get_peer:Node 739512331 has uuid
> 739512331
> stonith-ng: info: pcmk_cpg_membership: Node 739512331 still
> member of group stonith-ng (peer=(null):7277, counter=0.0, at least
> once)
> stonith-ng: info: crm_update_peer_proc:pcmk_cpg_membership:
> Node (null)[739512331] - corosync-cpg is now online
> stonith-ng:   notice: crm_update_peer_state_iter:  Node (null)
> state is now member | nodeid=739512331 previous=unknown
> source=crm_update_peer_proc
> ...
> attrd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512331
> attrd:   notice: get_node_name:   Could not obtain a node name for
> corosync nodeid 739512331
> attrd: info: crm_get_peer:Created entry 40380a43-c1e2-498a-
> bc9e-d68968acf4d6/0x2572850 for node (null)/739512331 (2 total)
> attrd: info: crm_get_peer:Node 739512331 has uuid 739512331
> attrd: info: pcmk_cpg_membership: Node 739512331 still member
> of group attrd (peer=(null):7279, counter=0.0, at least once)
> attrd: info: crm_update_peer_proc:pcmk_cpg_membership: Node
> (null)[739512331] - corosync-cpg is now online
> attrd:   notice: crm_update_peer_state_iter:  Node (null) state
> is now member | nodeid=739512331 previous=unknown
> source=crm_update_peer_proc
> attrd: info: pcmk_cpg_membership: Node 739512332 still member
> of group attrd (peer=h12:40553, counter=0.1, at least once)
> attrd: info: crm_get_peer:Node 739512331 is now known as h11
> attrd:   notice: attrd_check_for_new_writer:  Recorded new
> attribute writer: h11 (was unset)
> ...
> crmd: info: pcmk_cpg_membership: Node 739512332 joined group
> crmd (counter=0.0, pid=0, unchecked for rivals)
> crmd: info: corosync_node_name:  Unable to get node name for
> nodeid 739512331
> crmd:   notice: get_node_name:   Could not obtain a node name for
> corosync nodeid 739512331
> crmd: info: pcmk_cpg_membership: Node 739512331 still member
> of group crmd (peer=(null):7281, counter=0.0, at least once)
> crmd: info: crm_update_peer_proc:pcmk_cpg_membership: Node
> (null)[739512331] - corosync-cpg is now online
> 
> ???
> 
> crmd: info: pcmk_cpg_membership: Node 739512332 still member
> of group crmd (peer=h12:40555, counter=0.1, at least once)
> crmd: info: crm_get_peer:Node 739512331 is now known as h11
> crmd: info: peer_update_callback:Cluster node h11 is now
> member
> crmd: info: update_dc:   Set DC to h11 (3.0.14)
> crmd: info: crm_update_peer_expected:update_dc: Node
> h11[739512331] - expected state is now member (was (null))
> ...
> 
> I feel this mess with determining the node name is overly
> complicated...
> 
> Regards,
> Ulrich

Complicated, yes -- overly, depends on your point of view :)

Putting "name:" in corosync.conf simplifies things.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

