Re: [ClusterLabs] Adding HAProxy as a Resource

2019-07-11 Thread Kristoffer Grönlund

On 2019-07-11 09:31, Somanath Jeeva wrote:

Hi All,

I am using HAProxy in my environment, which I plan to add to Pacemaker
as a resource. I see no RA available for it in resource-agents.

Should I write a new RA, or is there any way to add it to Pacemaker as
a systemd service?


Hello,

haproxy works well as a plain systemd service, so you can add it as
systemd:haproxy - that is, instead of an ocf: prefix, just put
systemd:.

If you want the cluster to manage multiple, differently configured
instances of haproxy, you may need to either create a custom systemd
service unit for each one, or write an agent that takes parameters.
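A minimal sketch of what that looks like in practice; the monitor operation values are illustrative, not from this thread:

```shell
# crmsh: the "systemd:" class makes Pacemaker drive haproxy.service
crm configure primitive haproxy systemd:haproxy \
    op monitor interval=10s timeout=60s

# roughly the same thing with pcs:
pcs resource create haproxy systemd:haproxy \
    op monitor interval=10s timeout=60s
```

Either way, Pacemaker starts, stops and monitors the service through systemd rather than through an OCF script.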

Cheers,
Kristoffer





With Regards
Somanath Thilak J


___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/




Re: [ClusterLabs] Why do clusters have a name?

2019-03-27 Thread Kristoffer Grönlund
On Wed, 2019-03-27 at 12:25 +0100, Jehan-Guillaume de Rorthais wrote:
> On Wed, 27 Mar 2019 10:20:21 +0100
> Kristoffer Grönlund  wrote:
> 
> > On Wed, 2019-03-27 at 10:13 +0100, Jehan-Guillaume de Rorthais
> > wrote:
> > > On Wed, 27 Mar 2019 09:59:16 +0100
> > > Kristoffer Grönlund  wrote:
> > >   
> > > > On Wed, 2019-03-27 at 08:27 +0100, Ivan Devát  wrote:  
> > > > > On 26. 03. 19 21:12, Brian Reichert wrote:
> > > > > > This will sound like a dumb question:
> > > > > > 
> > > > > > The manpage for pcs(8) implies that to set up a cluster,
> > > > > > one
> > > > > > needs
> > > > > > to provide a name.
> > > > > > 
> > > > > > Why do clusters have names?
> > > > > > 
> > > > > > Is there a use case wherein there would be multiple
> > > > > > clusters
> > > > > > visible
> > > > > > in an administrative UI, such that they'd need to be
> > > > > > differentiated?
> > > > > > 
> > > > > 
> > > > > For example, the pcs web UI has a page with multiple clusters.
> > > > > 
> > > > 
> > > > We use cluster names and rules to apply the same exact CIB to
> > > > multiple
> > > > clusters, particularly when configuring geo clusters.  
> > > 
> > > I'm not sure I understand. Is it possible to have multiple
> > > Pacemaker daemon instances on the same servers?
> > > 
> > > Or do you mean it is possible to have multiple namespaces in which
> > > resources are isolated, with one Pacemaker daemon managing them?
> > >   
> > 
> > I am not sure what you mean by the second, but I am fairly sure I
> > don't
> > mean either of those :) I'm talking about having multiple actual,
> > distinct clusters
> 
> distinct clusters of Pacemaker/corosync daemons on the same servers, or
> distinct clusters of servers?
> 

Distinct clusters of servers:

Cluster "Tokyo" consisting of node A, B, C
Cluster "Stockholm" consisting of node D, E, F
Cluster "New York" consisting of node G, H, I

All with the same CIB XML document.

Using tickets, resources can then be moved from one cluster to the
other, or cloned across multiple clusters. A cluster of clusters, if
you will.
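As a hedged sketch of how the shared-CIB-plus-rules idea can be expressed (all names here are invented and the crmsh syntax is untested in this form), rules can key on Pacemaker's built-in #cluster-name node attribute, and tickets tie resources to a site:

```shell
# Run "web" only in the cluster named Tokyo, despite sharing one CIB:
crm configure location web-only-in-tokyo web \
    rule -inf: #cluster-name ne Tokyo

# Tie "db" to a ticket (granted/revoked e.g. via the booth ticket
# manager); losing the ticket stops the resource:
crm configure rsc_ticket db-with-tokyo tokyo-ticket: db loss-policy=stop
```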

Cheers,
Kristoffer

> > and sharing the same configuration across all of
> > them,
> 
> Same configuration meaning the same file, or the same content across
> different files?
> 
> Sorry for being bold...I just don't get it :/
> 
> 

Re: [ClusterLabs] Why do clusters have a name?

2019-03-27 Thread Kristoffer Grönlund
On Wed, 2019-03-27 at 10:13 +0100, Jehan-Guillaume de Rorthais wrote:
> On Wed, 27 Mar 2019 09:59:16 +0100
> Kristoffer Grönlund  wrote:
> 
> > On Wed, 2019-03-27 at 08:27 +0100, Ivan Devát  wrote:
> > > On 26. 03. 19 21:12, Brian Reichert wrote:  
> > > > This will sound like a dumb question:
> > > > 
> > > > The manpage for pcs(8) implies that to set up a cluster, one
> > > > needs
> > > > to provide a name.
> > > > 
> > > > Why do clusters have names?
> > > > 
> > > > Is there a use case wherein there would be multiple clusters
> > > > visible
> > > > in an administrative UI, such that they'd need to be
> > > > differentiated?
> > > >   
> > > 
> > > For example, the pcs web UI has a page with multiple clusters.
> > >   
> > 
> > We use cluster names and rules to apply the same exact CIB to
> > multiple
> > clusters, particularly when configuring geo clusters.
> 
> I'm not sure I understand. Is it possible to have multiple Pacemaker
> daemon instances on the same servers?
> 
> Or do you mean it is possible to have multiple namespaces in which
> resources are isolated, with one Pacemaker daemon managing them?
> 

I am not sure what you mean by the second, but I am fairly sure I don't
mean either of those :) I'm talking about having multiple actual,
distinct clusters and sharing the same configuration across all of
them, using rules to separate the cases where the configurations
differ.

Cheers,
Kristoffer



Re: [ClusterLabs] Why do clusters have a name?

2019-03-27 Thread Kristoffer Grönlund
On Wed, 2019-03-27 at 08:27 +0100, Ivan Devát  wrote:
> On 26. 03. 19 21:12, Brian Reichert wrote:
> > This will sound like a dumb question:
> > 
> > The manpage for pcs(8) implies that to set up a cluster, one needs
> > to provide a name.
> > 
> > Why do clusters have names?
> > 
> > Is there a use case wherein there would be multiple clusters
> > visible
> > in an administrative UI, such that they'd need to be
> > differentiated?
> > 
> 
> For example, the pcs web UI has a page with multiple clusters.
> 

We use cluster names and rules to apply the same exact CIB to multiple
clusters, particularly when configuring geo clusters.

Cheers,
Kristoffer

> Ivan

Re: [ClusterLabs] Antw: Announcing hawk-apiserver, now in ClusterLabs

2019-02-13 Thread Kristoffer Grönlund
Ulrich Windl   writes:

> Hello!
>
> I'd like to comment as an "old" SuSE customer:
> I'm amazed that lighttpd is dropped in favor of some new go application:
> SuSE now has a base system that needs (correct me if I'm wrong): shell, perl,
> python, java, go, ruby, ...?
>

Oh, that list is a lot longer, and this is not the first go project to
make it into SLE.

> Maybe each programmer has his favorite. Personally I also learned quite a lot
> of languages (and even editors), but most being equivalent, you'll have to
> decide whether it makes sense to start using yet another language (Go in this
> case). Especially I'm afraid of single-vendor languages...

TBH I am more sceptical about languages designed by committee ;)

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>>>> Kristoffer Grönlund  wrote on 12.02.2019 at 20:00 in message <87mun0g7c9@suse.com>:
>> Hello everyone,
>> 
>> I just wanted to send out an email about the hawk-apiserver project
>> which was moved into the ClusterLabs organization on Github today. This
>> project is used by us at SUSE for Hawk in our latest releases already,
>> and is also available in openSUSE for use with Hawk. However, I am
>> hoping that it can prove to be useful more generally, not just for Hawk
>> but for other projects that may want to integrate with Pacemaker using
>> the C API, and also to show what is possible when using the API.
>> 
>> To describe the hawk-apiserver briefly, I'll start by describing the use
>> case it was designed to cover: Previously, we were using lighttpd as the
>> web server for Hawk (a Ruby on Rails application), but a while ago the
>> maintainers of lighttpd decided that since Hawk was the only user of
>> this project in SLE, they would like to remove it from the next
>> release. This left Apache as the web server available to us, which has
>> some interesting issues for Hawk: Mainly, we expect people to run apache
>> as a resource in the cluster which might result in a confusing mix of
>> processes on the systems.
>> 
>> At the same time, I had started looking at Go and discovered how easy it
>> was to write a basic proxying web server in Go. So, as an experiment I
>> decided to see if I could replace the use of lighttpd with a custom web
>> server written in Go. Turns out the answer was yes! Once we had our own
>> web server, I discovered new things we could do with it. So here are
>> some of the other unique features in hawk-apiserver now:
>> 
>> * SSL certificate termination, and automatic detection and redirection
>>   from HTTP to HTTPS *on the same port*: Hawk runs on port 7630, and if
>>   someone accesses that port via HTTP, they will get a redirect to the
>>   same port but on HTTPS. It's magic.
>> 
>> * Persistent connection to Pacemaker via the C API, enabling instant
>>   change notification to the web frontend. From the point of view of the
>>   web frontend, this is a long-lived connection which completes when
>>   something changes in the CIB. On the backend side, it uses goroutines
>>   to enable thousands of such long-lived connections with minimal
>>   overhead.
>> 
>> * Optional exposure of the CIB as a REST API. Right now this is somewhat
>>   primitive, but we are working on making this a more fully featured
>>   API.
>> 
>> * Configurable static file serving routes (serve images on /img from
>>   /srv/http/images for example).
>> 
>> * Configurable proxying of subroutes to other web applications.
>> 
>> The URL to the project is https://github.com/ClusterLabs/hawk-apiserver,
>> I hope you will find it useful. Comments, issues and contributions are
>> of course more than welcome.
>> 
>> One final note: hawk-apiserver uses a project called go-pacemaker
>> located at https://github.com/krig/go-pacemaker. I intend to transfer
>> this to ClusterLabs as well. go-pacemaker is still somewhat rough around
>> the edges, and our plan is to work on the C API of pacemaker to make
>> using and exposing it via Go easier, as well as moving functionality
>> from crm_mon into the C API so that status information can be made
>> available in a more convenient format via the API as well.
>> 
>> -- 
>> // Kristoffer Grönlund
>> // kgronl...@suse.com 

Re: [ClusterLabs] Proposal for machine-friendly output from Pacemaker tools

2019-01-08 Thread Kristoffer Grönlund
On Tue, 2019-01-08 at 10:07 -0600, Ken Gaillot wrote:
> On Tue, 2019-01-08 at 10:30 +0100, Kristoffer Grönlund wrote:
> > On Mon, 2019-01-07 at 17:52 -0600, Ken Gaillot wrote:
> > > 
> > Having all the tools able to produce XML output like cibadmin and
> > crm_mon would be good in general, I think. So that seems like a
> > good
> > proposal to me.
> > 
> > In the case of an error, at least in my experience just getting a
> > return code and stderr output is enough to make sense of it -
> > getting
> > XML on stderr in the case of an error wouldn't seem like something
> > that
> > would add much value to me.
> 
> There are two benefits: it can give extended information (such as the
> text string that corresponds to a numeric exit status), and because
> it
> would also be used by any future REST API (which won't have stderr),
> API/CLI output could be parsed identically.
> 

Hm, am I understanding you correctly:

My sort-of vision for implementing a REST API has been to move all of
the core functionality out of the command line tools and into the C
libraries (I think we discussed something like a libpacemakerclient
before) - the idea is that the XML output would be generated on that
level?

If so, that is something that I am all for :)

Right now, we are experimenting with a REST API based on taking what we
use in Hawk and moving that into an API server written in Go, and just
calling crm_mon --as-xml to get status information that can be exposed
via the API. Having that available in C directly and not having to call
out to command line tools would be great and a lot cleaner:

https://github.com/krig/hawk-apiserver
https://github.com/hawk-ui/hawk-web-client
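As a rough illustration of the kind of call involved, the status XML can also be inspected from a shell; the xmllint XPath here is my own guess at the element layout, not taken from the thread:

```shell
# Dump cluster status as XML, then pull out the node names
# (assumes libxml2's xmllint is installed)
crm_mon --as-xml | xmllint --xpath '//nodes/node/@name' -
```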

Cheers,
Kristoffer

___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Proposal for machine-friendly output from Pacemaker tools

2019-01-08 Thread Kristoffer Grönlund
On Mon, 2019-01-07 at 17:52 -0600, Ken Gaillot wrote:
> There has been some discussion in the past about generating more
> machine-friendly output from pacemaker CLI tools for scripting and
> high-level interfaces, as well as possibly adding a pacemaker REST
> API.
> 
> I've filed an RFE BZ
> 
>  https://bugs.clusterlabs.org/show_bug.cgi?id=5376
> 
> to design an output interface that would suit these goals. An actual
> REST API is not planned at this point, but this would provide a key
> component of any future implementation.

Having all the tools able to produce XML output like cibadmin and
crm_mon would be good in general, I think. So that seems like a good
proposal to me.

In the case of an error, at least in my experience just getting a
return code and stderr output is enough to make sense of it - getting
XML on stderr in the case of an error wouldn't seem like something that
would add much value to me.

Cheers,
Kristoffer

> 
> The question is what machine-friendly output should look like. The
> basic idea is: for commands like "crm_resource --constraints" or
> "stonith_admin --history", what output format would be most useful
> for
> a GUI or other program to parse?
> 
> Suggestions welcome here and/or on the bz ...


Re: [ClusterLabs] Fwd: After failover Pacemaker moves resource back when dead node become up

2019-01-04 Thread Kristoffer Grönlund
On Fri, 2019-01-04 at 15:27 +0300, Özkan Göksu  wrote:
> Hello.
> 
> I'm using Pacemaker & Corosync for my cluster. When a node dies,
> Pacemaker moves my resources to another online node. Everything is OK
> here. But when the dead node comes back, Pacemaker moves the resource
> back. I don't have any "location" line in my config, and I also tried
> the "unmove" command, but nothing changed.
> The corosync & pacemaker services are enabled and start at boot. If I
> start them manually, the resources do not fail back.
> 
> How can I stop the resource from moving back if it is running normally?

Configuring a positive resource-stickiness should take care of this for
you, so there has to be something else going on. Do you get any strange
errors reported for the resources on the second node? Check if there is
any failcount for the resources on that node using "crm_mon --
failcounts". Other than that, looking in the logs for anything unusual
would be my next move.

Another thing that stands out to me is that you configure a monitor
action for the gui resource, but you don't set a timeout. I'm not sure
what the default is there, so I would configure a timeout explicitly.
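Concretely, the quoted gui primitive could be given an explicit timeout along these lines (the 60s value is an arbitrary placeholder, not a measured recommendation):

```shell
crm configure primitive gui systemd:gui \
    op monitor interval=20s timeout=60s \
    meta target-role=Started
```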

Finally, it looks like you have a 2-node cluster with STONITH disabled.
That's not going to work. You need some kind of stonith, or things will
behave badly. So that could be why you're seeing strange behavior.

Cheers,
Kristoffer

> 
> *crm configure sh*
> 
> node 1: DEV1
> node 2: DEV2
> primitive poolip IPaddr2 \
> params ip=10.1.60.33 nic=enp2s0f0 cidr_netmask=24 \
> meta migration-threshold=2 target-role=Started \
> op monitor interval=20 timeout=20 on-fail=restart
> primitive gui systemd:gui \
> op monitor interval=20s \
> meta target-role=Started
> primitive gui-ip IPaddr2 \
> params ip=10.1.60.35 nic=enp2s0f0 cidr_netmask=24 \
> meta migration-threshold=2 target-role=Started \
> op monitor interval=20 timeout=20 on-fail=restart
> colocation cluster-gui inf: gui gui-ip
> order gui-after-ip Mandatory: gui-ip gui
> property cib-bootstrap-options: \
> have-watchdog=false \
> dc-version=2.0.0-1-8cf3fe749e \
> cluster-infrastructure=corosync \
> cluster-name=mycluster \
> stonith-enabled=false \
> no-quorum-policy=ignore \
> last-lrm-refresh=1545920437
> rsc_defaults rsc-options: \
> migration-threshold=10 \
> resource-stickiness=100
> 
> *pcs resource defaults*
> 
> migration-threshold=10
> resource-stickiness=100
> 
> *pcs resource show gui*
> 
> Resource: gui (class=systemd type=gui)
>  Meta Attrs: target-role=Started
>  Operations: monitor interval=20s (gui-monitor-20s)


Re: [ClusterLabs] Coming in Pacemaker 2.0.1 / 1.1.20: improved fencing history

2018-12-12 Thread Kristoffer Grönlund
On Tue, 2018-12-11 at 14:48 -0600, Ken Gaillot wrote:
> Pacemaker has long had the stonith_admin --history option to show a
> history of past fencing actions that the cluster has carried out.
> However, this list included only events since the node it was run on
> had joined the cluster, and it just wasn't very convenient.
> 
> In the upcoming release, the cluster keeps the fence history
> synchronized across all nodes, so you get the same answer no matter
> which node you query.

This is a great feature!

On a related note, it would be amazing to have the complete transition
history synchronized across all nodes as well..

Cheers,
Kristoffer



Re: [ClusterLabs] Announcing Anvil! m2 v2.0.7

2018-11-20 Thread Kristoffer Grönlund
On Tue, 2018-11-20 at 02:25 -0500, Digimer wrote:
> * https://github.com/ClusterLabs/striker/releases/tag/v2.0.7
> 
> This is the first release since March 2018. No critical issues are
> known or were fixed. Users are advised to upgrade.
> 

Congratulations!

Cheers,
Kristoffer

> Main bugs fixed;
> 
> * Fixed install issues for Windows 10 and 2016 clients.
> * Improved duplicate record detection and cleanup in scan-clustat and
> scan-storcli.
> * Disabled the detection and recovery of 'paused' state servers (it
> caused more trouble than it solved).
> 
> Notable new features;
> * Improved the server boot logic to choose the node with the most
> running servers, all else being equal.
> * Updated UPS power transfer reason alerts from "warning" to "notice"
> level alerts.
> * Added support for EL 6.10.
> 
> Users can upgrade using 'striker-update' from their Striker
> dashboards.
> 
> /sbin/striker/striker-update --local
> /sbin/striker/striker-update --anvil all
> 
> Please feel free to report any issues in the Striker github
> repository.
> 


Re: [ClusterLabs] resource-agents v4.2.0

2018-10-24 Thread Kristoffer Grönlund
On Wed, 2018-10-24 at 10:21 +0200, Oyvind Albrigtsen wrote:
> ClusterLabs is happy to announce resource-agents v4.2.0.
> Source code is available at:
> https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0
> 

[snip]

>   - ocf.py: new Python library and dev guide
> 

I just wanted to highlight the Python library since I think it can make
agent development a lot easier in the future, especially as we expand
the library with more utilities that are commonly needed when writing
agents.

Any agents written in Python should (for now at least) be compatible
both with Python 2.7+ and Python 3.3+. We still need to expand the CI
to actually verify that agents do support these versions, so anyone who
would like to help out improving the test setup is more than welcome to
do so :)

The biggest example of an agent using it that we have now is the azure-
events agent [1], so I would recommend anyone interested in working on
new agents to take a look at that. For a more compact example, I wrote
a version of the Dummy resource agent using the ocf.py library and put
it in a gist [2], and then there is a small example in the document
describing the library and how to use it [3].

[1]: https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/azure-events.in
[2]: https://gist.github.com/krig/6676d0ae065fd852fac8b445410e1c95
[3]: https://github.com/ClusterLabs/resource-agents/blob/master/doc/dev-guides/writing-python-agents.md

Cheers,
Kristoffer



Re: [ClusterLabs] resource-agents v4.2.0 rc1

2018-10-19 Thread Kristoffer Grönlund
On Fri, 2018-10-19 at 10:55 +0200, Oyvind Albrigtsen wrote:
> On 18/10/18 19:43 +0200, Valentin Vidic wrote:
> > On Wed, Oct 17, 2018 at 12:03:18PM +0200, Oyvind Albrigtsen wrote:
> > >  - apache: retry PID check.
> > 
> > I noticed that the ocft test started failing for apache in this
> > version. Not sure if the test is broken or the agent. Can you
> > check if the test still works for you? Restoring the previous
> > version of the agent fixes the problem for me.
> 
> It seems to work fine for me except that I had to change the name
> from apache2 to httpd (which is what it's called on RHEL and Fedora) in
> the ocft-config, so I think we need some additional logic for that.

I wonder if perhaps there was a configuration change as well, since the
return code seems to be configuration related. Maybe something changed
in the build scripts that moved something around? Wild guess, but...

Cheers,
Kristoffer

> > 
> > # ocft test -v apache
> > Initializing 'apache' ...
> > Done.
> > 
> > Starting 'apache' case 0 'check base env':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 1 'check base env: set non-existing
> > OCF_RESKEY_statusurl':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 2 'check base env: set non-existing
> > OCF_RESKEY_configfile':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 3 'normal start':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 4 'normal stop':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 5 'double start':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 6 'double stop':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 7 'running monitor':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 8 'not running monitor':
> > ERROR: './apache monitor' failed, the return code is 2.
> > Starting 'apache' case 9 'unimplemented command':
> > ERROR: './apache monitor' failed, the return code is 2.
> > 
> > -- 
> > Valentin
> 
> 


Re: [ClusterLabs] crm resource stop VirtualDomain - how to know when/if VirtualDomain is really stopped ?

2018-10-11 Thread Kristoffer Grönlund
On Thu, 2018-10-11 at 13:59 +0200,  Lentes, Bernd  wrote:
> Hi,
> 
> I'm trying to write a script which shuts down my VirtualDomains at
> night for a short period to take a clean snapshot with libvirt.
> To shut them down I can use "crm resource stop VirtualDomain".
> 
> But when I do a "crm resource stop VirtualDomain" in my script, the
> command returns immediately. How can I know if my VirtualDomains are
> really stopped, since the shutdown may take up to several minutes?
> 
> I know I could do something with a loop and "crm resource status",
> grepping for e.g. "stopped", but I would prefer a cleaner solution.
> 
> Any ideas ?

You should be able to pass -w to crm,

crm -w resource stop VirtualDomain

That should wait until the policy engine settles down again.
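Putting that together, the night-time snapshot script could look roughly like this; the resource and snapshot names are invented and the sketch is untested:

```shell
#!/bin/sh
# Stop the VM resource and block until the transition completes
crm -w resource stop vm_web

# Take the clean libvirt snapshot while the domain is down
virsh snapshot-create-as vm_web "nightly-$(date +%F)"

# Start it again, waiting for the cluster to settle
crm -w resource start vm_web
```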

Cheers,
Kristoffer

> 
> Thanks.
> 
> 
> Bernd
> 


Re: [ClusterLabs] Antw: Re: meatware stonith

2018-09-27 Thread Kristoffer Grönlund
On Thu, 2018-09-27 at 02:49 -0400, Digimer wrote:
> On 2018-09-27 01:54 AM, Ulrich Windl wrote:
> > > > > Digimer  schrieb am 26.09.2018 um 18:29 in
> > > > > Nachricht
> > 
> > <1c70b5e2-ea8e-8cbe-3d83-e207ca47b...@alteeve.ca>:
> > > On 2018-09-26 11:11 AM, Patrick Whitney wrote:
> > > > Hey everyone,
> > > > 
> > > > I'm doing some pacemaker/corosync/dlm/clvm testing.  I'm
> > > > without a power
> > > > fencing solution at the moment, so I wanted to utilize
> > > > meatware, but it
> > > > doesn't show when I list available stonith devices (pcs stonith
> > > > list).
> > > > 
> > > > I do seem to have it on the system, as cluster-glue is
> > > > installed, and I
> > > > see meatware.so and meatclient on the system, and I also see
> > > > meatware
> > > > listed when running the command 'stonith -L' 
> > > > 
> > > > Can anyone guide me as to how to create a stonith meatware
> > > > resource
> > > > using pcs? 
> > > > 
> > > > Best,
> > > > -Pat
> > > 
> > > The "fence_manual" agent was removed after EL5 days, a long time
> > > ago, because it so often led to split-brains because of misuse.
> > > Manual fencing is NOT recommended.
> > > 
> > > There are new options, like SBD (storage-based death) if you have
> > > a
> > > watchdog timer.
> > 
> > And even if you do not ;-)
> 
> I've not used SBD. How, without a watchdog timer, can you be sure the
> target node is dead?

You can't. You can use the Linux softdog module though, but since it is
a pure software solution it is limited and not ideal.
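For orientation only, a heavily hedged sketch of what softdog-based SBD setup tends to involve; exact paths and property names vary by distribution and version, so check your own documentation before copying any of this:

```shell
# Load the software watchdog now and on every boot
modprobe softdog
echo softdog > /etc/modules-load.d/softdog.conf

# Point SBD at the watchdog device, e.g. in /etc/sysconfig/sbd:
#   SBD_WATCHDOG_DEV=/dev/watchdog

# Let Pacemaker rely on watchdog self-fencing
crm configure property stonith-enabled=true \
    stonith-watchdog-timeout=10s
```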

> 
-- 

Cheers,
Kristoffer



Re: [ClusterLabs] Q: Reusing date specs in crm shell

2018-09-13 Thread Kristoffer Grönlund
On Tue, 2018-09-11 at 13:52 +0200,  Ulrich Windl  wrote:
> Hi!
> 
> I have a set of resources with almost identical rules, one part being
> a date spec. Currently I'm using two different date specs in those
> rules. However I repeated the date spec in every rule. Foreseeing
> that I might change those one day, I wonder whether it's possible in
> crm shell to define a date spec once (outside of any resource, for
> symmetry) and reference that date spec inside a rule. OK, time for an
> example:
> 
> meta 1: ...default settings... \
> meta 2: rule 0: date spec hours=7-18 weekdays=1-5 ...override
> settings outside prime time...
> 
> In the crm manual page the reference examples use dummy primitives.
> 

I wonder if this could be done with id-based references, but it's not
something I've actually experimented with. Not a great answer, I
know...
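One untested sketch of what an id-based reference could look like in crm shell rule syntax, naming the date spec's rule once and referencing it afterwards; I have not verified that this parses:

```shell
crm configure primitive p1 ocf:heartbeat:Dummy \
    meta 2: rule $id=prime-time date spec hours=7-18 weekdays=1-5 \
        target-role=Started

crm configure primitive p2 ocf:heartbeat:Dummy \
    meta 2: rule $id-ref=prime-time target-role=Started
```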

> Regards,
> Ulrich
> 
> 
> 
-- 

Cheers,
Kristoffer



Re: [ClusterLabs] Q: ordering for a monitoring op only?

2018-08-20 Thread Kristoffer Grönlund
On Mon, 2018-08-20 at 10:51 +0200,  Ulrich Windl  wrote:
> Hi!
> 
> I wonder whether it's possible to run a monitoring op only if some
> specific resource is up.
> Background: We have some resource that runs fine without NFS, but the
> start, stop and monitor operations will just hang if NFS is down. In
> effect the monitor operation will time out, the cluster will try to
> recover, calling the stop operation, which in turn will time out,
> making things worse (i.e.: causing a node fence).
> 
> So my idea was to pause the monitoring operation while NFS is down
> (NFS itself is controlled by the cluster and should recover "rather
> soon" TM).
> 
> Is that possible?

It would be a lot better to fix the problem in the RA which causes it
to fail when NFS is down, I would think?
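If patching the agent is an option, one generic pattern is to wrap the NFS-dependent access in a hard timeout so monitor fails fast instead of hanging; this is my own sketch, not the vendor's code:

```shell
# Return non-zero if the path cannot be stat'ed within 5 seconds,
# e.g. because it lives on an unresponsive NFS mount.
guard_path() {
    timeout 5 stat "$1" >/dev/null 2>&1
}

# Inside the agent's monitor action one might then do (illustrative;
# OCF_RESKEY_config and OCF_ERR_GENERIC come from the OCF environment):
#   guard_path "$OCF_RESKEY_config" || exit "$OCF_ERR_GENERIC"
```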

> And before you ask: No, I have not written that RA that has the
> problem; a multi-million-dollar company wrote it (Years before I had
> written a monitor for HP-UX' cluster that did not have this problem,
> even though the configuration files were read from NFS (It's not
> magic: Just periodically copy them to shared memory, and read the
> config from shared memory).
> 
> Regards,
> Ulrich
> 
> 
> 
-- 

Cheers,
Kristoffer



Re: [ClusterLabs] crm --version shows "cam dev"

2018-07-04 Thread Kristoffer Grönlund
On Wed, 2018-07-04 at 17:52 +0200, Salvatore D'angelo wrote:
> Hi,
> 
> With crmsh 2.2.0 the command:
> crm --version
> works fine. I downloaded 3.0.1 and it shows:
> crm dev
> 
> I know this is not a big issue, but I just wanted to verify that I
> installed the correct version of crmsh.
> 

It's probably right, but can you describe in more detail from where you
downloaded and how you installed it?

Cheers,
Kristoffer

___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan

2018-06-27 Thread Kristoffer Grönlund
"Stefan K"  writes:

> OK I see, but it would be good if somebody marked one of these as deprecated and 
> then deleted it, so that no one gets confused about them.
>

The external/* agents are not deprecated, though. Future agents will be
implemented in the fence-agents framework, but the existing agents are
still being used (not by RH, but by SUSE at least).

Cheers,
Kristoffer

> best regards
> Stefan
>
>> Gesendet: Dienstag, 26. Juni 2018 um 18:26 Uhr
>> Von: "Ken Gaillot" 
>> An: "Cluster Labs - All topics related to open-source clustering welcomed" 
>> 
>> Betreff: Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan
>>
>> On Tue, 2018-06-26 at 12:00 +0200, Stefan K wrote:
>> > Hello,
>> > 
>> > can somebody tell me the difference between external/ipmi and
>> > fence_ipmilan? Are there preferences?
>> > Is one of these more common or has some advantages? 
>> > 
>> > Thanks in advance!
>> > best regards
>> > Stefan
>> 
>> The distinction is mostly historical. At one time, there were two
>> different open-source clustering environments, each with its own set of
>> fence agents. The community eventually settled on Pacemaker as a sort
>> of merged evolution of the earlier environments, and so it supports
>> both styles of fence agents. Thus, you often see an "external/*" agent
>> and a "fence_*" agent available for the same physical device.
>> 
>> However, they are completely different implementations, so there may be
>> substantive differences as well. I'm not familiar enough with these two
>> to address that, maybe someone else can.
>> -- 
>> Ken Gaillot 

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] [questionnaire] Do you manage your pacemaker configuration by hand and (if so) what reusability features do you use?

2018-06-15 Thread Kristoffer Grönlund
Jan Pokorný  writes:

>> 4.  [ ] Do you use "tag" based syntactic grouping[3] in CIB?
>
> 0x
>
> keeps me guessing what it was meant to / could be used for in practice
> (had some ideas but will gladly be surprised if anyone's going to
> give it a crack)
>

The background for this feature as far as I understand it was related to
booth-based geo clusters, where the tag feature made it easier to unify
the configuration of two geo clusters. Hawk also supports the tag
feature via the user interface, where you can get a custom status view
for a tag showing only the tagged resources instead of the whole cluster
status.

I honestly don't know how much use it sees in practice.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Booth fail-over conditions

2018-04-16 Thread Kristoffer Grönlund
Zach Anderson <zpanderso...@gmail.com> writes:

>  Hey all,
>
> new user to pacemaker/booth and I'm fumbling my way through my first proof
> of concept. I have a 2 site configuration setup with local pacemaker
> clusters at each site (running rabbitmq) and a booth arbitrator. I've
> successfully validated the base failover when the "granted" site has
> failed. My question is if there are any other ways to configure failover,
> i.e. using resource health checks or the like?
>

Hi Zach,

Do you mean that a resource health check should trigger site failover?
That's actually something I'm not sure comes built-in... though making a
resource agent which revokes a ticket on failure should be fairly
straightforward. You could then group your resource with the ticket
resource to enable this functionality.

The logic in the ticket resource ought to be something like "if monitor
fails and the current site is granted, then revoke the ticket, else do
nothing". You would probably want to handle probe monitor invocations
differently. There is an ocf_is_probe function provided to help with
this.
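A minimal sketch of that decision logic, with hypothetical inputs: a real agent would take the service health from its own monitor action, the grant state from the ticket status (e.g. via crm_ticket or booth), and the probe flag from ocf_is_probe.

```shell
# Decide what a ticket-guarding agent's monitor should do.
# Inputs (all supplied by the surrounding agent in a real implementation):
#   $1  monitor result of the guarded resource (0 = healthy)
#   $2  "true" if this site currently holds the ticket
#   $3  "true" if this invocation is a probe
decide_ticket_action() {
    rc=$1 granted=$2 probe=$3
    if [ "$probe" = true ]; then
        echo keep               # probes must never trigger a failover
    elif [ "$rc" -ne 0 ] && [ "$granted" = true ]; then
        echo revoke             # e.g. revoke the ticket via booth
    else
        echo keep               # healthy, or not granted: nothing to do
    fi
}
```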

Cheers,
Kristoffer

> Thanks!

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-04-09 Thread Kristoffer Grönlund
Jehan-Guillaume de Rorthais <j...@dalibo.com> writes:

>
> I feel like you guys are talking of a solution that already exists and you
> probably already know, eg. "etcd".
>
> Etcd provides:
>
> * a cluster wide key/value storage engine
> * support quorum
> * key locking
> * atomic changes
> * REST API
> * etc...
>
> However, it requires to open a new TCP port, indeed :/
>

My main inspiration and reasoning is indeed to introduce the same
functionality provided by etcd into a corosync-based cluster without
having to add a parallel cluster consensus solution. Simply installing
etcd means 1) now you have two clusters, 2) etcd doesn't handle 2-node
clusters or fencing and doesn't degrade well to a single node, 3)
relying on the presence of the KV-store in pacemaker tools is not an
option unless pacemaker wants to make etcd a requirement.

Cheers,
Kristoffer

> Moreover, as a RA developer, I am currently messing with attrd weird
> behavior[1], so any improvement there is welcomed :)
>
> Cheers,
>
> [1] https://github.com/ClusterLabs/PAF/issues/131
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-04-09 Thread Kristoffer Grönlund
Jan Pokorný <jpoko...@redhat.com> writes:

> /me keenly joins the bike-shedding
>
> What about pcmk-based/pcmk-infod.  First, we effectively tone down
> "common information/base" from the expanded CIB abbreviation[*1],
> and second, in the former case, we highlight that's the central point
> providing resident data glue (pcmk-datad?[*2]) amongst the other daemons.

pcmk-infod sounds pretty good to me, it indicates data management /
central information handling etc. Plus it contains at least part of one
of the words of the expansion of "CIB".

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-04-06 Thread Kristoffer Grönlund
Klaus Wenninger <kwenn...@redhat.com> writes:

>
> One thing I thought over as well is some kind of
> a chicken & egg issue arising when you want to
> use the syncing-mechanism so setup (bootstrap)
> the cluster.
> So something like the ssh-mechanism pcsd is
> using might still be needed.
> The file-syncing approach would have the data
> easily available locally prior to starting the
> actual cluster-wide syncing.
>
> Well ... no solutions or anything ... just
> a few thoughts I had on that issue ... 25ct max ;-)
>

Bootstrapping is a problem I've thought about quite a bit.. It's
possible to implement in a number of ways, and it's not clear what's the
better approach. But I see a cluster-wide configuration database as an
enabler for better bootstrapping rather than a hurdle. If a new node
doesn't need a local copy of the database but can access the database
from an existing node, it would be possible for the new node to
bootstrap itself into the cluster with nothing more than remote access
to that database, so a single port to open and a single authentication
mechanism - this could certainly be handled over SSH just like pcsd and
crmsh implements it today.

But yes, at some point a communication channel needs to be opened...

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-04-06 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> On Tue, 2018-04-03 at 08:33 +0200, Kristoffer Grönlund wrote:
>> Ken Gaillot <kgail...@redhat.com> writes:
>> 
>> > > I
>> > > would vote against PREFIX-configd as compared to other cluster
>> > > software,
>> > > I would expect that daemon name to refer to a more generic
>> > > cluster
>> > > configuration key/value store, and that is something that I have
>> > > some
>> > > hope of adding in the future ;) So I'd like to keep "config" or
>> > > "database" for such a possible future component...
>> > 
>> > What's the benefit of another layer over the CIB?
>> > 
>> 
>> The idea is to provide a more generalized key-value store that other
>> applications built on top of pacemaker can use. Something like an
>> HTTP REST API to a key-value store with transactional semantics
>> provided by the cluster. My understanding so far is that the CIB is
>> too heavy to support that kind of functionality well, and besides
>> that the interface is not convenient for non-cluster applications.
>
> My first impression is that it sounds like a good extension to attrd,
> cluster-wide attributes instead of node attributes. (I would envision a
> REST API daemon sitting in front of all the daemons without providing
> any actual functionality itself.)
>
> The advantage to extending attrd is that it already has code to
> synchronize attributes at start-up, DC election, partition healing,
> etc., as well as features such as write dampening.

Yes, I've considered that as well and yes, I think it could make
sense. I need to gain a better understanding of the current attrd
implementation to see how to make it do what I want. The configd
name/part comes into play when bringing in syncing data beyond the
key-value store (see below).

>
> Also cib -> pcmk-configd is very popular :)
>

I can live with it. ;)

>> My most immediate applications for that would be to build file
>> syncing
>> into the cluster and to avoid having to have an extra communication
>> layer for the UI.
>
> How would file syncing via a key-value store work?
>
> One of the key hurdles in any cluster-based sync is
> authentication/authorization. Authorization to use a cluster UI is not
> necessarily equivalent to authorization to transfer arbitrary files as
> root.
>

Yeah, the key-value store wouldn't be enough to implement file
syncing, but it could potentially be the mechanism by which the file
syncing implementation maintains its state. I'm somewhat conflating two
things that I want that are both related to syncing configuration beyond
the cluster daemon itself across the cluster.

I don't see authentication/authorization as a hurdle or blocker, but
it's certainly something that needs to be considered. Clearly a
less-privileged user shouldn't be able to configure syncing of
root-owned files across the cluster.

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-04-03 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

>> I
>> would vote against PREFIX-configd as compared to other cluster
>> software,
>> I would expect that daemon name to refer to a more generic cluster
>> configuration key/value store, and that is something that I have some
>> hope of adding in the future ;) So I'd like to keep "config" or
>> "database" for such a possible future component...
>
> What's the benefit of another layer over the CIB?
>

The idea is to provide a more generalized key-value store that other
applications built on top of pacemaker can use. Something like an
HTTP REST API to a key-value store with transactional semantics provided
by the cluster. My understanding so far is that the CIB is too heavy to
support that kind of functionality well, and besides that the interface
is not convenient for non-cluster applications.

My most immediate applications for that would be to build file syncing
into the cluster and to avoid having to have an extra communication
layer for the UI.
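As a purely hypothetical illustration of the transactional semantics meant here (no such Pacemaker API exists), the core contract is an atomic compare-and-swap, modelled below with one file per key and flock(1):

```shell
# kv_cas FILE EXPECTED NEW
# Write NEW into FILE only if FILE currently holds EXPECTED; an empty
# EXPECTED matches a missing key. The flock serializes concurrent
# writers, standing in for the cluster-wide consensus a real
# implementation would get from corosync.
kv_cas() {
    (
        flock -x 9
        current=$(cat "$1" 2>/dev/null)
        [ "$current" = "$2" ] || exit 1   # lost the race: do not clobber
        printf '%s\n' "$3" > "$1"
    ) 9>"$1.lock"
}
```

A losing writer gets a nonzero status and can re-read and retry.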

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

2018-03-29 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> Hi all,
>
> Andrew Beekhof brought up a potential change to help with reading
> Pacemaker logs.
>
> Currently, pacemaker daemon names are not intuitive, making it
> difficult to search the system log or understand what each one does.
>
> The idea is to rename the daemons, with a common prefix, and a name
> that better reflects the purpose.
>

[...]

> Here are the current names, with some example replacements:
>
>  pacemakerd: PREFIX-launchd, PREFIX-launcher
>
>  attrd: PREFIX-attrd, PREFIX-attributes
>
>  cib: PREFIX-configd, PREFIX-state
>
>  crmd: PREFIX-controld, PREFIX-clusterd, PREFIX-controller
>
>  lrmd: PREFIX-locald, PREFIX-resourced, PREFIX-runner
>
>  pengine: PREFIX-policyd, PREFIX-scheduler
>
>  stonithd: PREFIX-fenced, PREFIX-stonithd, PREFIX-executioner
>
>  pacemaker_remoted: PREFIX-remoted, PREFIX-remote

Better to do it now rather than later. I vote in favor of changing the
names. Yes, it'll mess up crmsh, but at least for distributions it's
just a simple search/replace patch to apply.

I would also vote in favour of sticking to the 15 character limit, and
to use "pcmk" as the prefix. That leaves 11 characters for the name,
which should be enough for anyone ;)

My votes:

pacemakerd -> pcmk-launchd
attrd -> pcmk-attrd
cib -> pcmk-stated
crmd -> pcmk-controld
lrmd -> pcmk-resourced
pengine -> pcmk-schedulerd
stonithd -> pcmk-fenced
pacemaker_remoted -> pcmk-remoted

The one I'm the most divided about is cib. pcmk-cibd would also work. I
would vote against PREFIX-configd as compared to other cluster software,
I would expect that daemon name to refer to a more generic cluster
configuration key/value store, and that is something that I have some
hope of adding in the future ;) So I'd like to keep "config" or
"database" for such a possible future component...

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] crm shell 2.1.2 manual bug?

2018-03-28 Thread Kristoffer Grönlund
"Ulrich Windl" <ulrich.wi...@rz.uni-regensburg.de> writes:

> Hi!
>
> For crmsh-2.1.2+git132.gbc9fde0-18.2 I think there's a bug in the manual 
> describing resource sets:
>
>sequential
>If true, the resources in the set do not depend on each other 
> internally. Setting sequential to true implies a strict order of dependency 
> within the set.
>
> Obviously "true" cannot mean both: "do not depend" and "depend". My guess is 
> that the first true has to be false.

Right, "do not depend" should be "depend" there. Thanks for catching it :)

> I came across this when trying to add a colocation like this:
> colocation col_LV inf:( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 cln_ML-L2 
> ) cln_VMs
>
> crm complained about this:
> ERROR: 1: syntax in role: Unmatched opening bracket near  parsing 
> 'colocation ...'
> ERROR: 2: syntax: Unknown command near  parsing 'cln_ml-l2 ) 
> cln_VMs'
> (note the lower case)

The problem reported is that there is no space between "inf:" and "(" -
the parser in crmsh doesn't handle missing spaces between tokens right
now.
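For reference, the same constraint parses once a space separates the score from the bracket:

```
colocation col_LV inf: ( cln_LV cln_LV-L1 cln_LV-L2 cln_ML cln_ML-L1 cln_ML-L2 ) cln_VMs
```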

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Error when linking to libqb in shared library

2018-02-12 Thread Kristoffer Grönlund
Jan Pokorný <jpoko...@redhat.com> writes:

> I guess you are linking your python extension with one of the
> pacemaker libraries (directly or indirectly to libcrmcommon), and in
> that case, you need to rebuild pacemaker with the patched libqb[*] for
> the whole arrangement to work.  Likewise in that case, as you may be
> aware, the "API" is quite uncommitted at this point, stability hasn't
> been of importance so far (because of the handles into pacemaker being
> mostly abstracted through built-in CLI tools for the outside players
> so far, which I agree is encumbered with tedious round-trips, etc.).
> There's a huge debt in this area, so some discretion and perhaps
> feedback which functions are indeed proper-API-worth is advised.

The ultimate goal of my project is indeed to be able to propose or begin
a discussion around a stable API for Pacemaker to eventually move away
from command-line tools as the only way to interact with the cluster.

Thank you, I'll investigate the proposed changes.

Cheers,
Kristoffer

>
> [*]
> shortcut 1: just recompile pacemaker with those extra
> /usr/include/qb/qblog.h modifications (as of the
>   referenced commit)
> shortcut 2: if the above can be tolerated widely, this is certainly
> for local development only: recompile pacemaker with
>   CPPFLAGS=-DQB_KILL_ATTRIBUTE_SECTION
>
> Hope this helps.
>
> -- 
> Jan (Poki)

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


[ClusterLabs] Error when linking to libqb in shared library

2018-02-11 Thread Kristoffer Grönlund
Hi everyone,

(and especially the libqb developers)

I started hacking on a python library written in C which links to
pacemaker, and so to libqb as well, but I'm encountering a strange
problem which I don't know how to solve.

When I try to import the library in python, I see this error:

--- command ---
PYTHONPATH='/home/krig/projects/work/libpacemakerclient/build/python' 
/usr/bin/python3 
/home/krig/projects/python-pacemaker/build/../python/clienttest.py
--- stderr ---
python3: utils.c:66: common: Assertion `"implicit callsite section is 
observable, otherwise target's and/or libqb's build is at fault, preventing 
reliable logging" && work_s1 != NULL && work_s2 != NULL' failed.
---

This appears to be coming from the following libqb macro:

https://github.com/ClusterLabs/libqb/blob/master/include/qb/qblog.h#L352

There is a long comment above the macro which if nothing else tells me
that I'm not the first person to have issues with it, but it doesn't
really tell me what I'm doing wrong...

Does anyone know what the issue is, and if so, what I could do to
resolve it?

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


Re: [ClusterLabs] Feedback wanted: changing "master/slave" terminology

2018-01-17 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

>
> I can see the point, but I do like having <master> separate.
>
> A clone with a single instance is not identical to a primitive. Think
> of building a cluster, starting with one node, and configuring a clone
> -- it has only one instance, but you wouldn't expect it to show up as a
> primitive in status displays.
>
> Also, there are a large number of clone meta-attributes that aren't
> applicable to simple primitives. By contrast, master adds only two
> attributes to clones.

I'm not convinced by either argument. :)

The distinction between single-instance clone and primitive is certainly
not clear to me, and there is no problem for status displays to display
a resource with a single replica differently from a resource that isn't
configured to be replicated.

The number of meta-attributes related to clones seems irrelevant as
well, pacemaker can reject a configuration that sets clone-related
attributes for non-clone resources just as well as if they were on a
different node in the XML.

>
> From the XML perspective, I think the current approach is logically
> structured, a  wrapped around a  or , each
> with its own meta-attributes.

Well, I guess it's a matter of opinion. For me, I don't think it is very
logical at all. For example, the result of having the hierarchy of nodes
is that it is possible to configure target-role for both the wrapped
<primitive> and the container <clone>:

<clone id="example-clone">
  <meta_attributes id="example-clone-meta">
    <nvpair id="example-clone-role" name="target-role" value="Stopped"/>
  </meta_attributes>
  <primitive id="example" class="ocf" provider="heartbeat" type="Dummy">
    <meta_attributes id="example-meta">
      <nvpair id="example-role" name="target-role" value="Started"/>
    </meta_attributes>
  </primitive>
</clone>

Then edit the configuration removing the clone, save, and the resource
starts when it should have been stopped.

It's even worse in the case of a clone wrapping a group holding
clones of resources, in which case there can be four levels of attribute
inheritance -- and this applies to both meta attributes and instance
attributes.

Add to that the fact that there can be multiple sets of instance
attributes and meta attributes for each of these with rule expressions
and implicit precedence determining which set actually applies...

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Feedback wanted: changing "master/slave" terminology

2018-01-17 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

>
> For Pacemaker 2, I'd like to replace the <master> resource type with
> <clone promotable="true">. (The old syntax would be transparently
> upgraded to the new one.) The role names themselves are not likely to
> be changed in that time frame, as they are used in more external pieces
> such as notification variables. But it would be the first step.
>
> I hope that this will be an uncontroversial change in the ClusterLabs
> community, but because such changes have been heated elsewhere, here is
> why this change is desirable:
>

I agree 100% about this change. In Hawk, we've already tried to hide the
Master/Slave terms as much as possible and replace them with
primary/secondary and "Multi-state", but I'm happy to converge on common
terms.

I'm partial to "Promoted" and "Started" since it makes it clearer that
the secondary state is a base state and that it's the promoted state
which is different / special.

However, can I throw a wrench in the machinery? When replacing the
<master> resource type with <clone>, why not go a step
further and merge both <clone> and <master> with the basic <primitive>?

<primitive clone="true">                   => clone
<primitive clone="true" promotable="true"> => master

or for groups,

<group clone="true">
<group clone="true" promotable="true">

I have never understood the usefulness of separate meta-attribute sets
for the <clone> and <primitive> nodes.
-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: Antw: Changes coming in Pacemaker 2.0.0

2018-01-11 Thread Kristoffer Grönlund
Jehan-Guillaume de Rorthais <j...@dalibo.com> writes:

>
> For what is worth, while using crmsh, I always have to explain to
> people or customers that:
>
> * we should issue an "unmigrate" to remove the constraint as soon as the
>   resource can get back to the original node or get off the current node if
>   needed (depending on the -inf or +inf constraint location issued)
> * this will not migrate back the resource if it's sticky enough on the current
>   node. 
>
> See:
> http://clusterlabs.github.io/PAF/Debian-8-admin-cookbook.html#swapping-master-and-slave-roles-between-nodes
>
> This is counter-intuitive, indeed. I prefer the pcs interface using
> the move/clear actions.

No need! You can use crm rsc move / crm rsc clear. In fact, "unmove" is
just a backwards-compatibility alias for clear in crmsh.

Cheers,
Kristoffer

>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Cluster IP that "supports" two subnets !?

2018-01-08 Thread Kristoffer Grönlund
Zarko Dudic <zarko.du...@oracle.com> writes:

> Hi there, I'd like to set up a cluster with two nodes, but on two 
> different sub-nets (nodes are in two different cities). Nodes are 
> running Oracle Linux 7.4, and so far I have both of them running, and 
> the cluster software has been installed and configured.
>
> Well, next is to add resources, and I'd like to start with ClusterIP, 
> which seems straightforward if nodes are on the same subnet, which is not 
> my case. First of all, is it possible to accomplish what I want, and if 
> yes, I'd appreciate hearing some suggestions. Thanks a lot.

Hi,

I'm not sure I understand the question so my answer may be off the
mark.

An IP address is intrinsically part of a particular subnet, so how would
managing an IP address across separate subnets work? Or do you mean to
manage an IP address from a third subnet mapped to both locations? This
second option is indeed possible using the regular IP resources, it is
more of a network setup problem.

Another option would be to manage DNS records across subnets. This is
possible using the dnsupdate resource.
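A sketch of what the dnsupdate variant could look like in crmsh; the hostname, address, and key path are placeholders, and the exact parameter names should be checked against the agent's metadata (crm ra info ocf:heartbeat:dnsupdate):

```
primitive dns-failover ocf:heartbeat:dnsupdate \
    params hostname=service.example.com ip=192.0.2.10 \
        keyfile=/etc/tsig/ddns.key ttl=30 \
    op monitor interval=30s
```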

Yet a third option would be to access the resources through a proxy, but
then availability is of course limited to the availability of the
proxy and network between proxy and the active site.

Cheers,
Kristoffer

>
>
> -- 
> Thanks,
> Zarko
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] crmsh resource failcount does not appear to work

2017-12-27 Thread Kristoffer Grönlund
Andrei Borzenkov <arvidj...@gmail.com> writes:

> As far as I can tell, pacemaker acts on failcount attributes qualified
> by operation name, while crm sets/queries an unqualified attribute; I do
> not see any syntax to set fail-count for a specific operation in crmsh.

crmsh uses crm_attribute to get the failcount. It could be that this
usage has stopped working as of 1.1.17..

Cheers,
Kristoffer

>
> ha1:~ # rpm -q crmsh
> crmsh-4.0.0+git.1511604050.816cb0f5-1.1.noarch
> ha1:~ # crm_mon -1rf
> Stack: corosync
> Current DC: ha2 (version 1.1.17-3.3-36d2962a8) - partition with quorum
> Last updated: Sun Dec 24 10:55:54 2017
> Last change: Sun Dec 24 10:55:47 2017 by hacluster via crmd on ha2
>
> 2 nodes configured
> 4 resources configured
>
> Online: [ ha1 ha2 ]
>
> Full list of resources:
>
>  stonith-sbd  (stonith:external/sbd): Started ha1
>  rsc_dummy_1  (ocf::pacemaker:Dummy): Started ha2
>  Master/Slave Set: ms_Stateful_1 [rsc_Stateful_1]
>  Masters: [ ha1 ]
>  Slaves: [ ha2 ]
>
> Migration Summary:
> * Node ha2:
> * Node ha1:
> ha1:~ # echo xxx > /run/Stateful-rsc_Stateful_1.state
> ha1:~ # crm_failcount -G -r rsc_Stateful_1
> scope=status  name=fail-count-rsc_Stateful_1 value=1
> ha1:~ # crm resource failcount rsc_Stateful_1 show ha1
> scope=status  name=fail-count-rsc_Stateful_1 value=0
> ha1:~ # crm resource failcount rsc_Stateful_1 set ha1 4
> ha1:~ # crm_failcount -G -r rsc_Stateful_1
> scope=status  name=fail-count-rsc_Stateful_1 value=1
> ha1:~ # crm resource failcount rsc_Stateful_1 show ha1
> scope=status  name=fail-count-rsc_Stateful_1 value=4
> ha1:~ # cibadmin -Q | grep fail-count
<nvpair id="status-1084752129-fail-count-rsc_Stateful_1.monitor_1"
 name="fail-count-rsc_Stateful_1#monitor_1" value="1"/>
<nvpair name="fail-count-rsc_Stateful_1" value="4"/>
> ha1:~ #
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: questions about startup fencing

2017-12-04 Thread Kristoffer Grönlund
Tomas Jelinek <tojel...@redhat.com> writes:

>> 
>> * how is it shutting down the cluster when issuing "pcs cluster stop --all"?
>
> First, it sends a request to each node to stop pacemaker. The requests 
> are sent in parallel which prevents resources from being moved from node 
> to node. Once pacemaker stops on all nodes, corosync is stopped on all 
> nodes in the same manner.
>
>> * any race condition possible where the cib will record only one node up 
>> before
>>the last one shut down?
>> * will the cluster start safely?

That definitely sounds racy to me. The best idea I can think of would be
to set all nodes except one in standby, and then shutdown pacemaker
everywhere...
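As a dry-run sketch of that idea (node names from this thread; the real
command would be "crm node standby", only echoed here, not executed):

```shell
# Print the standby commands for every node except a designated
# "last man standing"; nothing touches a live cluster.
last_standing="ha1"
cmds=""
for node in ha1 ha2; do
    [ "$node" = "$last_standing" ] && continue
    cmds="${cmds}crm node standby ${node}
"
done
printf '%s' "$cmds"
```

Once the other nodes are in standby and resources have settled, pacemaker
could then be stopped everywhere without resources bouncing between nodes.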

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] questions about startup fencing

2017-11-29 Thread Kristoffer Grönlund
Adam Spiers <aspi...@suse.com> writes:

>
> OK, so reading between the lines, if we don't want our cluster's
> latest config changes accidentally discarded during a complete cluster
> reboot, we should ensure that the last man standing is also the first
> one booted up - right?

That would make sense to me, but I don't know if it's the only
solution. If you separately ensure that they all have the same
configuration first, you could start them in any order I guess.

>
> If so, I think that's a perfectly reasonable thing to ask for, but
> maybe it should be documented explicitly somewhere?  Apologies if it
> is already and I missed it.

Yeah, maybe a section discussing both starting and stopping a whole
cluster would be helpful, but I don't know if I feel like I've thought
about it enough myself. Regarding the HP Service Guard commands that
Ulrich Windl mentioned, the very idea of such commands offends me on
some level but I don't know if I can clearly articulate why. :D

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] questions about startup fencing

2017-11-29 Thread Kristoffer Grönlund
Adam Spiers <aspi...@suse.com> writes:

> - The whole cluster is shut down cleanly.
>
> - The whole cluster is then started up again.  (Side question: what
>   happens if the last node to shut down is not the first to start up?
>   How will the cluster ensure it has the most recent version of the
>   CIB?  Without that, how would it know whether the last man standing
>   was shut down cleanly or not?)

This is my opinion, I don't really know what the "official" pacemaker
stance is: There is no such thing as shutting down a cluster cleanly. A
cluster is a process stretching over multiple nodes - if they all shut
down, the process is gone. When you start up again, you effectively have
a completely new cluster.

When starting up, how is the cluster, at any point, to know if the
cluster it has knowledge of is the "latest" cluster? The next node could
have a newer version of the CIB which adds yet more nodes to the
cluster.

The only way to bring up a cluster from being completely stopped is to
treat it as creating a completely new cluster. The first node to start
"creates" the cluster and later nodes join that cluster.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] How much cluster-glue support is still needed in Pacemaker?

2017-11-17 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> We're starting work on Pacemaker 2.0, which will remove support for the
> heartbeat stack.
>
> cluster-glue was traditionally associated with heartbeat. Do current
> distributions still ship it?
>
> Currently, Pacemaker uses cluster-glue's stonith/stonith.h to support
> heartbeat-class stonith agents via the fence_legacy agent. If this is
> still widely used, we can keep this support.
>
> Pacemaker also checks for heartbeat/glue_config.h and uses certain
> configuration values there in favor of Pacemaker's own defaults (e.g.
> the value of HA_COREDIR instead of /var/lib/pacemaker/cores). Does
> anyone still use the cluster-glue configuration for such things? If
> not, I'd prefer to drop this.

Hi Ken,

We're still shipping it, but mostly only for the legacy agents which we
still use - although we aim to phase them out in favor of fence-agents.

I would say that if you can keep the fence_legacy agent intact, dropping
the rest is OK.

Cheers,
Kristoffer

> -- 
> Ken Gaillot <kgail...@redhat.com>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Pacemaker 1.1.18 Release Candidate 4

2017-11-03 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> I decided to do another release candidate, because we had a large
> number of changes since rc3. The fourth release candidate for Pacemaker
> version 1.1.18 is now available at:
>
> https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.18-
> rc4
>
> The big changes are numerous scalability improvements and bundle fixes.
> We're starting to test Pacemaker with as many as 1,500 bundles (Docker
> containers) running on 20 guest nodes running on three 56-core physical
> cluster nodes.

Hi Ken,

That's really cool. What's the size of the CIB with that kind of
configuration? I guess it would compress pretty well, but still.

Cheers,
Kristoffer

>
> For details on the changes in this release, see the ChangeLog.
>
> This is likely to be the last release candidate before the final
> release next week. Any testing you can do is very welcome.
> -- 
> Ken Gaillot <kgail...@redhat.com>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Azure Resource Agent

2017-09-18 Thread Kristoffer Grönlund
> #--set the ipconfig name
> AZ_IPCONFIG_NAME="ipconfig-""$OCF_RESKEY_ip"
> logIt "debug1: AZ_IPCONFIG_NAME=$AZ_IPCONFIG_NAME"
>
> #--get the resource group name
> AZ_RG_NAME=$(az group list|grep name|cut -d":" -f2|sed "s/  *//g"|sed 
> "s/\"//g"|sed "s/,//g")
> if [ -z "$AZ_RG_NAME" ]
> then
> logIt "could not determine the Azure resource group name"
> exit $OCF_ERR_GENERIC
> else
> logIt "debug1: AZ_RG_NAME=$AZ_RG_NAME"
> fi
>
> #--get the nic name
> AZ_NIC_NAME=$(az vm nic list -g $AZ_RG_NAME --vm-name $MY_HOSTNAME|grep 
> networkInterfaces|cut -d"/" -f9|sed "s/\",//g")
> if [ -z "$AZ_NIC_NAME" ]
> then
> logIt "could not determine the Azure NIC name"
> exit $OCF_ERR_GENERIC
> else
> logIt "debug1: AZ_NIC_NAME=$AZ_NIC_NAME"
> fi
>
> #--get the vnet and subnet names
> R=$(az network nic show --name $AZ_NIC_NAME --resource-group $AZ_RG_NAME|grep 
> -i subnets|head -1|sed "s/  */ /g"|cut -d"/" -f9,11|sed "s/\",//g")
> OLDIFS=$IFS
> IFS="/"
> R_ARRAY=( $R )
> IFS=$OLDIFS
> AZ_VNET_NAME=${R_ARRAY[0]}
> AZ_SUBNET_NAME=${R_ARRAY[1]}
> if [ -z "$AZ_VNET_NAME" ]
> then
> logIt "could not determine Azure vnet name"
> exit $OCF_ERR_GENERIC
> else
> logIt "debug1: AZ_VNET_NAME=$AZ_VNET_NAME"
> fi
> if [ -z "$AZ_SUBNET_NAME" ]
> then
>     logIt "could not determine the Azure subnet name"
> exit $OCF_ERR_GENERIC
> else
> logIt "debug1: AZ_SUBNET_NAME=$AZ_SUBNET_NAME"
> fi
>
> ##
> #  Actions
> ##
>
> case $__OCF_ACTION in
> meta-data) meta_data
> RC=$?
> ;;
> usage|help)   azip_usage
> RC=$?
> ;;
> start) azip_start
> RC=$?
> ;;
> stop) azip_stop
> RC=$?
> ;;
> status)  azip_query
> RC=$?
> ;;
> monitor)  azip_monitor
> RC=$?
> ;;
> validate-all);;
> *)azip_usage
> RC=$OCF_ERR_UNIMPLEMENTED
> ;;
> esac
>
> #--exit with return code
> logIt "debug1: exiting $SCRIPT_NAME with code $RC"
> exit $RC
>
> #--end
>
> --
> Eric Robinson
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] PostgreSQL Automatic Failover (PAF) v2.2.0

2017-09-14 Thread Kristoffer Grönlund
Jehan-Guillaume de Rorthais <j...@dalibo.com> writes:

>> Planning to move this under the Clusterlabs github group?
>
> Yes!
>
> I'm not sure how long and how many answers I should wait for to reach a
> community agreement. But first answers are encouraging :)

Regarding your concerns with submitting it into resource-agents, I would
say that moving into ClusterLabs/ as a separate repository at first
makes sense to me as well. We can look at including it in
resource-agents and the implications of supporting various
language-libraries for OCF agents later.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Moving PAF to clusterlabs ?

2017-09-08 Thread Kristoffer Grönlund
Jehan-Guillaume de Rorthais <j...@dalibo.com> writes:

> Hi All,
>
> I am currently thinking about moving the RA PAF (PostgreSQL Automatic 
> Failover)
> out of the Dalibo organisation on Github. Code and website.

[snip]

> Note that part of the project (some perl modules) might be pushed to
> resource-agents independently, see [2]. Two years after, I'm still around on
> this project. Obviously, I'll keep maintaining it on my Dalibo's and personal
> time.
>
> Thoughts?

Hi,

I for one would be happy to see it included in the resource-agents
repository. If people are worried about the additional dependency on
perl, we can just add a --without-perl flag (or something along those
lines) to the Makefile.

We already have different agents for the same application but with
different contexts so this wouldn't be anything new.

Cheers,
Kristoffer

>
> [1] http://lists.clusterlabs.org/pipermail/developers/2015-August/66.html
> [2] http://lists.clusterlabs.org/pipermail/developers/2015-August/68.html
>
> Regards,
> -- 
> Jehan-Guillaume de Rorthais
> Dalibo
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Oh how we've grown! :D

2017-09-08 Thread Kristoffer Grönlund
Digimer <li...@alteeve.ca> writes:

> Here are the attendee pictures from 2015 and from this summit today.
>
> So amazing to see how far our community has come. I am stoked to see how
> much larger we are still in 2019!
>

A huge thank you again to everyone! You are all awesome.

Cheers,
Kristoffer

>
>
>
> -- 
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit: Presentation material

2017-09-07 Thread Kristoffer Grönlund
Hi everyone,

I got some requests to provide the slides for the presentations at the
summit, and I thought that the best solution is probably to do what some
presenters already did on the Trello board: For those of you who have
slides to share, please attach them to the card of your presentation at
on the Trello board:

https://trello.com/b/LNUrtV1Q/clusterlabs-summit-2017

There's also a link to the group photo on the plan wiki now:

http://plan.alteeve.ca/index.php/Main_Page

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit: Expect rain tomorrow

2017-09-05 Thread Kristoffer Grönlund
Hey everyone!

I am going to try to be at the event area at 8 in the morning tomorrow,
and I wouldn't recommend showing up earlier than that. The doors will
probably be locked. The summit itself is scheduled to start at 9.

Unfortunately it seems we can expect rain tomorrow, so I wanted to send
out a small warning: In case you haven't brought an umbrella or rain
gear, now is the time to go out and get it.

For anyone needing to take a taxi, the number is +49 (0911) 19 410, or
the reception here at the SUSE office can help call a taxi as
well. It is also possible to take the U-bahn to Maxfeld station, though
unfortunately there is a short walk to the office even then.

Cheers and welcome,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Pacemaker in Azure

2017-08-25 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

> Hi Kristoffer --
>
> If you would be willing to share your AWS ip control agent(s), I think those 
> would be very helpful to us and the community at large. I'll be happy to 
> share whatever we come up with in terms of an Azure agent when we're all done.

I meant the agents that are in resource-agents already:

https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/awsvip
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/awseip
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/aws-vpc-route53

You'll probably also be interested in fencing: There are agents for
fencing both on AWS and Azure in the fence-agents repository.

Cheers,
Kristoffer

>
> --
> Eric Robinson
>
> -Original Message-
> From: Kristoffer Grönlund [mailto:kgronl...@suse.com] 
> Sent: Friday, August 25, 2017 3:16 AM
> To: Eric Robinson <eric.robin...@psmnv.com>; Cluster Labs - All topics 
> related to open-source clustering welcomed <users@clusterlabs.org>
> Subject: Re: [ClusterLabs] Pacemaker in Azure
>
> Eric Robinson <eric.robin...@psmnv.com> writes:
>
>> I deployed a couple of cluster nodes in Azure and found out right away that 
>> floating a virtual IP address between nodes does not work because Azure does 
>> not honor IP changes made from within the VMs. IP changes must be made to 
>> virtual NICs in the Azure portal itself. Anybody know of an easy way around 
>> this limitation?
>
> You will need a custom IP control agent for Azure. We have a series of agents 
> for controlling IP addresses and domain names in AWS, but there is no agent 
> for Azure IP control yet. (At least as far as I am aware).
>
> Cheers,
> Kristoffer
>
>>
>> --
>> Eric Robinson
>>
>
> --
> // Kristoffer Grönlund
> // kgronl...@suse.com

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Pacemaker in Azure

2017-08-25 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

> I deployed a couple of cluster nodes in Azure and found out right away that 
> floating a virtual IP address between nodes does not work because Azure does 
> not honor IP changes made from within the VMs. IP changes must be made to 
> virtual NICs in the Azure portal itself. Anybody know of an easy way around 
> this limitation?

You will need a custom IP control agent for Azure. We have a series of
agents for controlling IP addresses and domain names in AWS, but there
is no agent for Azure IP control yet. (At least as far as I am aware).

Cheers,
Kristoffer

>
> --
> Eric Robinson
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit - Finding the office

2017-08-25 Thread Kristoffer Grönlund
Hello everyone,

The summit is coming closer, and I thought I should send out a brief
mail about how to find the event area once you are in Nuremberg.

Finding the office
==

The SUSE office is within walking distance from the conference hotel and
the old town center. The closest subway station is the Maxfeld station
on the U3 line.

Google maps link: https://goo.gl/maps/JMzSnv8ZGqF2

If you are coming from the Central Station, take the U3 directly to
Maxfeld (direction Friedrich-Ebert-Platz).

From the airport, take the U2 to Rathenauplatz, then change to U3
(direction Friedrich-Ebert-Platz) and exit at Maxfeld.

Finding the event
=

The summit will take place in the SUSE Event Area at Rollnerstraße
8. This is the same building as the SUSE offices, but it is a separate
ground floor entrance. We will put up posters to make this clear.

The regular SUSE reception is on the 3rd floor, and they have kindly
asked me to direct everyone attending the summit directly to the event
area.

Finding the hotel
=

The main conference hotel is the Sorat Saxx, located on the Hauptmarkt
square in the Nuremberg old town. This is within easy walking distance
from both the Central Station and the SUSE office. The closest subway
station is Lorenzkirche on the U1 line.

Hotel website: https://www.sorat-hotels.com/en/hotel/saxx-nuernberg.html

If you have any questions or concerns, please feel free to contact me.

See you there!
// Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] SLES11 SP4: Strange problem with "(crm configure) commit"

2017-08-21 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

> Hi! 
>
> I just had a strange problem: When trying to "clean up" the cib configuration 
> (acually deleting unneded "operations" lines), I failed to commit the change, 
> even through it verified OK:
>
> crm(live)configure# commit
> Call cib_apply_diff failed (-206): Application of an update diff failed
> ERROR: could not patch cib (rc=206)
> INFO: offending xml diff: 

It looks to me (from a cursory glance) like you may be hitting a bug
with the patch generation in pacemaker. But there isn't enough details
to say for sure.

Try running crmsh with the "-dR" command line options to get it to
output the patch it tries to apply to the log.

Cheers,
Kristoffer

>
> In Syslog I see this:
> Aug 21 15:01:48 h02 cib[19397]:error: xml_apply_patchset_v2: Moved 
> meta_attributes.14926208 to position 1 instead of 2 (0xe3f0f0)
> Aug 21 15:01:48 h02 cib[19397]:error: xml_apply_patchset_v2: Moved 
> meta_attributes.9876096 to position 1 instead of 2 (0xe3c470)
> Aug 21 15:01:48 h02 cib[19397]:error: xml_apply_patchset_v2: Moved 
> utilization.10594784 to position 1 instead of 2 (0x96a2b0)
> Aug 21 15:01:48 h02 cib[19397]:error: xml_apply_patchset_v2: Moved 
> meta_attributes.11397008 to position 1 instead of 2 (0xacc5b0)
> Aug 21 15:01:48 h02 cib[19397]:  warning: cib_server_process_diff: Something 
> went wrong in compatibility mode, requesting full refresh
> Aug 21 15:01:48 h02 cib[19397]:  warning: cib_process_request: Completed 
> cib_apply_diff operation for section 'all': Application of an update diff 
> failed (rc=-206, origin=local/cibadmin/2, version=1.65.23)
>
> What could be causing this? I think I did the same change about three years 
> ago without problem (with different software, of course).
>
> # rpm -q pacemaker corosync crmsh
> pacemaker-1.1.12-18.1
> corosync-1.4.7-0.23.5
> crmsh-2.1.2+git132.gbc9fde0-18.2
> (latest)
>
> Regards,
> Ulrich
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] big trouble with a DRBD resource

2017-08-10 Thread Kristoffer Grönlund
"Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de> writes:

> In both cases i'm inside crmsh.
> The difference is that i always enter the complete command from the highest 
> level of crm. This has the advantage that i can execute any command from the 
> history directly.
> And this has a kind of autocommit.
> If i would enter a lower level, then my history would be less useful. I always 
> have to go to the respective level before executing the command from the 
> history.
> But then i have to commit.
> Am i the only one who does it like this ? Nobody stumbled across this ?
> I always wondered about my ineffective commit, but never got the idea that 
> such a small difference is the reason.

You are right, this is a quirk of crmsh: Each level has its own "state",
and exiting the level triggers a commit. Running a command like
"configure primitive ..." results internally in three movements;

* enter the configure level: This fetches the CIB and checks that it is writable
* create the primitive: This updates the internal copy of the CIB
* exit the configure level: This creates, verifies and applies a patch to the 
CIB
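A session sketch of those three movements (the resource name "d1" is
illustrative):

```
crm(live)# configure                  # CIB fetched, writability checked
crm(live)configure# primitive d1 ocf:pacemaker:Dummy op monitor interval=10s
crm(live)configure# verify
crm(live)configure# commit            # patch created, verified and applied
crm(live)configure# up                # leaving the level also triggers a commit
```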

I can't speak for others, but somehow this has never caused me problems
as far as I can remember. Either I have been using it interactively from
within the configure section, or I have been running commands from
bash. I can't recall if that's because I was told at some point or if it
was made clear in the documentation somewhere.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit 2017: Please register!

2017-08-09 Thread Kristoffer Grönlund
Hi everyone,

This mail is for attendees of the Clusterlabs Summit event in Nuremberg,
September 6-7 2017. If it didn't arrive via the Clusterlabs mailing
list and you're not going but got this mail anyway, please let me know
since apparently I have you on my list of possible attendees ;)

Apologies for springing this on you at such a late stage, but as we are
investigating dinner options, making badges and making sure there are
enough chairs for everyone at the event, it became more and more clear
that it would be very useful to have a better grasp of how many people
are coming to the event.

URL to sign up
--

https://www.eventbrite.com/e/clusterlabs-summit-2017-dinner-tickets-3689052

To make it as easy as possible, I created an event on Eventbrite for
this purpose. Signing up is not a requirement! However, it would be
great if you could send an email to me confirming your attendance
regardless, in case you are unhappy about using Eventbrite.

Also, it would be great if you could register as quickly as possible so
that we can make dinner reservations early enough to hopefully be able
to fit everyone into one space.

Thank you,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] big trouble with a DRBD resource

2017-08-06 Thread Kristoffer Grönlund
"Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de> writes:

> Hi,
>
> first: is there a tutorial or s.th. else which helps in understanding what 
> pacemaker logs in syslog and /var/log/cluster/corosync.log ?
> I try hard to find out what's going wrong, but they are difficult to 
> understand, also because of the amount of information.
> Or should i deal more with "crm history" or hb_report ?

I like to use crm history log to get the logs from all the nodes in a
single flow, but it depends quite a bit on configuration what gets
logged where..

>
> What happened:
> I tried to configure a simple drbd resource following 
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html-single/Clusters_from_Scratch/index.html#idm140457860751296
> I used this simple snip from the doc:
> configure primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata \
> op monitor interval=60s

I'll try to sum up the issues I see, from a glance:

* The drbd resource is a multi-state / master-slave resource, which is
  technically a variant of a clone resource where different clones can
  either be in a primary or secondary state. To configure it correctly,
  you'll need to create a master resource as well. Doing this with a
  single command is unfortunately a bit painful. Either use crm
  configure edit, or the interactive crm mode (with a verify / commit
  after creating both the primitive and the master resources).

* You'll need to create monitor operations for both the master and slave
  roles, as you note below, and set explicit timeouts for all
  operations.

* Make sure the wwwdata DRBD resource exists, is accessible from both
  nodes, and is in a good state to begin with (that is, not
  split-brained).

I would recommend following one of the tutorials provided by Linbit
themselves which show how to set this stuff up correctly, since it is
quite a bit involved.
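For reference, such a configuration would look roughly like this (crm
shell syntax; an untested sketch with illustrative timeouts, so follow
the Linbit documentation for values appropriate to your setup):

```
primitive WebData ocf:linbit:drbd \
    params drbd_resource=wwwdata \
    op monitor role=Master interval=29s timeout=20s \
    op monitor role=Slave interval=31s timeout=20s \
    op start timeout=240s op stop timeout=100s
ms WebDataClone WebData \
    meta master-max=1 master-node-max=1 clone-max=2 \
    clone-node-max=1 notify=true
```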

> Btw: is there a history like in the bash where i see which crm command i 
> entered at which time ? I know that crm history is mighty, but didn't find 
> that.

We don't have that yet :/ If you're not in interactive mode, your bash
history should have the commands though.

> no backup - no mercy

lol ;)

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: Antw: Re: from where does the default value for start/stop op of a resource come ?

2017-08-02 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>
> See my proposal above. ;-)

Hmm, yes. It's a possibility. Magic values rarely end up making things
simpler though :/

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: from where does the default value for start/stop op of a resource come ?

2017-08-02 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>
> What aout this priority for newly added resources:?
> 1) Use the value specified explicitly
> 2) Use the value the RA's metadata specifies
> 3) Use the global default
>
> With "use" I mean "add it to the RA configuration".

Yeah, I've considered it. The main issue I see with making the change to
crmsh now is that it would also be confusing, when configuring a
resource without any operations and getting operations defined
anyway. Also, it would be impossible not to define operations that have
defaults in the metadata.

One idea might be to have a new command which inserts missing operations
and operation timeouts based on the RA metadata.
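A rough sketch of what such a command could do, using a stand-in for the
XML that "crm ra meta" would provide (the snippet, the sed pattern and
the whole workflow are illustrative, not an existing crmsh feature):

```shell
# Turn the <action> entries of OCF metadata into crm "op" lines,
# using the agent's advisory timeouts as the configured values.
metadata='<action name="start" timeout="240" />
<action name="monitor" timeout="20" interval="60" />'

ops=$(printf '%s\n' "$metadata" |
  sed -n 's/.*<action name="\([a-z]*\)" timeout="\([0-9]*\)"\( interval="\([0-9]*\)"\)\{0,1\}.*/\1 \2 \4/p' |
  while read -r name timeout interval; do
      if [ -n "$interval" ]; then
          echo "op $name interval=${interval}s timeout=${timeout}s"
      else
          echo "op $name timeout=${timeout}s"
      fi
  done)
printf '%s\n' "$ops"
```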

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] from where does the default value for start/stop op of a resource come ?

2017-08-02 Thread Kristoffer Grönlund
"Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de> writes:

> Hi,
>
> i'm wondering from where the default values for operations of a resource come 
> from.

[snip]

>
> Is it hardcoded ? All timeouts i found in my config were explicitly related 
> to a dedicated resource.
> What are the values for the hardcoded defaults ?
>
> Does that also mean that what the description of the RA says as "default" 
> isn't a default, but just a recommendation ?

The default timeout is set by the default-action-timeout property, and
the default value is 20s.

You are correct, the timeout values defined in the resource agent are
not used automatically. They are recommended minimums, and the
thought as I understand it (this predates my involvement in HA) is that
any timeouts need to be reviewed carefully by the administrator.

I agree that it is somewhat surprising.
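
To make the distinction concrete, here is a sketch in crmsh syntax
(resource name and values are illustrative, not from the thread): the
metadata timeouts only take effect if you copy them into the
configuration, either per operation or via the operation defaults.

```shell
# Raise the global default from 20s (the op_defaults timeout is the
# modern equivalent of the default-action-timeout property):
crm configure op_defaults timeout=60s

# Or set the RA's recommended timeouts explicitly on the resource:
crm configure primitive p_ip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 \
    op start timeout=20s op stop timeout=20s \
    op monitor interval=10s timeout=20s
```

Explicit per-operation values always win over the global default.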

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit 2017 (Sept. 6-7 in Nuremberg) - One month left!

2017-08-01 Thread Kristoffer Grönlund
Hey everyone!

Here's a quick update for the upcoming Clusterlabs Summit at the SUSE
office in Nuremberg in September:

The time to register for the pool of hotel rooms has now expired - we
have sent the final list of names to the hotel. There may still be hotel
rooms available at the Sorat Saxx or other hotels in Nuremberg, so if
anyone missed the deadline and still needs a room, either contact me or
feel free to contact the hotel directly. The same goes for any changes,
for those who have reservations: Please either contact me, or contact
the hotel directly at i...@saxx-nuernberg.de.

The schedule is being sorted out right now, and the planning wiki will
be updated with a preliminary schedule soon. If there is anyone who
would like to present on a topic or would like to discuss a topic that
isn't on the wiki yet, now is the time to add it there.

Other than that, I have no further remarks, except to wish everyone
welcome to Nuremberg in a month! Feel free to contact me with
any concerns or issues related to the summit, and I'll do what I can to
help out.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] [ClusterLabs Developers] [HA/ClusterLabs Summit] Key-Signing Party, 2017 Edition

2017-07-24 Thread Kristoffer Grönlund
Jan Pokorný <jpoko...@redhat.com> writes:

> [ Unknown signature status ]
> Hello cluster masters :-)
>
> as there's little less than 7 weeks left to "The Summit" meetup
> (<http://plan.alteeve.ca/>), it's about time to get the ball
> rolling so we can voluntarily augment the digital trust amongst
> us the attendees, on OpenGPG basis.
>
> Doing that, we'll actually establish a tradition since this will
> be the second time such event is being kicked off (unlike the birds
> of the feather gathering itself, was edu-feathered back then):
>
>   <https://people.redhat.com/jpokorny/keysigning/2015-ha/>
>   <http://lists.linux-ha.org/pipermail/linux-ha/2015-January/048507.html>
>
> If there are no objections, yours truly will conduct this undertaking.
> (As an aside, I am toying with an idea of optimizing the process
> a bit now that many keys are cross-signed already; I doubt there's
> a value of adding identical signatures just with different timestamps,
> unless, of course, the inscribed level of trust is going to change,
> presumably elevate -- any comments?)

Hi Jan,

No objections from me, thank you for taking charge of this!

Cheers,
Kristoffer


>
> * * *
>
> So, going to attend summit and want your key signed while reciprocally
> spreading the web of trust?
> Awesome, let's reuse the steps from the last time:
>
> Once you have a key pair (and provided that you are using GnuPG),
> please run the following sequence:
>
> # figure out the key ID for the identity to be verified;
> # IDENTITY is either your associated email address/your name
> # if only single key ID matches, specific key otherwise
> # (you can use "gpg -K" to select a desired ID at the "sec" line)
> KEY=$(gpg --with-colons 'IDENTITY' | grep '^pub' | cut -d: -f5)
>
> # export the public key to a file that is suitable for exchange
> gpg --export -a -- $KEY > $KEY
>
> # verify that you have an expected data to share
> gpg --with-fingerprint -- $KEY
>
> with IDENTITY adjusted as per the instruction above, and send me the
> resulting $KEY file, preferably in a signed (or even encrypted[*]) email
> from an address associated with that very public key of yours.
>
> Timeline?
> Please, send me your public keys *by 2017-09-05*, off-list and
> best with [key-2017-ha] prefix in the subject.  I will then compile
> a list of the attendees together with their keys and publish it at
> <https://people.redhat.com/jpokorny/keysigning/2017-ha/>
> so it can be printed beforehand.
>
> [*] You can find my public key at public keyservers:
> <http://pool.sks-keyservers.net/pks/lookup?op=vindex=0x60BCBB4F5CD7F9EF>
> Indeed, the trust in this key should be ephemeral/one-off
> (e.g. using a temporary keyring, not a universal one before we
> proceed with the signing :)
>
> * * *
>
> Thanks for your cooperation, looking forward to this side stage
> (but nonetheless important if release or commit[1] signing is to get
> traction) happening and hope this will be beneficial to all involved.
>
> See you there!
>
>
> [1] for instance, see:
> <https://github.com/blog/2144-gpg-signature-verification>
> <https://pagure.io/pagure/issue/885>
>
> -- 
> Jan (Poki)

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] crmsh: Release 3.0.1

2017-07-21 Thread Kristoffer Grönlund
Hello everyone!

I'm happy to announce the release of crmsh version 3.0.1 today. This
is mainly a bug fix release, so no new exciting features and mainly
fixes to the new bootstrap functionality added in 3.0.0.

I would also like to take the opportunity to introduce a new core
developer for crmsh, Xin Liang! For this release he has contributed
some of the bug fixes discovered, but he has also contributed a
rewrite of hb_report into Python, as well as worked on improving the
tab completion support in crmsh. I also want to recognize the hard
work of Shiwen Zhang who initially started the work of rewriting the
hb_report script in Python.

For the complete list of changes in this release, see the ChangeLog:

* https://github.com/ClusterLabs/crmsh/blob/3.0.1/ChangeLog

The source code can be downloaded from Github:

* https://github.com/ClusterLabs/crmsh/releases/tag/3.0.1

This version of crmsh (or a version very close to it) is already
available in openSUSE Tumbleweed, and packages for several popular
Linux distributions will be available from the Stable repository at
the OBS:

* http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/

Archives of the tagged release:

* https://github.com/ClusterLabs/crmsh/archive/3.0.1.tar.gz
* https://github.com/ClusterLabs/crmsh/archive/3.0.1.zip

As usual, a huge thank you to all contributors and users of crmsh!

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Introducing the Anvil! Intelligent Availability platform

2017-07-10 Thread Kristoffer Grönlund
Digimer <li...@alteeve.ca> writes:

> Hi all,
>
>   I suspect by now, many of you here have heard me talk about the Anvil!
> intelligent availability platform. Today, I am proud to announce that it
> is ready for general use!
>
> https://github.com/ClusterLabs/striker/releases/tag/v2.0.0
>

Cool, congratulations!

Cheers,
Kristoffer

>
>   Now, time to start working full time on version 3!
>
> -- 
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Installing on SLES 12 -- Where's the Repos?

2017-06-16 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

>> If you're looking to run without support, you can run openSUSE Leap - it's 
>> the
>> closest equivalent to centOS in the SUSE world and the HA packages are all in
>> there.
>> 
>
> Out of curiosity, do the openSUSE Leap repos and packages work with SLES? 

I know that there are some base system differences that could cause
problems, things like Leap using systemd/journald for logging while SLES
is still logging via syslog-ng (IIRC)... so it's possible that you could
get into problems if you mix versions. And adding the Leap repositories
to SLES will probably mess things up since both deliver slightly
different versions of the base system.

For SLES, there's now the Package Hub which has open source packages
taken from Leap and confirmed not to conflict with SLES, so you can mix
a supported base system with unsupported open source packages with less
risk for breaking anything:

https://packagehub.suse.com/

Cheers,
Kristoffer

>
> --Eric

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Installing on SLES 12 -- Where's the Repos?

2017-06-16 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

> We've been a Red Hat/CentOS shop for 10+ years and have installed 
> Corosync+Pacemaker+DRBD dozens of times using the repositories, all for free.
>
> We are now trying out our first SLES 12 server, and I'm looking for the 
> repos. Where the heck are they? I went looking, and all I can find is the 
> SLES "High Availability Extension," which I must pay $700/year for? No 
> freaking way!
>
> This is Linux we're talking about, right? There's got to be an easy way to 
> install the cluster without paying for a subscription... right?
>
> Someone talk me off the ledge here.
>

If you're looking to run without support, you can run openSUSE Leap -
it's the closest equivalent to CentOS in the SUSE world and the HA
packages are all in there.

(I'd recommend the supported version, of course ;)

Cheers,
Kristoffer


> --
> Eric Robinson
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] "Connecting" Pacemaker with another cluster manager

2017-05-23 Thread Kristoffer Grönlund
Timo <t...@kroenchenstadt.de> writes:

> Hi,
>
> I have a proprietary cluster manager running on a bunch (four) of nodes.
> It decides to run the daemon for which HA is required on its own set of
> (undisclosed) requirements and decisions. This is, unfortunately,
> unavoidable due to business requirements.
>
> However, I have to put also Pacemaker onto the nodes in order to provide
> an additional daemon running in HA mode. (I cannot do this using the
> existing cluster manager, as this is a closed system.)
>
> I have to make sure that the additional daemon (which I plan to
> coordinate using Pacemaker) only runs on the machine where the daemon
> (controlled by the existing, closed cluster manager) runs. I could check
> for local VIPs, for example, to check whether it runs on a node or not.
>
> Is there any way to make Pacemaker "check" for existence of a local
> (V)IP so that I could "connect" both cluster managers?
>
> In short: I need Pacemaker to put the single instance of a daemon
> exactly onto the node the other cluster manager decided to run the
> (primary) daemon.

Hi,

I'm not sure I completely understand the problem description, but if I
parsed it correctly:

What you can do is run an external script which sets a node attribute on
the node that has the external cluster manager daemon, and have a
constraint which locates the additional daemon based on that node
attribute.
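
A sketch of that approach in crmsh/CLI terms; the attribute name
ext-primary and the resource name p_extra are made up for the example:

```shell
# An external script, run wherever the closed cluster manager places its
# daemon, marks that node with a node attribute:
crm_attribute --node node1 --name ext-primary --update true --lifetime forever

# Constraint: the pacemaker-managed daemon may only run where the
# attribute is set to true:
crm configure location l-follow-ext p_extra \
    rule -inf: not_defined ext-primary or ext-primary ne true
```

If the external manager can move its daemon at runtime, a transient
attribute (--lifetime reboot) may be the better fit, since it is cleared
automatically when the node restarts.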

Cheers,
Kristoffer

>
> Best regards,
>
> Timo
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Clusterlabs Summit 2017 (Nuremberg, 6-7 September) - Hotels and Topics

2017-05-02 Thread Kristoffer Grönlund
Hi everyone!

Here's a quick update on the summit happening at the SUSE office in
Nuremberg on September 6-7.

I am still collecting hotel reservations from attendees. In order to
notify the hotel about how many rooms we actually need, I'll need a
complete list of people who want to attend before 15 June, at the
latest. So if you plan to attend and need a hotel room, let me know as
soon as possible by emailing me! There are 40 hotel rooms reserved,
and about half of those are claimed at this point.

We are starting to have a preliminary list of topics ready. The event
area has a projector and A/V equipment available, so we should be able
to show slides for those wanting to present a particular topic.

This is the current list of topics:

Requester/Presenter           Topic

Andrew Beekhof or Ken Gaillot New container "bundle" feature in Pacemaker   
Ken Gaillot   What would Pacemaker 1.2 or 2.0 look like?
Ken Gaillot   Ideas for the OCF resource agent standard 
Klaus Wenninger   Recent work and future plans for SBD  
Chrissie Caulfieldknet and corosync 3   
Chris Feist (requestor)   kubernetes
Chris Feist (requestor)   Multisite (QDevice/Booth) 
Madison Kelly ScanCore and "Intelligent Availability"   
Kristoffer Gronlund,  Hawk, Cluster API and future plans
Ayoub Belarbi

We also have Kai Wagner from the openATTIC team attending, and he has
agreed to present openATTIC. For those who aren't familiar with it,
openATTIC is a storage management tool with some support for managing
things like LVM, DRBD and Ceph.

I am also happy to say that Adam Spiers from the SUSE Cloud team will be
attending the summit, and hopefully I can convince him to present their
work on using Pacemaker with Openstack, the current state of Openstack
HA and perhaps some of his future plans and wishes around HA.

Keep adding topics to the list! We'll work out a rough schedule for
the two days as the event draws nearer, but I'd hope to leave enough
room for deeper discussions around the topics as we work through
them.

As a reminder, the plans for the summit are being collected at the
Alteeve! planning wiki, here:

http://plan.alteeve.ca/index.php/Main_Page

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Coming in Pacemaker 1.1.17: start a node in standby

2017-04-25 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> Hi all,
>
> Pacemaker 1.1.17 will have a feature that people have occasionally asked
> for in the past: the ability to start a node in standby mode.
>
> It will be controlled by an environment variable (set in
> /etc/sysconfig/pacemaker, /etc/default/pacemaker, or wherever your
> distro puts them):
>
>
> # By default, nodes will join the cluster in an online state when they first
> # start, unless they were previously put into standby mode. If this
> variable is
> # set to "standby" or "online", it will force this node to join in the
> # specified state when starting.
> # (experimental; currently ignored for Pacemaker Remote nodes)
> # PCMK_node_start_state=default
>
>
> As described, it will be considered experimental in this release, mainly
> because it doesn't work with Pacemaker Remote nodes yet. However, I
> don't expect any problems using it with cluster nodes.
>
> Example use cases:
>
> You want want fenced nodes to automatically start the cluster after a
> reboot, so they contribute to quorum, but not run any resources, so the
> problem can be investigated. You would leave
> PCMK_node_start_state=standby permanently.
>
> You want to ensure a newly added node joins the cluster without problems
> before allowing it to run resources. You would set this to "standby"
> when deploying the node, and remove the setting once you're satisfied
> with the node, so it can run resources at future reboots.
>
> You want a standby setting to last only until the next boot. You would
> set this permanently to "online", and any manual setting of standby mode
> would be overwritten at the next boot.
>
> Many thanks to developers Alexandra Zhuravleva and Sergey Mishin, who
> contributed this feature as part of a project with EMC.

One of those features that seem obvious in retrospect. Great addition,
thanks to everyone involved!
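
A sketch of how the setting would be used; paths and node names vary by
distro and site, so treat this as an assumption-laden example:

```shell
# Persist the setting (Red Hat-style path shown; Debian-style systems
# use /etc/default/pacemaker instead):
echo 'PCMK_node_start_state=standby' >> /etc/sysconfig/pacemaker

# After the next start, the node joins in standby and contributes to
# quorum without running resources. Once satisfied, bring it online:
crm node online node1        # or: pcs node unstandby node1
```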

Cheers,
Kristoffer

> -- 
> Ken Gaillot <kgail...@redhat.com>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Wtrlt: Antw: Re: Antw: Re: how important would you consider to have two independent fencing device for each node ?

2017-04-21 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

>>> I think it works differently: One task periodically reads ist mailbox slot 
>>> for commands, and once a comment was read, it's executed immediately. Only
>> if 
>>> the read task does hang for a long time, the watchdog itself triggers a
>> reset 
>>> (as SBD seems dead). So the delay is actually made from the sum of "write 
>>> delay", "read delay", "command excution".
>
> I think you're right when sbd uses shared-storage, but there is a
> watchdog-only configuration that I believe digimer was referring to.
>
> With watchdog-only, the cluster will wait for the value of the
> stonith-watchdog-timeout property before considering the fencing successful.

I think there are some important distinctions to make, to clarify what
SBD is and how it works:

* The original SBD model uses shared storage as its fencing mechanism
  (thus the name Shared-storage based death) - when talking about
  watchdog-only SBD, a new mode only introduced in a fork of the SBD
  project, it would probably help avoid confusion to be explicit about
  that.

* Watchdog-only SBD relies on quorum to avoid split-brain or fence
  loops, and thus requires at least three nodes or an additional qdevice
  node. This is my understanding, correct me if I am wrong. Also, this
  disqualifies watchdog-only SBD from any of Digimer's setups since they are
  2-node only, so that's probably something to be aware of in this
  discussion. ;)

* The watchdog fencing in SBD is not the primary fence mechanism when
  shared storage is available. In fact, it is an optional although
  strongly recommended component. [1]

[1]: We (as in SUSE) require use of a watchdog for supported
configurations, but technically it is optional.
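
For reference, a sketch of the two modes in crmsh/sysconfig terms; the
device path and timeout values are examples, not recommendations:

```shell
# Shared-storage SBD: initialize the slot device and register the fence
# agent. /etc/sysconfig/sbd carries SBD_DEVICE and SBD_WATCHDOG_DEV.
sbd -d /dev/disk/by-id/example-sbd-disk create
crm configure primitive stonith-sbd stonith:external/sbd
crm configure property stonith-enabled=true

# Watchdog-only ("diskless") SBD: no SBD_DEVICE configured; requires
# real quorum (3+ nodes or qdevice) to avoid split-brain/fence loops.
crm configure property stonith-enabled=true stonith-watchdog-timeout=10s
```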

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Surprising semantics of location constraints with INFINITY score

2017-04-11 Thread Kristoffer Grönlund
Hi all,

I discovered today that a location constraint with score=INFINITY
doesn't actually restrict resources to running only on particular
nodes. From what I can tell, the constraint assigns the score to that
node, but doesn't change scores assigned to other nodes. So if the node
in question happens to be offline, the resource will be started on any
other node.

Example:



If node2 is offline, I see the following:

 dummy  (ocf::heartbeat:Dummy): Started node1
native_color: dummy allocation score on node1: 1
native_color: dummy allocation score on node2: -INFINITY
native_color: dummy allocation score on webui: 0

It makes some kind of sense, but seems surprising - and the
documentation is a bit unclear on the topic. In particular, the
statement that a score = INFINITY means "must" is clearly not correct in
this case. Maybe the documentation should be clarified for location
constraints?
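
For illustration, the difference can be sketched in crmsh syntax, using
the resource and node names from the example above:

```shell
# Score-based preference: dummy prefers node2, but if node2 is offline
# it can still start anywhere else:
crm configure location l-prefer dummy inf: node2

# Actual restriction: a -INFINITY rule for every node that is NOT node2,
# so the resource stays stopped when node2 is down:
crm configure location l-pin dummy rule -inf: '#uname' ne node2
```

Another way to get true "must" semantics is an opt-in cluster
(property symmetric-cluster=false), where resources may only run on
nodes that have an explicit positive location score.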

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: Rename option group resource id with pcs

2017-04-11 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>>>> Dejan Muhamedagic <deja...@fastmail.fm> schrieb am 11.04.2017 um 11:43 in
> Nachricht <20170411094352.GD8414@tuttle.homenet>:
>> Hi,
>> 
>> On Tue, Apr 11, 2017 at 10:50:56AM +0200, Tomas Jelinek wrote:
>>> Dne 11.4.2017 v 08:53 SAYED, MAJID ALI SYED AMJAD ALI napsal(a):
>>> >Hello,
>>> >
>>> >Is there any option in pcs to rename group resource id?
>>> >
>>> 
>>> Hi,
>>> 
>>> No, there is not.
>>> 
>>> Pacemaker doesn't really cover the concept of renaming a resource.
>> 
>> Perhaps you can check how crmsh does resource rename. It's not
>> impossible, but can be rather involved if there are other objects
>> (e.g. constraints) referencing the resource. Also, crmsh will
>> refuse to rename the resource if it's running.
>
> The real problem in pacemaker (as resources are created now) is that the 
> "IDs" have too much semantic, i.e. most are derived from the resource name 
> (while lacking a name attribute or element), and some required elements are 
> IDs are accessed by ID, and not by name.
>
> Examples:
> 
>value="1.1
> .12-f47ea56"/>
>
> A s and s have no name, but only an ID (it seems).
>
>   
>
> This is redundant: As the  is part of a resource (by XML structure) it's 
> unneccessary to put the name of the resource into the ID of the operation.
>
> It all looks like a kind of abuse of XML IMHO.I think the next CIB format 
> should be able to handle IDs that are free of semantics other than to denote 
> (relatively unique) identity. That is: It should be OK to assign IDs like 
> "i1", "i2", "i3", ... and besides from an IDREF the elements should be 
> accessed by structure and/or name.
>
> (If the ID should be the primary identification feature, flatten all 
> structure and drop all (redundant) names.)

The abuse of ids in the pacemaker schema is a pet peeve of mine; it
would be better to only have ids for nodes where it makes sense: Naming
resources, for example (though I would prefer human-friendly names
rather than ids with loosely defined restrictions). References to
individual XML nodes can be done via XPATH rather than having to assign
ids to every single node in the tree.
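
To illustrate the idea of addressing nodes by structure plus one
meaningful id, rather than ids on every element, here is a small sketch
using Python's standard library; the CIB fragment is made up and much
reduced, and pacemaker's actual schema does require ids on these
elements:

```python
import xml.etree.ElementTree as ET

# Reduced, illustrative CIB fragment: only the resource carries an id;
# the operation deliberately has none.
cib = ET.fromstring("""
<cib>
  <configuration>
    <resources>
      <primitive id="dummy" class="ocf" provider="heartbeat" type="Dummy">
        <operations>
          <op name="monitor" interval="10s" timeout="20s"/>
        </operations>
      </primitive>
    </resources>
  </configuration>
</cib>
""")

# XPath-style lookup: the resource is found by its name-like id, the
# operation by its position in the tree and its own name attribute.
op = cib.find(".//primitive[@id='dummy']/operations/op[@name='monitor']")
print(op.get("interval"))  # -> 10s
```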

Of course, changing it at this point is probably not worth the trouble.

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>> 
>> Thanks,
>> 
>> Dejan
>> 
>>> From
>>> pacemaker's point of view one resource gets removed and another one gets
>>> created.
>>> 
>>> This has been discussed recently:
>>> http://lists.clusterlabs.org/pipermail/users/2017-April/005387.html 
>>> 
>>> Regards,
>>> Tomas
>>> 
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >*/MAJID SAYED/*
>>> >
>>> >/HPC System Administrator./
>>> >
>>> >/King Abdullah International Medical Research Centre/
>>> >
>>> >/Phone:+9661801(Ext:40631)/
>>> >
>>> >/Email:sayed...@ngha.med.sa/
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >

Re: [ClusterLabs] Can't See Why This Cluster Failed Over

2017-04-10 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

>> crm configure show xml c_clust19
>
> Here is what I am entering using crmsh (version 2.0-1):
>
>
> colocation c_clust19 inf: [ p_mysql_057 p_mysql_092 p_mysql_187 ] 
> p_vip_clust19 p_fs_clust19 p_lv_on_drbd0 ms_drbd0:Master
> order o_clust19 inf: ms_drbd0:promote p_lv_on_drbd0 p_fs_clust19 
> p_vip_clust19 [ p_mysql_057 p_mysql_092 p_mysql_187 ]
>
>
> After I save it, I get no errors, but it converts it to this...
>
>
> colocation c_clust19 inf: [ p_mysql_057 p_mysql_092 p_mysql_187 ] ( 
> p_vip_clust19:Master p_fs_clust19:Master p_lv_on_drbd0:Master ) ( 
> ms_drbd0:Master )
> order o_clust19 inf: ms_drbd0:promote ( p_lv_on_drbd0:start 
> p_fs_clust19:start p_vip_clust19:start ) [ p_mysql_057 p_mysql_092 
> p_mysql_187 ]
>
> This looks incorrect to me.
>
> Here is the xml that it generates.
>
> 
>   
> 
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
> 
> 
>   
>   
>   
> 
> 
>   
> 
>   
> 
>
> The resources in set c_clust19-1 should start sequentially, starting with 
> p_lv_on_drbd0 and ending with p_vip_clust19. I also don't understand why 
> p_lv_on_drbd0 and p_vip_clust19 are getting the Master designation. 

Hi,

Yeah, that does indeed look like a bug. One thing that is confusing, and
may explain why things get split in an unexpected way: as you can see,
the role attribute is applied per resource set, while in the crmsh
syntax it looks like it applies per resource. So the shell does some
complex logic to "split" sets based on role assignment.

Cheers,
Kristoffer

>
> --
> Eric Robinson
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Can't See Why This Cluster Failed Over

2017-04-09 Thread Kristoffer Grönlund
Eric Robinson <eric.robin...@psmnv.com> writes:

> Here's the config. I don't know why the CRM put in the parenthesis where it 
> did. That's not the way I typed it. I usually have all my mysql instances 
> between parenthesis and everything else outside.

[ ...]

> colocation c_clust19 inf: ( p_mysql_057 p_mysql_092 p_mysql_187 p_mysql_213 
> p_mysql_250 p_mysql_289 p_mysql_312 p_vip_clust19 p_mysql_702 p_mysql_743 
> p_mysql_745 p_mysql_746 p_fs_clust19 p_lv_on_drbd0 ) ( ms_drbd0:Master )
> colocation c_clust20 inf: p_vip_clust20 p_fs_clust20 p_lv_on_drbd1 
> ms_drbd1:Master
> order o_clust19 inf: ms_drbd0:promote ( p_lv_on_drbd0:start ) ( p_fs_clust19 
> p_vip_clust19 ) ( p_mysql_057 p_mysql_092 p_mysql_187 p_mysql_213 p_mysql_250 
> p_mysql_289 p_mysql_312 p_mysql_702 p_mysql_743 p_mysql_745 p_mysql_746 )

This might be a bug in crmsh: What was the expression you intended to
write, and which version of crmsh do you have?

You can see the resulting XML that crmsh generates and then re-parses
into the line syntax using

crm configure show xml c_clust19

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: question about ocf metadata actions

2017-03-31 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

> I thought the hierarchy is like this:
> 1) default timeout
> 2) RA's default timeout
> 3) user-specified timeout
>
> So crm would go from 1) to 3) taking the last value it finds. Isn't it like
> that?

No, step 2) is not taken by crm.

> I mean if there's no timeout in the resource cnfiguration, doesn't the RM use
> the default timeout?

Yes, it then uses the timeout defined in op_defaults:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-operation-defaults

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>> 
>> https://github.com/ClusterLabs/resource-agents/blob/master/doc/dev-guides/ra
>
>> -dev-guide.asc#_metadata
>> 
>>> Every action should list its own timeout value. This is a hint to the
>>> user what minimal timeout should be configured for the action. This is
>>> meant to cater for the fact that some resources are quick to start and
>>> stop (IP addresses or filesystems, for example), some may take several
>>> minutes to do so (such as databases).
>> 
>>> In addition, recurring actions (such as monitor) should also specify a
>>> recommended minimum interval, which is the time between two
>>> consecutive invocations of the same action. Like timeout, this value
>>> does not constitute a default — it is merely a hint for the user which
>>> action interval to configure, at minimum.
>> 
>> Cheers,
>> Kristoffer
>> 
>>>
>>> Br,
>>>
>>> Allen
>> 
>> -- 
>> // Kristoffer Grönlund
>> // kgronl...@suse.com 
>> 
>
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Stonith

2017-03-30 Thread Kristoffer Grönlund
Alexander Markov <prof...@tic-tac.ru> writes:

> Hello, Kristoffer
>
>> Did you test failover through pacemaker itself?
>
> Yes, I did, no problems here.
>
>> However: Am I understanding it correctly that you have one node in each
>> data center, and a stonith device in each data center?
>
> Yes.
>
>> If the
>> data center is lost, the stonith device for the node in that data 
>> center
>> would also be lost and thus not able to fence.
>
> Exactly what happens!
>
>> In such a hardware configuration, only a poison pill solution like SBD
>> could work, I think.
>
> I've got no shared storage here. Every datacenter has its own storage 
> and they have replication on top (similar to drbd). I can organize a 
> cross-shared solution though if it help, but don't see how.

The only solution I know which allows for a configuration like this is
using separate clusters in each data center, and using booth for
transferring ticket ownership between them. Booth requires a data
center-level quorum (meaning at least 3 locations), though the third
location can be just a small daemon without an actual cluster, and can
run in a public cloud or similar for example.
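A minimal booth configuration for two sites plus an arbitrator might look like
the following (a sketch; the addresses and ticket name are placeholders):

```
# /etc/booth/booth.conf, identical on both sites and the arbitrator
transport = UDP
port = 9929
site = 192.168.10.1        # cluster in data center 1
site = 192.168.20.1        # cluster in data center 2
arbitrator = 192.168.30.1  # third location, runs only the booth daemon
ticket = "ticket-primary"
```

The ticket is granted to one site at a time, and resources are tied to it in
each cluster with a rsc_ticket constraint.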

Cheers,
Kristoffer

>
>> --
>> Regards,
>> Alexander
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] question about ocf metadata actions

2017-03-30 Thread Kristoffer Grönlund
he.hailo...@zte.com.cn writes:

> Hi,
>
>
> Does the timeout configured in the ocf metadata actually take effect?
>
>
>
>
> <actions>
>
> <action name="start" timeout="300s" />
>
> <action name="stop" timeout="200s" />
>
> <action name="status" timeout="20s" />
>
> <action name="monitor" depth="0" timeout="20s" interval="2s" />
>
> <action name="meta-data" timeout="120s" />
>
> <action name="validate-all"  timeout="20s" />
>
> </actions>
>
>
>
>
> what's the relationship with the ones configured using "crm configure 
> primitive" ?

Hi Allen,

The timeouts in the OCF metadata are merely documentation hints, and
ignored by Pacemaker unless configured appropriately in the CIB (which
is what crm configure primitive does). See the OCF documentation:

https://github.com/ClusterLabs/resource-agents/blob/master/doc/dev-guides/ra-dev-guide.asc#_metadata

> Every action should list its own timeout value. This is a hint to the
> user what minimal timeout should be configured for the action. This is
> meant to cater for the fact that some resources are quick to start and
> stop (IP addresses or filesystems, for example), some may take several
> minutes to do so (such as databases).

> In addition, recurring actions (such as monitor) should also specify a
> recommended minimum interval, which is the time between two
> consecutive invocations of the same action. Like timeout, this value
> does not constitute a default — it is merely a hint for the user which
> action interval to configure, at minimum.
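In other words, the metadata values only take effect if you copy them into the
resource configuration yourself, for example (a sketch; the agent name is
illustrative):

```
crm configure primitive p_service ocf:heartbeat:Dummy \
    op start timeout=300s \
    op stop timeout=200s \
    op monitor interval=2s timeout=20s
```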

Cheers,
Kristoffer

>
> Br,
>
> Allen

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Stonith

2017-03-30 Thread Kristoffer Grönlund
Alexander Markov <prof...@tic-tac.ru> writes:

> Hello guys,
>
> it looks like I miss something obvious, but I just don't get what has 
> happened.
>
> I've got a number of stonith-enabled clusters within my big POWER boxes. 
> My stonith devices are two HMC (hardware management consoles) - separate 
> servers from IBM that can reboot separate LPARs (logical partitions) 
> within POWER boxes - one per every datacenter.
>
> So my definition for stonith devices was pretty straightforward:
>
> primitive st_dc2_hmc stonith:ibmhmc \
> params ipaddr=10.1.2.9
> primitive st_dc1_hmc stonith:ibmhmc \
> params ipaddr=10.1.2.8
> clone cl_st_dc2_hmc st_dc2_hmc
> clone cl_st_dc1_hmc st_dc1_hmc
>
> Everything was ok when we tested failover. But today upon power outage 

Did you test failover through pacemaker itself?

Otherwise, the logs for the attempted stonith should reveal more about
how Pacemaker tried to call the stonith device, and what went wrong.

However: Am I understanding it correctly that you have one node in each
data center, and a stonith device in each data center? That doesn't
sound like a setup that can recover from data center failure: If the
data center is lost, the stonith device for the node in that data center
would also be lost and thus not able to fence.

In such a hardware configuration, only a poison pill solution like SBD
could work, I think.

Cheers,
Kristoffer

> we lost one DC completely. Shortly after that cluster just literally 
> hanged itself upong trying to reboot nonexistent node. No failover 
> occured. Nonexistent node was marked OFFLINE UNCLEAN and resources were 
> marked "Started UNCLEAN" on nonexistent node.
>
> UNCLEAN seems to flag a problems with stonith configuration. So my 
> question is: how to avoid such behaviour?
>
> Thank you!
>
> -- 
> Regards,
> Alexander
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Fence agent for VirtualBox

2017-02-23 Thread Kristoffer Grönlund
Marek Grac <mg...@redhat.com> writes:

> Hi,
>
> we have added support for a host with Windows but it is not trivial to
> setup because of various contexts/privileges.
>
> Install openssh on Windows (tutorial can be found on
> http://linuxbsdos.com/2015/07/30/how-to-install-openssh-on-windows-10/)
>
> There is a major issue with the current setup on Windows. You have to start
> virtual machines from an openssh connection if you wish to manage them from
> an openssh connection.
>
> So, you have to connect from Windows to very same Windows using ssh and
> then run
>
> “/Program Files/Oracle/VirtualBox/VBoxManage.exe” start NAME_OF_VM
>
> Be prepared that you will not see that your VM is running in the VirtualBox
> management UI.
>
> Afterwards it is enough to add parameter --host-os windows (or
> host_os=windows when stdin/pcs is used).
>

Cool, nice work!

Cheers,
Kristoffer

> m,
>
> On Wed, Feb 22, 2017 at 11:49 AM, Marek Grac <mg...@redhat.com> wrote:
>
>> Hi,
>>
>> I have updated fence agent for Virtual Box (upstream git). The main
>> benefit is new option --host-os (host_os on stdin) that supports
>> linux|macos. So if your host is linux/macos all you need to set is this
>> option (and ssh access to a machine). I would love to add a support also
>> for windows but I'm not able to run vboxmanage.exe over the openssh. It
>> works perfectly from command prompt under same user, so there are some
>> privileges issues, if you know how to fix this please let me know.
>>
>> m,
>>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] crm shell RA completion

2017-02-20 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

> Hi!
>
> I have a proposal for crm shell's RA completion: When pressing TAB after "ra 
> info", crm shell suggests a long list of RAs. Wouldn't it be preferable to 
> complete only up to the next ':'?
>
> Consider this:
> crm(live)# ra info
> Display all 402 possibilities? (y or n)n
> crm(live)# ra info ocf:
> Display all 101 possibilities? (y or n)n
> crm(live)# ra info ocf:heartbeat:
> (a long list is displayed)
>
> So at the first level not all 402 RAs should be suggested but only the first 
> level (like "ocf"), and at the second level not all 101 completions should be 
> suggested, but only a few (like "heartbeat").
>
> What do you think?

Sounds good to me, yes. The completion is a bit wonky and tricky to get
right. Still a work in progress.

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] question about equal resource distribution

2017-02-18 Thread Kristoffer Grönlund
Ilia Sokolinski <i...@clearskydata.com> writes:

> Suppose I have a N node cluster where N > 2 running m*N resources. Resources 
> don’t have preferred nodes, but since resources take RAM and CPU it is 
> important to distribute them equally among the nodes.
> Will pacemaker do the equal distribution, e.g. m resources per node?
> If a node fails, will pacemaker redistribute the resources equally too, e.g. 
> m * N/(N-1) per node?
>
> I don’t see any settings controlling this behavior in the documentation, but 
> perhaps, pacemaker tries to be “fair” by default.
>

Yes, pacemaker tries to allocate resources evenly by default, and will
move resources when nodes fail in order to maintain that.

There are several different mechanisms that influence this behaviour:

* Any placement constraints in general influence where resources are
  allocated.

* You can set resource-stickiness to a non-zero value which determines
  to which degree Pacemaker prefers to leave resources running where
  they are. The score is in relation to other placement scores, like
  constraint scores etc. This can be set for individual resources or
  globally. [1]

* If you have an asymmetrical cluster, resources have to be manually
  allocated to nodes via constraints, see [2]

[1]: 
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-options
[2]: 
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_asymmetrical_opt_in_clusters
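As a sketch in crmsh syntax (the values are illustrative), the stickiness
default can be set globally, and utilization-based placement enabled, with:

```
# Prefer keeping resources where they run; the score is weighed
# against constraint and placement scores
crm configure rsc_defaults resource-stickiness=100

# If nodes and resources carry utilization attributes (cpu, memory),
# let the placement strategy balance by remaining capacity instead
crm configure property placement-strategy=balanced
```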

Cheers,
Kristoffer

> Thanks 
>
> Ilia Sokolinski

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] question about equal resource distribution

2017-02-18 Thread Kristoffer Grönlund
Ilia Sokolinski <i...@clearskydata.com> writes:

> Thank you!
>
> What quantity does pacemaker try to equalize - number of running resources 
> per node or total stickiness per node?
>

I honestly don't know exactly what the criteria are. Without any
utilization definitions for nodes, I *think* it tries to balance the
number of resources per node. But if the resources and nodes have
cpu/memory utilization defined, the rules change. But I'm afraid I
haven't dug into exactly what the logic looks like.

> Suppose I have a bunch of web server groups each with IPaddr and apache 
> resources, and a fewer number of database groups each with IPaddr, postgres 
> and LVM resources.
>
> In that case, does it mean that 3 web server groups are weighted the same as 
> 2 database groups in terms of distribution?

Good question, I think it looks purely at the primitive
resources. Groups are just shorthand for a series of ordering and
placement constraints.

Cheers,
Kristoffer

>
> Ilia
>
>
>
>> On Feb 17, 2017, at 2:58 AM, Kristoffer Grönlund <deceive...@gmail.com> 
>> wrote:
>> 
>> Ilia Sokolinski <i...@clearskydata.com> writes:
>> 
>>> Suppose I have a N node cluster where N > 2 running m*N resources. 
>>> Resources don’t have preferred nodes, but since resources take RAM and CPU 
>>> it is important to distribute them equally among the nodes.
>>> Will pacemaker do the equal distribution, e.g. m resources per node?
>>> If a node fails, will pacemaker redistribute the resources equally too, 
>>> e.g. m * N/(N-1) per node?
>>> 
>>> I don’t see any settings controlling this behavior in the documentation, 
>>> but perhaps, pacemaker tries to be “fair” by default.
>>> 
>> 
>> Yes, pacemaker tries to allocate resources evenly by default, and will
>> move resources when nodes fail in order to maintain that.
>> 
>> There are several different mechanisms that influence this behaviour:
>> 
>> * Any placement constraints in general influence where resources are
>>  allocated.
>> 
>> * You can set resource-stickiness to a non-zero value which determines
>>  to which degree Pacemaker prefers to leave resources running where
>>  they are. The score is in relation to other placement scores, like
>>  constraint scores etc. This can be set for individual resources or
>>  globally. [1]
>> 
>> * If you have an asymmetrical cluster, resources have to be manually
>>  allocated to nodes via constraints, see [2]
>> 
>> [1]: 
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-options
>> [2]: 
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_asymmetrical_opt_in_clusters
>> 
>> Cheers,
>> Kristoffer
>> 
>>> Thanks 
>>> 
>>> Ilia Sokolinski
>> 
>> -- 
>> // Kristoffer Grönlund
>> // kgronl...@suse.com
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] resources management - redesign

2017-02-06 Thread Kristoffer Grönlund
Hi Florin,

I'm afraid I don't quite understand what it is that you are asking. You
can specify the resource ID when creating resources, and using resource
constraints, you can specify any order/colocation structure that you
need.

> 1. RG = rg1 + following resources: fs1, fs2,fs3, ocf:heartbeat[my custom
> systemd script] 

What do you mean by ocf:heartbeat[my custom systemd script]? If you've
got your own service with a systemd service file and you don't need
custom monitoring, you can use "systemd:" as the resource agent.

> Now, what solution exists ?  export cib, edit cib and re-import cib;
> what if  I will need a new fs:fs4, so what: export cib, create new
> resource inside exported cib and re-import it. 

One way to make large changes to the configuration is to

1. Stop all resources

crm configure property stop-all-resources=true 

2. Edit configuration to what you need

crm configure edit

3. Start all resources

crm configure property stop-all-resources=false

You might have some success in keeping services running during editing
by using maintenance-mode=true instead, but that takes a lot more
care and is difficult to recommend in the general case.

It is also possible to use the shadow CIB facitility to simulate changes
to the cluster before applying them:

http://clusterlabs.org/man/pacemaker/crm_simulate.8.html

There's some documentation on using Hawk with the simulator which is
already outdated but might be of some help in figuring out what is
possible:

https://hawk-guide.readthedocs.io/en/latest/simulator.html

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: crm shell: How to display properties?

2017-02-06 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>>>> xin <xli...@suse.com> wrote on 2017-02-06 at 10:50 in message
> <65fbbdf9-f820-63e7-fe02-1d1acefc5...@suse.com>:
>> Hi Ulrich:
>> 
>>"crm configure show" can display what you set for properties.
>> 
>>Do you find another way?
>
> Yes, but it shows the whole configuration. If your configuration is long, the
> output can be very long.
> What I'm talking about is:
> crm(live)configure# show property
> ERROR: object property does not exist
> crm(live)configure# show pe-error-series-max
> ERROR: object pe-error-series-max does not exist
>
> But I found out: This one works: "crm(live)configure# show
> cib-bootstrap-options".
>

You can also use

crm configure show type:property

If you follow the *-options naming convention, you can do

crm configure show \*options

Cheers,
Kristoffer

> Regards,
> Ulrich
>
>> 
>> On 2017-02-06 17:12, Ulrich Windl wrote:
>>>>>> Ken Gaillot <kgail...@redhat.com> wrote on 2017-02-02 at 21:19 in
> message
>>> <cdabf966-944b-a030-58c7-36ec3ee79...@redhat.com>:
>>>
>>> [...]
>>>> The files are not necessary for cluster operation, so you can clean them
>>>> as desired. The cluster can clean them for you based on cluster options;
>>>> see pe-error-series-max, pe-warn-series-max, and pe-input-series-max:
>>> [...]
>>>
>>> Related question:
>>> in crm shell I can set properties in configure context ("property ..."),
> but 
>> how can I display them (except from looking at the end of a "show")?
>>>
>>> Regards,
>>> Ulrich
>>>
>>>
>>>
>>>
>> 
>> 
>
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] fence_vbox '--action=' not executing action

2017-02-06 Thread Kristoffer Grönlund
dur...@mgtsciences.com writes:

> Kristoffer Grönlund <kgronl...@suse.com> wrote on 02/01/2017 10:49:54 PM:
>
>> 
>> Another possibility is that the command that fence_vbox tries to run
>> doesn't work for you for some reason. It will either call
>> 
>> VBoxManage startvm  --type headless
>> 
>> or
>> 
>> VBoxManage controlvm  poweroff
>> 
>> when passed on or off as the --action parameter.
>
> If there is no further work being done on fence_vbox, is there a 'dummy' 
> fence
> which I might use to make STONITH happy in my configuration?  It need only 
> send
> the correct signals to STONITH so that I might create an active/active 
> cluster
> to experiment with?  This is only an experimental configuration.
>

Another option would be to use SBD for fencing if your hypervisor can
provide uncached shared storage:

https://github.com/ClusterLabs/sbd

This is what we usually use for our test setups here, both with
VirtualBox and qemu/kvm.
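A minimal SBD setup might look like the following sketch (the device path is a
placeholder, and the stonith agent name and daemon configuration file can
differ between distributions):

```
# Initialize the shared disk for SBD
sbd -d /dev/disk/by-id/scsi-sbd-disk create

# Reference the device from the SBD daemon configuration, e.g.
# SBD_DEVICE="/dev/disk/by-id/scsi-sbd-disk" in /etc/sysconfig/sbd

# Add the fencing resource and make sure fencing is enabled
crm configure primitive stonith-sbd stonith:external/sbd
crm configure property stonith-enabled=true
```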

fence_vbox is actively maintained for sure, but we'd need to narrow down
what the correct changes would be to make it work in your
environment.

Trying to use a dummy fencing agent is likely to come back to bite you,
the cluster will act very unpredictably if it thinks that there is a
fencing option that doesn't actually work.

For fence_vbox, the best path forward is probably to create an issue
upstream, and attach as much relevant information about your environment
as possible:

https://github.com/ClusterLabs/fence-agents/issues/new

Cheers,
Kristoffer

> Thank you,
>
> Durwin
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] How to change the name of one cluster resource and resource group ?

2017-02-01 Thread Kristoffer Grönlund
Jihed M'selmi <jihed.mse...@gmail.com> writes:

> Thanks for reply,
> I don't have the crm command. It's corosync version 2.3.4.el7_2.1.
>

crmsh is a separate project, you can install it in parallel with
corosync/pacemaker. There are packages on OBS:

http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/RedHat_RHEL-7/

Otherwise, if you have pcs, it should have something similar to crm
configure rename.

Cheers,
Kristoffer

> On Wed, Feb 1, 2017, 3:38 PM Kristoffer Grönlund <kgronl...@suse.com> wrote:
>
>> Jihed M'selmi <jihed.mse...@gmail.com> writes:
>>
>> > Hello,
>> >
>> > I need update the name of one resource group with a new name. Any
>> thoughts?
>> >
>>
>> crmsh has the crm configure rename command, which tries to update any
>> constraint references atomically as well.
>>
>> Cheers,
>> Kristoffer
>>
>> > Cheers,
>> > JM
>> > --
>> >
>> > J.M
>>
>> --
>> // Kristoffer Grönlund
>> // kgronl...@suse.com
>>
> -- 
>
> J.M

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] How to change the name of one cluster resource and resource group ?

2017-02-01 Thread Kristoffer Grönlund
Jihed M'selmi <jihed.mse...@gmail.com> writes:

> Hello,
>
> I need update the name of one resource group with a new name. Any thoughts?
>

crmsh has the crm configure rename command, which tries to update any
constraint references atomically as well.
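For example (the resource IDs here are illustrative):

```
# Rename the group; constraints referencing it are updated as well
crm configure rename g_oldname g_newname
```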

Cheers,
Kristoffer

> Cheers,
> JM
> -- 
>
> J.M

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] [ClusterLabs Developers] HA/Clusterlabs Summit 2017 Proposal

2017-01-31 Thread Kristoffer Grönlund
Chris Feist <cfe...@redhat.com> writes:

> On Mon, Jan 30, 2017 at 8:23 AM, Kristoffer Grönlund <kgronl...@suse.com>
> wrote:
>
>> Hi everyone!
>>
>> The last time we had an HA summit was in 2015, and the intention then
>> was to have SUSE arrange the next meetup in the following year. We did
>> try to find a date that would be suitable for everyone, but for various
>> reasons there was never a conclusion and 2016 came and went.
>>
>> Well, I'd like to give it another try this year! This time, I've already
>> got a proposal for a place and date: September 7-8 in Nuremberg, Germany
>> (SUSE main office). I've got the new event area in the SUSE office
>> already reserved for these dates.
>>
>> My suggestion is to do a two day event similar to the one in Brno, but I
>> am open to any suggestions as to format and content. The main reason for
>> having the event would be for everyone to have a chance to meet and get
>> to know each other, but it's also an opportunity to discuss the future
>> of Clusterlabs and the direction going forward.
>>
>> Any thoughts or feedback are more than welcome! Let me know if you are
>> interested in coming or unable to make it.
>>
>
> Kristoffer,
>
> Thank you for getting some dates and providing a space for the summit.  I
> know myself and several cluster engineers from Red Hat are definitely
> interested in attending.  The only thing that I might recommend is moving
> the conference one day earlier (change to Wed/Thu instead of Thu/Fri) to
> make it easier for people traveling to/from the conference.

Hi Chris,

Sounds great! Happy to move it to September 6-7 if that works out
better.

Cheers,
Kristoffer

>
> Thanks!
> Chris
>
>
>>
>> Cheers,
>> Kristoffer
>>
>> --
>> // Kristoffer Grönlund
>> // kgronl...@suse.com
>>
>> ___
>> Developers mailing list
>> develop...@clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/developers
>>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: Antw: Colocations and Orders Syntax Changed?

2017-01-31 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>>>> Eric Robinson <eric.robin...@psmnv.com> wrote on 2017-01-20 at 12:56 in
> message
> <dm5pr03mb2729d5003219b644b4e0bc7cfa...@dm5pr03mb2729.namprd03.prod.outlook.com>
>
>> Thanks for the input. I usually just do a 'crm config show > 
>> myfile.xml.date_time' and the read it back in if I need to. 
>
> I guess 'crm configure show xml > myfile.xml.date_time', because here I get 
> "ERROR: config: No such command" and no XML... ;-)
>
> Actually I'm using "cibadmin -Q -o configuration", because I think it's 
> faster...

If you use a more recent version of crmsh, "crm config show" will
actually work as well, thanks to some fuzzy command matching ;)

(though to get XML you do need the xml argument still)

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] Releasing crmsh version 3.0.0

2017-01-31 Thread Kristoffer Grönlund
Hello everyone!

I'm happy to announce the release of crmsh version 3.0.0 today. The
main reason for the major version bump is because I have merged the
sleha-bootstrap project with crmsh, replacing the cluster
init/add/remove commands with the corresponding commands from
sleha-bootstrap.

At the moment, these commands are highly specific to SLE and openSUSE,
unfortunately. I am working on making them as distribution agnostic as
possible, but would appreciate help from users of other distributions
in making them work as well on those platforms as they do on
SLE/openSUSE.

Briefly, the "cluster init" command configures a complete cluster from
scratch, including optional configuration of fencing via SBD, shared
storage using OCFS2, setting up the Hawk web interface etc.
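In practice the bootstrap workflow boils down to a few commands; this sketch only prints them through a dry-run helper, and the option names are assumptions, so check 'crm cluster help' on your version:

```shell
# Dry-run helper: show each bootstrap command instead of running it.
run() { echo "+ $*"; }

run crm cluster init --name demo      # on the first node: full cluster setup
run crm cluster join -c node1         # on every additional node
run crm cluster remove node2          # later: drop a node from the cluster
```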

There are some other changes in this release as well, see the
ChangeLog for the complete list of changes:

* https://github.com/ClusterLabs/crmsh/blob/3.0.0/ChangeLog

The source code can be downloaded from Github:

* https://github.com/ClusterLabs/crmsh/releases/tag/3.0.0

This version of crmsh will be available in openSUSE Tumbleweed as soon
as possible, and packages for several popular Linux distributions are
available from the Stable repository at the OBS:

* http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/

Archives of the tagged release:

* https://github.com/ClusterLabs/crmsh/archive/3.0.0.tar.gz
* https://github.com/ClusterLabs/crmsh/archive/3.0.0.zip

As usual, a huge thank you to all contributors and users of crmsh!

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] [ClusterLabs Developers] HA/Clusterlabs Summit 2017 Proposal

2017-01-31 Thread Kristoffer Grönlund
Digimer <li...@alteeve.ca> writes:

> On 30/01/17 09:23 AM, Kristoffer Grönlund wrote:
>> Hi everyone!
>> 
>> The last time we had an HA summit was in 2015, and the intention then
>> was to have SUSE arrange the next meetup in the following year. We did
>> try to find a date that would be suitable for everyone, but for various
>> reasons there was never a conclusion and 2016 came and went.
>> 
>> Well, I'd like to give it another try this year! This time, I've already
>> got a proposal for a place and date: September 7-8 in Nuremberg, Germany
>> (SUSE main office). I've got the new event area in the SUSE office
>> already reserved for these dates.
>> 
>> My suggestion is to do a two day event similar to the one in Brno, but I
>> am open to any suggestions as to format and content. The main reason for
>> having the event would be for everyone to have a chance to meet and get
>> to know each other, but it's also an opportunity to discuss the future
>> of Clusterlabs and the direction going forward.
>> 
>> Any thoughts or feedback are more than welcome! Let me know if you are
>> interested in coming or unable to make it.
>> 
>> Cheers,
>> Kristoffer
>
> Thank you for starting this back up. I was just thinking about this a
> few days ago.
>
> I could make it, and I would be happy to help organize it however I
> might be able to help.

Hi,

Awesome! I might hold you to that promise :) If nothing else your wiki
has been useful in the past as a place to host the list of attendees and
the agenda.

Another option would be to create a repository in the Clusterlabs github
organization and have people add themselves there via pull requests. I'm
also open to suggestions on that front.

Cheers,
Kristoffer

>
> -- 
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
>
> ___
> Developers mailing list
> develop...@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/developers

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] HA/Clusterlabs Summit 2017 Proposal

2017-01-30 Thread Kristoffer Grönlund
Hi everyone!

The last time we had an HA summit was in 2015, and the intention then
was to have SUSE arrange the next meetup in the following year. We did
try to find a date that would be suitable for everyone, but for various
reasons there was never a conclusion and 2016 came and went.

Well, I'd like to give it another try this year! This time, I've already
got a proposal for a place and date: September 7-8 in Nuremberg, Germany
(SUSE main office). I've got the new event area in the SUSE office
already reserved for these dates.

My suggestion is to do a two day event similar to the one in Brno, but I
am open to any suggestions as to format and content. The main reason for
having the event would be for everyone to have a chance to meet and get
to know each other, but it's also an opportunity to discuss the future
of Clusterlabs and the direction going forward.

Any thoughts or feedback are more than welcome! Let me know if you are
interested in coming or unable to make it.

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] sbd: Cannot open watchdog device: /dev/watchdog

2017-01-03 Thread Kristoffer Grönlund
Muhammad Sharfuddin <m.sharfud...@nds.com.pk> writes:

> Hello,
>
> pacemaker does not start on this machine(Fujitsu PRIMERGY RX2540 M1) 
> with following error in  the logs:
>
> sbd: [13236]: ERROR: Cannot open watchdog device: /dev/watchdog: No such 
> file or directory

Does /dev/watchdog exist? If so, it may be opened by a different
process. If you have more than one watchdog device, you can configure
sbd to use a different device using the -w option.
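A quick way to tell the two failure modes apart (device missing vs. device held open); the device names and the softdog fallback are common defaults, not taken from the logs above:

```shell
# Report whether a watchdog device exists and, if so, who holds it open.
check_watchdog() {
    dev="$1"
    if [ -e "$dev" ]; then
        echo "$dev exists; check for another process holding it open:"
        fuser -v "$dev" 2>/dev/null || true
    else
        echo "no $dev - load a watchdog driver first (e.g. 'modprobe softdog')"
    fi
}

check_watchdog /dev/watchdog
# With several watchdogs, point sbd at the right one, e.g. 'sbd -w /dev/watchdog1'
# (on SLES typically via SBD_WATCHDOG_DEV in /etc/sysconfig/sbd).
```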

Cheers,
Kristoffer

>
> System Info:
>
> sbd-1.2.1-8.7.x86_64  corosync-2.3.3-7.12.x86_64 pacemaker-1.1.12-7.1.x86_64
>
> lsmod | egrep "(wd|dog)"
> iTCO_wdt   13480  0
> iTCO_vendor_support13718  1 iTCO_wdt
>
> dmidecode | grep -A3 '^System Information'
> System Information
>  Manufacturer: FUJITSU
>  Product Name: PRIMERGY RX2540 M1
>  Version: GS01
>
> logs:
>
> 2017-01-03T21:00:26.890503+05:00 prdnode1 sbd: [13235]: info: Watchdog 
> enabled.
> 2017-01-03T21:00:26.899817+05:00 prdnode1 sbd: [13238]: info: Servant 
> starting for device 
> /dev/disk/by-id/wwn-0x60e00d28002825b5-part1
> 2017-01-03T21:00:26.900175+05:00 prdnode1 sbd: [13238]: info: Device 
> /dev/disk/by-id/wwn-0x60e00d28002825b5-part1 uuid: 
> fda42d64-ca74-4578-90c8-976ea7ff5f6e
> 2017-01-03T21:00:26.900418+05:00 prdnode1 sbd: [13239]: info: Monitoring 
> Pacemaker health
> 2017-01-03T21:00:27.901022+05:00 prdnode1 sbd: [13236]: ERROR: Cannot 
> open watchdog device: /dev/watchdog: No such file or directory
> 2017-01-03T21:00:27.912098+05:00 prdnode1 sbd: [13236]: WARN: Servant 
> for pcmk (pid: 13239) has terminated
> 2017-01-03T21:00:27.941950+05:00 prdnode1 sbd: [13236]: WARN: Servant 
> for /dev/disk/by-id/wwn-0x60e00d28002825b5-part1 (pid: 
> 13238) has terminated
> 2017-01-03T21:00:27.949401+05:00 prdnode1 sbd.sh[13231]: sbd failed; 
> please check the logs.
> 2017-01-03T21:00:27.992606+05:00 prdnode1 sbd.sh[13231]: SBD failed to 
> start; aborting.
> 2017-01-03T21:00:27.993061+05:00 prdnode1 systemd[1]: sbd.service: 
> control process exited, code=exited status=1
> 2017-01-03T21:00:27.993339+05:00 prdnode1 systemd[1]: Failed to start 
> Shared-storage based fencing daemon.
> 2017-01-03T21:00:27.993610+05:00 prdnode1 systemd[1]: Dependency failed 
> for Pacemaker High Availability Cluster Manager.
> 2017-01-03T21:00:27.994054+05:00 prdnode1 systemd[1]: Unit sbd.service 
> entered failed state.
>
> please help.
>
> -- 
> Regards,
>
> Muhammad Sharfuddin
> <http://www.nds.com.pk>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antw: Re: [ClusterLabs Developers] announcement: schedule for resource-agents release 3.9.8

2017-01-03 Thread Kristoffer Grönlund
Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> writes:

>>>> Kristoffer Grönlund <kgronl...@suse.com> wrote on 2017-01-03 at 11:55 in
> message <878tqsjtv4@suse.com>:
>> Oyvind Albrigtsen <oalbr...@redhat.com> writes:
>> 
>>> Hi,
>>>
>>> This is a tentative schedule for resource-agents v3.9.8:
>>> 3.9.8-rc1: January 10.
>>> 3.9.8: January 31.
>>>
>>> I modified the corresponding milestones at
>>> https://github.com/ClusterLabs/resource-agents/milestones 
>>>
>>> If there's anything you think should be part of the release
>>> please open an issue, a pull request, or a bugzilla, as you see
>>> fit.
>>>
>> 
>> Hi Oyvind,
>> 
>> I think it's high time for a new release! My only suggestion would be to
>> call it 4.0.0, since there are much bigger changes from 3.9.7 than an
>> update to the patch release number would suggest.
>
> I don't know the semantics of everybody's release numbering, but for a
> three-level number a "compatibility"."feature"."bug-fix" pattern wouldn't be
> bad; that is only change the first number if there are incompatible changes
> (things may not work after ugrading from the previous level). Change the 
> second
> number whenever there are new features (the users may want to read about), and
> change only the last number if just bugs were fixed (without affecting the
> interfaces).
> And: There's nothing wrong with "10" following "9" ;-)
>
> And if you are just happy to throw out new versions (whatever they bring),
> call it "2017-01" ;-)

There was a recent talk by Rich Hickey on this topic; his way of putting
it was that versions basically boil down to X.Y, where Y means "don't
care, just upgrade" and X means "anything can have changed, be very
careful" :)

For resource-agents and the releases historically, I personally think
having a single number that just increments each release makes as much
sense as anything else, at least in my experience there is just a single
development track where bug fixes, new features and backwards
incompatible changes mix freely, even if we do try to keep the
incompatible changes as rare as possible.

But, keeping the x.y.z triplet is easier to maintain in relation to the
older releases. 

Cheers,
Kristoffer

>
> Regards,
> Ulrich
>
>> 
>> Cheers,
>> Kristoffer
>> 
>>> If there's anything that hasn't received due attention, please
>>> let us know.
>>>
>>> Finally, if you can help with resolving issues consider yourself
>>> invited to do so. There are currently 49 issues and 38 pull
>>> requests still open.
>>>
>>>
>>> Cheers,
>>> Oyvind Albrigtsen
>>>
>>>
>> 
>> -- 
>> // Kristoffer Grönlund
>> // kgronl...@suse.com 
>> 
>
>
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] [ClusterLabs Developers] announcement: schedule for resource-agents release 3.9.8

2017-01-03 Thread Kristoffer Grönlund
Oyvind Albrigtsen <oalbr...@redhat.com> writes:

> Hi,
>
> This is a tentative schedule for resource-agents v3.9.8:
> 3.9.8-rc1: January 10.
> 3.9.8: January 31.
>
> I modified the corresponding milestones at
> https://github.com/ClusterLabs/resource-agents/milestones
>
> If there's anything you think should be part of the release
> please open an issue, a pull request, or a bugzilla, as you see
> fit.
>

Hi Oyvind,

I think it's high time for a new release! My only suggestion would be to
call it 4.0.0, since there are much bigger changes from 3.9.7 than an
update to the patch release number would suggest.

Cheers,
Kristoffer

> If there's anything that hasn't received due attention, please
> let us know.
>
> Finally, if you can help with resolving issues consider yourself
> invited to do so. There are currently 49 issues and 38 pull
> requests still open.
>
>
> Cheers,
> Oyvind Albrigtsen
>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] New ClusterLabs logo unveiled :-)

2017-01-02 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> Hi all,
>
> ClusterLabs is happy to unveil its new logo! Many thanks to the
> designer, Kristoffer Grönlund <kgronl...@suse.com>, who graciously
> donated the clever approach.
>
> You can see it on our GitHub page:
>
>   https://github.com/ClusterLabs
>
> It is also now the site icon for www.clusterlabs.org and
> wiki.clusterlabgs.org. Your browser might have cached the old version,
> so you might not see the new one immediately, but you can see it by
> going straight to the links and reloading:
>
>   http://clusterlabs.org/favicon.ico
>   http://clusterlabs.org/apple-touch-icon.png
>
> It is also on the wiki banner, though the banner will need some tweaking
> to make the best use of it. You might not see it there immediately due
> to browser caching and DNS resolver caching (the wiki IP changed
> recently as part of an OS upgrade), but it's there. :-)
>
> Wishing everyone a happy holiday season,

Thanks for using my logo! Nice holiday surprise :)

Cheers,
Kristoffer

> -- 
> Ken Gaillot <kgail...@redhat.com>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] meta-target role stopped in one resource in a group ( SLEHA 11.4 )

2017-01-02 Thread Kristoffer Grönlund
Cristiano Coltro <cristiano.col...@microfocus.com> writes:

> I have some questions:
>
>
> 1) I believe that the command crm resource stop 
> pri_fs_oracle_FS1_sapdata1 has inserted the instruction meta 
> target-role="Stopped" and this will force the resource to stay offline when 
> the grp_all_FS1 will be online again. Am I correct?

Yes

>
> 2) I believe it can easily be deleted, or maybe a crm resource start 
> pri_fs_oracle_FS1_sapdata1 will delete the instruction. Am I correct again? 
> Testing also in my lab.
>

Yes, again :)

Running "crm resource start" might change it to say
"target-role=Started" but the effect should be the same as not setting
target-role at all.
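As a sketch in crm shell syntax (the Filesystem parameters are placeholders, not the actual values from this cluster), the stopped resource would look like:

```
primitive pri_fs_oracle_FS1_sapdata1 ocf:heartbeat:Filesystem \
    params device="/dev/example" directory="/mnt/example" fstype="ext3" \
    meta target-role="Stopped"
```

Deleting the meta attribute, or running "crm resource start" (which flips it to Started), hands control of the resource back to the group.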

Cheers,
Kristoffer

> Thanks in advance
> Cristiano
>
>
> Cristiano Coltro
> Premium Support Engineer
>
> mail: cristiano.col...@microfocus.com<mailto:cristiano.col...@microfocus.com>
> mobile: +39 335 1435589
> phone +39 02 36634936
> __
> [microfocus-logo]
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Antwort: Re: hawk - pacemaker remote

2016-12-12 Thread Kristoffer Grönlund
philipp.achmuel...@arz.at writes:

>
> tried several things, didn't get this working. 
> Any examples how to configure this? 
> Also how to configure for VirtualDomain with remote_node enabled
>
> thank you!
>

Without any details, it is difficult to help - what things did you try,
what does "not working" mean? Hawk can show remote nodes, but it only
shows them if they have entries in the nodes section of the
configuration (as Ken said).
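For illustration, a remote node that Hawk can show needs both pieces; the names and address here are hypothetical:

```
# crm shell syntax: the resource that manages the pacemaker_remote connection
primitive remote1 ocf:pacemaker:remote \
    params server=192.168.122.10 reconnect_interval=60

# ...and the corresponding entry in the nodes section of the CIB:
#   <node id="remote1" uname="remote1" type="remote"/>
```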

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Change resource-stickiness during working hours with crmsh

2016-11-05 Thread Kristoffer Grönlund
Kostiantyn Ponomarenko <konstantin.ponomare...@gmail.com> writes:

> Hi,
>
> I was reading about changing default resource stickiness based on time
> rules, but I didn't find a way to set using crmsh. I tried en example
> configuration from
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/#_using_rules_to_control_cluster_options
> using cibadmin, and "crm configure show" showed me xml lines. Now I wonder,
> is this OK?

It is OK in so far as everything will work just fine, it is just crmsh
that doesn't translate the XML to line syntax.

There are some gaps in the support for rule expressions in the line
syntax, and setting the score attribute for rsc_defaults meta_attributes
is unfortunately one of them.

However, this is not harmful and everything will still work fine, it's
just that crmsh will display the XML directly.
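For reference, the XML in question (modeled on the example from the Pacemaker Explained link; ids and hours are illustrative) is along these lines:

```xml
<rsc_defaults>
  <meta_attributes id="core-hours" score="2">
    <rule id="core-hours-rule" score="0">
      <date_expression id="nine-to-five" operation="date_spec">
        <date_spec id="nine-to-five-spec" hours="9-16" weekdays="1-5"/>
      </date_expression>
    </rule>
    <nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
  </meta_attributes>
  <meta_attributes id="after-hours" score="1">
    <nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
  </meta_attributes>
</rsc_defaults>
```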

Cheers,
Kristoffer

>
> Thank you,
> Kostia

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] RFC: allowing soft recovery attempts before ignore/block/etc.

2016-09-22 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:
>
> "restart" is the only on-fail value that it makes sense to escalate.
>
> block/stop/fence/standby are final. Block means "don't touch the
> resource again", so there can't be any further response to failures.
> Stop/fence/standby move the resource off the local node, so failure
> handling is reset (there are 0 failures on the new node to begin with).

Hrm. If a restart potentially migrates the resource to a different node,
is the failcount reset then as well? If so, wouldn't that complicate the
hard-fail-threshold variable too, since potentially, the resource could
keep migrating between nodes and since the failcount is reset on each
migration, it would never reach the hard-fail-threshold. (or am I
missing something?)

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] RFC: allowing soft recovery attempts before ignore/block/etc.

2016-09-22 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> I'm not saying it's a bad idea, just that it's more complicated than it
> first sounds, so it's worth thinking through the implications.

Thinking about it and looking at how complicated it gets, maybe what
you'd really want, to make it clearer for the user, is the ability to
explicitly configure the behavior, either globally or per-resource. So
instead of having to tweak a set of variables that interact in complex
ways, you'd configure something like rule expressions,


  
  
  


So, try to restart the service 3 times, if that fails migrate the
service, if it still fails, fence the node.

(obviously the details and XML syntax are just an example)

This would then replace on-fail, migration-threshold, etc.

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] RFC: allowing soft recovery attempts before ignore/block/etc.

2016-09-21 Thread Kristoffer Grönlund
Kristoffer Grönlund <kgronl...@suse.com> writes:

> If implementing the first option, I would prefer to keep the behavior of
> migration-threshold of counting all failures, not just
> monitors. Otherwise there would be two closely related thresholds with
> subtly divergent behavior, which seems confusing indeed.

I see now that the proposed threshold would be per-operation, in which
case I completely reverse opinions and think that a per-operation
configured threshold should apply to instances of that operation
only. :)

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] Force Unmount - SLES 11 SP4

2016-09-21 Thread Kristoffer Grönlund
Jorge Fábregas <jorge.fabre...@gmail.com> writes:

> Hi,
>
> I have an issue while shutting down one of our clusters.  The unmounting
> of an OCFS2 filesystem (ocf:heartbeat:Filesystem) is triggering a node
> fence (accordingly).  This is because the script for stopping the
> application is not killing all processes using the filesystem.  Is there
> a way to "force unmount" the filesystem using pacemaker as it is in SLES
> 11 SP4?
>
> I searched for something related and found the "force_unmount" parameter
> for ocf:heartbeat:Filesystem but it only works in RHEL (apparently it's
> a newer OCF version).
>
> It appears I'll have to deal with this out of pacemaker (perhaps thru an
> init script using "fuser -k" that would run prior to openais at system
> shutdown).
>
> If anyone here using SUSE has a better idea please let me know.
>

The force_unmount option is available in more recent versions of SLES as
well, but not in SLES 11 SP4. You could try installing the upstream
version of the Filesystem agent and see if that works for you.
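If you do try the upstream agent, the parameter in question would then be usable like this (crm syntax; the device, mount point and the "safe" value are illustrative, so check the agent's metadata with 'crm ra info Filesystem'):

```
primitive fs_data ocf:heartbeat:Filesystem \
    params device="/dev/example" directory="/data" fstype="ocfs2" \
           force_unmount="safe" \
    op stop timeout=60s interval=0
```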

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] RFC: allowing soft recovery attempts before ignore/block/etc.

2016-09-21 Thread Kristoffer Grönlund
Ken Gaillot <kgail...@redhat.com> writes:

> Hi everybody,
>
> Currently, Pacemaker's on-fail property allows you to configure how the
> cluster reacts to operation failures. The default "restart" means try to
> restart on the same node, optionally moving to another node once
> migration-threshold is reached. Other possibilities are "ignore",
> "block", "stop", "fence", and "standby".
>
> Occasionally, we get requests to have something like migration-threshold
> for values besides restart. For example, try restarting the resource on
> the same node 3 times, then fence.
>
> I'd like to get your feedback on two alternative approaches we're
> considering.
>
> ###
>
> Our first proposed approach would add a new hard-fail-threshold
> operation property. If specified, the cluster would first try restarting
> the resource on the same node, before doing the on-fail handling.
>
> For example, you could configure a promote operation with
> hard-fail-threshold=3 and on-fail=fence, to fence the node after 3 failures.
>
> One point that's not settled is whether failures of *any* operation
> would count toward the 3 failures (which is how migration-threshold
> works now), or only failures of the specified operation.
>
> Currently, if a start fails (but is retried successfully), then a
> promote fails (but is retried successfully), then a monitor fails, the
> resource will move to another node if migration-threshold=3. We could
> keep that behavior with hard-fail-threshold, or only count monitor
> failures toward monitor's hard-fail-threshold. Each alternative has
> advantages and disadvantages.
>
> ###
>
> The second proposed approach would add a new on-restart-fail resource
> property.
>
> Same as now, on-fail set to anything but restart would be done
> immediately after the first failure. A new value, "ban", would
> immediately move the resource to another node. (on-fail=ban would behave
> like on-fail=restart with migration-threshold=1.)
>
> When on-fail=restart, and restarting on the same node doesn't work, the
> cluster would do the on-restart-fail handling. on-restart-fail would
> allow the same values as on-fail (minus "restart"), and would default to
> "ban".
>
> So, if you want to fence immediately after any promote failure, you
> would still configure on-fail=fence; if you want to try restarting a few
> times first, you would configure on-fail=restart and on-restart-fail=fence.
>
> This approach keeps the current threshold behavior -- failures of any
> operation count toward the threshold. We'd rename migration-threshold to
> something like hard-fail-threshold, since it would apply to more than
> just migration, but unlike the first approach, it would stay a resource
> property.
>
> ###
>
> Comparing the two approaches, the first is more flexible, but also more
> complex and potentially confusing.
>
> With either approach, we would deprecate the start-failure-is-fatal
> cluster property. start-failure-is-fatal=true would be equivalent to
> hard-fail-threshold=1 with the first approach, and on-fail=ban with the
> second approach. This would be both simpler and more useful -- it allows
> the value to be set differently per resource.

Apologies for quoting the entire mail, but I had a hard time picking out
which part was more relevant when replying.

First of all, is there a use case for when fence-after-3-failures is a
useful behavior? I seem to recall some case where someone expected that
to be the behavior and were surprised by how pacemaker works, but that
problem wouldn't be helped by adding another option for them not to know
about.

My second comment would be that to me, the first option sounds less
complex, but then I don't know the internals of pacemaker that
well. Having a special case on-fail for restarts seems inelegant,
somehow.

If implementing the first option, I would prefer to keep the behavior of
migration-threshold of counting all failures, not just
monitors. Otherwise there would be two closely related thresholds with
subtly divergent behavior, which seems confusing indeed.
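To make the comparison concrete, the two proposals might read along these lines in the CIB; the syntax is purely hypothetical, since neither attribute exists:

```
<!-- Approach 1: per-operation threshold before the on-fail action -->
<op id="db-promote" name="promote" interval="0"
    on-fail="fence" hard-fail-threshold="3"/>

<!-- Approach 2: escalation from restart to a second action -->
<op id="db-promote" name="promote" interval="0"
    on-fail="restart" on-restart-fail="fence"/>
```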

Cheers,
Kristoffer

> -- 
> Ken Gaillot <kgail...@redhat.com>
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] pacemaker doesn't failover when httpd killed

2016-09-05 Thread Kristoffer Grönlund
Nurit Vilosny <nur...@mellanox.com> writes:

> Here is the configuration for the httpd:
>
> # pcs resource show cluster_virtualIP
> Resource: cluster_virtualIP (class=ocf provider=heartbeat type=IPaddr2)
>   Attributes: ip=10.215.53.99
>   Operations: monitor interval=20s (cluster_virtualIP-monitor-interval-20s)
>   start interval=0s timeout=20s 
> (cluster_virtualIP-start-interval-0s)
>   stop interval=0s timeout=20s on-fail=restart 
> (cluster_virtualIP-stop-interval-0s)
>
> (yes - I have monitoring configured and yes I used the ocf)
>

Hi Nurit,

That's just the cluster resource for managing a virtual IP, not the
resource for managing the httpd daemon itself.

If you've only got this resource, then there is nothing that monitors
the web server. You need a cluster resource for the web server as well
(ocf:heartbeat:apache, usually).

You are missing both that resource and the constraints that ensure that
the virtual IP is active on the same node as the web server. The
Clusters from Scratch document on the clusterlabs.org website shows you
how to configure this.
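Roughly, the missing pieces in pcs terms (the resource name, config file path and status URL are assumptions; the helper prints the commands instead of executing them):

```shell
# Dry-run helper: print each pcs command rather than executing it.
run() { echo "+ $*"; }

run pcs resource create cluster_httpd ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf \
    statusurl=http://127.0.0.1/server-status op monitor interval=20s
run pcs constraint colocation add cluster_httpd with cluster_virtualIP INFINITY
run pcs constraint order start cluster_virtualIP then start cluster_httpd
```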

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com


