Re: [ClusterLabs] big trouble with a DRBD resource

2017-08-09 Thread Lentes, Bernd


- On 8 Aug 2017 at 15:36, Lars Ellenberg lars.ellenb...@linbit.com wrote:
 
> crm shell in "auto-commit"?
> never seen that.

I googled for "crmsh autocommit pacemaker" and found this:
https://github.com/ClusterLabs/crmsh/blob/master/ChangeLog
(see line 650). I don't know what that means.
> 
> You are sure you did not forget this necessary piece?
> ms WebDataClone WebData \
>     meta master-max="1" master-node-max="1" clone-max="2" \
>     clone-node-max="1" notify="true"

I didn't get that far. I followed that guide 
(http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html-single/Clusters_from_Scratch/index.html#_configure_the_cluster_for_drbd),
but didn't use the shadow CIB. The cluster is in testing, not in production, so 
I thought "nothing severe can happen". Misjudged; my error.
After I configured the primitive without the ms clone, my ClusterMon resource 
reacted promptly and sent 2 SNMP traps to my management station within 193 
seconds, which triggered 2 e-mails ...

I understand now that the cluster was missing the ms clone configuration.
But so many traps in such a short period: is that intended, or a bug?
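
For reference, a ClusterMon setup that sends SNMP traps typically looks
roughly like this in crmsh (a minimal sketch; the management-station host
is a placeholder, and --snmp-traps is only available if the pacemaker 1.1
crm_mon was built with SNMP support):

# ClusterMon wraps crm_mon; extra_options is passed through to crm_mon
primitive ClusterMon ocf:pacemaker:ClusterMon \
    params extra_options="--snmp-traps mgmt.example.com" \
    op monitor interval="10s" timeout="20s"
# run a monitor instance on every node
clone cl_ClusterMon ClusterMon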
 
> If you ever are stuck in a situation like that,
> I suggest you put your cluster in "maintenance mode",
> then fix up your configuration
> (remove the primitive, or add the ms definition),
> do cleanups for "everything",
> simulate the "maintenance mode off",
> and if that looks plausible, commit the maintenance mode off.
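
In crmsh terms, that sequence could look roughly like this (a sketch using
the resource names from above, not a tested recipe):

# put the whole cluster into maintenance mode; resources are left as-is
crm configure property maintenance-mode=true

# fix the configuration, e.g. add the missing ms definition
crm configure ms WebDataClone WebData \
    meta master-max="1" master-node-max="1" clone-max="2" \
    clone-node-max="1" notify="true"

# clear failure records; newer crmsh/crm_resource can clean up all
# resources at once, otherwise repeat per resource
crm resource cleanup WebData

# preview the transition before committing maintenance mode off
crm configure
    property maintenance-mode=false
    ptest       # inspect the simulated transition first
    commit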
> 
> 
> Also, even though that has nothing to do with your issue there:
> just because you *can* do dual-primary DRBD + GFS2 does not mean that it
> is a good idea. "Clusters from Scratch" is a proof of concept,
> NOT a "best practices" guide for setting up a web server on pacemaker
> and DRBD.
> 
> If you don't have a *very* good reason to use a cluster file
> system, then for things like web servers, mail servers, file servers,
> ... most services, actually, a "classic" file system such as XFS or
> ext4 in a failover configuration will usually easily outperform a
> two-node GFS2 setup, while being less complex at the same time.
> 

I will not use a cluster FS. DRBD is intended as the backing device for a 
VirtualDomain that resides on a plain volume, without a filesystem.
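
Such a setup, in single-primary mode, usually boils down to something like
this in crmsh (a sketch with hypothetical names vm0 and vm0.xml, not a
tested configuration):

# DRBD in master/slave mode, one primary; distinct monitor intervals
# per role are required
primitive p_drbd_vm0 ocf:linbit:drbd \
    params drbd_resource="vm0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_vm0 p_drbd_vm0 \
    meta master-max="1" master-node-max="1" clone-max="2" \
    clone-node-max="1" notify="true"
# the VM itself
primitive p_vm0 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm0.xml" \
    op monitor interval="30s" timeout="60s"
# the VM may only run where DRBD is primary, and only after promotion
colocation col_vm0_on_drbd inf: p_vm0 ms_drbd_vm0:Master
order ord_drbd_before_vm0 inf: ms_drbd_vm0:promote p_vm0:start

The colocation and order constraints ensure the VM only starts on the node
where DRBD has been promoted.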

Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671




Re: [ClusterLabs] notify action is not called for the docker bundle resources

2017-08-09 Thread Numan Siddique
On Wed, Aug 9, 2017 at 9:18 PM, Ken Gaillot  wrote:

> On Fri, 2017-07-28 at 20:24 +0530, Numan Siddique wrote:
> > Hi,
> >
> >
> > I am creating a redis bundle resource (master/slave mode). It is
> > created successfully, but I notice that the "notify" action is not
> > called by pacemaker_remoted.
> >
> >
> > Below are the steps I used to create the redis bundle resource [1].
> > The sosreport can be found here - [2]
> >
> >
> >
> > I see the same behaviour when I create the "ovndb-servers" [3]
> > bundle resource (master/slave) as well.
> > In the case of the ovndb-servers OCF resource, we rely on the notify
> > action to change the mode of the OVN DB server to active or backup.
> >
> > Can someone please help me understand why the notify action is not
> > called? Is there something wrong in my setup, or do bundle resources
> > lack support for notify actions?
>
> Clone notification support for bundles was added in commit b632ef0a in
> the current upstream master branch. It hasn't made it into an upstream
> release yet, but is expected to be in the next one.
>
> It was backported to the RHEL 7.4 GA release, which is 1.1.16-12.el7.
> The pre-release shown in your logs (1.1.16-10.el7-94ff4df) doesn't have
> it.
>
>
Thanks, Ken. I will pick up 1.1.16-12 and test it out.

Regards
Numan


Re: [ClusterLabs] notify action is not called for the docker bundle resources

2017-08-09 Thread Ken Gaillot
On Fri, 2017-07-28 at 20:24 +0530, Numan Siddique wrote:
> Hi,
> 
> 
> I am creating a redis bundle resource (master/slave mode). It is
> created successfully, but I notice that the "notify" action is not
> called by pacemaker_remoted.
> 
> 
> Below are the steps I used to create the redis bundle resource [1].
> The sosreport can be found here - [2]
> 
> 
> 
> I see the same behaviour when I create the "ovndb-servers" [3]
> bundle resource (master/slave) as well.
> In the case of the ovndb-servers OCF resource, we rely on the notify
> action to change the mode of the OVN DB server to active or backup.
>
> Can someone please help me understand why the notify action is not
> called? Is there something wrong in my setup, or do bundle resources
> lack support for notify actions?
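
For context, such a notify hook in an OCF agent is typically a small shell
function driven by the notify environment variables pacemaker sets; a
minimal sketch of the general shape, not the actual ovndb-servers code:

ovn_notify() {
    # pacemaker exports the phase ("pre"/"post") and the operation
    # ("start"/"stop"/"promote"/"demote") of the event being notified
    local type_op="${OCF_RESKEY_CRM_meta_notify_type}-${OCF_RESKEY_CRM_meta_notify_operation}"
    case "$type_op" in
        post-promote)
            # a new master exists: switch the local OVN DB server to
            # active or backup mode accordingly
            ;;
    esac
    return "$OCF_SUCCESS"   # assumes ocf-shellfuncs has been sourced
}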

Clone notification support for bundles was added in commit b632ef0a in
the current upstream master branch. It hasn't made it into an upstream
release yet, but is expected to be in the next one.

It was backported to the RHEL 7.4 GA release, which is 1.1.16-12.el7.
The pre-release shown in your logs (1.1.16-10.el7-94ff4df) doesn't have
it.
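
A quick way to check whether an installed build already has it (assuming an
RPM-based system like the one in the logs):

rpm -q pacemaker
# 1.1.16-12.el7 or later includes the backported bundle notification
# support; the 1.1.16-10.el7-94ff4df pre-release does not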

> [1] -
> 
> 
> # pcs cluster cib tmp-cib.xml
> # cp tmp-cib.xml tmp-cib.xml.deltasrc
> # pcs -f tmp-cib.xml resource bundle create tredis-bundle \
>     container docker \
>         image=192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest \
>         masters=1 network=host \
>         options="--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" \
>         replicas=3 run-command="/bin/bash /usr/local/bin/kolla_start" \
>     network control-port=3124 \
>     storage-map id=t1 source-dir=/var/lib/kolla/config_files/redis.json \
>         target-dir=/var/lib/kolla/config_files/config.json options=ro \
>     storage-map id=t2 source-dir=/var/lib/config-data/puppet-generated/redis/ \
>         target-dir=/var/lib/kolla/config_files/src options=ro \
>     storage-map id=t3 source-dir=/etc/hosts target-dir=/etc/hosts options=ro \
>     storage-map id=t4 source-dir=/etc/localtime target-dir=/etc/localtime options=ro \
>     storage-map id=t5 source-dir=/var/lib/redis target-dir=/var/lib/redis options=rw \
>     storage-map id=t6 source-dir=/var/log/redis target-dir=/var/log/redis options=rw \
>     storage-map id=t7 source-dir=/var/run/redis target-dir=/var/run/redis options=rw \
>     storage-map id=t8 source-dir=/usr/lib/ocf/ target-dir=/usr/lib/ocf/ options=rw \
>     storage-map id=t9 source-dir=/etc/pki/ca-trust/extracted \
>         target-dir=/etc/pki/ca-trust/extracted options=ro \
>     storage-map id=t10 source-dir=/etc/pki/tls/certs/ca-bundle.crt \
>         target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro \
>     storage-map id=t11 source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt \
>         target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro \
>     storage-map id=t12 source-dir=/etc/pki/tls/cert.pem \
>         target-dir=/etc/pki/tls/cert.pem options=ro \
>     storage-map id=t13 source-dir=/dev/log target-dir=/dev/log options=rw \
>     storage-map id=t14 source-dir=/etc/corosync target-dir=/etc/corosync options=rw
> 
> 
> # pcs -f tmp-cib.xml resource create tredis ocf:heartbeat:redis \
>     wait_last_known_master=true meta interleave=true notify=true ordered=true \
>     bundle tredis-bundle
> 
> 
> # pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.xml.deltasrc
> 
> 
> # pcs status
> Cluster name: tripleo_cluster
> Stack: corosync
> Current DC: overcloud-controller-2 (version 1.1.16-10.el7-94ff4df) - partition with quorum
> Last updated: Fri Jul 28 14:46:10 2017
> Last change: Fri Jul 28 13:22:53 2017 by root via cibadmin on overcloud-controller-0
> 
> 
> 9 nodes configured
> 15 resources configured
> 
> 
> Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> RemoteOFFLINE: [ rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 ]
> GuestOnline: [ tredis-bundle-0@overcloud-controller-0 tredis-bundle-1@overcloud-controller-1 tredis-bundle-2@overcloud-controller-2 ]
> 
> 
> Full list of resources:
> 
> 
>  ip-192.168.24.8 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
>  ip-10.0.0.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
>  ip-172.16.2.8 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
>  ip-172.16.2.13 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
>  ip-172.16.1.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
>  ip-172.16.3.8 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
>  Docker container set: tredis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]
>    tredis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0
>    tredis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1
>    tredis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2
> 
> 
> 
> 
> contents of /var/lib/kolla/config_files/redis.json
> 
> {"config_files": [{"dest": "/etc/libqb/force-filesystem-sockets",
> "owne

[ClusterLabs] Clusterlabs Summit 2017: Please register!

2017-08-09 Thread Kristoffer Grönlund
Hi everyone,

This mail is for attendees of the Clusterlabs Summit event in Nuremberg,
September 6-7, 2017. If it didn't arrive via the Clusterlabs mailing list
and you got it anyway even though you're not going, please let me know,
since apparently I have you on my list of possible attendees ;)

Apologies for springing this on you at such a late stage, but as we
investigate dinner options, make badges, and make sure there are enough
chairs for everyone at the event, it has become increasingly clear that
we need a better grasp of how many people are coming.

URL to sign up
--

https://www.eventbrite.com/e/clusterlabs-summit-2017-dinner-tickets-3689052

To make it as easy as possible, I created an event on Eventbrite for this
purpose. Signing up there is not a requirement! If you would rather not
use Eventbrite, it would still be great if you could send me an email
confirming your attendance instead.

Also, it would be great if you could register as quickly as possible, so
that we can make dinner reservations early enough to hopefully fit
everyone into one space.

Thank you,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org