Re: [ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Digimer
On 25/08/15 04:45 AM, Ulrich Windl wrote:
 Digimer  wrote on 24.08.2015 at 18:20 in message
> <55db4453.10...@alteeve.ca>:
> [...]
>> Using a pair of nodes with a traditional file system exported by NFS and
>> made accessible by a floating (virtual) IP address gives you redundancy
>> without incurring the complexity and performance overhead of cluster
>> locking. Also, you won't need clvmd either. The trade-off, though, is
>> that if/when the primary fails, the nfs daemon will appear to restart to
>> the users and that may require a reconnection (not sure, I use nfs
>> sparingly).
> 
> But that's a cheap trick: you say don't provide HA storage (a cluster FS), but
> use an existing one (NFS). How do you build an HA-NFS server? You need another
> cluster. Not everybody has that many nodes available.

DRBD in single-primary mode will do the job just fine. Recovery is simply a
matter of: fence -> promote to primary -> mount -> start nfs -> take the
virtual IP, done.

Only two nodes are needed. This is a common setup.
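
For illustration, that stack might look like this in crm shell syntax (a
sketch only; the resource names, device, directory and IP are placeholders,
and stonith is assumed to be configured separately):

  primitive p_drbd_r0 ocf:linbit:drbd \
      params drbd_resource=r0 \
      op monitor interval=29s role=Master \
      op monitor interval=31s role=Slave
  ms ms_drbd_r0 p_drbd_r0 \
      meta master-max=1 clone-max=2 notify=true
  primitive p_fs ocf:heartbeat:Filesystem \
      params device=/dev/drbd0 directory=/srv/nfs fstype=ext4
  primitive p_nfs ocf:heartbeat:nfsserver \
      params nfs_shared_infodir=/srv/nfs/info
  primitive p_vip ocf:heartbeat:IPaddr2 \
      params ip=192.168.100.10 cidr_netmask=24
  group g_nfs p_fs p_nfs p_vip
  colocation col_nfs_on_drbd inf: g_nfs ms_drbd_r0:Master
  order o_drbd_first inf: ms_drbd_r0:promote g_nfs:start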

>> Generally speaking, I recommend always avoiding cluster FSes unless
>> they're really required. I say this as a person who uses gfs2 in every
>> cluster I build, but I do so carefully and in limited uses. In my case,
>> gfs2 backs ISOs and XML definition files for VMs, things that change
>> rarely so cluster locking overhead is all but a non-issue, and I have to
>> have DLM for clustered LVM anyway, so I've already incurred the
>> complexity costs so hey, why not.
>>
>> -- 
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/ 
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Corosync GitHub vs. dev list

2015-08-25 Thread Ken Gaillot
On 08/25/2015 05:20 AM, Ferenc Wagner wrote:
> Hi,
> 
> Since Corosync is hosted on GitHub, I wonder if it's enough to submit
> pull requests/issues/patch comments there to get the developers'
> attention, or should I also post to develop...@clusterlabs.org?

GitHub is good for patches, and when you want to reach just the Corosync
developers. They'll get the usual GitHub notifications.

The list is good for discussion, and reaches a broader audience
(developers of other cluster components, and advanced users who write
code for their clusters).

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Cluster.conf

2015-08-25 Thread Christine Caulfield
On 25/08/15 14:14, Streeter, Michelle N wrote:
> I am using pcs, but it does nothing with the cluster.conf file. Also, I am
> currently required to use RHEL 6.6.
> 
> I have not been able to find any documentation on what is required in the
> cluster.conf file under the newer versions of pacemaker, and I have not been
> able to reduce my current version down enough to satisfy pacemaker, so would
> you please provide an example of what is required in the cluster.conf file?
> 
> "I don't think CMAN component can operate without that file (location
> possibly overridden with $COROSYNC_CLUSTER_CONFIG_FILE environment
> variable).  What distro, or at least commands to bring the cluster up
> do you use?"
> 
> We are only allowed to download from Red Hat, and I have both corosync and
> pacemaker services set to on so they start at boot up.   It does not matter 
> if I stop all three services cman, corosync, and pacemaker and then start 
> corosync first and then pacemaker, if I have a cluster.conf file in place, it 
> fails to start.
> 

We need to know more about what exactly you mean by 'failed to start'.
Actual error messages and the command you used to start the cluster
would be appreciated, along with any syslog messages.

Pacemaker on RHEL 6 requires cman. If cman is failing to start, then
that's a configuration error that we need to look into (and the
cluster.conf you posted is not enough for a valid cluster, BTW - you need
fencing in there at least!).
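
For illustration, the minimal skeleton for a pacemaker-on-cman setup looks
roughly like this (hostnames are placeholders; fence_pcmk just redirects
cman's fencing requests to pacemaker, which still needs real stonith devices
configured):

  <?xml version="1.0"?>
  <cluster config_version="1" name="mycluster">
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="node1.example.com" nodeid="1">
        <fence>
          <method name="pcmk-redirect">
            <device name="pcmk" port="node1.example.com"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="node2.example.com" nodeid="2">
        <fence>
          <method name="pcmk-redirect">
            <device name="pcmk" port="node2.example.com"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <fencedevice name="pcmk" agent="fence_pcmk"/>
    </fencedevices>
  </cluster>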

If the cluster starts 'without cman' then I can only assume that
something is very strangely wrong on your system. What command do you
use in this scenario, and what do you class as 'started'? Again,
messages and logs would be helpful in diagnosing what's going on here.
Chrissie

> This is my current cluster.conf file which just failed.
> 
> [cluster.conf XML stripped by the list archiver]
> 
> Michelle Streeter 
> ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
> The Boeing Company
> 
> Date: Mon, 24 Aug 2015 17:52:01 +
> From: "Streeter, Michelle N" 
> To: "users@clusterlabs.org" 
> Subject: [ClusterLabs] Cluster.conf
> Message-ID:
>   <9a18847a77a9a14da7e0fd240efcafc2504...@xch-phx-501.sw.nos.boeing.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> If I have a cluster.conf file in /etc/cluster, my cluster will not start.   
> Pacemaker 1.1.11, Corosync 1.4.7, cman 3.0.12. But if I do not have a 
> cluster.conf file then my cluster does start with my current configuration.   
> However, when I try to stop the cluster, it won't stop unless I have my 
> cluster.conf file in place.   How can I dump my cib to my cluster.conf file 
> so my cluster will start with the conf file in place?
> 
> Michelle Streeter
> ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
> The Boeing Company
> 
> 
> 
> --
> 
> Message: 3
> Date: Mon, 24 Aug 2015 14:00:48 -0400
> From: Digimer 
> To: Cluster Labs - All topics related to open-source clustering
>   welcomed
> Subject: Re: [ClusterLabs] Cluster.conf
> Message-ID: <55db5bd0.4010...@alteeve.ca>
> Content-Type: text/plain; charset=windows-1252
> 
> The cluster.conf is needed by cman, and in RHEL 6, pacemaker needs to
> use cman as the quorum provider. So you need a skeleton cluster.conf and
> it is different from cib.xml.
> 
> If you use pcs/pcsd to setup pacemaker on RHEL 6.7, it should configure
> everything for you, so you should be able to go straight to setting up
> pacemaker and not worry about cman/corosync directly.
> 
> digimer
> 


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Cluster.conf

2015-08-25 Thread Streeter, Michelle N
I am using pcs, but it does nothing with the cluster.conf file. Also, I am
currently required to use RHEL 6.6.

I have not been able to find any documentation on what is required in the
cluster.conf file under the newer versions of pacemaker, and I have not been
able to reduce my current version down enough to satisfy pacemaker, so would
you please provide an example of what is required in the cluster.conf file?

"I don't think CMAN component can operate without that file (location
possibly overridden with $COROSYNC_CLUSTER_CONFIG_FILE environment
variable).  What distro, or at least commands to bring the cluster up
do you use?"

We are only allowed to download from Red Hat, and I have both corosync and 
pacemaker services set to on so they start at boot up.   It does not matter if 
I stop all three services cman, corosync, and pacemaker and then start corosync 
first and then pacemaker, if I have a cluster.conf file in place, it fails to 
start.

This is my current cluster.conf file which just failed.

[cluster.conf XML stripped by the list archiver]

Michelle Streeter 
ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
The Boeing Company

Date: Mon, 24 Aug 2015 17:52:01 +
From: "Streeter, Michelle N" 
To: "users@clusterlabs.org" 
Subject: [ClusterLabs] Cluster.conf
Message-ID:
<9a18847a77a9a14da7e0fd240efcafc2504...@xch-phx-501.sw.nos.boeing.com>
Content-Type: text/plain; charset="us-ascii"

If I have a cluster.conf file in /etc/cluster, my cluster will not start.   
Pacemaker 1.1.11, Corosync 1.4.7, cman 3.0.12. But if I do not have a 
cluster.conf file then my cluster does start with my current configuration.   
However, when I try to stop the cluster, it won't stop unless I have my 
cluster.conf file in place.   How can I dump my cib to my cluster.conf file so 
my cluster will start with the conf file in place?

Michelle Streeter
ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
The Boeing Company



--

Message: 3
Date: Mon, 24 Aug 2015 14:00:48 -0400
From: Digimer 
To: Cluster Labs - All topics related to open-source clustering
welcomed
Subject: Re: [ClusterLabs] Cluster.conf
Message-ID: <55db5bd0.4010...@alteeve.ca>
Content-Type: text/plain; charset=windows-1252

The cluster.conf is needed by cman, and in RHEL 6, pacemaker needs to
use cman as the quorum provider. So you need a skeleton cluster.conf and
it is different from cib.xml.

If you use pcs/pcsd to setup pacemaker on RHEL 6.7, it should configure
everything for you, so you should be able to go straight to setting up
pacemaker and not worry about cman/corosync directly.

digimer


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Jorge Fábregas
On 08/24/2015 12:20 PM, Digimer wrote:
> Speaking from a gfs2 background, but assuming it's similar in concept to
> ocfs2...
> 
> Cluster locking comes at a performance cost. All locks need to be
> coordinated between the nodes, and that will always be slower that local
> locking only. They are also far less commonly used than options like nfs.
> 
> Using a pair of nodes with a traditional file system exported by NFS and
> made accessible by a floating (virtual) IP address gives you redundancy
> without incurring the complexity and performance overhead of cluster
> locking. Also, you won't need clvmd either. The trade-off, though, is
> that if/when the primary fails, the nfs daemon will appear to restart to
> the users and that may require a reconnection (not sure, I use nfs
> sparingly).
> 
> Generally speaking, I recommend always avoiding cluster FSes unless
> they're really required. I say this as a person who uses gfs2 in every
> cluster I build, but I do so carefully and in limited uses. In my case,
> gfs2 backs ISOs and XML definition files for VMs, things that change
> rarely so cluster locking overhead is all but a non-issue, and I have to
> have DLM for clustered LVM anyway, so I've already incurred the
> complexity costs so hey, why not.

Your point is well-taken.  Thanks for the advice Digimer!

-- 
Jorge

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Corosync GitHub vs. dev list

2015-08-25 Thread Ferenc Wagner
Hi,

Since Corosync is hosted on GitHub, I wonder if it's enough to submit
pull requests/issues/patch comments there to get the developers'
attention, or should I also post to develop...@clusterlabs.org?
-- 
Thanks,
Feri.

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Antw: Re: [ClusterLabs Developers] Resource Agent language discussion

2015-08-25 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 25.08.2015 um
08:59 in Nachricht <55dc2e6602a10001b...@gwsmtp1.uni-regensburg.de>:
 Jehan-Guillaume de Rorthais  wrote on 19.08.2015 at
> 10:59 in message <20150819105900.24f85553@erg>:
> 
> [...]
[...]
> 
> After users have set up their preference, the maintainer of the software 
> could
> add a work of obsolescence to the RA that lost in the users' vote...

s/work/word/

> [...]



___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Ulrich Windl
>>> Digimer  wrote on 24.08.2015 at 18:20 in message
<55db4453.10...@alteeve.ca>:
[...]
> Using a pair of nodes with a traditional file system exported by NFS and
> made accessible by a floating (virtual) IP address gives you redundancy
> without incurring the complexity and performance overhead of cluster
> locking. Also, you won't need clvmd either. The trade-off, though, is
> that if/when the primary fails, the nfs daemon will appear to restart to
> the users and that may require a reconnection (not sure, I use nfs
> sparingly).

But that's a cheap trick: you say don't provide HA storage (a cluster FS), but
use an existing one (NFS). How do you build an HA-NFS server? You need another
cluster. Not everybody has that many nodes available.

> 
> Generally speaking, I recommend always avoiding cluster FSes unless
> they're really required. I say this as a person who uses gfs2 in every
> cluster I build, but I do so carefully and in limited uses. In my case,
> gfs2 backs ISOs and XML definition files for VMs, things that change
> rarely so cluster locking overhead is all but a non-issue, and I have to
> have DLM for clustered LVM anyway, so I've already incurred the
> complexity costs so hey, why not.
> 
> -- 
> Digimer
> Papers and Projects: https://alteeve.ca/w/ 
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
> 





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: CRM managing ADSL connection; failure not handled

2015-08-25 Thread Ulrich Windl
Why not start with writing a real OCF RA?
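
A minimal OCF-style sketch of such an agent might look like the following
(untested; the important differences from the LSB script are the OCF return
codes: monitor must return $OCF_NOT_RUNNING when the link is down, and stop
must return success if the resource is already stopped):

  #!/bin/sh
  # Sketch of an OCF resource agent for a PPPoE link (paths are placeholders).
  : ${OCF_ROOT:=/usr/lib/ocf}
  . ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs

  pppoe_monitor() {
      # The link is considered up if the ppp0 interface exists.
      /sbin/ifconfig ppp0 >/dev/null 2>&1 && return $OCF_SUCCESS
      return $OCF_NOT_RUNNING
  }

  pppoe_start() {
      pppoe_monitor && return $OCF_SUCCESS   # already running
      /sbin/pppoe-start || return $OCF_ERR_GENERIC
      return $OCF_SUCCESS
  }

  pppoe_stop() {
      pppoe_monitor || return $OCF_SUCCESS   # already stopped counts as success
      /sbin/pppoe-stop
      pppoe_monitor && return $OCF_ERR_GENERIC   # still up: stop failed
      return $OCF_SUCCESS
  }

  case "$1" in
      start)     pppoe_start ;;
      stop)      pppoe_stop ;;
      monitor)   pppoe_monitor ;;
      meta-data) exit $OCF_SUCCESS ;;   # a real agent must print XML metadata
      *)         exit $OCF_ERR_UNIMPLEMENTED ;;
  esac
  exit $?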

>>> Tom Yates  wrote on 24.08.2015 at 11:35 in
>>> message:
> I've got a failover firewall pair where the external interface is ADSL; 
> that is, PPPoE.  I've defined the service thus:
> 
> primitive ExternalIP lsb:hb-adsl-helper \
>  op monitor interval="60s"
> 
> and in addition written a noddy script /etc/init.d/hb-adsl-helper, thus:
> 
> #!/bin/bash
> RETVAL=0
> start() {
>  /sbin/pppoe-start
> }
> stop() {
>  /sbin/pppoe-stop
> }
> case "$1" in
>start)
>  start
>  ;;
>stop)
>  stop
>  ;;
>status)
>  /sbin/ifconfig ppp0 >& /dev/null && exit 0
>  exit 1
>  ;;
>*)
>  echo $"Usage: $0 {start|stop|status}"
>  exit 3
> esac
> exit $?
> 
> The problem is that sometimes the ADSL connection falls over, as they do, 
> e.g.:
> 
> Aug 20 11:42:10 positron pppd[2469]: LCP terminated by peer
> Aug 20 11:42:10 positron pppd[2469]: Connect time 8619.4 minutes.
> Aug 20 11:42:10 positron pppd[2469]: Sent 1342528799 bytes, received 
> 164420300 bytes.
> Aug 20 11:42:13 positron pppd[2469]: Connection terminated.
> Aug 20 11:42:13 positron pppd[2469]: Modem hangup
> Aug 20 11:42:13 positron pppoe[2470]: read (asyncReadFromPPP): Session 1735: 
> Input/output error
> Aug 20 11:42:13 positron pppoe[2470]: Sent PADT
> Aug 20 11:42:13 positron pppd[2469]: Exit.
> Aug 20 11:42:13 positron pppoe-connect: PPPoE connection lost; attempting 
> re-connection.
> 
> CRMd then logs a bunch of stuff, followed by
> 
> Aug 20 11:42:18 positron lrmd: [1760]: info: rsc:ExternalIP:8: stop
> Aug 20 11:42:18 positron lrmd: [28357]: WARN: For LSB init script, no 
> additional parameters are needed.
> [...]
> Aug 20 11:42:18 positron pppoe-stop: Killing pppd
> Aug 20 11:42:18 positron pppoe-stop: Killing pppoe-connect
> Aug 20 11:42:18 positron lrmd: [1760]: WARN: Managed ExternalIP:stop process 
> 28357 exited with return code 1.
> 
> 
> At this point, the PPPoE connection is down, and stays down.  CRMd doesn't 
> fail the group which contains both internal and external interfaces over 
> to the other node, but nor does it try to restart the service.  I'm fairly 
> sure this is because I've done something boneheaded, but I can't get my 
> bone head around what it might be.
> 
> Any light anyone can shed is much appreciated.
> 
> 
> -- 
> 
>Tom Yates  -  http://www.teaparty.net 
> 





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Ulrich Windl
>>> Jorge Fábregas  wrote on 23.08.2015 at 20:13 in
message <55da0d4f.1080...@gmail.com>:
> Hi,
> 
> I'm still doing some tests on SLES 11 SP4 & I was trying to run
> "mkfs.ocfs2" against a logical volume (with all infrastructure
> ready: cLVM & DLM & o2cb) but it gives me errors while creating it.  If
> I run it against a raw device (no LVM) it works.
> 
> Then I found this from an Oracle PDF:
> 
> "It is important to note that creating OCFS2 volumes on logical volumes
> (LVM) is not supported. This is due to the fact that logical volumes are
> not cluster aware and corruption of the OCFS2 file system may occur."

Of course you need cLVM! With cLVM it definitely worked up to SP3.
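
For reference, a typical sequence on top of a running cLVM/DLM/o2cb stack
would be something like this (device, names and sizes are placeholders):

  vgcreate -c y vg_cluster /dev/sdb1       # "-c y" marks the VG as clustered
  lvcreate -L 10G -n lv_shared vg_cluster
  mkfs.ocfs2 -N 2 -L shared01 /dev/vg_cluster/lv_shared   # 2 node slots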

> 
> Can anyone please confirm if indeed OCFS2 won't work on top of LVM as of
> today?  I found no mention of this in the HAE Guide (strange).
> 
> Thanks!
> Jorge
> 




___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: SLES 11 SP4 & csync2

2015-08-25 Thread Ulrich Windl
>>> Jorge Fábregas  wrote on 22.08.2015 at 19:07 in
message <55d8ac56.4020...@gmail.com>:
> On 08/22/2015 01:38 AM, Andrei Borzenkov wrote:
>> Wrong question :) Of course you can do everything manually. The real 
>> question should be - will SUSE support installation done manually. If 
>> you do not care about support - sure, you do not need it.
> 
> That's a good point (SUSE support).  Ok, I played with the yast cluster
> module (for initial cluster configuration) and noticed that, apart from
> creating the corosync.conf file, it created:
> 
> - /etc/sysconfig/pacemaker
> - /etc/sysconfig/corosync
> 
> ...so I must remind myself that this is not just Linux with
> pacemaker/corosync & friends.  It's all that *on SUSE* so, "when in
> Rome, do as Romans do" :)
> 
> I'll set it up then, in order not to break the warranty.  The HAE guide
> also mentions placing a call to csync2 in ~/.bash_logout, which is
> nice (so you don't forget).

I wouldn't do that, nor recommend it. I would sync when I'm ready. If
multiple people log in and out as root, you may have trouble...

> 
> 
>> No, they manipulate CIB so this should be OK. But in real life there are 
>> always more files that should be kept in sync between cluster nodes, 
>> having tool to automate it is good.
> 
> Got it.  Thanks Andrei!
> 
> All the best,
> Jorge
> 




___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: SLES 11 SP4 & csync2

2015-08-25 Thread Ulrich Windl
>>> Jorge Fábregas  wrote on 22.08.2015 at 07:12 in
message <55d804a4.50...@gmail.com>:
> Hi everyone,
> 
> I'm trying out SLES 11 SP4 with the "High-Availability Extension" on two
> virtual machines.  I want to keep things simple & I have a question
> regarding the csync2 tool from SUSE.  Considering that:
> 
> - I'll have just two nodes
> - I'll be using corosync without encryption (no authkey file)
> - I won't be using DRBD
> 
> Do I really need the csync2 service? In order to bootstrap the cluster

IMHO it's handy. We always change the same node (if it's up) and sync from
there. The advantage shows when you add more nodes (or files to sync). You can
live without it, but be disciplined when making changes.

> I'll configure corosync.conf on the first node & then I'll manually
> transfer it to the 2nd node (and modify accordingly).  That's the only
> thing I can think of that I need to take care of (file-wise).  After
> that I'll use the crm shell & the Hawk web console.
> 
> I guess my question is: does the crm shell or Hawk need the csync2 tool
> to function properly? Is there anything an admin could do through them
> that might require a file to be synced afterwards?
> 
> Thanks!
> 
> -- 
> Jorge
> 




___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: Antw: Re: Antw: Re: Antw: Re: MySQL resource causes error "0_monitor_20000".

2015-08-25 Thread Ulrich Windl
>>> Kiwamu Okabe  wrote on 20.08.2015 at 18:14 in
>>> message:
> Hi,
> 
> On Wed, Aug 19, 2015 at 5:03 PM, Kiwamu Okabe  wrote:
>> The resource-agents have no ocf-tester command.
> 
> I updated pacemaker to 1.1.12-1.el6
> and ran ocf-tester, which shows the following message:
> 
> ```
> # ocf-tester -n mysql_repl -o binary=/usr/local/mysql/bin/mysqld_safe
> -o datadir=/data/mysql -o pid=/data/mysql/mysql.pid -o
> socket=/tmp/mysql.sock -o log=/data/mysql/centillion.db.err -o
> replication_user=repl -o replication_passwd=slavepass
> /usr/lib/ocf/resource.d/heartbeat/mysql
> Beginning tests for /usr/lib/ocf/resource.d/heartbeat/mysql...
> * rc=107: Demoting a start resource should not fail
> * rc=107: Demote failed
> Error signing on to the CIB service: Transport endpoint is not connected
> Aborting tests
> ```
> 
> What does the 107 mean?

The text after it is more important: there is either a problem in the RA, or in
the RA configuration. You could also try "-v" (be verbose while testing) to get
more output. As the RA really does the indicated operations, you may find more
details in the syslogs.
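
For example (same parameters as before, abbreviated here with "..."):

  ocf-tester -v -n mysql_repl -o binary=/usr/local/mysql/bin/mysqld_safe \
      -o datadir=/data/mysql ... /usr/lib/ocf/resource.d/heartbeat/mysql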

Regards,
Ulrich


> 
> Thanks,
> -- 
> Kiwamu Okabe at METASEPI DESIGN
> 





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: [ClusterLabs Developers] Resource Agent language discussion

2015-08-25 Thread Ulrich Windl
>>> Jehan-Guillaume de Rorthais  wrote on 19.08.2015 at 10:59 in
message <20150819105900.24f85553@erg>:

[...]
>> Because if both are included, then they will forevermore be answering the
>> question “which one should I use?”.
> 
> True.

I think the user base will answer this in terms of how many users get an RA to
do what they expect it to do, and I'd favor that decision over some
maintainer's decision on whether this or that is better or worse.

After users have set up their preference, the maintainer of the software could
add a work of obsolescence to the RA that lost in the users' vote...
[...]

Regards,
Ulrich


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org