On 24/01/2013, at 3:36 AM, David Vossel wrote:
>
>
> - Original Message -
>> From: "Yan Gao"
>> To: pacemaker@oss.clusterlabs.org
>> Sent: Monday, January 21, 2013 11:28:40 PM
>> Subject: Re: [Pacemaker] Enable remote monitoring
>>
>> Hi,
>> Here's the code for supporting nagios plugins
On Thu, Jan 31, 2013 at 3:39 AM, E-Blokos wrote:
>
> The apply button finally came back
> after I completely removed cib.xml and put in a new one.
Maybe some required option was missing. Can you send me the old cib.xml,
if you still have it? There may still be a copy in /var/lib/pacemaker/cib/.
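Pacemaker also keeps numbered archives of earlier configurations in that
directory, so a replaced cib.xml is often recoverable. Assuming the default
location, the most recent ones can be listed with:

    ls -lt /var/lib/pacemaker/cib/cib-*.raw | head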
Rasto
--
Hi Yuichi
I've created two patches trying to fix this issue.
In these patches, lockfile() is expanded so that it records not only the
daemon pid but also the daemon's startup status ("starting" or "started").
At the same time, the logic of the controld RA is modified so that it can read
that sta
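A minimal sketch of the idea in shell (function and variable names here are
illustrative, not the actual patch):

    # record the daemon pid together with its startup status
    lockfile() {
        pidfile="$1"
        status="$2"            # "starting" or "started"
        echo "$$ $status" >"$pidfile"
    }

    # the controld RA can then read both values back and act on the status
    read pid status <"$pidfile"
    if [ "$status" != "started" ]; then
        exit "$OCF_NOT_RUNNING"    # assumes the usual OCF return-code variables
    fi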
- Original Message -
From: "Rasto Levrinc"
To: "The Pacemaker cluster resource manager"
Sent: Tuesday, January 29, 2013 4:09 AM
Subject: Re: [Pacemaker] LCMC and pacemaker 1.1.8
On Tue, Jan 29, 2013 at 9:50 AM, Lars Marowsky-Bree wrote:
On 2013-01-28T22:50:26, Rasto Levrinc wrote
Ah, very good - thank you so much!!
Hi,
Is there any known memory leak issue in corosync 1.4.1? I have a setup
here where corosync eats memory at a few kB a minute:
[root@mys002 mysql]# while [ 1 ]; do ps faxu | grep corosync | grep -v grep; sleep 60; done
root 11071 0.2 0.0 624256 8840 ?Ssl 09:14 0:02 corosy
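If it is only the resident set size you care about, a variant like the
following (assuming a single corosync process and that pidof is available)
makes the growth easier to read:

    while true; do
        # print a timestamp and corosync's RSS in kilobytes
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$(ps -o rss= -p "$(pidof corosync)")"
        sleep 60
    done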
Hi! I must be doing something stupidly wrong... every time I add a new
node to my live cluster, the first thing the cluster decides to do is
STONITH the node, despite any precautions I take (other than flat-out
disabling STONITH during the reconfiguration). Is this
normal? I'm currently run
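One property worth checking in this situation is startup-fencing, which
controls whether Pacemaker fences nodes it has never seen. Only as an
illustration, since relaxing it trades away safety guarantees:

    crm configure property startup-fencing=false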
Hi,
NFS umount fails if the server went down unexpectedly, so the
ocf:heartbeat:Filesystem resource becomes unmanaged.
Is there any possibility to "force" umount the NFS share? I already tried
-f and -l, but nothing helped.
Thanks for any hints.
--
Kind regards,
Michael Schwa
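Depending on the resource-agents version, ocf:heartbeat:Filesystem has a
force_unmount parameter that kills processes using the mount point before
unmounting on stop. A rough sketch in crm syntax (device, mount point and
timeout are placeholders):

    primitive p_nfs ocf:heartbeat:Filesystem \
        params device="server:/export" directory="/mnt/nfs" fstype="nfs" \
               options="soft,intr" force_unmount="true" \
        op stop interval="0" timeout="120s"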
Hi all,
I'm having a bit of difficulty with the way that my cluster is behaving on
failure of a resource.
The objective of my clustering setup is to provide a virtual IP, to which a
number of other services are bound. The services are bound to the VIP with
constraints to force the service to b
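For reference, the usual shape of such a setup in crm syntax looks roughly
like this (resource names and parameters are made up):

    primitive p_vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24"
    primitive p_svc ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf"
    colocation svc_with_vip inf: p_svc p_vip
    order vip_before_svc inf: p_vip p_svc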
Hello Keith,
you are right.
I have no idea why there are two firewalls installed: besides iptables
(which I checked) there is a second one, firewalld, on our systems, and it
was still active.
I noticed that something was wrong when I changed the configuration back
to unicast and
sa
On Wed, Jan 30, 2013 at 3:27 PM, Keith Ouellette wrote:
> Hans,
>
> Is the multicast port 5405 "opened" in the firewall? That has bitten me
> before.

5405 and 5404.

> Thanks,
>
> Keith
>
> From: Hans Bert [dadeda2...@yahoo.de]
> Sent: Wednesday
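For iptables, opening both corosync UDP ports on each node looks something
like this (the source network is an assumption based on this thread):

    iptables -A INPUT -p udp -s 192.168.100.0/24 -m multiport --dports 5404,5405 -j ACCEPT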
I'm experimenting with asymmetric clusters and resource location
constraints.
My cluster has some resources which have to be restricted to certain
nodes and other resources which can run on any node. Given that, an
"opt-in" cluster seems the most manageable. That is, it seems easier to
create co
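In crm syntax the opt-in variant boils down to turning off symmetric-cluster
and then whitelisting nodes per resource (the names below are illustrative):

    property symmetric-cluster="false"
    location l_db_node1 p_database 200: node1
    location l_db_node2 p_database 100: node2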
Hans,
Is the multicast port 5405 "opened" in the firewall? That has bitten me
before.
Thanks,
Keith
From: Hans Bert [dadeda2...@yahoo.de]
Sent: Wednesday, January 30, 2013 8:22 AM
To: and k; The Pacemaker cluster resource manager
Subject: Re: [Pacemake
Hi,
in the meantime I modified the configuration to check if it works with multicast:

    totem {
        version: 2
        secauth: off
        cluster_name: mcscluster
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.100.0
            mcastaddr: 239.255.1.12
            mcastport: 5405
            ttl: 1
        }
    }

but unfortunately it
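Before digging further into the totem settings, it may be worth verifying
that multicast actually works between the nodes, e.g. with omping, run on
both nodes at once (addresses taken from this thread):

    omping 192.168.100.111 192.168.100.112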
Hi,
It seems to be a problem with network traffic.
Have you tried sniffing the network traffic to make sure that UDP traffic
reaches one node from the other?
Try on server1:
tcpdump -n -i <interface> udp and src host 192.168.100.112
on server2:
tcpdump -n -i <interface> udp and src host 192.168.100.111
if there will be
Hi,
I would like to know the planned releases for Pacemaker 1.2 / 2.0.
Can you give me an approximate date, or a link where I can find the exact
information?
Regards,
Kashif Jawed Siddiqui
Hello,
we had to move from Fedora 16 to Fedora 18 and wanted to set up Corosync with
Pacemaker and PCS as the management tool.
With F16 our cluster was running pretty well, but with F18, after 5 days, we
have reached the point where we have run out of ideas about what the
problem(s) might be.
The cluster is