Hi Dejan, Andreas, Yamauchi-san
2013/4/18 renayama19661...@ybb.ne.jp
Hi Dejan,
Hi Andreas,
The shell in pacemaker v1.0.x is in maintenance mode and shipped
along with the pacemaker code. The v1.1.x shell doesn't have the
ordered and collocated meta attributes.
I sent the pull request of
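For reference, a minimal sketch of how those meta attributes are typically set
on a group in the 1.0.x crm shell (the group and resource names here are
hypothetical, not taken from the thread):

  crm configure group g-example p_ip p_fs \
      meta ordered=false collocated=true

With ordered=false the members stay colocated but lose the implied start/stop
ordering; collocated=false does the opposite.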
I'm using pacemaker 1.1.8 and I don't see stonith resources moving away
from AWOL hosts like I thought I did with 1.1.7. So I guess the first
thing to do is clear up what is supposed to happen.
If I have a single stonith resource for a cluster and it's running on
node A and then node A goes
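For context, a single cluster-wide fencing resource of the kind described
might look roughly like this in the crm shell (the agent, addresses and
credentials below are placeholders, not from the original post):

  crm configure primitive st-fence stonith:fence_ipmilan \
      params pcmk_host_list="node-a node-b" ipaddr="192.168.0.10" \
             login="admin" passwd="secret" \
      op monitor interval="60s"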
On 2013-04-30T10:55:41, Brian J. Murrell br...@interlinx.bc.ca wrote:
From what I think I know of pacemaker, pacemaker wants to be able to
stonith that AWOL node before moving any resources away from it, since
starting a resource on a new node while the state of the AWOL node is
unknown is
On 2013-04-24T11:44:57, Rainer Brestan rainer.bres...@gmx.net wrote:
Current DC: int2node2 - partition WITHOUT quorum
Version: 1.1.8-7.el6-394e906
This may not be the answer you want, since it is fairly unspecific. But
I think we noticed something similar when we pulled in 1.1.8, I don't
On 13-04-30 11:13 AM, Lars Marowsky-Bree wrote:
Pacemaker 1.1.8's stonith/fencing subsystem directly ties into the CIB,
and will complete the fencing request even if the fencing/stonith
resource is not instantiated on the node yet.
But clearly that's not happening here.
(There's a bug in
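If it helps narrow this down, the fencing subsystem can be queried directly on
a 1.1.8 node; assuming a stock stonith_admin, something like the following
shows which devices the cluster believes can fence a node and what it has
already attempted (the node name is hypothetical):

  stonith_admin --list node1      # devices able to fence node1
  stonith_admin --history node1   # recorded fencing actions for node1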
Using 1.1.8 on EL6.4, I am seeing this sort of thing:
pengine[1590]: warning: unpack_rsc_op: Processing failed op monitor for
my_resource on node1: unknown error (1)
The full log from the point of adding the resource until the errors:
Apr 30 11:46:30 node1 cibadmin[3380]: notice:
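Once whatever made the monitor fail is fixed, the failed op can usually be
cleared so the policy engine stops flagging it; a sketch using the resource
and node names from the log above:

  crm_resource --resource my_resource --cleanup --node node1
  # or equivalently with the crm shell
  crm resource cleanup my_resource node1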
Hi Mori san,
The patch for crmsh is now included in the 1.0.x repository:
https://github.com/ClusterLabs/pacemaker-1.0/commit/9227e89fb748cd52d330f5fca80d56fbd9d3efbf
It will appear in the 1.0.14 maintenance release, which is not scheduled yet,
though.
All right.
Many Thanks!
Please ask questions on the mailing lists.
On 01/05/2013, at 12:30 AM, Babu Challa babu.cha...@ipaccess.com wrote:
Hi Andrew,
Greetings,
We are using corosync/pacemaker for high availability
This is a 4-node HA cluster where each pair of nodes is configured for DB
and file
Done. Thanks!
On 30/04/2013, at 3:34 PM, nozawat noza...@gmail.com wrote:
Hi
Because there was a typo in pacemaker.spec.in, I was not able to register
the pacemaker_remote service.
-
diff --git a/pacemaker.spec.in b/pacemaker.spec.in
index
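As a rough sanity check after rebuilding the package, and assuming an
EL6-style sysvinit host (the platform is not stated in the post), the service
registration can be verified with something like:

  chkconfig --list pacemaker_remote   # should list the runlevel entries
  service pacemaker_remote start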
On 19/04/2013, at 6:36 AM, David Adair david_ad...@xyratex.com wrote:
Hello.
I have an issue with pacemaker 1.1.6.1 but believe this may still be
present in the
latest git versions and would like to know if the fix makes sense.
What I see is the following:
Setup:
- 2 node cluster
-
On 19/04/2013, at 11:05 AM, Yuichi SEINO seino.clust...@gmail.com wrote:
Hi,
2013/4/16 Andrew Beekhof and...@beekhof.net:
On 15/04/2013, at 7:42 PM, Yuichi SEINO seino.clust...@gmail.com wrote:
Hi All,
I am looking at the existing daemons in tools as a reference for making a new daemon. So, I have a question.
On 01/05/2013, at 1:28 AM, Brian J. Murrell br...@interlinx.bc.ca wrote:
On 13-04-30 11:13 AM, Lars Marowsky-Bree wrote:
Pacemaker 1.1.8's stonith/fencing subsystem directly ties into the CIB,
and will complete the fencing request even if the fencing/stonith
resource is not instantiated on
On 01/05/2013, at 2:51 AM, Brian J. Murrell br...@interlinx.bc.ca wrote:
Using 1.1.8 on EL6.4, I am seeing this sort of thing:
pengine[1590]: warning: unpack_rsc_op: Processing failed op monitor for
my_resource on node1: unknown error (1)
The full log from the point of adding the
On 20/04/2013, at 3:07 AM, Ivor Prebeg ivor.pre...@gmail.com wrote:
Guys,
I can't get rid of the following warnings:
Apr 19 19:00:37 node2 crmd: [32230]: WARN: start_subsystem: Client pengine
already running as pid 32240
Apr 19 19:00:44 node2 pengine: [32240]: WARN: unpack_status: Node
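A hedged way to investigate the "already running" warning is to check for a
leftover pengine process from an earlier crmd incarnation (the pid is simply
the one quoted in the log):

  ps -ef | grep '[p]engine'
  # if pid 32240 predates the current crmd, stopping the cluster stack cleanly
  # and confirming no pengine process survives is a reasonable next step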
On 17/04/2013, at 6:15 PM, Ivor Prebeg ivor.pre...@gmail.com wrote:
Hi Andreas, thank you for your answer.
Maybe my description was a little fuzzy, sorry for that.
What I want is the following:
* if l3_ping fails on a particular node, all services should go to standby on
that node (which
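One common way to express "push everything off a node whose connectivity check
fails" is a cloned ping resource plus a location rule on the connectivity
attribute. A minimal sketch with hypothetical names, assuming l3_ping is (or
can be replaced by) an ocf:pacemaker:ping-style resource and that the services
live in a group called g_services:

  crm configure primitive p_l3_ping ocf:pacemaker:ping \
      params host_list="192.168.1.1" multiplier="1000" \
      op monitor interval="10s"
  crm configure clone cl_l3_ping p_l3_ping
  crm configure location loc_need_ping g_services \
      rule -inf: not_defined pingd or pingd lte 0

Strictly speaking this does not put the node in standby; it only makes the
node ineligible for those resources while the ping attribute is missing or
zero.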
On 17/04/2013, at 9:54 PM, Johan Huysmans johan.huysm...@inuits.be wrote:
Hi All,
I'm trying to set up a specific configuration in our cluster; however, I'm
struggling with my configuration.
This is what I'm trying to achieve:
On both nodes of the cluster a daemon must be running
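Running the same daemon on every node is normally done with a clone; a minimal
sketch, assuming an LSB init script named mydaemon (hypothetical):

  crm configure primitive p_daemon lsb:mydaemon \
      op monitor interval="30s"
  crm configure clone cl_daemon p_daemon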
On 28/04/2013, at 9:19 PM, Oriol Mula-Valls oriol.mula-va...@ic3.cat wrote:
Hi,
I have modified the previous configuration to use sbd fencing. I have also
fixed several other issues with the configuration, and now when the node
reboots it does not seem to be able to rejoin the cluster.
I
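For comparison, a typical sbd-based fencing resource looks roughly like this
on the crm shell side (the shared-disk path is a placeholder), and listing the
slots on the device shows whether the rebooted node still has a pending
message, which can keep it from rejoining cleanly:

  crm configure primitive stonith-sbd stonith:external/sbd \
      params sbd_device="/dev/disk/by-id/shared-disk-part1"
  sbd -d /dev/disk/by-id/shared-disk-part1 list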