Re: [Pacemaker] How to really deal with gateway restarts?

2010-06-14 Thread Andrew Beekhof
On Thu, Jun 10, 2010 at 9:22 PM, Maros Timko tim...@gmail.com wrote: Hi all, I know this was asked here a number of times, but with no real conclusive answer. All of the replies were to update Pacemaker and use the ping RA. Setup:  - simple symmetric 2-node DRBD-Xen cluster  - both nodes
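
The ping-RA approach the earlier replies point to would look roughly like the following crm configuration sketch; the gateway address, resource names, and scores are illustrative assumptions, not taken from the thread:

    # clone of the ping RA so every node tracks gateway connectivity
    primitive p_ping ocf:pacemaker:ping \
        params host_list="192.168.1.254" multiplier="1000" \
        op monitor interval="15s" timeout="60s"
    clone cl_ping p_ping
    # keep the service off nodes that cannot reach the gateway
    location l_need_gateway g_services \
        rule -inf: not_defined pingd or pingd lte 0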

Re: [Pacemaker] Best Practice Flush Pacemaker Config / Troubleshooting unknown error

2010-06-14 Thread Andrew Beekhof
On Fri, Jun 11, 2010 at 12:59 PM, Koch, Sebastian sebastian.k...@netzwerk.de wrote: Hi, currently I am trying to deploy my already running 2-node active/passive LAMP cluster to physical machines. I ran into several problems while importing the config and therefore I often need to fully flush the
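
For the "fully flush" step mentioned here, a minimal sketch of the usual options (use with care on a running cluster; this is an assumption about what the poster needs, not advice from the thread):

    # wipe resources and constraints from the live CIB, keeping node entries
    cibadmin --erase --force
    # or, equivalently, from the crm shell
    crm configure erase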

Re: [Pacemaker] crm node delete

2010-06-14 Thread Dejan Muhamedagic
Hi, On Fri, Jun 11, 2010 at 03:45:19PM +0100, Maros Timko wrote: Hi all, using the Heartbeat stack. I have a system with one node offline: Last updated: Fri Jun 11 13:52:40 2010 Stack: Heartbeat Current DC: vsp7.example.com (ba6d6332-71dd-465b-a030-227bcd31a25f) - partition
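
On the Heartbeat stack, removing a node that is permanently gone usually comes down to stopping Heartbeat on that node and then, from a remaining node, running something like the sketch below; the node name is a placeholder:

    # run on a surviving node once heartbeat is stopped on the node being removed
    crm node delete old-node.example.com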

Re: [Pacemaker] [Problem] CIB cannot update an attribute in a 16-node configuration

2010-06-14 Thread renayama19661014
Hi Andrew, Thank you for the comment. More likely a problem of the underlying messaging infrastructure, but I'll take a look. Perhaps the default cib operation timeouts are too low for larger clusters. I attached the log to the following Bugzilla entry.

[Pacemaker] What does that mean? (stonith-ng: crm_abort: Triggered assert at remote.c)

2010-06-14 Thread Aleksey Zholdak
Hi. I successfully used sbd stonith on the previous version of pacemaker (SLES11). When I installed SLES11 SP1 I found a new version of pacemaker. Everything was fine until I decided to check the work of sbd fencing, and here is what I see: _What_ does this mean (see last line of log)? ... Jun 14 11:29:40

Re: [Pacemaker] What does that mean? (stonith-ng: crm_abort: Triggered assert at remote.c)

2010-06-14 Thread Lars Marowsky-Bree
On 2010-06-14T11:40:51, Aleksey Zholdak alek...@zholdak.com wrote: I successfully used sbd stonith on the previous version of pacemaker (SLES11). When I installed SLES11 SP1 I found a new version of pacemaker. Everything was fine until I decided to check the work of sbd fencing, and here is what I see: _What_

Re: [Pacemaker] Pacemaker and Apache resource configuration problem

2010-06-14 Thread Gianluca Cecchi
2010/6/12 Julio Gómez ju...@openinside.es There is the error. Thanks. Marco meant uncommenting these lines in your /etc/apache2/apache2.conf: <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> and to
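
For readability, the apache2.conf fragment quoted above, laid out as it would appear in the file:

    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>

The ocf:heartbeat:apache agent relies on this status handler for its monitor operation.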

[Pacemaker] how do I avoid infinite reboot cycles by fencing just the offline node?

2010-06-14 Thread Oliver Heinz
I configured an sbd fencing device on the shared storage to prevent data corruption. It basically works, but when I pull the network plugs on one node to simulate a failure, one of the nodes is fenced (not necessarily the one that was unplugged). After the fenced node reboots it fences the other
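
One commonly suggested mitigation for this fence-on-boot loop, sketched here as an assumption rather than as the answer given later in the thread, is to keep the cluster stack from starting automatically, so a fenced node waits for an administrator instead of rejoining and fencing its peer:

    # SLES11/openais example; adjust to your distribution and stack
    chkconfig openais off
    # after investigating the failure, rejoin the cluster manually
    rcopenais start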

[Pacemaker] SBD Fencing daemon: please explain more clearly

2010-06-14 Thread Aleksey Zholdak
Hi, developers and/or happy users of sbd! Can anybody explain, more clearly than the official and (IMHO) outdated page http://www.linux-ha.org/wiki/SBD_Fencing, the following: what timeouts must I specify if my multipath needs 90 to 160 seconds to switch off the dead path... The timeouts below are
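
With multipath failover taking up to 160 seconds, the usual rule of thumb is a watchdog timeout longer than the worst-case path switch and a msgwait of roughly twice that, with the cluster's stonith-timeout larger still. A sketch with illustrative numbers and a hypothetical device path:

    # watchdog timeout 180s > 160s worst-case path failover; msgwait about twice that
    sbd -d /dev/mapper/sbd-disk -1 180 -4 360 create
    sbd -d /dev/mapper/sbd-disk dump
    # the cluster-wide stonith timeout must exceed msgwait
    crm configure property stonith-timeout=420s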

Re: [Pacemaker] Shouldn't colocation -inf: be mandatory?

2010-06-14 Thread Vadym Chepkov
On Jun 7, 2010, at 8:04 AM, Vadym Chepkov wrote: I filed bug 2435, glad to hear it's not me. Andrew closed this bug (http://developerbugs.linux-foundation.org/show_bug.cgi?id=2435) as resolved, but I respectfully disagree. I will try to explain the problem again on this list. Let's assume
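
For readers without the earlier thread, the kind of constraint under discussion is an anti-colocation like the following (resource names are placeholders):

    # rscA must never run on the node where rscB is running
    colocation c_keep_apart -inf: rscA rscB

The disagreement in the thread is about whether such a -inf: score is treated as truly mandatory by the policy engine.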

Re: [Pacemaker] how do I avoid infinite reboot cycles by fencing just the offline node?

2010-06-14 Thread Dejan Muhamedagic
Hi, On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote: I configured an sbd fencing device on the shared storage to prevent data corruption. It basically works, but when I pull the network plugs on one node to simulate a failure, one of the nodes is fenced (not necessarily the one

Re: [Pacemaker] how do I avoid infinite reboot cycles by fencing just the offline node?

2010-06-14 Thread Oliver Heinz
On Monday, 14 June 2010, at 16:43:54, Dejan Muhamedagic wrote: Hi, On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote: I configured an sbd fencing device on the shared storage to prevent data corruption. It basically works, but when I pull the network plugs on one node to

Re: [Pacemaker] how do I avoid infinite reboot cycles by fencing just the offline node?

2010-06-14 Thread Dejan Muhamedagic
Hi, On Mon, Jun 14, 2010 at 06:29:59PM +0200, Oliver Heinz wrote: On Monday, 14 June 2010, at 16:43:54, Dejan Muhamedagic wrote: Hi, On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote: I configured an sbd fencing device on the shared storage to prevent data corruption. It

Re: [Pacemaker] How to really deal with gateway restarts?

2010-06-14 Thread Maros Timko
Date: Mon, 14 Jun 2010 08:13:59 +0200 From: Andrew Beekhof and...@beekhof.net To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org Subject: Re: [Pacemaker] How to really deal with gateway restarts? Message-ID:

[Pacemaker] VirtualDomain/DRBD live migration with pacemaker...

2010-06-14 Thread Erich Weiler
Hi All, We have this interesting problem I was hoping someone could shed some light on. Basically, we have 2 servers acting as a pacemaker cluster for DRBD and VirtualDomain (KVM) resources under CentOS 5.5. As it is set up, if one node dies, the other node promotes the DRBD devices to
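
For live migration of a KVM guest on DRBD, the configuration usually combines a dual-primary DRBD master/slave set with allow-migrate on the VirtualDomain resource. A sketch with placeholder names, assuming allow-two-primaries is already set in the DRBD resource configuration:

    # DRBD promoted on both nodes so the disk is accessible during migration
    ms ms_drbd_vm p_drbd_vm \
        meta master-max="2" notify="true" interleave="true"
    # KVM guest managed by Pacemaker, with live migration enabled
    primitive p_vm ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/vm.xml" hypervisor="qemu:///system" \
        meta allow-migrate="true"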

Re: [Pacemaker] VirtualDomain/DRBD live migration with pacemaker...

2010-06-14 Thread Vadym Chepkov
On Mon, Jun 14, 2010 at 4:37 PM, Erich Weiler wei...@soe.ucsc.edu wrote: Hi All, We have this interesting problem I was hoping someone could shed some light on.  Basically, we have 2 servers acting as a pacemaker cluster for DRBD and VirtualDomain (KVM) resources under CentOS 5.5. As it is