Bingo, I upgraded to SLES 11 HAE SP3 and it provides the behaviour I expect.
hatest01:~ # rpm -q pacemaker
pacemaker-1.1.9-0.19.102
2013/8/7 Andrew Beekhof
>
> On 06/08/2013, at 1:06 PM, Mia Lueng wrote:
>
> > colocation ftp_www_balance -inf: apache vsftp
> > order apache_mount inf: nfs_client_clone apache
I believe this patch should help:
https://github.com/beekhof/pacemaker/commit/7a0a6f8
Can you give it a try?
On 07/08/2013, at 12:28 PM, Andrew Beekhof wrote:
>
> On 02/08/2013, at 5:56 PM, Johan Huysmans wrote:
>
>> Hi Andrew,
>>
>> Thanks for the fix.
>> I tried it on my setup and now when a cloned resource fails the group will
>> move to the other node as expected.
On 02/08/2013, at 5:56 PM, Johan Huysmans wrote:
> Hi Andrew,
>
> Thanks for the fix.
> I tried it on my setup and now when a cloned resource fails the group will
> move to the other node as expected.
>
> However I noticed something strange.
> If a cloned resource is failing I see this in the
Many thanks.
Fixed in:
https://github.com/beekhof/pacemaker/commit/fab0978
Apparently there was no regression test covering this (including things
colocated with the group), but there is now:
https://github.com/beekhof/pacemaker/commit/d2be466
So you can be sure it won't break again.
On 02/08/2
On 06/08/2013, at 1:06 PM, Mia Lueng wrote:
> colocation ftp_www_balance -inf: apache vsftp
> order apache_mount inf: nfs_client_clone apache
> order nfs_cs_order 0: nfs_server nfs_client_clone
> order vsftp_mount inf: nfs_client_clone vsftp
>
>
> nfsserver -> nfsclient -> www
>                         \
>
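For reference, the same constraint set expressed as one-shot crm shell commands (a sketch only; the apache, vsftp, nfs_server and nfs_client_clone primitives are assumed to already exist and are not shown in the post):

    # colocation/order constraints from the post, entered via crmsh
    crm configure colocation ftp_www_balance -inf: apache vsftp
    crm configure order nfs_cs_order 0: nfs_server nfs_client_clone
    crm configure order apache_mount inf: nfs_client_clone apache
    crm configure order vsftp_mount inf: nfs_client_clone vsftp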
I'm trying to use heartbeat with ha_logd, but I can't find any
documentation for the proper way to handle log file rotation when using
ha_logd. The docs at http://linux-ha.org/wiki/Ha.cf state:
> If the logging daemon is used, all log messages will be sent through
> IPC to the logging daemon, whic
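One common workaround (a hedged sketch, not something the docs prescribe; the /var/log/ha-log path is an assumption and should match your logd configuration) is to rotate with logrotate's copytruncate option, so ha_logd never needs to reopen its files:

    # /etc/logrotate.d/ha-log (sketch)
    /var/log/ha-log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate    # truncate in place; logd keeps writing to the same descriptor
    }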
At http://clusterlabs.org/wiki/Initial_Configuration, the sample ha.cf
file has 'use_logd' set to 'false', but according to
http://linux-ha.org/wiki/Ha.cf, ha_logd should be used and all other
logging methods are deprecated.
Also, the page references the link http://www.linux-ha.org/ha.cf, which
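For what it's worth, a minimal sketch of the logd-based setup the wiki appears to recommend (file paths and the syslog facility are assumptions, not taken from the sample file):

    # /etc/ha.d/ha.cf - hand all logging to ha_logd
    use_logd yes

    # /etc/logd.cf - logging daemon settings (sketch)
    logfacility local0
    logfile /var/log/ha-log
    debugfile /var/log/ha-debug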
In my case this does not work - read my original post. So I wonder if there
is a pacemaker bug (version 1.1.9-2db99f1). Killing pengine and stonithd on
the node which is supposed to "shoot" seems to resolve the problem, though
this is not a solution of course.
I also tested two separate stonith re
Hi,
I'd like to ask whether anybody has hit a similar bug:
On one of the test two-node clusters, a node suddenly hung, and cib started
spawning the following messages:
error: qb_ipcs_us_connection_acceptor: Could not accept client connection: Too
many open files (24)
In lsof, I see over a thousand open
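Before filing a bug it may help to confirm that cib really is the process leaking descriptors (a sketch; the process name cib and the paths are assumptions for a Pacemaker 1.1.x install):

    # count descriptors currently held by the cib daemon
    lsof -p "$(pidof cib)" | wc -l

    # compare against the per-process limit
    grep "open files" /proc/"$(pidof cib)"/limits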
Hi,
On Tue, Aug 06, 2013 at 11:22:56AM +0200, Andreas Mock wrote:
> Hi Dejan,
>
> can you explain how the SDB agent works, when this resource
> is running on exactly that node which has to be stonithed?
It's actually in the hands of the resource manager to take care
of that. Pacemaker is goi
Hi Dejan,
can you explain how the SDB agent works, when this resource
is running on exactly that node which has to be stonithed?
Thank you in advance.
Best regards
Andreas Mock
-Original Message-
From: Dejan Muhamedagic [mailto:deja...@fastmail.fm]
Sent: Tuesday, 6 August
Hi,
On Thu, Aug 01, 2013 at 07:58:55PM +0200, Jan Christian Kaldestad wrote:
> Thanks for the explanation. But I'm quite confused about the SBD stonith
> resource configuration, as the SBD fencing wiki clearly states:
> "The sbd agent does not need to and should not be cloned. If all of your
> nod
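For context, a hedged sketch of the single, non-cloned sbd stonith resource the wiki describes (on recent sbd versions the shared-disk device is taken from SBD_DEVICE in /etc/sysconfig/sbd; older agents took an explicit sbd_device parameter instead):

    # one stonith resource for the whole cluster - do not clone it
    crm configure primitive stonith-sbd stonith:external/sbd
    crm configure property stonith-enabled="true"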