11.11.2013 06:32, Vladislav Bogdanov wrote:
11.11.2013 02:30, Andrew Beekhof wrote:
On 5 Nov 2013, at 2:22 am, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Hi Andrew, David, all,
Just found an interesting fact; I don't know whether it is a bug or not.
When doing 'service pacemaker stop' on a node
Hi Andrew, David, all,
Just found an interesting fact; I don't know whether it is a bug or not.
When doing 'service pacemaker stop' on a node which has a drbd resource
promoted, that resource does not get promoted on another node, and the
promote operation times out.
This is related to drbd fence integration with
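[The "drbd fence integration" the truncated message refers to is normally wired up in drbd.conf as below; this is a sketch of the stock LINBIT handler configuration, with the shipped script paths, not configuration taken from this thread:]

```
# drbd.conf fragment - resource-level fencing tied into Pacemaker.
# Paths are the stock locations shipped with drbd-utils.
disk {
    fencing resource-only;
}
handlers {
    # places a -INF location constraint on the peer before promotion
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    # removes that constraint once the peer has resynced
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
}
```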
Hi,
17.10.2013 12:46, Dejan Muhamedagic wrote:
Hi,
On Thu, Oct 17, 2013 at 11:36:51AM +0200, Andreas Mock wrote:
Hi all,
probably a totally stupid question:
But how can I stop a clone resource on one
node? Is there a way with crm?
The only thing which comes to my mind is
creating a
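[For the archives: the usual answer to stopping a clone instance on one node, presumably where the truncated reply is heading, is a -INF location constraint. A crm shell sketch; the resource and node names are made up:]

```
# crm shell sketch - keep the clone off node2 only
# ("my-clone" and "node2" are placeholders, not from the thread)
crm configure location ban-my-clone-node2 my-clone -inf: node2
# delete the constraint to let the clone run there again
crm configure delete ban-my-clone-node2
```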
26.09.2013 17:55, Dermot Tynan wrote:
I've tried Googling for this, with little success. Wondering if someone
here can help.
I am operating Pacemaker in a clone configuration with multiple nodes
(usually between 4 and 6). In the event that one of the nodes fails, I
want one of the
20.09.2013 02:52, Andrew Beekhof wrote:
On 19/09/2013, at 7:45 PM, David Lang da...@lang.hm wrote:
On Thu, 19 Sep 2013, Florian Crouzat wrote:
On 19/09/2013 00:25, David Lang wrote:
I'm frequently running into a problem that shutting down
pacemaker/corosync takes a very long time
12.09.2013 09:44, Lars Marowsky-Bree wrote:
...
The most directly equivalent solution would be to number the per-node
in-flight operations similar to what migration-threshold does.
You probably meant migration-limit here, then?
migration-threshold is a different beast.
19.08.2013 07:36, Andrew Beekhof wrote:
On 14/08/2013, at 7:58 AM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
14.08.2013 00:51, Vladislav Bogdanov wrote:
...
Sure, the reason for the failure of fence_ipmilan requires investigation
too, but that is not important for the above issue
16.08.2013 16:04, Elmar Marschke wrote:
Hi all,
i'm working on a two node pacemaker cluster with dual primary drbd and
ocfs2.
Dual pri drbd and ocfs2 WITHOUT pacemaker work fine (mounting, reading,
writing, everything...).
ocfs2 uses its own clustering stack by default.
When i try to
13.08.2013 10:42, Mistina Michal wrote:
[snip]
Here are my questions:
1. Was it the right path I have taken to install Corosync 2.0 +
pacemaker 1.1.10 even if I am using RHEL? Suggestion 3 on the
aforementioned blog seemed nicer to me than option 2 (Everyone Talks to
CMAN).
It
Hi,
I just caught unexpected fencing of a node because (as I see from a very
quick analysis, but I may be wrong) a stonith resource running on it
(fence_ipmilan) failed to start and then stop.
Excerpt from logs:
Aug 13 20:57:56 v03-a pengine[2329]: notice: stage6: Scheduling Node
v03-a for
14.08.2013 00:51, Vladislav Bogdanov wrote:
...
Sure, the reason for the failure of fence_ipmilan requires investigation
too, but that is not important for the above issue, I think.
That seems to be a stonith-ng failure:
Aug 13 20:56:39 mgmt01 stonith-ng[10206]: notice: log_cib_diff
26.07.2013 03:43, Andrew Beekhof wrote:
...
Release candidates for the next Pacemaker release (1.1.11) can be
expected some time around November.
Did you completely discard the plan of releasing 2.0.0?
___
Pacemaker mailing list:
09.07.2013 07:47, Andrew Beekhof wrote:
On 04/07/2013, at 9:55 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
04.07.2013 14:50, Andrew Beekhof wrote:
On 04/07/2013, at 7:24 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Hi,
I am thinking about the safest way to expand the cluster
Hi,
I am thinking about the safest way to expand the cluster, and my observations
show that new nodes are always added in the online state
(standby=off). I would like nodes to appear in standby=on state
unless they can be fenced immediately after addition (fencing is
configured for them). Other option
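[One workaround for the behaviour described above: pre-create the node's standby attribute in the CIB before the node first starts pacemaker. The tool and flags are real pacemaker CLI; the node name is invented, and whether this works for a node the cluster has not yet seen is an assumption:]

```
# run on an existing cluster node before booting pacemaker on "newnode"
crm_attribute --type nodes --node newnode --name standby --update on
```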
04.07.2013 14:50, Andrew Beekhof wrote:
On 04/07/2013, at 7:24 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Hi,
I am thinking about the safest way to expand the cluster, and my observations
show that new nodes are always added in the online state
(standby=off). I would like nodes
02.07.2013 08:46, Andrew Beekhof wrote:
Is "How important is the ability to use redundant PDUs for fencing?" better?
Yes.
I'd only add "...redundant PDUs (or similar) for fencing...".
On 02/07/2013, at 3:30 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
02.07.2013 03:10, Andrew Beekhof
02.07.2013 03:10, Andrew Beekhof wrote:
On 02/07/2013, at 8:51 AM, Andrew Beekhof and...@beekhof.net wrote:
On 01/07/2013, at 10:19 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
01.07.2013 15:10, Andrew Beekhof wrote:
And if people start using it, then we might look at simplifying
29.06.2013 02:22, Andrew Beekhof wrote:
On 29/06/2013, at 12:22 AM, Digimer li...@alteeve.ca wrote:
On 06/28/2013 06:21 AM, Andrew Beekhof wrote:
On 28/06/2013, at 5:22 PM, Lars Marowsky-Bree l...@suse.com wrote:
On 2013-06-27T12:53:01, Digimer li...@alteeve.ca wrote:
primitive
01.07.2013 11:46, Nikola Ciprich wrote:
Hi,
I wanted to try RHEL6 based cluster with cman+pacemaker+clvmd.
I've got simple test cluster running on two virtual machines
according to the Clusters from Scratch document. Please note that,
since it's just a test cluster for playing, I do not have any
01.07.2013 12:29, Nikola Ciprich wrote:
clvmd by default blocks if there are nodes in the cluster which do not run
clvmd.
There was an attempt to solve this issue for corosync2 stack, that
exists as a patch to clvmd (posted to lvm list -
01.07.2013 14:53, Andrew Beekhof wrote:
On 01/07/2013, at 9:45 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
01.07.2013 14:14, Andrew Beekhof wrote:
...
I'm yet to be convinced that having two PDUs is helping those people in
the first place.
If it were actually useful, I suspect
01.07.2013 14:04, Nikola Ciprich wrote:
I actually did that myself, but I wouldn't recommend that way unless you
are familiar with all that. You may search through archives and look at
Andrew's blog (blog.clusterlabs.org, notably
01.07.2013 15:10, Andrew Beekhof wrote:
On 01/07/2013, at 10:06 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
01.07.2013 14:53, Andrew Beekhof wrote:
On 01/07/2013, at 9:45 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
01.07.2013 14:14, Andrew Beekhof wrote:
...
I'm yet
01.07.2013 22:51, Digimer wrote:
Hi all,
I wanted to elaborate on Andrew's Guest Fencing[1] tutorial to make
it a bit easier for newer users to follow. I also updated it for Fedora
18/19.
It's the first release, so there are certainly typos, mistakes and
what-not. Any feedback
02.07.2013 03:10, Andrew Beekhof wrote:
On 02/07/2013, at 8:51 AM, Andrew Beekhof and...@beekhof.net wrote:
On 01/07/2013, at 10:19 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
01.07.2013 15:10, Andrew Beekhof wrote:
And if people start using it, then we might look at simplifying
26.06.2013 12:07, Lars Marowsky-Bree wrote:
On 2013-06-25T12:03:22, Colin Blair cbl...@technicacorp.com wrote:
Andrew,
Does Pacemaker support GPU processes?
Pacemaker is not very CPU intensive; what would it use a GPU for?
Finding strict optimal solution for utilization-based placement
25.06.2013 09:59, Jacek Konieczny wrote:
On Tue, 25 Jun 2013 16:43:54 +1000
Andrew Beekhof and...@beekhof.net wrote:
Ok, I was just checking Pacemaker was built for the running version
of libqb.
Yes it was. corosync 2.2.0 and libqb 0.14.0 both on the build system and
on the cluster
25.06.2013 10:50, Vladislav Bogdanov wrote:
25.06.2013 09:59, Jacek Konieczny wrote:
On Tue, 25 Jun 2013 16:43:54 +1000
Andrew Beekhof and...@beekhof.net wrote:
Ok, I was just checking Pacemaker was built for the running version
of libqb.
Yes it was. corosync 2.2.0 and libqb 0.14.0 both
22.06.2013 16:37, Sven Arnold wrote:
Hi,
I am getting closer... Some updates for those who are interested.
Did you turn caching off for your VMs disks?
That's a point. Indeed, caching was not explicitly turned off, and I just
noticed that the default setting of the cache attribute of the
24.06.2013 13:55, Munehiro SATO wrote:
Hi all,
Can resource agents know the cause of a stop action?
I want to distinguish the following situations in the RA for my
application (it's a Master/Slave resource).
* stop by 'crm resource stop'
In this case, my application shuts down normally in the stop action of
24.06.2013 04:17, Andrew Beekhof wrote:
Either people have given up on testing, or rc5[1] is looking good for the
final release.
Is it going to be 1.1.10 or 1.2.0 (2.0.0)?
So just a reminder, we're particularly looking for feedback in the following
areas:
| plugin-based clusters,
21.06.2013 17:23, Sven Arnold wrote:
Thank you for replying, Vladislav!
I think the problem should be unrelated to iSCSI; you have a correct setup
(of course I did not thoroughly look through all the info, but the idea is
perfectly correct).
Thank you for confirming.
Did you turn caching off
20.06.2013 09:00, Andrew Beekhof wrote:
On 20/06/2013, at 2:52 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
20.06.2013 00:36, Andrew Beekhof wrote:
On 20/06/2013, at 6:33 AM, Doug Clow doug.c...@dashbox.com wrote:
Hello All,
I have some 2-node active-passive clusters
20.06.2013 03:12, Sven Arnold wrote:
Hi Doug,
I have some 2-node active-passive clusters that occasionally lose
Corosync connectivity. The connectivity is fixed with a reboot.
They don't have shared storage so stonith doesn't have to happen for
another node to take control of the resource.
20.06.2013 00:36, Andrew Beekhof wrote:
On 20/06/2013, at 6:33 AM, Doug Clow doug.c...@dashbox.com wrote:
Hello All,
I have some 2-node active-passive clusters that occasionally lose
Corosync connectivity. The connectivity is fixed with a reboot. They
don't have shared storage so stonith
--reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Jun 15 23:17:15 2013
On 06/15/2013 06:04 PM, Vladislav Bogdanov wrote:
15.06.2013 20:26, Digimer wrote:
Ah, I think it's a problem with the firewall rules on the host. Not sure
how to fix it though...
You probably need to open
15.06.2013 20:26, Digimer wrote:
Ah, I think it's a problem with the firewall rules on the host. Not sure
how to fix it though...
You probably need to open port 1229/tcp in filter INPUT on virtual
cluster members. That is where fence_xvm listens for a connection from
fence_virtd after it sends
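[In iptables-save form, the same format as the firewall excerpt quoted elsewhere in this listing, opening 1229/tcp as suggested would look roughly like this; the thread does not specify an interface or source network, so this sketch accepts the port from anywhere:]

```
# /etc/sysconfig/iptables fragment on the virtual cluster members
-A INPUT -p tcp --dport 1229 -j ACCEPT
```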
, sscanf() is tricky to use with space-containing
strings. I usually avoid such usage.
On 10/06/2013, at 9:46 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
This should make the warning:
decode_transition_key: Bad UUID (crm_resource.c) in sscanf result (4) for
31980:0:0:crm_resource.c
go away.
---
tools
12.06.2013 01:59, Andrew Beekhof wrote:
On 11/06/2013, at 8:57 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
11.06.2013 13:26, Andrew Beekhof wrote:
This shouldn't be needed because of:
https://github.com/beekhof/pacemaker/commit/d13dc296
Still have
This should make warning: decode_transition_key: Bad UUID (crm_resource.c) in
sscanf result (4) for 31980:0:0:crm_resource.c
go away.
---
tools/crm_resource.c |4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/tools/crm_resource.c b/tools/crm_resource.c
index
05.06.2013 21:30, paul wrote:
Hi. I have followed the Clusters from Scratch PDF and have a working
two-node active/passive cluster with ClusterIP, WebDataClone, WebFS and
WebSite working. I am using BIND DNS to direct my websites to the
cluster address. When I perform a failover, which works OK
29.05.2013 11:01, Andrew Beekhof wrote:
On 28/05/2013, at 4:30 PM, Andrew Beekhof and...@beekhof.net wrote:
On 28/05/2013, at 10:12 AM, Andrew Beekhof and...@beekhof.net wrote:
On 27/05/2013, at 5:08 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
27.05.2013 04:20, Yuichi SEINO
27.05.2013 04:20, Yuichi SEINO wrote:
Hi,
2013/5/24 Vladislav Bogdanov bub...@hoster-ok.com:
24.05.2013 06:34, Andrew Beekhof wrote:
Any help figuring out where the leaks might be would be very much
appreciated :)
One (and the only) suspect is unfortunately crmd itself.
It has private
/5/22, Andrew Beekhof and...@beekhof.net wrote:
On 17/05/2013, at 4:17 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
P.S. Andrew, is this patch ok to apply?
https://github.com/beekhof/pacemaker/commit/c7e10c6 :)
___
Pacemaker mailing list
22.05.2013 09:05, Andrew Beekhof wrote:
On 17/05/2013, at 4:17 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
P.S. Andrew, is this patch ok to apply?
https://github.com/beekhof/pacemaker/commit/c7e10c6 :)
Awesome.
Thanks.
___
Pacemaker
24.05.2013 06:34, Andrew Beekhof wrote:
Any help figuring out where the leaks might be would be very much appreciated
:)
One (and the only) suspect is unfortunately crmd itself.
Its private heap has grown from 2708 to 3680 kB.
All other relevant differences are in qb shm buffers, which are
Hi Hideo-san,
You may try the following patch (with trick below)
From 2c4418d11c491658e33c149f63e6a2f2316ef310 Mon Sep 17 00:00:00 2001
From: Vladislav Bogdanov bub...@hoster-ok.com
Date: Fri, 17 May 2013 05:58:34 +
Subject: [PATCH] Feature: PE: Unlink pengine output files before writing
the patch, in conjunction with the write_xml processing in your
repository, have to be applied before the confirmation of Vladislav's patch?
Many Thanks!
Hideo Yamauchi.
--- On Fri, 2013/5/17, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Hi Hideo-san,
You may try
15.05.2013 10:25, Andrew Beekhof wrote:
On 15/05/2013, at 3:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 08:23, Andrew Beekhof wrote:
On 15/05/2013, at 3:11 PM, renayama19661...@ybb.ne.jp wrote:
Hi Andrew,
Thank you for comments.
The guest located it to the shared
15.05.2013 11:18, Andrew Beekhof wrote:
On 15/05/2013, at 5:31 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 10:25, Andrew Beekhof wrote:
On 15/05/2013, at 3:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 08:23, Andrew Beekhof wrote:
On 15/05/2013
15.05.2013 11:18, Andrew Beekhof wrote:
On 15/05/2013, at 5:31 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 10:25, Andrew Beekhof wrote:
On 15/05/2013, at 3:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 08:23, Andrew Beekhof wrote:
On 15/05/2013
16.05.2013 02:46, Andrew Beekhof wrote:
On 15/05/2013, at 6:44 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 11:18, Andrew Beekhof wrote:
On 15/05/2013, at 5:31 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
15.05.2013 10:25, Andrew Beekhof wrote:
On 15/05/2013
15.05.2013 08:23, Andrew Beekhof wrote:
On 15/05/2013, at 3:11 PM, renayama19661...@ybb.ne.jp wrote:
Hi Andrew,
Thank you for comments.
The guest located it to the shared disk.
What is on the shared disk? The whole OS or app-specific data (i.e.
nothing pacemaker needs directly)?
13.05.2013 19:28, Lindsay Todd wrote:
Folks: On my cluster built on SL6.4, I need to deploy some VMs that
depend on other VMs:
* db0 has no dependencies
* ldap01,ldap02 require db0 to be running -- ordering constraint, but
no collocation (other than we'll use a collocation
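[The "ordering constraint, but no collocation" part maps onto plain order constraints; a crm shell sketch using the resource names from the post (the constraint ids are invented):]

```
# crm shell sketch - start ldap01/ldap02 only after db0 is running,
# but let the cluster place them on any node (no colocation)
crm configure order ldap01-after-db0 inf: db0 ldap01
crm configure order ldap02-after-db0 inf: db0 ldap02
```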
08.04.2013 04:52, Andrew Beekhof wrote:
On 07/04/2013, at 2:17 AM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Hi,
I found that it is now (cb7b3f4) possible to add meta attributes with
any names and values to any resource definition. IIRC I failed to do
that a year ago or so. So
Hi,
I found that it is now (cb7b3f4) possible to add meta attributes with
any names and values to any resource definition. IIRC I failed to do
that a year ago or so. So, is it a bug or a feature?
Best,
Vladislav
___
Pacemaker mailing list:
01.04.2013 17:28, David Vossel wrote:
- Original Message -
From: Vladislav Bogdanov bub...@hoster-ok.com
To: pacemaker@oss.clusterlabs.org
Sent: Friday, March 29, 2013 2:03:27 AM
Subject: Re: [Pacemaker] Speeding up startup after migration
29.03.2013 03:31, Andrew Beekhof
01.04.2013 20:09, David Vossel wrote:
- Original Message -
From: Vladislav Bogdanov bub...@hoster-ok.com
To: pacemaker@oss.clusterlabs.org
Sent: Monday, April 1, 2013 10:35:39 AM
Subject: Re: [Pacemaker] Speeding up startup after migration
01.04.2013 17:28, David Vossel wrote
29.03.2013 03:31, Andrew Beekhof wrote:
On Fri, Mar 29, 2013 at 4:12 AM, Benjamin Kiessling
mittages...@l.unchti.me wrote:
Hi,
we've got a small pacemaker cluster running which controls an
active/passive router. On this cluster we've got a semi-large (~30)
number of primitives which are
Dennis Jacobfeuerborn denni...@conversis.de wrote:
On 26.03.2013 06:14, Vladislav Bogdanov wrote:
26.03.2013 04:23, Dennis Jacobfeuerborn wrote:
I have now reduced the configuration further and removed LVM from
the
picture. Still the cluster fails when I set the master node to
standby.
What's
15.03.2013 13:52, Leon Fauster wrote:
Am 15.03.2013 um 10:49 schrieb Dejan Muhamedagic deja...@fastmail.fm:
Hi,
On Thu, Mar 14, 2013 at 11:48:16PM -0400, Matthew O'Connor wrote:
Hi!! Two quick questions.
1. I have a resource that many other resources depend on. I need to
modify this base
15.03.2013 18:38, Dejan Muhamedagic wrote:
[ ... ]
putting the resource in unmanaged mode, though?
No, monitors are still run in unmanaged mode.
A global maintenance-mode=true disables them.
Right. Probably the simplest option.
Or just temporarily delete the 'op monitor' lines from the affected
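[The maintenance-mode option mentioned above is a cluster property; a minimal crm shell sketch:]

```
# disable all resource management and monitoring cluster-wide
crm configure property maintenance-mode=true
# ...perform the intrusive work...
crm configure property maintenance-mode=false
```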
13.03.2013 03:46, Andrew Beekhof wrote:
On Tue, Mar 12, 2013 at 2:51 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
12.03.2013 04:44, Andrew Beekhof wrote:
On Thu, Mar 7, 2013 at 5:30 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
07.03.2013 03:37, Andrew Beekhof wrote:
On Thu, Mar
12.03.2013 04:44, Andrew Beekhof wrote:
On Thu, Mar 7, 2013 at 5:30 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
07.03.2013 03:37, Andrew Beekhof wrote:
On Thu, Mar 7, 2013 at 2:41 AM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
06.03.2013 08:35, Andrew Beekhof wrote:
So
06.03.2013 08:35, Andrew Beekhof wrote:
On Thu, Feb 28, 2013 at 5:13 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
28.02.2013 07:21, Andrew Beekhof wrote:
On Tue, Feb 26, 2013 at 7:36 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
26.02.2013 11:10, Andrew Beekhof wrote:
On Mon, Feb
07.03.2013 03:37, Andrew Beekhof wrote:
On Thu, Mar 7, 2013 at 2:41 AM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
06.03.2013 08:35, Andrew Beekhof wrote:
So basically, you want to be able to add/remove nodes from nodelist.*
in corosync.conf and have pacemaker automatically add/remove
Sorry for being annoying, but... bump.
18.02.2013 10:18, Vladislav Bogdanov wrote:
Hi Andrew, all,
I had an idea last night, that it may be worth implementing
fully-dynamic cluster resize support in pacemaker, utilizing
possibilities CMAP and votequorum provide.
The idea is to:
* Do not add
26.02.2013 11:10, Andrew Beekhof wrote:
On Mon, Feb 18, 2013 at 6:18 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
Hi Andrew, all,
I had an idea last night, that it may be worth implementing
fully-dynamic cluster resize support in pacemaker,
We already support nodes being added
22.02.2013 10:45, Andrew Beekhof wrote:
On Fri, Feb 22, 2013 at 4:55 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
04.01.2013 13:56, Andrew Beekhof wrote:
On Fri, Jan 4, 2013 at 4:27 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
04.01.2013 06:07, Andrew Beekhof wrote:
On Wed, Dec
04.01.2013 13:56, Andrew Beekhof wrote:
On Fri, Jan 4, 2013 at 4:27 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
04.01.2013 06:07, Andrew Beekhof wrote:
On Wed, Dec 19, 2012 at 7:33 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
Hi all,
I'd like to share my successful attempt
Hi Andrew, all,
I had an idea last night, that it may be worth implementing
fully-dynamic cluster resize support in pacemaker, utilizing
possibilities CMAP and votequorum provide.
The idea is to:
* Do not add nodes from nodelist to CIB if their join-count in cmap is
zero (but do not touch CIB nodes
12.02.2013 13:05, Viacheslav Biriukov wrote:
Hi,
Maybe this help
you http://lists.corosync.org/pipermail/discuss/2011-October/000101.html
That is perfectly described (at least as of 2.3.0) in the DYNAMIC
ADD/REMOVE OF UDPU NODE section of cmap_keys(8), although I have not used it yet.
Vladislav
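[From memory of the cmap_keys(8) section referenced above: a dynamic UDPU add is a pair of corosync-cmapctl sets, run on every existing node. The key names are the documented ones; the node index, nodeid and address here are invented, and the ordering (address last as the trigger) is my recollection, so verify against the man page:]

```
# add a third node at runtime; set ring0_addr last, since corosync
# treats a completed nodelist.node.N.* entry as the trigger to add it
corosync-cmapctl -s nodelist.node.2.nodeid u32 3
corosync-cmapctl -s nodelist.node.2.ring0_addr str 192.168.100.3
```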
12.02.2013 07:11, Andrew Beekhof wrote:
On Tue, Feb 12, 2013 at 3:07 PM, Andrew Beekhof and...@beekhof.net wrote:
[...]
So we'll still need the crm_report; it will have more detail on the
"Child process pengine terminated with signal 6 (pid=19357, core=128)"
part.
Signal 6 is an assertion
Hi,
I use a connection to the CIB in a daemon application, and it seems the
following patch is required against cib_native_signon_raw() to avoid
leaking memory on reconnects.
--- a/lib/cib/cib_native.c 2013-02-07 06:26:43.0 +
+++ b/lib/cib/cib_native.c 2013-02-07 12:33:32.501257239
06.02.2013 00:47, Andrew Beekhof wrote:
[...]
I thought it was supposed to be legal to do this; it's not like the
definitions are different :-/
Grumble.
The following fixes this issue for me on EL6.
diff --git a/include/crm/common/ipcs.h b/include/crm/common/ipcs.h
index 5202bbc..b7991ae 100644
12.12.2012 05:35, Andrew Beekhof wrote:
On Tue, Dec 11, 2012 at 5:49 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
11.12.2012 06:52, Vladislav Bogdanov wrote:
11.12.2012 05:12, Andrew Beekhof wrote:
On Mon, Dec 10, 2012 at 11:34 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote
10.12.2012 09:56, Vladislav Bogdanov wrote:
10.12.2012 04:29, Andrew Beekhof wrote:
On Fri, Dec 7, 2012 at 5:37 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
06.12.2012 09:04, Vladislav Bogdanov wrote:
06.12.2012 06:05, Andrew Beekhof wrote:
I wonder what the growth looks like
11.12.2012 05:12, Andrew Beekhof wrote:
On Mon, Dec 10, 2012 at 11:34 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
10.12.2012 09:56, Vladislav Bogdanov wrote:
10.12.2012 04:29, Andrew Beekhof wrote:
On Fri, Dec 7, 2012 at 5:37 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote
11.12.2012 06:52, Vladislav Bogdanov wrote:
11.12.2012 05:12, Andrew Beekhof wrote:
On Mon, Dec 10, 2012 at 11:34 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
10.12.2012 09:56, Vladislav Bogdanov wrote:
10.12.2012 04:29, Andrew Beekhof wrote:
On Fri, Dec 7, 2012 at 5:37 PM, Vladislav
10.12.2012 04:29, Andrew Beekhof wrote:
On Fri, Dec 7, 2012 at 5:37 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
06.12.2012 09:04, Vladislav Bogdanov wrote:
06.12.2012 06:05, Andrew Beekhof wrote:
I wonder what the growth looks like with the recent libqb fix.
That could
06.12.2012 09:04, Vladislav Bogdanov wrote:
06.12.2012 06:05, Andrew Beekhof wrote:
I wonder what the growth looks like with the recent libqb fix.
That could be an explanation.
Valid point. I will watch.
On an almost static cluster, the only change in memory state during 24
hours is +700kb
06.12.2012 06:05, Andrew Beekhof wrote:
I wonder what the growth looks like with the recent libqb fix.
That could be an explanation.
Valid point. I will watch.
On Sat, Sep 15, 2012 at 5:23 AM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
14.09.2012 09:54, Vladislav Bogdanov wrote
29.11.2012 09:36, Angus Salkeld wrote:
...
so, qb_array_index() fails once idx spans the uint16_t boundary (0x) and
(uint16_t)idx 0.
IMHO this naturally means some kind of integer overflow.
Well done, I'll have a closer look at it.
Patch here:
22.11.2012 14:18, Angus Salkeld wrote:
On 22/11/12 11:48 +1100, Andrew Beekhof wrote:
On Tue, Nov 20, 2012 at 5:32 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
Hi,
Running 06229e9 with qb 0.14.3, and noticed the following assert() in the
trace logging path:
#0 0x7f40451688a5 in raise
22.11.2012 14:18, Angus Salkeld wrote:
On 22/11/12 11:48 +1100, Andrew Beekhof wrote:
On Tue, Nov 20, 2012 at 5:32 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
Hi,
Running 06229e9 with qb 0.14.3, and noticed the following assert() in the
trace logging path:
#0 0x7f40451688a5 in raise
Hi,
Looking at pengine inputs (06229e9) I noticed that there are transient
last-failure-rsc_id attributes for resources that last failed long ago
(more than 6 seconds).
Example is:
node_state id=1107559690 uname=vd01-c in_ccm=true crmd=online
join=member expected=member
12.11.2012 05:42, Andrew Beekhof wrote:
On Fri, Nov 9, 2012 at 5:15 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
09.11.2012 04:48, Andrew Beekhof wrote:
...
A bit of an update
The reverse lookup functionality has turned out to cause far more
problems and confusion than
13.11.2012 01:39, Andrew Beekhof wrote:
On Tue, Nov 13, 2012 at 9:36 AM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
12.11.2012 05:42, Andrew Beekhof wrote:
On Fri, Nov 9, 2012 at 5:15 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
09.11.2012 04:48, Andrew Beekhof wrote:
...
A bit
09.11.2012 04:48, Andrew Beekhof wrote:
...
A bit of an update
The reverse lookup functionality has turned out to cause far more
problems and confusion than it was intended to solve.
So I am basically removing it. Anyone worried about that
bootstrapping case will be encouraged to use
-handling functions.
On Fri, Oct 26, 2012 at 9:57 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
26.10.2012 13:38, Vladislav Bogdanov wrote:
26.10.2012 12:43, Andrew Beekhof wrote:
...
Maybe also set it forcibly to uname if uname contains the full lexeme found
in the DNS name?
Run that past
05.11.2012 08:40, Andrew Beekhof wrote:
On Fri, Nov 2, 2012 at 6:22 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
02.11.2012 02:05, Andrew Beekhof wrote:
On Thu, Nov 1, 2012 at 5:09 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
01.11.2012 02:47, Andrew Beekhof wrote:
...
One
03.11.2012 15:26, Vladimir Elisseev wrote:
I've been able to reproduce the problem. Herewith I've attached
crm_report tarballs from both nodes. I don't know which particular
package triggers this problem, but below is the list of what
has been updated. Hopefully this helps.
I bet that
02.11.2012 02:05, Andrew Beekhof wrote:
On Thu, Nov 1, 2012 at 5:09 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
01.11.2012 02:47, Andrew Beekhof wrote:
...
One remark about that - it requires that gfs2 communicate with dlm in
kernel space - so gfs_controld is no longer
01.11.2012 02:47, Andrew Beekhof wrote:
...
One remark about that - it requires that gfs2 communicate with dlm in
kernel space - so gfs_controld is no longer required. I think
Fedora 17 is the first version with that feature. And it is definitely
not available for EL6 (centos6 which I
01.11.2012 02:53, Andrew Beekhof wrote:
On Tue, Oct 30, 2012 at 6:15 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
29.10.2012 19:51, Bernardo Cabezas Serra wrote:
Hello,
disclaimer: I posted this issue to the linux-ha list a couple of
days ago too. I'm sorry if this is not the correct
31.10.2012 12:39, Bernardo Cabezas Serra wrote:
On 30/10/12 17:29, Bernardo Cabezas Serra wrote:
Hello,
Got some errors compiling pacemaker 1.1.8 (also tested previous
versions) on top of corosync 2.1.0.
Also tested with corosync 2.0.2, and it fails the same way.
In fact,
01.11.2012 03:28, Andrew Beekhof wrote:
On Tue, Oct 30, 2012 at 5:13 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
30.10.2012 04:27, Andrew Beekhof wrote:
On reflection, I think making this configurable is going to cause more
trouble than it's worth.
Any sysconfig mismatch between
introduce one global
var (domain name), and will have one extra call to uname() and three or
less calls to string-handling functions.
On Fri, Oct 26, 2012 at 9:57 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
26.10.2012 13:38, Vladislav Bogdanov wrote:
26.10.2012 12:43, Andrew Beekhof wrote
for dlm_controld
with the pacemaker stack. The last version with support was 3.0.17.
But this was done some years ago, and as far as I have been able to
understand, things are still broken.
The most relevant info found about this issue is these threads from
Andrew Beekhof and Vladislav Bogdanov, which suggest
are these threads from
Andrew Beekhof and Vladislav Bogdanov, which suggest compiling
dlm_controld from Cluster, applying some patches. They report it worked
(with some remaining issues):
http://oss.clusterlabs.org/pipermail/pacemaker/2009-October/003064.html
http://www.mail-archive.com/pacemaker