05.08.2011 14:55, Ulrich Windl wrote:
Hi,
we run a cluster that has about 30 LVM VGs that are monitored every
minute with a timeout interval of 90s. Surprisingly even if the system
is in nominal state, the LVM monitor times out.
I suspect this has to do with multiple LVM commands being
speaking you are not guaranteed to have the same sd* or dm-*
name every time you connect to NAS (think about two connections to it,
which may occur in a different order).
Best,
Vladislav.
Regards,
Ulrich
Vladislav Bogdanov bub...@hoster-ok.com wrote on 20.08.2011 at 23:07 in
message
17.09.2011 14:18, Vadym Chepkov wrote:
Hi,
How does one filter iSCSI disks in lvm.conf? In our configurations
all iSCSI disks belong to virtual machines, so I don't want host
server to see LVM on those disks. Just relying on the device names is
not reliable, especially if the server has
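Not from the thread itself, but the usual shape of such a filter is to whitelist only stable, host-local paths and reject everything else; a sketch of lvm.conf, where the accept patterns are site-specific assumptions:

```
# /etc/lvm/lvm.conf (sketch; patterns are assumptions, adjust per host)
devices {
    # Accept the local boot disk and multipath maps by stable by-id
    # names, reject everything else -- including iSCSI-backed sd*
    # devices, whose names are not stable across logins.
    filter = [ "a|^/dev/disk/by-id/dm-uuid-mpath-.*|",
               "a|^/dev/sda.*|",
               "r|.*|" ]
}
```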
17.09.2011 23:11, Vadym Chepkov wrote:
On Sep 17, 2011, at 3:20 PM, Vladislav Bogdanov wrote:
17.09.2011 14:18, Vadym Chepkov wrote:
Hi,
How does one filter iSCSI disks in lvm.conf? In our configurations
all iSCSI disks belong to virtual machines, so I don't want host
server to see LVM
22.09.2011 13:58, Dejan Muhamedagic wrote:
On Thu, Sep 22, 2011 at 12:45:19PM +0200, Sascha Hagedorn wrote:
Hello Dejan,
well, actually the nodes communicate over a virtual network device since
they are virtual machines on a XEN host. No switches or hardware involved as
far as I know.
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirector pointing to 2 load
balanced real servers. We had jboss on the backend listening to the
Real IP on port 8080. Initially, we could not get the backend to reply -
we kept getting refused connections when we tried
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirector pointing to 2 load
balanced real servers. We had jboss on the backend listening to the
Real IP on port 8080. Initially, we could not get
25.09.2011 02:29, mike wrote:
On 11-09-24 02:43 PM, Vladislav Bogdanov wrote:
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirector pointing to 2 load
balanced real servers. We had jboss
25.09.2011 11:09, Vladislav Bogdanov wrote:
25.09.2011 02:29, mike wrote:
On 11-09-24 02:43 PM, Vladislav Bogdanov wrote:
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirector pointing to 2
Hi Lars,
Thank you for information.
27.09.2011 17:58, Lars Marowsky-Bree wrote:
Hi all,
it turns out that there was zero feedback about people wanting to
present, only some about travel budget being too tight to come. So we
had some discussions about whether to cancel this completely, as
06.10.2011 14:16, Vadym Chepkov wrote:
[...]
The problem is not the number of lines; pacemaker has limitations
which I don't see how to solve with your approach. How would you
configure dependencies on such a combined resource? There is no way
to express if one of two is running, unless I miss
06.10.2011 17:58, Vadym Chepkov wrote:
On Oct 6, 2011, at 10:38 AM, Vladislav Bogdanov wrote:
06.10.2011 14:16, Vadym Chepkov wrote:
[...]
The problem is not the number of lines; pacemaker has limitations
which I don't see how to solve with your approach. How would you
configure
13.10.2011 22:04, Florian Haas wrote:
[snip]
Pacemaker does support staggered fencing device priorities, where if one
means of fencing malfunctions, you can still fall back to another one.
It didn't work with 1.1 last time I checked (several months ago).
Hopefully it works now. Do you have
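For reference, crm shell can express such staggered levels with fencing_topology; a sketch with hypothetical device names, where the second device on each line is a lower-priority level tried only if the first fails:

```
# Level 1: IPMI; level 2 (fallback if IPMI malfunctions): switched PDU.
fencing_topology \
    node1: fence-ipmi-1 fence-pdu-1 \
    node2: fence-ipmi-2 fence-pdu-2
```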
28.10.2011 04:04, Nick Khamis wrote:
Hello Everyone,
I just want to make sure this is still the case before I go through
with it. I am trying to set up an
active/active using:
Corosync 1.4.2
Pacemaker 1.1.6
Cluster3
DRBD 8.3.7
OCFS2
The only reason I installed Cluster3 was for dlm
01.11.2011 11:01, Andrew Beekhof wrote:
On Tue, Nov 1, 2011 at 12:02 PM, Nick Khamis sym...@gmail.com wrote:
I included /etc/corosync/service.d/pcmk:
service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 1
}
That doesn't start pacemaker and
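For context: with ver: 1 the corosync plugin is only loaded, and corosync does not spawn pacemaker's daemons itself; pacemaker has to be started separately, roughly:

```
# ver: 0 -- corosync forks the pacemaker daemons itself
# ver: 1 -- corosync only loads the plugin; start pacemaker yourself:
service corosync start
service pacemaker start
```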
02.11.2011 12:47, Andrew Beekhof wrote:
On Wed, Nov 2, 2011 at 6:38 PM, Vladislav Bogdanov bub...@hoster-ok.com
wrote:
01.11.2011 11:01, Andrew Beekhof wrote:
On Tue, Nov 1, 2011 at 12:02 PM, Nick Khamis sym...@gmail.com wrote:
I included /etc/corosync/service.d/pcmk:
service
02.11.2011 16:36, Nick Khamis wrote:
Vladislav,
Thank you so much for your response. Just to make sure, all I need is to:
* Apply the three patches to cman. Found here
http://www.gossamer-threads.com/lists/linuxha/pacemaker/75164?do=post_view_threaded.
* Recompile CMAN
* Do I have to
03.11.2011 15:37, Nick Khamis wrote:
Hello Vlad,
Thank you so much for your response. I am experiencing the same hang
as well. Did you
have better luck with GFS2, or any other network file system?
If you see almost simultaneous kernel panic on all cluster nodes, then
you probably hit the
#
# Copyright (c) 2011 Vladislav Bogdanov bub...@hoster-ok.com
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope
Hi,
11.11.2011 10:25, Ulrich Windl wrote:
Hi!
I found some obscure problem having to do with LVM multipathing and
hot-plugged disks:
I have written some RAs that support hotplugging of SAN disks via
NPIV (N_Port ID Virtualization) and addition and removal of multipath
maps. On top of
Hi,
14.11.2011 19:29, Nick Khamis wrote:
Hello Andrew,
Thank you so much for your response. I wanted to clarify, I am running the
pacemaker stack, and experiencing errors with ocf:pacemaker:o2cb and
ocfs2_controld.pcmk. Tracking some of the o2cb processes, I wanted to say that:
You run
15.11.2011 10:25, Vladislav Bogdanov wrote:
Hi,
11.11.2011 10:25, Ulrich Windl wrote:
Hi!
I found some obscure problem having to do with LVM multipathing and
hot-plugged disks:
I have written some RAs that support hotplugging of SAN disks via
NPIV (N_Port ID Virtualization) and addition
15.11.2011 10:55, Ulrich Windl wrote:
Hi Vladislav,
filtering the devices for LVM also was on my mind, but changing the
filter cluster-wide after any configuration change seems too
error-prone for me. On the other side I couldn't find a generic rule that
will match our configuration.
Attached
/dm-name-*) to be displayed, but those are just
symlinks to the dm files.
Regards, Ulrich
Vladislav Bogdanov bub...@hoster-ok.com wrote on 15.11.2011
at 09:21 in
message 4ec22127.9070...@hoster-ok.com:
15.11.2011 10:55, Ulrich Windl wrote:
Hi Vladislav,
filtering the devices
Hi,
15.11.2011 19:29, Nick Khamis wrote:
Hello Vladislav,
Thank you so much for your response.
I may be wrong in details a little bit, Andrew may correct me then.
Actually, you are exactly right. With that being said, how can I stop
cman, quorum, fenced being started and use just the
Hi,
18.11.2011 17:02, Nick Khamis wrote:
[snip]
Vladislav, was this ocfs2 stack kernel crash you were experiencing one year
ago:
To be frank, I do not remember.
It was a year ago ;)
Starting ocfs2_controld... [ OK ]
Message from syslogd@astdrbd1 at Nov 18 08:51:59 ...
kernel:[
21.11.2011 04:18, Nick Khamis wrote:
Correction!
Some of the ocfs2_controld.pcmk errors posted earlier were due to pacemaker not
running with /service.d/pcmk. The error is actually:
http://pastebin.com/XCiuhU20.
If I can get the standard dlm working it will all come together!
You can't.
21.11.2011 01:18, Andrew Beekhof wrote:
On Sat, Nov 19, 2011 at 1:02 AM, Nick Khamis sym...@gmail.com wrote:
Hello Andrew,
Thank you so much for your response. My concern was elimination as
much of cman as
possible,
Then don't use it at all.
since the goal was to run pacemaker on top
28.11.2011 13:09, Ulrich Windl wrote:
Hi!
I posted one more implementation in
http://www.mail-archive.com/linux-ha@lists.linux-ha.org/msg19760.html as
a part of bigger code snippet.
It just uses -C shell option to create lock files (exists at least in
bash and dash).
Here is a locking example.
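A noclobber-based lock of the kind described might look like this sketch (lock path and function names are illustrative, not taken from the post):

```shell
#!/bin/sh
# Sketch of the 'set -C' (noclobber) locking technique. With noclobber
# enabled, '>' refuses to overwrite an existing file (O_EXCL under the
# hood), so creating the lock file is atomic on a local filesystem.
LOCK="${LOCK:-/tmp/example-ra.lock}"

take_lock() {
    # Subshell so 'set -C' does not leak into the caller's shell.
    ( set -C; echo "$$" > "$LOCK" ) 2>/dev/null
}

release_lock() {
    rm -f "$LOCK"
}

if take_lock; then
    echo "acquired lock"
    # ... critical section ...
    release_lock
else
    echo "already locked by PID $(cat "$LOCK" 2>/dev/null)" >&2
    exit 1
fi
```

This works in both bash and dash, which is why `-C` is a convenient portable choice; the stored PID also lets a checker detect stale locks.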
13.01.2012 13:04, Niclas Müller wrote:
I've grouped both as www-services and now it is running like I want.
Changeover on takeover is 4-6 sec. It's good, but I want to get to 1-3 sec as
far as possible. There will not be much process load, because I only made a
Pacemaker runs monitor actions at a
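The detection speed in question is driven by the per-operation monitor interval; a crm shell sketch with hypothetical resource names and parameters:

```
# A 2s monitor detects failures faster at the cost of extra load;
# intervals much below 1-2s are rarely practical.
primitive www-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.10 \
    op monitor interval=2s timeout=20s
```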
Hi Dejan,
I'm evaluating crmsh in place of pacemaker bundled crm (because of
rsc_ticket support).
With current crmsh (b4b063507de0) it is impossible (ok, very hard) to
manage cluster properties:
# crm configure
crm(live)configure# property [tab] ERROR: crmd:metadata: could not parse
meta-data:
Hi Dejan,
thank you very much for a good pointer, you saved me much time.
16.01.2012 16:20, Dejan Muhamedagic wrote:
Hi Vladislav,
On Mon, Jan 16, 2012 at 02:14:29PM +0300, Vladislav Bogdanov wrote:
Hi Dejan,
I'm evaluating crmsh in place of pacemaker bundled crm (because of
rsc_ticket
16.01.2012 20:56, Dejan Muhamedagic wrote:
On Mon, Jan 16, 2012 at 06:47:54PM +0300, Vladislav Bogdanov wrote:
Hi Dejan,
thank you very much for a good pointer, you saved me much time.
16.01.2012 16:20, Dejan Muhamedagic wrote:
Hi Vladislav,
On Mon, Jan 16, 2012 at 02:14:29PM +0300
14.03.2012 00:42, William Seligman wrote:
[snip]
These were the log messages, which show that stonith_admin did its job and
CMAN
was notified of the fencing: http://pastebin.com/jaH820Bv.
Could you please look at the output of 'dlm_tool ls' and 'dlm_tool dump'?
You probably have 'kern_stop'
15.03.2012 18:43, William Seligman wrote:
On 3/15/12 3:43 AM, Vladislav Bogdanov wrote:
14.03.2012 00:42, William Seligman wrote:
[snip]
These were the log messages, which show that stonith_admin did its job and
CMAN
was notified of the fencing: http://pastebin.com/jaH820Bv.
Could you
31.10.2012 20:55, Robinson, Eric wrote:
Okay, the two node names are ha09a and ha09b. Starting clean with all
services turned off.
This is what I get in /var/log/corosync.log on ha09a when I start corosync...
Oct 31 10:22:43 corosync [MAIN ] Corosync Cluster Engine ('1.4.3'): started
30.11.2012 00:14, Robinson, Eric wrote:
Bump... does anyone have some insight on this? Google is not turning up
anything useful.
Our newest cluster will not failover master/slave drbd resources. It works
fine manually using drbdadm from a shell prompt, but when we try it using
'crm node
02.12.2012 00:34, Robinson, Eric wrote:
Try to set 'target-role=Started' in both of them.
Okay, but how does that address the problem of error code 11 from drbdadm?
Well, you have an error promoting resources. 11 is EAGAIN, usually meaning
you did not demote the other side.
Your logs contain
16.04.2013 12:47, Andreas Mock wrote:
Hi Marek, hi all,
we just investigated this problem a little further while
looking at the sources of fence_imm.
It seems that the IMM device does a soft shutdown despite
documented differently. I can reproduce this with the
ipmitool directly and also
05.06.2013 02:04, Andrew Beekhof wrote:
On 05/06/2013, at 5:08 AM, Ferenc Wagner wf...@niif.hu wrote:
Dejan Muhamedagic deja...@fastmail.fm writes:
On Mon, Jun 03, 2013 at 06:19:06PM +0200, Ferenc Wagner wrote:
I've got a script for resource creation, which puts the new resource in
a
06.06.2013 07:31, Andrew Beekhof wrote:
On 06/06/2013, at 2:27 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
05.06.2013 02:04, Andrew Beekhof wrote:
On 05/06/2013, at 5:08 AM, Ferenc Wagner wf...@niif.hu wrote:
Dejan Muhamedagic deja...@fastmail.fm writes:
On Mon, Jun 03, 2013
06.06.2013 08:14, Andrew Beekhof wrote:
On 06/06/2013, at 2:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
06.06.2013 07:31, Andrew Beekhof wrote:
On 06/06/2013, at 2:27 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
05.06.2013 02:04, Andrew Beekhof wrote:
On 05/06/2013
06.06.2013 08:14, Andrew Beekhof wrote:
On 06/06/2013, at 2:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
06.06.2013 07:31, Andrew Beekhof wrote:
On 06/06/2013, at 2:27 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
05.06.2013 02:04, Andrew Beekhof wrote:
On 05/06/2013
06.06.2013 09:02, Andrew Beekhof wrote:
On 06/06/2013, at 3:45 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
06.06.2013 08:14, Andrew Beekhof wrote:
On 06/06/2013, at 2:50 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
06.06.2013 07:31, Andrew Beekhof wrote:
On 06/06/2013
06.06.2013 08:43, Vladislav Bogdanov wrote:
[...]
I recall that LDAP has a similar problem, which is easily worked around
by specifying two values, one the original, the second the new one.
That way you tell the LDAP server:
replace value Y in attribute X with value Z, and if the value is not Y at the
moment
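The LDAP trick being recalled is a delete-then-add of the same attribute inside one modify operation; the server applies the change list atomically and rejects it entirely if the old value is not present. An LDIF sketch with a hypothetical DN and attribute:

```
dn: cn=example,dc=test
changetype: modify
delete: description
description: Y
-
add: description
description: Z
```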
Dejan,
here is the patch to fix parsing of fencing_topology:
--- a/modules/xmlutil.py	2013-06-07 07:21:10.0 +
+++ b/modules/xmlutil.py	2013-06-13 07:51:09.704924693 +
@@ -937,7 +937,7 @@ def get_set_nodes(e,setname,create = 0):
def xml_noorder_hash(n):
return
26.06.2013 15:57, Dejan Muhamedagic wrote:
On Thu, Jun 06, 2013 at 05:19:03PM +0200, Dejan Muhamedagic wrote:
Hi,
On Thu, Jun 06, 2013 at 03:11:16PM +0300, Vladislav Bogdanov wrote:
06.06.2013 08:43, Vladislav Bogdanov wrote:
[...]
I recall that LDAP has similar problem, which is easily
26.06.2013 18:30, Dejan Muhamedagic wrote:
On Wed, Jun 26, 2013 at 06:13:33PM +0300, Vladislav Bogdanov wrote:
26.06.2013 15:57, Dejan Muhamedagic wrote:
On Thu, Jun 06, 2013 at 05:19:03PM +0200, Dejan Muhamedagic wrote:
Hi,
On Thu, Jun 06, 2013 at 03:11:16PM +0300, Vladislav Bogdanov wrote
Hi,
I'm trying to look if it is now safe to delete non-running nodes
(corosync 2.3, pacemaker HEAD, crmsh tip).
# crm node delete v02-d
WARNING: 2: crm_node bad format: 7 v02-c
WARNING: 2: crm_node bad format: 8 v02-d
WARNING: 2: crm_node bad format: 5 v02-a
WARNING: 2: crm_node bad format: 6
01.07.2013 18:29, Dejan Muhamedagic wrote:
Hi,
On Mon, Jul 01, 2013 at 05:29:31PM +0300, Vladislav Bogdanov wrote:
Hi,
I'm trying to look if it is now safe to delete non-running nodes
(corosync 2.3, pacemaker HEAD, crmsh tip).
# crm node delete v02-d
WARNING: 2: crm_node bad format: 7
28.06.2013 17:47, Dejan Muhamedagic wrote:
...
If you want to test, here's a new patch. It does work with
unrelated changes happening in the meantime. I haven't yet tested
really concurrent updates.
One thing I see immediately, is that node utilization attributes are
deleted after I do 'load
02.07.2013 12:27, Lars Marowsky-Bree wrote:
On 2013-07-02T11:05:01, Vladislav Bogdanov bub...@hoster-ok.com wrote:
One thing I see immediately, is that node utilization attributes are
deleted after I do 'load update' with empty node utilization sections.
That is probably not specific
02.07.2013 14:55, Andrew Beekhof wrote:
On 02/07/2013, at 8:14 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
02.07.2013 12:27, Lars Marowsky-Bree wrote:
On 2013-07-02T11:05:01, Vladislav Bogdanov bub...@hoster-ok.com wrote:
One thing I see immediately, is that node utilization
02.07.2013 15:13, Lars Marowsky-Bree wrote:
On 2013-07-02T13:14:48, Vladislav Bogdanov bub...@hoster-ok.com wrote:
Yes, that's exactly what you need here.
I know, but I do not expect that to be implemented soon.
crm_attribute -l reboot -z doesn't strike me as an unlikely request.
You
03.07.2013 00:16, Lars Marowsky-Bree wrote:
On 2013-07-03T00:11:53, Vladislav Bogdanov bub...@hoster-ok.com wrote:
F.e. I need to do CIB update (think of it as of full replace), because I
generate crmsh configuration with custom template-based system. And I
have some RAs which set
03.07.2013 00:20, Vladislav Bogdanov wrote:
...
But tickets currently are quite limited - they have only 4 states, so
it is impossible to put f.e. number there.
What are you trying to do with that?
That is a very convenient way to e.g. stop a dozen resources in one shot
for some maintenance
02.07.2013 20:05, Dejan Muhamedagic wrote:
On Tue, Jul 02, 2013 at 11:05:01AM +0300, Vladislav Bogdanov wrote:
28.06.2013 17:47, Dejan Muhamedagic wrote:
...
If you want to test here's a new patch. It does work with
unrelated changes happening in the meantime. I didn't test yet
really
03.07.2013 13:00, Lars Marowsky-Bree wrote:
On 2013-07-03T00:20:19, Vladislav Bogdanov bub...@hoster-ok.com wrote:
I do not edit them. In my setup I generate full crm config with
template-based framework.
And then you do a load/replace? Tough; yes, that'll clearly overwrite
Actually 'load
03.07.2013 15:43, Dejan Muhamedagic wrote:
Hi Lars,
On Wed, Jul 03, 2013 at 12:05:17PM +0200, Lars Marowsky-Bree wrote:
On 2013-07-03T10:26:09, Dejan Muhamedagic deja...@fastmail.fm wrote:
Not sure that is expected by most people.
How do you then delete attributes?
Tough call :) Ideas
03.07.2013 02:24, Andrew Beekhof wrote:
...
I don't even know what I'm thinking half the time, I'd not recommend trying
to guess :)
No fundamental objection to such a feature, but I'd be reluctant to add it
until we get an attrd that was truly atomic.
That code is mostly bandages and
03.07.2013 16:28, Vladislav Bogdanov wrote:
...
So I'd probably just hack crmsh to not touch node utilization attributes
if whole 'utilization' part is missing in the update.
Unfortunately this doesn't seem to be possible with my python
programming level (near zero)... :(
It is clear for me
04.07.2013 17:25, Dejan Muhamedagic wrote:
On Wed, Jul 03, 2013 at 04:33:20PM +0300, Vladislav Bogdanov wrote:
03.07.2013 15:43, Dejan Muhamedagic wrote:
Hi Lars,
On Wed, Jul 03, 2013 at 12:05:17PM +0200, Lars Marowsky-Bree wrote:
On 2013-07-03T10:26:09, Dejan Muhamedagic deja...@fastmail.fm
04.07.2013 17:40, Vladislav Bogdanov wrote:
...
The only question is how to remove existing attributes.
Another one is how to forcibly replace the whole section or the whole
object definition, without caring about its original content.
___
Linux-HA
04.07.2013 19:09, Dejan Muhamedagic wrote:
On Thu, Jul 04, 2013 at 05:40:07PM +0300, Vladislav Bogdanov wrote:
04.07.2013 17:25, Dejan Muhamedagic wrote:
On Wed, Jul 03, 2013 at 04:33:20PM +0300, Vladislav Bogdanov wrote:
03.07.2013 15:43, Dejan Muhamedagic wrote:
Hi Lars,
On Wed, Jul 03
05.07.2013 14:38, Dejan Muhamedagic wrote:
On Fri, Jul 05, 2013 at 09:31:07AM +0300, Vladislav Bogdanov wrote:
04.07.2013 19:09, Dejan Muhamedagic wrote:
On Thu, Jul 04, 2013 at 05:40:07PM +0300, Vladislav Bogdanov wrote:
04.07.2013 17:25, Dejan Muhamedagic wrote:
On Wed, Jul 03, 2013 at 04
05.07.2013 16:25, Vladislav Bogdanov wrote:
05.07.2013 14:38, Dejan Muhamedagic wrote:
On Fri, Jul 05, 2013 at 09:31:07AM +0300, Vladislav Bogdanov wrote:
04.07.2013 19:09, Dejan Muhamedagic wrote:
On Thu, Jul 04, 2013 at 05:40:07PM +0300, Vladislav Bogdanov wrote:
04.07.2013 17:25, Dejan
05.07.2013 19:46, Lars Marowsky-Bree wrote:
On 2013-07-05T19:06:54, Vladislav Bogdanov bub...@hoster-ok.com wrote:
params #merge param1=value1 param2=value2
meta #replace ...
utilization #keep
and so on. With default to #replace?
Even more.
If we allow such meta lexemes anywhere
03.07.2013 19:31, Dejan Muhamedagic wrote:
On Tue, Jul 02, 2013 at 07:53:52AM +0300, Vladislav Bogdanov wrote:
01.07.2013 18:29, Dejan Muhamedagic wrote:
Hi,
On Mon, Jul 01, 2013 at 05:29:31PM +0300, Vladislav Bogdanov wrote:
Hi,
I'm trying to look if it is now safe to delete non-running
10.07.2013 03:39, Andrew Beekhof wrote:
On 10/07/2013, at 1:51 AM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
03.07.2013 19:31, Dejan Muhamedagic wrote:
On Tue, Jul 02, 2013 at 07:53:52AM +0300, Vladislav Bogdanov wrote:
01.07.2013 18:29, Dejan Muhamedagic wrote:
Hi,
On Mon, Jul 01
10.07.2013 07:05, Andrew Beekhof wrote:
On 10/07/2013, at 2:04 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
10.07.2013 03:39, Andrew Beekhof wrote:
On 10/07/2013, at 1:51 AM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
03.07.2013 19:31, Dejan Muhamedagic wrote:
On Tue, Jul 02
10.07.2013 08:13, Andrew Beekhof wrote:
On 10/07/2013, at 2:15 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
10.07.2013 07:05, Andrew Beekhof wrote:
On 10/07/2013, at 2:04 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
10.07.2013 03:39, Andrew Beekhof wrote:
On 10/07/2013
10.07.2013 08:38, Andrew Beekhof wrote:
On 10/07/2013, at 3:37 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
10.07.2013 08:13, Andrew Beekhof wrote:
On 10/07/2013, at 2:15 PM, Vladislav Bogdanov bub...@hoster-ok.com wrote:
10.07.2013 07:05, Andrew Beekhof wrote:
On 10/07/2013
10.07.2013 18:14, Dejan Muhamedagic wrote:
...
[root@v02-b ~]# crm node delete v02-d
ERROR: according to crm_node, node v02-d is still active
You can now:
# crm --force node delete ...
Thanks,
Vladislav
Hi Dejan,
It seems like resource restart does not work any longer.
# crm resource restart test01-vm
INFO: ordering test01-vm to stop
Traceback (most recent call last):
  File "/usr/sbin/crm", line 44, in <module>
    main.run()
  File "/usr/lib64/python2.6/site-packages/crmsh/main.py", line 442, in
12.07.2013 12:06, Vladislav Bogdanov wrote:
Hi Dejan,
It seems like resource restart does not work any longer.
Ah, this seems to be fixed by bb39cce17f20. Sorry for the noise.
01.07.2013 17:29, Vladislav Bogdanov wrote:
Hi,
I'm trying to look if it is now safe to delete non-running nodes
(corosync 2.3, pacemaker HEAD, crmsh tip).
# crm node delete v02-d
WARNING: 2: crm_node bad format: 7 v02-c
WARNING: 2: crm_node bad format: 8 v02-d
WARNING: 2: crm_node bad
Hi,
I wanted to add new node into CIB in advance, before it is powered on
(to power it on in a standby mode while cl#5169 is not implemented).
So, I did
==
[root@vd01-a tmp]# cat u
node $id=4 vd01-d \
attributes standby=on virtualization=true
[root@vd01-a tmp]# crm configure load
15.07.2013 12:36, Dejan Muhamedagic wrote:
Hi Vladislav,
On Fri, Jul 12, 2013 at 01:48:34PM +0300, Vladislav Bogdanov wrote:
Hi,
I wanted to add new node into CIB in advance, before it is powered on
(to power it on in a standby mode while cl#5169 is not implemented).
So, I did
22.08.2013 15:08, Ferenc Wagner wrote:
Hi,
Our setup uses some cluster wide pieces of meta information. Think
access control lists for resource instances used by some utilities or
some common configuration data used by the resource agents. Currently
this info is stored in local files on
03.09.2013 07:04, Digimer wrote:
...
To solve problem 1, you can set a delay against one of the nodes. Say
you set the fence primitive for node 01 to have 'delay=15'. When node
1 goes to fence node 2, it starts immediately. When node 2 starts to
fence node 1, it sees the 15 second delay and
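In crm shell syntax the described delay might look like this (primitive names and IPMI parameters are assumptions for illustration):

```
# Only the device that fences node1 gets a static delay, so in a
# fence race node1 survives and simultaneous mutual fencing is avoided.
primitive fence-node1 stonith:fence_ipmilan \
    params pcmk_host_list=node1 ipaddr=10.0.0.1 delay=15
primitive fence-node2 stonith:fence_ipmilan \
    params pcmk_host_list=node2 ipaddr=10.0.0.2
```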
03.09.2013 21:45, Digimer wrote:
On 03/09/13 14:14, Vladislav Bogdanov wrote:
03.09.2013 07:04, Digimer wrote:
...
To solve problem 1, you can set a delay against one of the nodes. Say
you set the fence primitive for node 01 to have 'delay=15'. When node
1 goes to fence node 2, it starts
03.09.2013 21:36, Lars Marowsky-Bree wrote:
On 2013-09-03T21:14:02, Vladislav Bogdanov bub...@hoster-ok.com wrote:
To solve problem 2, simply disable corosync/pacemaker from starting on
boot. This way, the fenced node will be (hopefully) back up and running,
so you can ssh into it and look
04.09.2013 07:16, Andrew Beekhof wrote:
On 03/09/2013, at 9:20 PM, Moullé Alain alain.mou...@bull.net
wrote:
Hello,
A simple question : is there a maximum number of resources (let's
say simple primitives) that Pacemaker can support at first at
configuration of resources via crm, and
Hi Dejan, all,
Didn't find a way to configure rule-controlled resource options
(http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_using_rules_to_control_resource_options.html)
in the crmsh manual.
Is it implemented and, if yes, how do I use it?
Thanks,
Vladislav
12.09.2013 11:57, Dejan Muhamedagic wrote:
Hi Vladislav,
On Wed, Sep 11, 2013 at 02:06:12PM +0300, Vladislav Bogdanov wrote:
Hi Dejan, all,
Didn't find the way to configure rule-controlled resource options
(http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained
14.09.2013 07:28, Tom Parker wrote:
Hello All
Does anyone know of a good way to prevent pacemaker from declaring a vm
dead if it's rebooted from inside the vm. It seems to be detecting the
vm as stopped for the brief moment between shutting down and starting
up. Often this causes the
17.09.2013 20:51, Tom Parker wrote:
On 09/17/2013 01:13 AM, Vladislav Bogdanov wrote:
14.09.2013 07:28, Tom Parker wrote:
Hello All
Does anyone know of a good way to prevent pacemaker from declaring a vm
dead if it's rebooted from inside the vm. It seems to be detecting the
vm as stopped
13.11.2013 06:10, Jefferson Ogata wrote:
Here's a problem i don't understand, and i'd like a solution to if
possible, or at least i'd like to understand why it's a problem, because
i'm clearly not getting something.
I have an iSCSI target cluster using CentOS 6.4 with stock
13.11.2013 04:46, Jefferson Ogata wrote:
...
In practice i ran into failover problems under load almost immediately.
Under load, when i would initiate a failover, there was a race
condition: the iSCSILogicalUnit RA will take down the LUNs one at a
time, waiting for each connection to
19.11.2013 13:48, Lars Ellenberg wrote:
On Wed, Nov 13, 2013 at 09:02:47AM +0300, Vladislav Bogdanov wrote:
13.11.2013 04:46, Jefferson Ogata wrote:
...
In practice i ran into failover problems under load almost immediately.
Under load, when i would initiate a failover, there was a race
Hi Kristoffer,
do you plan to add support for the recently added remote node attributes
feature to crmsh?
Currently (at least as of 2.1, and I do not see anything relevant in the
git log) crmsh fails to update the CIB if it contains node attributes for a
remote (bare-metal) node, complaining that
20.10.2014 18:23, Dejan Muhamedagic wrote:
Hi Vladislav,
Hi Dejan!
On Mon, Oct 20, 2014 at 09:03:40AM +0300, Vladislav Bogdanov wrote:
Hi Kristoffer,
do you plan to add support for the recently added remote node attributes
feature to crmsh?
Currently (at least as of 2.1, and I do not see
22.10.2014 12:02, Dejan Muhamedagic wrote:
On Mon, Oct 20, 2014 at 07:12:23PM +0300, Vladislav Bogdanov wrote:
20.10.2014 18:23, Dejan Muhamedagic wrote:
Hi Vladislav,
Hi Dejan!
On Mon, Oct 20, 2014 at 09:03:40AM +0300, Vladislav Bogdanov wrote:
Hi Kristoffer,
do you plan to add support
28.10.2014 21:15, David Vossel wrote:
- Original Message -
22.10.2014 12:02, Dejan Muhamedagic wrote:
On Mon, Oct 20, 2014 at 07:12:23PM +0300, Vladislav Bogdanov wrote:
20.10.2014 18:23, Dejan Muhamedagic wrote:
Hi Vladislav,
Hi Dejan!
On Mon, Oct 20, 2014 at 09:03:40AM
29.10.2014 12:49, Dejan Muhamedagic wrote:
...
On the other hand, this feature is relatively new (has it ever
been released?) so it is much simpler to fix that breakage in pacemaker.
It's not pacemaker, it's just a resource agent. Which makes it
much easier to fix, just by introducing one
29.10.2014 13:55, Dejan Muhamedagic wrote:
On Wed, Oct 29, 2014 at 01:03:50PM +0300, Vladislav Bogdanov wrote:
29.10.2014 12:49, Dejan Muhamedagic wrote:
...
On the other hand, this feature is relatively new (has it ever
been released?) so it is much simpler to fix that breakage
Hi Kristoffer, Dejan, all.
Maybe it is time to add the '-j' param to 'crm_mon -1' by default (if
supported)?
Best,
Vladislav
Hi Kristoffer, Dejan.
Do you have plans to add support to crmsh for 'resource-discovery'
location constraint option (added to pacemaker by David in pull requests
#589 and #605) as well as for the 'pacemaker-next' schema (this one
seems to be trivial)?
Best,
Vladislav
12.11.2014 23:32, Kristoffer Grönlund wrote:
Vladislav Bogdanov bub...@hoster-ok.com writes:
Hi Kristoffer, Dejan.
Do you have plans to add support to crmsh for 'resource-discovery'
location constraint option (added to pacemaker by David in pull requests
#589 and #605) as well
13.11.2014 00:12, Kristoffer Grönlund wrote:
Vladislav Bogdanov bub...@hoster-ok.com writes:
I haven't had time to look closer at resource-discovery, but yes, I
certainly intend to support every option that makes it into a released
version of pacemaker at least.
Great. Can't wait