) and before I start samba and set the
virtual IP.
Hope you can help.
Thanks
Michael Smith
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Michael Schwartzkopff wrote:
Imagine the consequences for a cloud cluster consisting of 30 nodes hosting
100 virtual machines. All machines would be migrated to the smallest possible
number of physical machines during the night, when there is no work to do. The next
morning, when work starts, virtual
Bart Coninckx wrote:
By the way: things seem better when I change the monitor timeout to 30
seconds instead of 10 seconds. Very strange though, because the resource
agent basically does an xm list --long while monitoring, which takes less
than half a second in a console.
I think sometimes
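For reference, the timeout Bart describes can be raised in the resource definition itself. This is only a hedged sketch in crm shell syntax; the resource name, xmfile path, and interval are placeholders, not taken from his configuration:

```
# Sketch only: raise the monitor timeout for a Xen resource.
# "vm1" and the xmfile path are hypothetical.
primitive vm1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm1.cfg" \
    op monitor interval="60s" timeout="30s"
```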
On Wed, 22 Sep 2010, Andrew Beekhof wrote:
On Tue, Sep 21, 2010 at 3:28 PM, Vadym Chepkov vchep...@gmail.com wrote:
On Tue, Sep 21, 2010 at 9:14 AM, Dan Frincu dfri...@streamwide.ro wrote:
However I don't know of any automatic method to clear the failcount.
in pacemaker 1.0 nothing will
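In the absence of an automatic method, the failcount can be cleared by hand from the crm shell. A hedged sketch; the resource and node names are placeholders:

```
# Placeholder names: replace p_resource / node01 with your own.
crm resource failcount p_resource delete node01   # clear the failcount attribute
crm resource cleanup p_resource                   # also clears errors and failcounts
```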
Phil Armstrong wrote:
Hi,
This is my first post to this list, so if I'm doing this wrong, please be
patient. I am using pacemaker-1.1.2-0.2.1 on SLES11 SP1. Thanks in
advance for any help anyone can give me.
Sep 21 10:35:45 pry crmd: [5601]: info: abort_transition_graph:
need_abort:59 -
Andrew Beekhof wrote:
I spoke to Steve, and the only thing he could come up with was that
the group might not be correct.
When the cluster is in this state, please run:
ps x -o pid,euser,ruser,egroup,rgroup,command
And compare it to the normal output.
Also, confirm that there is only one
On Mon, 6 Sep 2010, Andrew Beekhof wrote:
Is /dev/shm full (or not mounted) by any chance?
No - I tried clearing that out, too.
And corosync is actually running?
Yes, it's logging [IPC ] Invalid IPC credentials. when cib tries to
connect.
Mike
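A few quick checks that cover both questions above - whether /dev/shm is healthy and whether corosync is up. This is only a diagnostic sketch; it assumes a Linux node with the usual tools installed:

```shell
df -h /dev/shm        # tmpfs should be mounted and have free space
ls -l /dev/shm        # leftover IPC buffer files can accumulate here
ps -C corosync -o pid,euser,egroup,cmd   # corosync should be running as root
```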
Tom Tux wrote:
If I remove one cluster node (node01) for maintenance purposes
(/etc/init.d/openais stop) and reboot this node, it will not rejoin
the cluster automatically. After the reboot, I have the
following error and warning messages in the log:
Sep 3 07:34:15 node01 mgmtd:
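For a node that stays out of the cluster after a reboot, the usual first steps are to make sure the stack is enabled at boot and then start it by hand. A hedged sketch using the SLES init scripts mentioned in this thread:

```shell
chkconfig openais on          # ensure the cluster stack starts at boot
/etc/init.d/openais start     # start it manually for now
crm_mon -1                    # check that node01 comes back online
```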
On Thu, 2 Sep 2010, Andrew Beekhof wrote:
On Mon, Aug 30, 2010 at 10:04 PM, Michael Smith msm...@cbnco.com wrote:
Hi,
I have a pacemaker/corosync setup on a bunch of fully patched SLES11 SP1
systems. On one of the systems, if I /etc/init.d/openais stop, then
/etc/init.d/openais start
Hi,
I have a pacemaker/corosync setup on a bunch of fully patched SLES11 SP1
systems. On one of the systems, if I /etc/init.d/openais stop, then
/etc/init.d/openais start, pacemaker fails to come up:
Aug 30 15:48:09 xen-test1 cib: [5858]: info: crm_cluster_connect:
Connecting to OpenAIS
Aug
Tim Serong wrote:
On 8/27/2010 at 03:37 PM, Michael Smith msm...@cbnco.com wrote:
I think I'd consider it a bug: I've disabled stonith, so dlm shouldn't
wait forever for a fence operation that isn't going to happen.
I reckon if you set the args parameter of your ocf:pacemaker:controld
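The truncated suggestion above refers to the args parameter of ocf:pacemaker:controld, which is passed through to dlm_controld. A hedged sketch; the -f 0 flag (disable the fencing dependency) is my assumption of the intent here - check dlm_controld(8) before relying on it:

```
# Assumption: "-f 0" tells dlm_controld not to depend on fencing.
primitive dlm ocf:pacemaker:controld \
    params args="-q 0 -f 0" \
    op monitor interval="60s"
```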
Xinwei Hu hxin...@... writes:
2010/8/16 Rainer Lutz rainer.l...@...:
Xinwei Hu hxin...@... writes:
This sounds a like a fixed issue for SLE11SP1 indeed.
Well, it is not fixed with SP1, but with some patch after SP1 - don't know
which one though, as the clvmd is the same for SP1 before and
Xinwei Hu hxin...@... writes:
That sounds worrying, actually.
I think this is logged as bug 585419 on SLES' bugzilla.
If you can reproduce this issue, I think it's worth reopening.
I've got a pair of fully patched SLES11 SP1 nodes and they're showing
what I guess is the same behaviour:
On Thu, 26 Aug 2010, Tim Serong wrote:
Aug 26 18:31:51 xen-test1 cluster-dlm[8870]: fence_node_time: Node
236655788/xen-test2 has not been shot yet
Do you have STONITH configured? Note that it says xen-test2 "has not
been shot yet" and "clvmd ... not fenced". It's just going to sit there
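Since DLM and clvmd will block until a fence completes, the fix is to configure STONITH for real. A hedged sketch for a Xen test pair using the external/libvirt agent; the agent choice, host list, and URI are assumptions, not taken from the thread:

```
property stonith-enabled="true"
# Hypothetical device: adjust agent and parameters for your environment.
primitive st-xen stonith:external/libvirt \
    params hostlist="xen-test1 xen-test2" \
           hypervisor_uri="xen+ssh://dom0-host/"
clone cl-st st-xen
```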