On 5/2/07, Otte, Joerg [EMAIL PROTECTED] wrote:
I am trying to get heartbeat 2.08/stable running under Solaris 10 /
I386.
OS: SunOS bcm20-a 5.10 Generic_125101-03 i86pc i386 i86pc
Whereas the V1 configuration seems to work properly (I didn't go into details
yet),
I currently have the following
On Thu, 3 May 2007, Andrew Beekhof wrote:
On 5/2/07, Otte, Joerg [EMAIL PROTECTED] wrote:
I am trying to get heartbeat 2.08/stable running under Solaris 10 /
I386.
OS: SunOS bcm20-a 5.10 Generic_125101-03 i86pc i386 i86pc
Whereas the V1 configuration seems to work properly (I didn't go into
Thanks for the patch! I will try it on Monday.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of ext Andrew
Beekhof
Sent: Thursday, 3 May 2007 11:29
To: High-Availability Linux Development List
Subject: Re: [Linux-ha-dev] Hb-2.08/stable:
What return code should an OCF RA return from a monitor operation when the
service is running but broken (e.g. the process is present, but the service is
not available)?
If the RA returns OCF_NOT_RUNNING, will hb then do a stop before any start
when going, for example, from unmanaged to managed?
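For the record, the OCF RA API draws this distinction: OCF_NOT_RUNNING (7) means the resource is cleanly stopped, while "running but broken" should be reported as a monitor failure, typically OCF_ERR_GENERIC (1), so the CRM performs recovery (a stop followed by a start) instead of treating the resource as already stopped. A minimal sketch, with the real process and service probes replaced by placeholder yes/no arguments:

```shell
#!/bin/sh
# OCF return codes (values from the OCF RA API)
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

# Hypothetical monitor: $1/$2 are yes/no stand-ins for a real
# process check (e.g. pidof) and a real service health probe.
my_monitor() {
    process_present=$1
    service_responds=$2

    if [ "$process_present" = "no" ]; then
        return $OCF_NOT_RUNNING     # cleanly stopped
    fi
    if [ "$service_responds" = "no" ]; then
        return $OCF_ERR_GENERIC     # present but broken: monitor fails
    fi
    return $OCF_SUCCESS             # running and healthy
}
```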
What about
hard to comment without seeing your config
On 5/2/07, Rene Purcell [EMAIL PROTECTED] wrote:
Hi all, just want to know if this kind of setup is possible with heartbeat.
- There are two nodes (node1 and node2).
- On each node there are two DomUs (vm01 on node1 and vm01 on node2); they
all have a
Started and Slave are basically the same state - so nothing is wrong
as such - though it might be nice if it did in fact show Slave instead
of Started.
On 5/2/07, Doug Knight [EMAIL PROTECTED] wrote:
When I initially start up a master_slave drbd resource (ms_dbrd_7788),
using a Place constraint
Hi Horms,
Thanks for the update, but when we try this version:
http://www.vergenet.net/~horms/linux/ldirectord/download/ldirectord.
2007-05-01.e022c4b33b0e
we get:
TCP  192.168.0.5:87 wrr
  -> 10.10.11.87:87             Masq    0      0      0
TCP  10.10.11.89:87 wrr
  ->
On Fri, Apr 27, 2007 at 03:10:22PM -0400, Doug Knight wrote:
I now have a working configuration with DRBD master/slave, and a
filesystem/pgsql/ipaddr group following it around. So far, I've been
using a Place constraint and modifying its uname value to test the fail
over of the resources. Can
On Thu, May 03, 2007 at 12:27:23PM +0200, Kristoffer Egefelt wrote:
Hi Horms,
Thanks for the update, but when we try this version:
http://www.vergenet.net/~horms/linux/ldirectord/download/ldirectord.2007-05-01.e022c4b33b0e
we get:
TCP  192.168.0.5:87 wrr
  -> 10.10.11.87:87
On Wed, May 02, 2007 at 09:31:16AM +1000, Alex Strachan wrote:
Hi Alan,
Please excuse my ignorance, but could you expand on 'meta-data operation'?
Are there meta-data operations that I need to perform when starting/stopping
or monitoring a resource, or is it a definition that needs changing?
The
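In case it helps: "meta-data" is just another action an OCF RA script must implement, alongside start/stop/monitor. It prints an XML self-description on stdout and exits 0; the CRM invokes it to learn about the agent, so there is nothing you run yourself at start/stop time. A stripped-down sketch (the agent name and timeouts here are made up):

```shell
#!/bin/sh
# Minimal sketch of the meta-data action every OCF RA must answer:
# print an XML self-description on stdout and exit 0. The CRM calls
# it; it is not part of starting, stopping, or monitoring.
meta_data() {
    cat <<'END'
<?xml version="1.0"?>
<resource-agent name="example">
  <version>1.0</version>
  <actions>
    <action name="start"     timeout="20s"/>
    <action name="stop"      timeout="20s"/>
    <action name="monitor"   timeout="20s" interval="10s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
END
}

case "$1" in
meta-data) meta_data; exit 0 ;;
esac
```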
Dejan Muhamedagic wrote:
On Fri, Apr 27, 2007 at 11:37:52AM -0400, Doug Knight wrote:
I'm getting the following warnings in the log; is it something I should
investigate, or nothing to worry about? I've seen 14 in the last 18 hours, on
a pair of fairly lightly loaded development servers, with no
Yan Fitterer wrote:
What return code should an OCF RA return on a monitor operation when
the service is running but broken (for ex. process present, but
services not available)?
If the RA returns OCF_NOT_RUNNING, then will hb do a stop before
any start when going from unmanaged to managed
Lee Hinman wrote:
Hi Everyone,
For some reason, when heartbeat is started, it repeatedly logs an error
about failing to find the resource script
lava2042 (lava2042 is the hostname of the machine).
Here's the error I'm seeing:
May 2 16:31:54 lava2042
Thanks Dejan, I'll try the kill -9. One thing I'm seeing is that I can
easily move the resources between nodes using the location constraint,
but if I shutdown heartbeat on one node (/etc/init.d/heartbeat stop) I
run into problems. If I shutdown the node with the active resources,
heartbeat
Hmm, kill -9 on the active node is not sufficient to simulate a node
going down. Heartbeat goes away, but the file system remains mounted and
drbd remains primary on what was the active node.
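Right: SIGKILLing heartbeat only removes the cluster stack, while the kernel keeps the filesystem mounted and drbd Primary. One common way to simulate a genuine node crash (assuming magic-sysrq is available on the node) is to make the kernel reboot instantly with no cleanup at all:

```shell
# Run on the node whose failure you want to simulate.
# This reboots immediately: no unmount, no drbd demotion,
# no heartbeat shutdown - which is what a real crash looks like.
echo 1 > /proc/sys/kernel/sysrq     # ensure magic-sysrq is enabled
echo b > /proc/sysrq-trigger        # immediate reboot, no sync
```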
On Thu, 2007-05-03 at 09:08 -0400, Doug Knight wrote:
Thanks Dejan, I'll try the kill -9. One thing I'm
On Mon, 30 Apr 2007, David Lee wrote:
[...]
We already have such code, and already have it duplicated (ouch!) in
resources/OCF/IPaddr.in and resources/heartbeat/IPaddr.in. And
pingd.sh is in danger of making this triplicate.
[...]
Following my own email above, and going to a slightly
On Thu, May 03, 2007 at 09:08:12AM -0400, Doug Knight wrote:
Thanks Dejan, I'll try the kill -9. One thing I'm seeing is that I can
easily move the resources between nodes using the location constraint,
but if I shutdown heartbeat on one node (/etc/init.d/heartbeat stop) I
run into problems.
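As an aside, instead of hand-editing the location constraint's uname, the heartbeat 2 tools can inject and remove a move constraint for you. A sketch assuming the 2.0.x crm_resource options (the resource name drbd_fs_group is made up; check crm_resource --help on your build):

```shell
# Move the resource/group to node2 by injecting a constraint,
# then remove that constraint again when done testing.
crm_resource -M -r drbd_fs_group -H node2   # migrate to node2
crm_resource -U -r drbd_fs_group            # un-migrate (drop constraint)
```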
On 2007-05-03T09:25:12, Yan Fitterer [EMAIL PROTECTED] wrote:
What return code should an OCF RA return on a monitor operation when
the service is running but broken (for ex. process present, but
services not available)?
If the RA returns OCF_NOT_RUNNING, then will hb do a stop before any
I've already read this document; with this method it works. They have
two VMs, and each node can access these VMs to start them; they are on an
iSCSI fake SAN.
In my question I was trying to see if it's possible to have two different VMs
on each node, with the same name: VM01 and VM02 on node1
Hi,
I am trying to build a 2-node cluster serving DRBD+NFS, among other
things. It has been operational on Debian Sarge, with Heartbeat 1.2, but
recently, both machines were upgraded to Debian Etch, and today I
upgraded Heartbeat to 2.0.7. I maintained the R1 style configuration.
Heartbeat is
On Thu, 2007-05-03 at 16:12 +0200, Dejan Muhamedagic wrote:
On Thu, May 03, 2007 at 09:08:12AM -0400, Doug Knight wrote:
Thanks Dejan, I'll try the kill -9. One thing I'm seeing is that I can
easily move the resources between nodes using the location constraint,
but if I shutdown heartbeat
Dejan Muhamedagic wrote:
On Thu, May 03, 2007 at 09:26:56AM -0400, Doug Knight wrote:
Hmm, kill -9 on the active node is not sufficient to simulate a node
going down. Heartbeat goes away, but the file system remains mounted and
drbd remains primary on what was the active node.
Is there a
Martijn Grendelman wrote:
Hi,
I am trying to build a 2-node cluster serving DRBD+NFS, among other
things. It has been operational on Debian Sarge, with Heartbeat 1.2, but
recently, both machines were upgraded to Debian Etch, and today I
upgraded Heartbeat to 2.0.7. I maintained the R1 style
Default_action_timeout did not seem to make a difference, but changing the
cluster-delay did manage to change the timeout of the stonith.
This:
<nvpair value="120s" id="cluster-delay" name="cluster-delay"/>
Gave me a 60s timeout waiting for the Stonith (version 2.0.8).
But unfortunately the problem was
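For completeness, the cluster-delay nvpair can also be set from the command line; a sketch assuming the 2.0.x crm_attribute tool (check crm_attribute --help on your build):

```shell
# Set cluster-delay in crm_config without hand-editing the CIB XML.
crm_attribute -t crm_config -n cluster-delay -v 120s
```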
Hi,
Sorry for my beginner's question.
Is it possible to add resources using the crm admin tools? I read the man
pages for crmadmin and crm_resource, but can't see any command to do this.
Did I miss anything?
Thanks,
Tao
Using cibadmin.
For example, to create a node, create a file node.xml containing your node
definition, e.g.:
<node uname="node1.domain.com" type="normal" id="jlkfjdslkf-fdsflkjlfkds">
  <instance_attributes id="nodes-jlkfjdslkf-fdsflkjlfkds">
    <attributes/>
  </instance_attributes>
</node>
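Then load the definition into the CIB; a sketch assuming the heartbeat 2 cibadmin options (-C create, -o object section, -x read XML from file):

```shell
# Create the node object in the CIB's nodes section from node.xml.
cibadmin -C -o nodes -x node.xml
```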
Yeah, OK. As I can see in the source code of the OCF Xen module, it does an
"xm list" and checks whether the VM name contained in the Xen config file is
running. So even if there are two different VMs running on each node, if their
Xen names are both vm01, am I wrong to think that the resource agent will not see the
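As far as I can tell, that status check essentially boils down to asking xm for the domain by name, and xm only sees domains on the local hypervisor. A rough sketch (vm01 as the example name):

```shell
# "xm list <name>" exits 0 if a domain with that name exists on the
# local hypervisor, non-zero otherwise. It cannot tell two different
# VMs apart if both nodes name their domain "vm01".
if xm list vm01 >/dev/null 2>&1; then
    echo "a domain named vm01 is running on this node"
fi
```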
Tao Yu wrote:
Thanks for the information!
By doing that, I guess the resource will be added as the last one. Is that
correct?
Is there any control over the order?
You can do anything you want; it's just that getting picky means
becoming more clever. You can replace subtrees in the XML tree,
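For example, replacing a whole subtree rather than appending; a sketch assuming the heartbeat 2 cibadmin options (the file name resources.xml is made up):

```shell
# Replace the CIB's resources section wholesale with the XML you
# supply, so ordering within the section is entirely under your control.
cibadmin -R -o resources -x resources.xml
```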