On 2007-04-20T11:47:29, Simon Horman [EMAIL PROTECTED] wrote:
I sense that the first idea may be problematic. I'm not
sure that there is anything stopping syslog implementations from
having a private copy of the tag, in which case subsequent modifications
to the original buffer will have
On Fri, Apr 20, 2007 at 11:47:29AM +0900, Simon Horman wrote:
On Thu, Apr 19, 2007 at 09:07:17PM +0200, Dejan Muhamedagic wrote:
Hello,
Recently I introduced a bug which I fixed today, but the whole
thing left an unpleasant taste. The bug gets exercised in case
ha_logd has been set up to
Hi,
did you set notify=true for the drbd master_slave resource? This seemed to
help get drbd promoted when I was playing around with the drbd OCF RA.
Still, I did not manage to get it running smoothly and had no time to
investigate further, so I reverted to drbddisk.
I'm very curious
Carson Gaspar wrote:
Alan Robertson wrote:
Carson Gaspar wrote:
Xinwei Hu wrote:
The known issue is that I don't know how to daemonize in bash,
so the pingd RA needs a little tweak also.
You can't daemonize in bash, unless your OS comes with some executable
that daemonizes arbitrary
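For what it's worth, the usual shell approximation (a sketch only, the worker script path is hypothetical, and this is not a true daemonize since there is no double fork or chdir) is to detach the long-running part with nohup and let the RA return:

  # detach a hypothetical worker loop so the RA itself can exit immediately
  nohup /usr/lib/heartbeat/helpers/pingd-worker.sh </dev/null >/dev/null 2>&1 &
  disown   # bash builtin: remove the job from the shell's job table

Where setsid(1) is available it gets a step closer by putting the child in its own session.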
Here's an additional question, one that stems from the crm_verify -L I
ran on both nodes (as recommended by an error message in the debug log).
My master/slave resource is configured as:
<master_slave notify="true" id="ms_drbd_7788">
  <instance_attributes id="ms_drbd_7788_instance_attrs">
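For reference, a full drbd master_slave definition in the 2.0.x CIB usually looks roughly like the sketch below; the ids, the counts and the drbd_resource value r0 are illustrative, not taken from the poster's configuration:

  <master_slave id="ms_drbd_7788" notify="true">
    <instance_attributes id="ms_drbd_7788_instance_attrs">
      <attributes>
        <nvpair id="ms_drbd_7788_clone_max" name="clone_max" value="2"/>
        <nvpair id="ms_drbd_7788_clone_node_max" name="clone_node_max" value="1"/>
        <nvpair id="ms_drbd_7788_master_max" name="master_max" value="1"/>
        <nvpair id="ms_drbd_7788_master_node_max" name="master_node_max" value="1"/>
      </attributes>
    </instance_attributes>
    <primitive id="drbd_7788" class="ocf" provider="heartbeat" type="drbd">
      <instance_attributes id="drbd_7788_instance_attrs">
        <attributes>
          <nvpair id="drbd_7788_resource" name="drbd_resource" value="r0"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </master_slave>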
Yan Fitterer wrote:
In the attached pe-warn, why is resource R_audit being started
on idm01 when there is an INFINITY constraint with uname eq
idm04?
BTW - idm04 is in standby at the moment. That should hardly
matter. I expect the resource to be "cannot run anywhere".
I really hope it's
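One likely explanation: a rule with a positive score, even INFINITY, only expresses a preference for the matching node; it does not forbid the others, so with idm04 in standby the resource is still free to start on idm01. A constraint that really means "idm04 or nowhere" needs the negative form, roughly (ids illustrative):

  <rsc_location id="loc_R_audit_idm04_only" rsc="R_audit">
    <rule id="loc_R_audit_idm04_only_rule" score="-INFINITY">
      <expression id="loc_R_audit_idm04_only_expr"
                  attribute="#uname" operation="ne" value="idm04"/>
    </rule>
  </rsc_location>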
On 4/19/07, Doug Knight [EMAIL PROTECTED] wrote:
Hi Alan,
I had a question about the master location constraint you provided. Are
the prefixes on the rsc_location (loc:) and rule (rule:) ids just a
convention you used or are they required?
convention only
Doug
On Tue, 2007-04-17 at 12:24
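Put differently, the loc: and rule: prefixes are just part of the id string; ids only have to be unique within the CIB. A master-preference constraint of the kind discussed here would look roughly like this without them (resource and node names are illustrative, and role="master" on the rule is assumed to be accepted by the 2.0.x DTD):

  <rsc_location id="drbd_master_location" rsc="ms_drbd_7788">
    <rule id="drbd_master_location_rule" role="master" score="100">
      <expression id="drbd_master_location_expr"
                  attribute="#uname" operation="eq" value="node1"/>
    </rule>
  </rsc_location>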
On 4/20/07, Knight, Doug [EMAIL PROTECTED] wrote:
OK, here's what happened. The drbd resources were both successfully
running in Secondary mode on both servers, and both partitions were
synched. My Filesystem resource was stopped, with the colocation, order,
and place constraints in place. When
On 4/19/07, Doug Knight [EMAIL PROTECTED] wrote:
After a closer look at the DTD and my xml file, I found two things: 1) I
had rsc_location where I should have had rsc_colocation, which is why
crm_verify was choking on the lack of an rsc; and 2) I can only have one
constraint (rsc_colocation,
I changed the constraints to point to the master_slave ID, and voila,
even without the Filesystem resource running, the drbd resource
recognized the place constraint and the GUI now indicates master running
where I expected it to. One down, one to go. Now, just to be sure, here's
the modified group
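For anyone following along, the pair of constraints that ties a Filesystem to the master_slave resource is usually written in the 2.0.x from/to style, roughly as below; the Filesystem resource id fs_drbd is illustrative and the exact attribute set should be checked against the local DTD with crm_verify:

  <rsc_order id="fs_after_drbd" from="fs_drbd" type="after" to="ms_drbd_7788"/>
  <rsc_colocation id="fs_with_drbd" from="fs_drbd" to="ms_drbd_7788" score="INFINITY"/>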
On 4/19/07, Francesco Ciocchetti [EMAIL PROTECTED] wrote:
Hi all,
I'm setting up an HA 2.0.8 cluster of 2 nodes with drbd data replication.
I set up a group containing the following resources:
<primitive class="ocf" id="IPaddr_10_237_84_226" provider="heartbeat"
type="IPaddr">
<primitive class="heartbeat"
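For comparison, an IPaddr member of a group normally carries its address as the ip parameter, roughly as sketched below; the group id and the second member are cut off in the quote, so those parts are illustrative:

  <group id="grp_ha_service">
    <primitive class="ocf" id="IPaddr_10_237_84_226" provider="heartbeat" type="IPaddr">
      <instance_attributes id="IPaddr_10_237_84_226_attrs">
        <attributes>
          <nvpair id="IPaddr_10_237_84_226_ip" name="ip" value="10.237.84.226"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <!-- the remaining group members follow here -->
  </group>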
On 4/20/07, Bernd Schubert [EMAIL PROTECTED] wrote:
Hi,
when upgrading from heartbeat-2.0.5 to 2.0.8 another problem occurred. We have
a script the admin can call to clean up errors that have occurred.
At the end of the script the date of the last cleaning action is set, using
the command crm_attribute -n
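The command in question presumably continues along these lines (the attribute name is made up for illustration; with only -n and -v given, crm_attribute in 2.0.x writes the value into crm_config by default):

  # record when errors were last cleaned up
  crm_attribute -n last_error_cleanup -v "$(date '+%Y-%m-%d %H:%M:%S')"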
On Thu, 19 Apr 2007, Xinwei Hu wrote:
[David Lee had earlier written:]
5. ping -q -c 1 $ping_host. The options for ping are notoriously
variable from system to system. Keep it simple. (For example my system
doesn't have a -q option; and it says that -c n is for a thing
called traffic
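A portability-minded RA usually branches on the OS instead of relying on any one set of ping flags; a minimal sketch (the Solaris form takes a timeout rather than a packet count):

  case "$(uname -s)" in
      SunOS) ping "$ping_host" 5 >/dev/null 2>&1 ;;     # Solaris: ping host [timeout]
      *)     ping -c 1 "$ping_host" >/dev/null 2>&1 ;;  # Linux/BSD: single echo request
  esac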
On Fri, Apr 20, 2007 at 3:42 PM, in message
[EMAIL PROTECTED], Andrew Beekhof
[EMAIL PROTECTED] wrote:
On 4/20/07, Yan Fitterer [EMAIL PROTECTED] wrote:
In the attached pe-warn, why is resource R_audit being started on
idm01 when there is an INFINITY constraint with uname eq idm04?
In the interim I set the filesystem group to unmanaged to test failing
the drbd master/slave processes back and forth, using the value part
of the place constraint. On my first attempt to switch nodes, it
basically took both drbd processes down, and they stayed down. When I
checked the logs on
On 4/19/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hi all,
After many tests, I can't understand what the difference
between the
default_resource_failure_stickiness
how badly you want resources to move after their monitor action fails
default_resource_stickiness.
how badly you want
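The quoted reply is cut off, but it presumably continues "resources to stay where they are". Both defaults live in crm_config and can be set with crm_attribute; the values below are purely illustrative:

  crm_attribute -n default_resource_stickiness -v 100
  crm_attribute -n default_resource_failure_stickiness -v -100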
On 4/20/07, Yan Fitterer [EMAIL PROTECTED] wrote:
In the attached pe-warn, why is resource R_audit being started on
idm01 when there is an INFINITY constraint with uname eq idm04?
BTW - idm04 is in standby at the moment. That should hardly matter. I
expect the resource to be cannot run
On Fri, Apr 20, 2007 at 10:25:04AM -0400, Bjorn Oglefjorn wrote:
If it seems counterintuitive, think of it like this:
* test-1_DRAC is the DRAC installed in the chassis of
test-1.domain, which has an address of
test-1.drac.domain
No, actually it's not counterintuitive.
Then look here:
I have a 2 node setup providing HA for an IP for MySQL servers.
The MySQL servers are replicating and therefore should always be running, so
I don't want them to be managed by heartbeat. But I do need the monitoring.
So I made a classic 2 node IP sharing setup with pingd clones to check for it's
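One way this is commonly expressed in the 2.0.x CIB is to keep a monitor operation on the resource but mark it unmanaged, the idea being that heartbeat then watches it without ever starting or stopping it. A sketch with illustrative ids, assuming the ocf mysql agent and that is_managed is honored as an instance attribute in 2.0.x:

  <primitive id="mysql_1" class="ocf" provider="heartbeat" type="mysql">
    <operations>
      <op id="mysql_1_monitor" name="monitor" interval="30s" timeout="30s"/>
    </operations>
    <instance_attributes id="mysql_1_attrs">
      <attributes>
        <nvpair id="mysql_1_is_managed" name="is_managed" value="false"/>
      </attributes>
    </instance_attributes>
  </primitive>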
Hi all,
I'm having trouble setting up a pair of machines with a single
resource (an IP address) to fail over between them.
The machines are Sun Netras (T1 105) running Gentoo Linux.
The scenario is as follows:
fw1: (primary resource holder)
eth0: 192.168.1.52
eth1: 10.0.0.2
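For a pair like this, the heartbeat side is usually just a few lines of ha.cf plus the IP resource; a sketch, assuming the peer node is called fw2, that eth1 is the crossover link carrying the heartbeat, and with purely illustrative timings:

  # /etc/ha.d/ha.cf (same on both nodes)
  node fw1 fw2
  bcast eth1          # heartbeat over the 10.0.0.x crossover link
  keepalive 2
  deadtime 30
  auto_failback off
  crm yes             # use the 2.x CRM rather than haresources

The floating IP itself then becomes an IPaddr primitive in the CIB, as in the earlier examples.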
On Tuesday 17 April 2007 20:39, Alan Robertson wrote:
But the problem stays.
So:
Is it possible to do this nfs failover without shared storage? If it's
not possible, what is the best approach for this besides using shared
storage? Actually at first, it's an ftp server cluster only and it