HA), or the github issue
tracker (https://github.com/ClusterLabs/hawk) if not.
Regards,
Tim
--
Tim Serong
Senior Clustering Engineer
SUSE
tser...@suse.com
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailma
sume is what's
going to happen this time), or all virtual. Mixing the two is
exceedingly difficult to do well, IMO.
Regards,
Tim
k (service hawk restart)
and possibly logging out and back in, in your web browser, should have
been enough to resolve it.
Regards,
Tim
&package=cluster-glue
storage (drbd)
- Basically, anything in network:ha-clustering:* on OBS is
on topic :)
If you'd like to subscribe, just send an email to:
opensuse-ha+subscr...@opensuse.org
Please also see the wiki page at:
https://en.opensuse.org/openSUSE:High_Availability
Happy clustering!
Tim
On 07/26/2013 09:58 PM, Tim Serong wrote:
> On 07/25/2013 03:59 PM, Tim Serong wrote:
>> Hi All,
>>
>> This is just a quick heads-up. We're in the process of reorganising the
>> network:ha-clustering repository on build.opensuse.org. If you don't
>>
On 07/25/2013 03:59 PM, Tim Serong wrote:
> Hi All,
>
> This is just a quick heads-up. We're in the process of reorganising the
> network:ha-clustering repository on build.opensuse.org. If you don't
> use any of the software from this repo feel free to stop reading now
roject for openSUSE:Factory)
This means that if you're currently using packages from
network:ha-clustering, you'll need to point to
network:ha-clustering:Stable instead (once we've finished shuffling
everything around).
I'll send another email out when this is done.
Regards,
Tim
s (FC
18 ships rails 3.2).
I do have a reasonable rails 3.2 port which I'll make available "soon",
but I still have some work in progress, bugs to fix, things to clean up,
etc. etc. before announcing a release.
Regards,
Tim
On 11/08/2012 07:56 PM, Andrew Beekhof wrote:
> On Thu, Nov 8, 2012 at 5:16 PM, Tim Serong wrote:
>> On 11/08/2012 12:11 PM, Andrew Beekhof wrote:
>>> On Thu, Nov 8, 2012 at 9:59 AM, Matthew O'Connor wrote:
>>>> Follow-up and additional info:
s is not the most desirable solution.
>
> I think thats as good a solution as any.
> I wonder where other distros are getting it from.
SLES 11 SP2:
# rpm -qf /sbin/killproc
sysvinit-2.86-210.1
openSUSE 12.2:
# rpm -qf /sbin/killproc
sysvinit-tools-2.88+-77.3.1.x86_64
Can't speak for any
"bak" are kind of
meaningless assuming identical nodes (and the nomenclature gets
confusing when you start talking about masters and slaves on top of that).
Anyway...
Original Message
Subject: Re: How can I make the secondary machine elect itself owner of
the float
n't just opened. :)
> I think the only thing you missed was proposing a meta-project to rule
> them all :-)
...One Totem Ring to rule them all, one Totem Ring to find them...
If only Sauron had implemented RRP during the Second Age, thing
for configuration/setup -
these are pretty much equally applicable for both SLES and openSUSE.
Regards,
Tim
roxy-lua-scripts
> /usr/share/doc/packages/mysql-proxy/examples/tutorial-basic.lua" \
This is --proxy-lua-scripts (plural). I'm guessing maybe that's the
problem.
HTH,
Tim
ith preference on node_0 (score 100), or
node_1 (score 50), or some other node if neither node_0 nor node_1 are
available (and assuming you have more than two nodes).
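That preference logic can be sketched in Python (a toy illustration only, not Pacemaker's actual placement code; the node names and scores are taken from the example above):

```python
# Toy sketch: the highest location score wins; nodes without an
# explicit score default to 0.
def pick_node(scores, online_nodes):
    """Return the online node with the highest location score."""
    return max(online_nodes, key=lambda n: scores.get(n, 0))

scores = {"node_0": 100, "node_1": 50}

print(pick_node(scores, ["node_0", "node_1", "node_2"]))  # node_0
print(pick_node(scores, ["node_1", "node_2"]))            # node_1
print(pick_node(scores, ["node_2"]))                      # node_2
```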
HTH,
Tim
e quorate node, because of loss
of DLM comms.
If STONITH is configured, the non-quorate node should be killed after a
failed (or timed out) stop, and the quorate node should resume behaving
normally.
HTH,
Tim
that Florian mentions "prototype", hmm...)
Anyway, IMO, overloading the word "template" isn't /too/ bad. It could
be qualified if necessary as "resource template" (the new feature we're
talking about here) and "configuration template" (existing she
ought of that.
Adding the user to the haclient group removes any restrictions as I was
able to
write to the config without error.
Did you set "crm configure property enable-acl=true"? Without this, all
users in the haclient group have full access.
Regards,
Tim
tanza, so there's no need
for a separate /etc/corosync/service.d/pcmk file (although you can use
that if you want, just don't have both!)
HTH,
Tim
h document(s) have I missed please?
http://clusterlabs.org/doc/crm_cli.html
Also, just run "crm", it has tab completion, online help, etc.
Regards,
Tim
encing for any prototype system that's
going to need fencing when put into production :)
Regards,
Tim
bs.org/pacemaker/1.1/file/9971ebba4494/lib/common/ais.c#l327
but note ais.c moved to corosync.c in newer source tree on github
On 11/02/2011 06:35 PM, Florian Haas wrote:
On 2011-11-02 04:33, Tim Serong wrote:
I vaguely recall reading the FSF considered headers generally
exempt from GPL provisions, provided they're "boring", i.e. just
structs, function definitions etc. If they're a whole lotta inl
npack_rsc_op() functions from Pacemaker's lib/pengine/unpack.c in
$other_language_of_your_choice.
Regards,
Tim
[1] http://clusterlabs.org/wiki/Hawk
[2]
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch-status.html
be possible for that state file to be empty.
Unless, somehow (wild guess), permissions on the state file or some
parent directory prohibit writing?
Regards,
Tim
ay to specify that a resource can run on any node without having
to add a location constraint for each node as they are added?
You could try one constraint per resource, covering all nodes, something
like:
location some-res-on-all-nodes some-resource \
rule 0: #uname eq
ds what location constraints
you configure.
HTH,
Tim
[1] Depending on your definition, it might also mean "the exact same
resource is running on at least two nodes", e.g. a clustered filesystem.
ause the element is bogus (apparently the
cibadmin man page needs tweaking). Try:
Better yet, use the crm shell instead of cibadmin, and you can forget
about the XML :)
Regards,
Tim
at does with orphans.
Regards,
Tim
On 29/08/11 13:24, Tim Serong wrote:
On 28/08/11 21:43, Sebastian Kaps wrote:
Hi,
on our two-node cluster (SLES11-SP1+HAE; corosync 1.3.1, pacemaker
1.1.5) we have defined the following FS resource and its corresponding
clone:
primitive p_fs_wwwdata ocf:heartbeat:Filesystem \
params device
upported-by-SUSE-but-best-effort-support-by-me)
build, you can try hawk-0.4.1 from:
http://software.opensuse.org/search?q=Hawk&baseproject=SUSE%3ASLE-11%3ASP1&lang=en
Alternately, if you can reproduce the issue then send me the output of
"cibadmin -Q" (offlist is fine)
<http://www.dizopsin.net/debian-and-ubuntu-packages-for-clusterlabs-ha>
Best regards,
Joerg
Many thanks for your work!
Regards,
Tim
separate instance of the above for each OCFS2 volume being
managed by Corosync/Pacemaker cluster?
Nope, just the one.
Regards,
Tim
C code that does the
same thing). If you only care about state, you probably only care about
the *last* op.
I should also take the opportunity to plug Hawk, if you need a web based
thing for managing Pacemaker clusters:
http://www.clusterlabs.org/wiki/Hawk
HTH,
T
ce Constraints chapter of Pacemaker explained
(http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/)
or the mailing list archives (this has come up a few times in recent
memory).
HTH,
Tim
On 22/06/11 22:14, Ciro Iriarte wrote:
2011/6/21 Tim Serong:
On 22/06/11 08:57, Ciro Iriarte wrote:
Hi, I'm trying pacemaker from OBS and I don't see any init script for
corosync or pacemaker, am I overlooking something obvious?
Name: pacemakerRelocat
: 1.1 Build Date: Thu Apr 14 04:08:04 2011
Regards,
Install openais as well - it includes /etc/init.d/openais which starts
corosync.
Regards,
Tim
--
Tim Serong
Senior Clustering Engineer, OPS Engineering, Novell Inc.
o the latest version (hawk-0.4.1-2.1.$ARCH.rpm).
Regards,
Tim
o connect to Samba on the remaining host if you use the
host's physical IP address, rather than a virtual IP?
Are there any errors in /var/log/samba/log.smbd and/or
/var/log/ctdb/log.ctdb?
Regards,
Tim
is fine with both "adaugherity" and "ADaugherity" but
hawk/crm_gui require the mixed-case version.
They go via the PAM backends too, so this is surprising ... Thanks for
pointing this out.
Noted. I'm not sure what's going on there yet
On 19/05/11 00:43, Tim Serong wrote:
Hi Everybody,
This is to announce version 0.4.1 of Hawk, a web-based GUI for managing
and monitoring Pacemaker High-Availability clusters.
[...]
Building an RPM for Fedora/Red Hat is still just as easy as last time:
# hg clone http://hg.clusterlabs.org
rmation is available at:
http://www.clusterlabs.org/wiki/Hawk
Please direct comments, feedback, questions, etc. to myself and/or
(preferably) the Pacemaker mailing list.
Happy clustering,
Tim
[1]
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch-constraints.html
port for that entire time
period.
Regards,
Tim
-Original Message-
From: Tim Serong [mailto:tser...@novell.com]
Sent: 13 May 2011 04:22
To: The Pacemaker cluster resource manager (pacemaker@oss.clusterlabs.org)
Subject: Re: [Pacemaker] Failover when storage fails
On 5/12/2011 at 02:2
[2561]: info: perform_op:2884: operation
> stop[202] on ocf::Filesystem::MyApp_fs_graph for client 31850, its
> parameters: fstype=[ext4] crm_feature_set=[3.0.2]
> device=[/dev/VolGroupB00/abb_graph] CRM_meta_timeout=[2]
> directory=[/naab1] for rsc is already running.
> May 11 12:34:59 host002 lrmd: [2561]: info: perform_op:2894: postponing all
> ops on resour
need the "freshly installed"
> condition of pacemaker without reinstalling the complete package, because a
> "fresh" node joins without problems...how can this be done?
I'd suggest double-checking the corosync config and network settings
(IP addresses and preferably disable an
usterlabs.org:8010/builders/opensuse-11.3-i386-devel/builds/
> 48/steps/cli_test/logs/stdio
> > and
> >
> http://build.clusterlabs.org:8010/builders/fedora-13-x86_64-devel/builds/48
> /steps/cli_test/logs/stdio
> >
> > What distro are you on?
> >> While we're messing with sets anyway, I'd like to re-hash the idea I
> >> brought up on pcmk-devel. To make configuration more com
e or form, preferably configurable
> through an RA parameter. What was discussed in Boston is that in an
> initial step, Subscriber could simply take an XSLT script, apply it to
> the CIB stream with xsltproc, and then update its local CIB with the re
On 4/22/2011 at 10:14 PM, Nikita Michalko wrote:
> Am Dienstag 19 April 2011 12:59:35 schrieb Tim Serong:
> > Greetings All,
> >
> > This is to announce version 0.4.0 of Hawk, a web-based GUI for
> > managing and monitoring Pacemaker High-Availability clusters.
ther information is available at:
http://www.clusterlabs.org/wiki/Hawk
Please direct comments, feedback, questions, etc. to myself
and/or (preferably) the Pacemaker mailing list.
Happy clustering,
Tim
>>> On 4/13/2011 at 04:37 PM, Andrew Beekhof wrote:
> On Wed, Apr 13, 2011 at 8:28 AM, Tim Serong wrote:
> > On 4/12/2011 at 05:48 PM, Andrew Beekhof wrote:
> >> Here's an example of the before and after. Thoughts?
> >
> > Looks pretty good t
> [XML example not preserved by the archive]
> After:
> [XML example not preserved by the archive]
> On Mon, Apr 11, 2011 at 6:02 PM, Andrew Beekhof wrote:
> > On
'em. They're invaluable for debugging failures, BTW.
Were those 7000 pe-inputs all created over that 7 day period? Because
that's a transition every 1.44 minutes. Is it just me, or does that
sound like a rather busy cluster?
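For reference, the arithmetic behind that figure (assuming the files were written at a flat rate across the week):

```python
# 7000 pe-input files across a 7-day window.
files = 7000
minutes = 7 * 24 * 60            # 10080 minutes in a week
per_transition = minutes / files # minutes between transitions
print(round(per_transition, 2))  # 1.44
```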
Regards,
Tim
>>> On 4/11/2011 at 10:23 PM, Andrew Beekhof wrote:
> On Mon, Apr 11, 2011 at 2:18 PM, Tim Serong wrote:
> > On 4/11/2011 at 09:37 PM, Andrew Beekhof wrote:
> >> On Mon, Apr 11, 2011 at 12:57 PM, Tim Serong wrote:
> >> > On 3/21/
On 4/11/2011 at 09:37 PM, Andrew Beekhof wrote:
> On Mon, Apr 11, 2011 at 12:57 PM, Tim Serong wrote:
> > On 3/21/2011 at 08:20 PM, Andrew Beekhof wrote:
> >>
> >> Small improvement to:
> >> + The only thing that matters is that in order for any mem
ight now should thus
be changed as follows in order to match the diagram:
[XML diff not preserved by the archive]
Regards,
Tim
On 4/4/2011 at 04:29 AM, Ron Kerry wrote:
> On 7/22/64 2:59 PM, Tim Serong wrote:
> > On 4/2/2011 at 09:42 PM, Ron Kerry wrote:
> > > On 7/22/64 2:59 PM, Serge Dubrouski wrote:
> > > > On Fri, Apr 1, 2011 at 2:09 PM, Ron Kerry wrote:
tor
> failure when the resource
> is stopped. Pacemaker then takes the 'onfail' action defined for the monitor
> operation. In other
> words, the resource is still being managed to some degree. If the monitor
> operation was still
> running but no action
report if I wanted more info. So now I
> have an hb_report ready to go. Excuse the naive question, but where/how
> do I submit it?
http://developerbugs.linux-foundation.org/enter_bug.cgi
HTH,
Tim
-slow-resource (ocf::heartbeat:Delay) Started :
my-slow-resource_start_0 (node=node-0, call=86, rc=0): complete
Regards,
Tim
he most recent op and rc on each node (highest call ID) tells
you the state of the resource on that node.
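As a sketch of that rule (the field names here are illustrative, not the CIB's actual XML attributes):

```python
# For each node, the op with the highest call ID is the most recent;
# its return code (rc) reflects the resource's current state there.
ops = [
    {"node": "node-0", "call_id": 85, "op": "start",   "rc": 0},
    {"node": "node-0", "call_id": 86, "op": "monitor", "rc": 7},
    {"node": "node-1", "call_id": 12, "op": "stop",    "rc": 0},
]

def state_per_node(ops):
    """Keep only the op with the highest call ID per node."""
    latest = {}
    for o in ops:
        if o["node"] not in latest or o["call_id"] > latest[o["node"]]["call_id"]:
            latest[o["node"]] = o
    return {node: o["rc"] for node, o in latest.items()}

print(state_per_node(ops))  # {'node-0': 7, 'node-1': 0}
```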
Regards,
Tim
-role="Master"
> clone clone-libvirtd p-libvirtd \
> meta interleave="true"
> clone clone-lvm_gh p-lvm_gh \
> meta interleave="true"
> location cli-standby-p-vd_vg.test1 p-vd_vg.test1 \
> rule $id="cli-standby-rule-p-vd
y updated the version number to 1.0.10. So, yes, you do have
version 1.0.10. Try to think of it as an unfortunate typo :)
Regards,
Tim
shipped won't start
> > pacemaker, I'm not sure if that's on purpose or not, but I found it a
> > bit confusing after being used to it 'just working' previously.
>
> Ah. Understandably confusing. That got fixed post-SP1, in a
> maintenance upda
't start
> pacemaker, I'm not sure if that's on purpose or not, but I found it a
> bit confusing after being used to it 'just working' previously.
Ah. Understandably confusing. That got fixed post-SP1, in a
maintenance update that went out in September or there
's intentional, see:
http://hg.linux-ha.org/glue/rev/5ef3f9370458
You really don't want to rely on SSH STONITH in a production environment.
Regards,
Tim
hat done automatically?
No, you need to define them.
> If I need to do it specifically, how do I do that now that I have it all up
> and running without defining monitor actions?
Run "crm configure edit" and add whichever monitor ops you need.
Have a look at Clusters from
" is running. Right?
>
> Probably
To my intense amazement, you can do this:
mkfs.ocfs2 --cluster-stack=pcmk --cluster-name=pacemaker /dev/foo
This works when the cluster is not running. These parameters are
not mentioned anywhere at all in the mkfs.ocfs2 manpage.
*sigh*
Tim
On 11/30/2010 at 10:11 AM, Alan Jones wrote:
> On Thu, Nov 25, 2010 at 6:32 AM, Tim Serong wrote:
> > Can you elaborate on why you want this particular behaviour? Maybe
> > there's some other way to approach the problem?
>
> I have explained the issue as c
ou elaborate on why you want this particular behaviour? Maybe
there's some other way to approach the problem? (Or maybe someone else
can think of a way to express this...)
Regards,
Tim
--
Tim Serong
Senior Clustering Engineer, OPS Engineering, Novell Inc.
_
nodeB. Then it tries
to place resX, wants to place it where resY is not (nodeA), but can't, due to
the -inf score for resX on nodeA. So in this case, resX lands on nodeB as
well.
If it decides where to put resX first, it puts resX on nodeB because of the
-inf score for nodeA. Then
answering these sorts of questions?
There's the Linux Foundation bugzilla:
http://developerbugs.linux-foundation.org/
There's also a few mercurial repos. Commit messages tend to be fairly
informative:
http://hg.clusterlabs.org/pacemaker/1.1/
http://hg.linux-ha.org/ag
" or
"Promote/Demote/Stop FOO (...)", it means something has changed. Scroll up
a bit, to above where pengine is saying "unpack_config", "determine_node_status"
etc. and you should see a message suggesting the cause for the change (failed
op, timeout, ping attribu
x that problem?
I've not seen that before, but, just to rule out one possibility... What
happens if you just run:
/usr/sbin/httpd -DSTATUS -f /etc/httpd/conf/httpd.conf
Does that ever return? If no, I'd suggest apache is broken. If yes,
I'd start pointing my finger towards o
ell of a lot better than scp and having to
remember where to copy what to, and when :)
There's a little section on csync2 in the SLE HAE Guide under
"Transferring the Configuration to All Nodes" at:
http://www.novell.com/documentation/sle_ha/book_sleha/?page=/documentation/sle_ha/b
any reason.
>
> So that's strange it was zeroed out. You might need to check the
> modification time to recall what was happening.
Wild guess - was your system STONITH'd or otherwise forcibly reset,
immediately after installing pacemaker
On 8/27/2010 at 03:22 PM, Michael Smith wrote:
> Hi,
>
> I have a pacemaker setup using the Xen resource agent and I've found
> something weird during migration: if a VM is in the middle of
> live-migrating from node 1 to node 2, and I stop the resource in crm,
> pacemaker forgets about
On 8/27/2010 at 03:37 PM, Michael Smith wrote:
> On Thu, 26 Aug 2010, Tim Serong wrote:
>
> > > for now I have stonith-enabled="false" in
> > > my CIB. Is there a way to make clvmd/dlm respect it?
> >
> > No. At least, I don't think s
On 8/27/2010 at 01:49 PM, Michael Smith wrote:
> On Thu, 26 Aug 2010, Tim Serong wrote:
>
> > > Aug 26 18:31:51 xen-test1 cluster-dlm[8870]: fence_node_time: Node
> > > 236655788/xen-test2 has not been shot yet
>
> > Do you have STONITH configured? Not
1:51 xen-test1 crmd: [8489]: info: ais_dispatch: Membership
> 1260: quorum still lost
> Aug 26 18:31:51 xen-test1 cluster-dlm: [8870]: info: ais_dispatch:
> Membership 1260: quorum still lost
Do you have STONITH configured? Note that it says "xen-test2 has not
been shot yet"
ource-agents 1.0.3
Happy clustering,
Tim
y the only chance we get to collaborate in one place
> this whole year.
I actually can't see the original CFP email in the linux-cluster archives.
On the bold assumption that *this* email somehow magically makes it to that
list, here's the URL to submit proposals:
http://www.li
d
resource-agents (1.0.3). Heartbeat is a bit out of date (2.99.3).
There's one problem I'm aware of (can't start openais/corosync on x86_64)
but this can be worked around by creating a few symlinks, see the bug
report for details:
https://bugzilla.novell.co
has been a public service announcement. Thank you for reading.
Tim
On 7/5/2010 at 04:54 PM, Andrew Beekhof wrote:
> On Mon, Jul 5, 2010 at 6:21 AM, Tim Serong wrote:
> > On 6/30/2010 at 09:42 PM, Andrew Beekhof wrote:
> >> On Thu, Jun 24, 2010 at 5:41 PM, Lars Marowsky-Bree
> >> wrote:
> >> > Hi,
> >>
ook twice right?
Just for the record, a use case of this came up on IRC last week:
you could specify cluster-wide standby="on", so new nodes joining the
cluster would automatically join in standby mode, with the admin
activating them later (per-node standby="off" thus overridi
/ctdb and replace it with its own like in SLES11.. at least
> Ubuntu isn't.
Curious. It's *meant* to replace that file. Anything interesting that you
can specify in that file should be specified using RA instance parameters.
For some notes on this, see:
http://linux-ha.org/w
On 6/2/2010 at 11:10 AM, Cnut Jansen wrote:
> Am 31.05.2010 05:47, schrieb Tim Serong:
> > On 5/31/2010 at 12:57 PM, Cnut Jansen wrote:
> >
> >> Current constraints:
> >> colocation TEST_colocO2cb inf: cloneO2cb cloneDlm
> >> colocation c
cl-mysqld cl-apache
>
> If i want to apply this rule to each node, what setting should i configure?
Try cloning a group, something like:
group mysqld-with-apache mysqld apache
clone cl-mysqld-with-apache mysqld-with-apache
Regards,
Tim
>
> No LSB Primitive named ppsd-6. It was LSB but I had changed it to ocf
> recently and somehow still tried to execute the former LSB script.
That sounds like bad behaviour. Can you please open a bug and include
an hb_report for a time period which shows the errant run of th
ave out the ":start" specifiers as this is
implicit.
> Constraints added to "work around" at least the DRBD-resources left in
> state "started (unmanaged) failed":
> order GNAH_orderDrbdMysql_stop 0: cloneMountMysql:stop
on to modify the apache
> RA ;-)
>
> Could this be a runnable approach?
> Also to put for example totally new personal RAs in new dirs?
Nothing wrong with that. That's actually exactly what you *should* do
if you're writing your own RAs that aren't going to
> params externalip="192.168.0.50" \
> op monitor interval="10s" timeout="90s" \
> op start interval="0" timeout="1800s" \
> op stop interval="0" timeout="180s" \
> me
e 2 until node 2 fails, at which point they'd
> migrate to node 1.
Yes, you want the "resource-stickiness" property. Using "crm configure",
per resource:
# primitive foo \
meta resource-stickiness="1"
Or, to make everything a bit sticky:
#
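The effect of stickiness on placement can be sketched like so (toy scores, not the real pengine calculation):

```python
# Stickiness is added to the score of the node the resource currently
# occupies, so an otherwise-equal node can't pull the resource back.
def choose_node(current, scores, stickiness):
    """Pick the node with the highest effective score for the resource."""
    return max(scores, key=lambda n: scores[n] + (stickiness if n == current else 0))

scores = {"node1": 0, "node2": 0}
print(choose_node("node2", scores, stickiness=1))  # node2 (stays put)
print(choose_node("node1", scores, stickiness=1))  # node1 (stays put)
```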
]
> >
> > Don't clone the SBD stonith resource, you only need a single primitive
> > here (not that this should be causing your startup trouble).
>
> sbd fence must be on each node.
The sbd daemon needs to be ru
ed everything to come online if
you just wait a few minutes. You can watch status changes (if any) as
they occur, with "crm_mon -r". It's worth checking /var/log/messages etc.
on each node too, to see if anything is obviously screaming in pain.
> Full list of resources:
urces do not run. What does the output of "crm_mon -r1" show
in this case?
Regards,
Tim
--
Tim Serong
Senior Clustering Engineer, OPS Engineering, Novell Inc.
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clust
On 4/14/2010 at 01:59 AM, Dejan Muhamedagic wrote:
> On Tue, Apr 13, 2010 at 05:45:02AM -0600, Tim Serong wrote:
> > On 4/13/2010 at 08:13 PM, Dejan Muhamedagic wrote:
> > > Hi,
> > >
> > > On Mon, Apr 12, 2010 at 10:56:30PM +0200, Roberto Giordani wr