Hi,
On Fri, 26 Sep 2014 12:46:30 +0200 Felix Zachlod
wrote:
> I am currently trying to add a third node only for providing quorum
> to the cluster in case a servicing node fails. But obviously I can't
> get it right.
> This is the message from crm status:
>
> Failed actions:
> drbd_testdat
On Sun, 6 Jul 2014 21:58:47 + Dan Journo
wrote:
> >> The resources I have (and the order I need them to start are)
> >> - IPAddr
> >> - Promote DRBD
> >> - Asterisk
> >> They also need to be running on the same node.
> >
> >order ord-my-IPaddr-before-my-DRBD inf: \
>
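The start order and colocation described above could be sketched like this in crm syntax (resource names beyond the quoted order constraint are assumptions, not taken from the thread):

```
# assumed names: my-IPaddr, ms-my-DRBD (master/slave set), my-Asterisk
order ord-my-IPaddr-before-my-DRBD inf: my-IPaddr:start ms-my-DRBD:promote
order ord-my-DRBD-before-Asterisk inf: ms-my-DRBD:promote my-Asterisk:start
colocation col-with-drbd-master inf: my-IPaddr my-Asterisk ms-my-DRBD:Master
```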
On Thu, 28 Nov 2013 12:04:01 +1100 Andrew Beekhof
wrote:
> If you find yourself asking $subject at some point in the next couple
> of months, the answer is that I'm taking leave to look after our new
> son (Lawson Tiberius Beekhof) who was born on Tuesday.
Congrats!
And remember: If you want HA,
Hi,
On Fri, 25 Oct 2013 10:34:50 +0200 Michael Schwartzkopff
wrote:
> In DRBD you can add an option to limit the sync rate. So there will be
> enough bandwidth left for corosync.
As far as I remember, the syncer-rate only limits the sync-traffic,
that is when one side was down/disconnected and i
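The sync-rate limit under discussion is set in the DRBD configuration; a minimal sketch for the DRBD 8.3-era syntax in use at the time (resource name and rate are assumptions):

```
# /etc/drbd.conf -- "rate" caps resynchronisation traffic only,
# not ongoing replication writes
resource r0 {
  syncer {
    rate 30M;
  }
}
```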
On Thu, 17 Oct 2013 11:36:51 +0200
"Andreas Mock" wrote:
> But how can I stop a clone resource on one
> node? Is there a way with crm?
>
> The only thing which comes to my mind is
> creating a -inf location constraint temporarily.
I do that with a time-based rule to prevent the running of your
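A time-based -inf location rule of the kind mentioned might look like this in crm syntax (clone and node names are assumptions; the constraint expires on its own once the date passes):

```
# ban the clone from node1 until the given date
location loc-pause-clone cl-myclone \
    rule -inf: #uname eq node1 and date lt "2013-11-01"
```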
On Sun, 6 Oct 2013 02:08:24 +0800 Gray Wen Wen
wrote:
> Hi all,
> now I am trying to configure a dual DRBD with mysql.
> I want to use active/active mode without any load balancing,
> so my drbd is primary/primary on node1 and node2.
> The mount point is /mysql
> and I configured everything for mysql
> t
On Mon, 12 Aug 2013 19:27:33 +0200 Adrián López Tejedor
wrote:
> The problem is the network is out of my control. All the nodes are
> virtual machines over some VMWare ESX.
> We have two different networks, one for the service, and the other
> for the cluster.
> One idea is to create a second ring
On Mon, 29 Jul 2013 22:24:52 +0200 Oriol Mula-Valls
wrote:
> There is a bug already open:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=697357
Thanks for the link.
I did a combination of the things written there:
- I slightly changed my network-setup. Now it's one active-passive bond
to
On 2013-07-29 18:46, Arnold Krille wrote:
Hi all,
I have a little problem here and would like to get some help:
I have (had?) a working three-node cluster of two active nodes
(nebel1 and nebel2) and one standby-node (nebel3) running debian
squeeze + backports. That is pacemaker 1.1.7-1~bpo60
Hi all,
I have a little problem here and would like to get some help:
I have (had?) a working three-node cluster of two active nodes (nebel1
and nebel2) and one standby-node (nebel3) running debian squeeze +
backports. That is pacemaker 1.1.7-1~bpo60+1 and corosync
1.4.2-1~bpo60+1.
Now I up
On Wed, 24 Apr 2013 09:49:00 -0500 Robert Parsons
wrote:
>
> We are building a new web farm to replace our 7 year old system. The
> old system used ipvs/ldirectord/heartbeat to implement redundant load
> balancers. All web server nodes were physical boxes.
>
> The proposed new system will utili
On Thu, 14 Mar 2013 14:06:36 + Owen Le Blanc
wrote:
> I have a number of pacemaker managed clusters. We use an independent
> heartbeat network for corosync, and we use another network for the
> managed services. The heartbeat network is routed using different
> hardware from the service netw
On Monday 12 November 2012 10:50:57 Dejan Muhamedagic wrote:
> Hi Arnold,
>
> On Sun, Nov 11, 2012 at 07:37:29PM +0100, Arnold Krille wrote:
> > On Sun, 11 Nov 2012 18:37:04 +0100 Dejan Muhamedagic
> >
> > wrote:
> > > On Fri, Nov 09, 2012 at 05:22:
On Friday 19 October 2012 10:25:32 Andrew Beekhof wrote:
> On Fri, Oct 19, 2012 at 3:29 AM, Arnold Krille wrote:
> > On Thursday 18 October 2012 11:24:25 Andrew Beekhof wrote:
> >> On Thu, Oct 18, 2012 at 9:58 AM, Arnold Krille
wrote:
> >> > On Wed, 17 Oct 2012
On Sun, 11 Nov 2012 18:37:04 +0100 Dejan Muhamedagic
wrote:
> On Fri, Nov 09, 2012 at 05:22:08PM +0100, Lars Marowsky-Bree wrote:
> > On 2012-11-09T14:06:29, Dejan Muhamedagic
> > wrote:
> > > > And also doesn't really help with getting the state/readiness of
> > > > services the guest might prov
On Thursday 18 October 2012 11:24:25 Andrew Beekhof wrote:
> On Thu, Oct 18, 2012 at 9:58 AM, Arnold Krille wrote:
> > On Wed, 17 Oct 2012 14:21:24 -0400 Digimer wrote:
> >> On 10/17/2012 02:10 PM, Jean-Francois Malouin wrote:
> >> > Hi,
> >> >
>
On Wed, 17 Oct 2012 14:21:24 -0400 Digimer wrote:
> On 10/17/2012 02:10 PM, Jean-Francois Malouin wrote:
> > Hi,
> >
> > A simple question for a simple 2-nodes cluster running
> > pacemaker-1.0.9, corosync-1.2.1 (Debian/Squeeze):
> >
> > will the online node stonith the other standby node if I
On Fri, 12 Oct 2012 09:22:13 +0200 Florian Haas
wrote:
> For most people, this issue doesn't occur on system boot, as libvirtd
> would normally start before corosync, or corosync/pacemaker isn't part
> of the system bootup sequence at all (the latter is preferred for
> two-node clusters to prevent
On Tue, 2 Oct 2012 10:02:05 +
James Harper wrote:
> >
> > On Mon, Oct 1, 2012 at 5:49 PM, James Harper
> > wrote:
> > >>
> > >> On 2012-09-28 16:24, James Harper wrote:
> > >> > I have two nodes running identical hardware which run Xen
> > >> > VM's, and want
> > >> to add a third node to t
On Monday 01 October 2012 11:30:19 James Harper wrote:
> I am trying to configure stonith. ipmilan was a complete disaster as it
> kept passing port 623 as the hostname, so now I'm trying external/ipmi.
>
> The stonith resource is running as expected, but I still have
> stonith-enabled="false" in
On Sat, 22 Sep 2012 20:05:44 +0200
Lars Marowsky-Bree wrote:
> On 2012-09-21T22:24:47, Arnold Krille wrote:
> > > > There are lots of messages that shows when starting corosync,
> > > > may watchdog driver was tested to load into kernel.
> > > Load the m
Hi,
it would be nice if stonith-resources that only shoot one node automatically
set the anti-location constraint. But I fear you gotta do this by hand.
While an ipmi-stonith killing itself will work in that the machine dies, it
can't report back to pacemaker and then afair pacemaker hangs and
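The anti-location constraint mentioned, set by hand in crm syntax (resource and node names are assumptions):

```
# keep the fencing device for node1 away from node1 itself
location loc-st-node1 st-ipmi-node1 -inf: node1
```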
On Fri, 21 Sep 2012 21:16:02 +0200 Lars Marowsky-Bree
wrote:
> On 2012-09-20T10:55:59, Mia Lueng wrote:
> > There are lots of messages that shows when starting corosync, may
> > watchdog driver was tested to load into kernel.
> Load the module you want to use via initrd, or specify it in the
> mo
On Wed, 12 Sep 2012 08:56:49 + Kashif Jawed Siddiqui
wrote:
> I would like to know is there a way to add new primitive resource
> to an already existing group.
>
> I know crm configure edit requires manual editing.
>
> But is there a direct command?
>
> Like,
>
> crm co
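Newer crmsh versions do offer a direct command for this, modgroup (names here are assumptions; check `crm configure help modgroup` for availability in your version):

```
# append a primitive to an existing group without hand-editing
crm configure modgroup my-group add my-new-primitive
```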
Hi,
I went with running lsb:libvirt in the cluster too. That way you can define
dependencies. And when corosync isn't started automatically (like when you do
maintenance without switching init-levels) libvirt is stopped too.
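Running libvirtd under the cluster as described could be sketched like this in crm syntax (resource names are assumptions; lsb:libvirtd presumes a matching init script on every node):

```
# clone libvirtd across all nodes, start the VM only after it
primitive p-libvirtd lsb:libvirtd op monitor interval="30s"
clone cl-libvirtd p-libvirtd
order ord-libvirtd-before-vm inf: cl-libvirtd p-my-vm
```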
Have fun,
Arnold
Ps: please excuse the phone-induced top-post.
--
Hi,
Michael Schwartzkopff wrote:
>how can I set up an active/active pacemaker cluster on RHEL 6.2 with GFS?
>Is this possible at all? DLM? Is there any good HOWTO?
For first ideas I would look into a certain book;-)
I actually tried to make that work, not knowing that I would need cman. And
wi
On Thursday 26 July 2012 12:43:20 Andrew Widdersheim wrote:
> One of my resources failed to stop due to it hitting the timeout setting.
> The resource went into a failed state and froze the cluster until I
> manually fixed the problem. My question is what is pacemaker's default
> action when it enc
This is what we did (spoiler: no pacemaker)
We connect the openvpn-hosts via tinc (could also be openvpn but tinc is
more flexible when servers both initiate the connection) and put these
tunnels into a bridge (with stp).
Then all these nodes have openvpn with server-certificates from the same
ca
On Friday 29 June 2012 20:53:21 emmanuel segura wrote:
> change to_syslog: yes to to_syslog: no like that you put all your logs in
> /var/log/corosync.log
Unfortunately that didn't help. But changing "use_logd: yes" to "use_logd: no"
made all the output appear in /var/log/corosync/corosync.log..
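The corresponding corosync.conf logging block might look like this (a sketch, not a verified config; use_logd is read by the pacemaker plugin on corosync 1.x, and the path is an assumption):

```
logging {
        to_syslog: no
        to_logfile: yes
        logfile: /var/log/corosync/corosync.log
        use_logd: no
}
```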
On Friday 29 June 2012 13:13:37 Arnold Krille wrote:
> this is a strange problem I seem to have: What pacemaker/corosync logs to
> rsyslog has deamon-names, severity-levels but no messages. So I see a lot of
> entries in the logs, I see where they come from and whether the are
> imp
Hi all,
this is a strange problem I seem to have: What pacemaker/corosync logs to
rsyslog has daemon-names, severity-levels but no messages. So I see a lot of
entries in the logs, I see where they come from and whether they are important
but I don't see the messages themselves. That is the parts t
On 12.06.2012 17:52, Walter Feddern wrote:
> I have a 4 node cluster running about 120 tomcat resources. Currently they
> are using the stock tomcat resource script ( ocf:heartbeat:tomcat )
>
> As I may need to make some adjustments to the script for our environment, I
> would like to move it ou
On 10.06.2012 21:48, Florian Haas wrote:
> However, why do you want automatic failback? If your cluster nodes are
> interchangeable in terms of performance, you shouldn't need to care
> which node is the master. In other words the concept of having a
> "preferred master" is normally moot in well-de
On 29.05.2012 10:15, Anton Altaparmakov wrote:
> On 28 May 2012, at 23:46, Steven Silk wrote:
>> I am trying to setup a two node system making NFS highly available
>> We have run this in the past with heartbeat and drbd. Now we would like to
>> use pacemaker and corosync. I have been told no
On Friday 11 May 2012 17:49:38 Steve Davidson wrote:
> We want to run the Corosync heartbeat on the private net and, as a
> backup heartbeat, allow Corosync heartbeat on our "public" net as well.
>
> Thus in /etc/corosync/corosync.conf we need something like:
>
> bindaddr_primary: 192.168.57.0
>
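corosync.conf does not use names like bindaddr_primary; redundant rings are configured as two interface blocks with ringnumbers plus an rrp_mode (a sketch for corosync 1.x; the 192.168.57.0 net is from the quoted mail, the second network and the multicast values are assumptions):

```
totem {
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.57.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
        interface {
                ringnumber: 1
                bindnetaddr: 10.0.0.0
                mcastaddr: 226.94.1.2
                mcastport: 5407
        }
}
```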
On Saturday 05 May 2012 13:14:23 Jake Smith wrote:
> what switches do you have?
- HP ProCurve 1810G - 24 GE, P.2.2, eCos-2.0, CFE-2.1
- SMC8024L2
Hi all,
please excuse (and ignore) this mail if you think it's not appropriate for
this list or too FAQ-like.
We had our servers all connected via one gigabit switch and used bonds to have
2GB links for each of them (using drbd and pacemaker/corosync to keep our data
distributed and services/machin
On Tuesday 01 May 2012 12:38:58 Francois Gaudreault wrote:
> From what we can see in the logs, it appears that the DRBD resource,
> for some reason, is not waiting for getting an established connection
> (to get initial sync) before changing its role to Primary. (I apologize
> for the length of t
On Tuesday 24 April 2012 09:40:41 emmanuel segura wrote:
> I would like to know if it's possible use vlan interface for a cluster
> network?
Anything working at and above the IP-layer can't know anything about vlans or
be in any way affected by it.
So, yes, pacemaker/corosync doesn't care whether
Hi,
we have been running a two-node pacemaker/corosync cluster for the past half
year quite successfully.
Now we extended it to a three-node cluster with quorum and are still happy.
But here is a question:
We have a group of resources ("ClusterGroup") that contains a number of
services (tftp,
On Saturday 14 April 2012 13:24:29 S, MOHAMED ** CTR ** wrote:
> The Pacemaker_Explained.pdf document says that
> " setting of migration-threshold=2 and failure-timeout=60s would cause the
> resource to move to a new node after 2 failures, and allow it to move back
> (depending on the stickiness an
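Those meta attributes from the quoted passage are set on the resource itself; a minimal sketch (primitive name and agent are assumptions):

```
primitive p-web ocf:heartbeat:apache \
        op monitor interval="30s" \
        meta migration-threshold="2" failure-timeout="60s"
```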
Hi,
On 15.03.2012 14:39, Andreas Kurz wrote:
On 03/15/2012 02:23 PM, Tim Ward wrote:
From: Jake Smith [mailto:jsm...@argotec.com]
Maybe totally in the wrong direction for what you want but...
Put commands in a script and add a until loop with a pgrep
test and sleep 1 till the specific resource
On Wednesday 14 March 2012 17:52:21 Dejan Muhamedagic wrote:
> On Wed, Mar 14, 2012 at 02:48:11PM +0100, Benjamin Kiessling wrote:
> > Hi,
> >
> > On 2012.03.14 14:24:10 +0100, Dejan Muhamedagic wrote:
> > > > dnsCache_start_0 (node=router1, call=56, rc=-2, status=Timed Out):
> > > > unknown exec
Hi,
On Monday 05 March 2012 12:58:16 José Alonso wrote:
> I have 2 Debian nodes with heartbeat and pacemaker 1.1.6 installed, and
> almost everything is working fine, I have only apache configured for
> testing, when a node goes down the failover is done correctly, but there's
> a problem when a n
On Tuesday 24 January 2012 23:55:57 Dejan Muhamedagic wrote:
> On Tue, Jan 24, 2012 at 09:51:49PM +0100, Arnold Krille wrote:
> > Using the same subnet for two communication-rings will disturb corosync
> > as it uses multicasts for communication. And that is best done with on
On Tuesday 24 January 2012 09:47:18 M Siddiqui wrote:
> I have a situation where two cluster nodes are connected over the VPN; each
> node
> is configured with two interfaces to provide ring redundancy for corosync:
> NODE1:
> eth1: 192.168.1.111/24
> eth2: 192.168.1.112/24
> NODE2:
> eth1: 1
On Tuesday 10 January 2012 20:08:50 Dejan Muhamedagic wrote:
> On Thu, Jan 05, 2012 at 04:59:13AM +, shashi wrote:
> > So we have two probable options:
> > 1. A workaround to achieve this tri-state efficiently without changing
> > pacemaker internals.
> > 2. Modify pacemaker to add this tri-s
Hi,
On Friday 23 December 2011 16:03:37 Aravind M D wrote:
> I am facing some problems with corosync and pacemaker implementation. I
> have configured cluster on Debian squeeze, the package for corosync and
> pacemaker is installed from backports.
> I am configuring two node cluster and i have c
On Saturday 17 December 2011 12:09:43 Rasto Levrinc wrote:
> I actually put this "crm configure show" button there for development
> purposes and wanted to remove it, but since people use it, I just left it
> there.
> Now I started to think that it could be made editable with apply button
> and may
Hi,
On Monday 12 December 2011 11:22:51 邱志刚 wrote:
> I have a 2-node cluster of pacemaker, and I want to migrate the kvm vm with
> command "migrate", but I found the vm isn't migrated,
> actually it is shutdown and then start on other node. I checked the log and
> found the vm is stopped but not migrated
On Wednesday 07 December 2011 09:04:19 James Harper wrote:
> I have a pair of servers running Xen, with the xen config files stored
> on an ocfs2 share mounted on an iscsi volume. A problem has developed
> where ocfs2 seems to get stuck (in the monitor script I think), and
> because I have a depend