On 07/26/2012 11:04 PM, Sachin Gokhale wrote:
Dear Sir/Madam,
Please remove me from this mailgroup list.
Thanks & Regards,
Sachin
At the bottom of every post to this (and almost all other) mailing lists is a
link to your subscription management interface;
Hi. I'm following the cluster from scratch guide to create a simple
active/passive 2 node cluster. I'm using the standard packages that come
with Fedora 17. I have corosync running and linked up. However I cannot
seem to get Pacemaker to run correctly. I don't see all the processes
loaded:
17286 ?
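For reference, a quick way to check whether the full set of Pacemaker 1.1 daemons came up on a systemd-based distribution such as Fedora 17 (a sketch only; the unit and daemon names assume the stock Fedora packages):

  systemctl status pacemaker.service
  # the parent pacemakerd should have spawned these children:
  ps axf | egrep 'pacemakerd|cib|stonithd|lrmd|attrd|pengine|crmd'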
On 07/20/2012 04:05 PM, ihjaz Mohamed wrote:
> Am using pacemaker-1.1.5-5 and I see about 21000 input files in the
> folder. Does that mean this version by default has no such limits?
yes ... IIRC, the limits have been set by default since 1.1.6
Regards,
Andreas
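If you want to bound the retained files explicitly rather than rely on the defaults, a minimal sketch using the crm shell (the values here are arbitrary examples):

  crm configure property pe-input-series-max=100 \
      pe-error-series-max=100 \
      pe-warn-series-max=100

The series then wraps around at those limits instead of growing without bound.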
A few more questions, as I test various outage scenarios:
My memcached OCF script appears to give a false positive occasionally, and
pacemaker restarts the service. Under the hood, it uses netcat to
localhost with a 3 second connection timeout. I've run my script manually
in a loop and it never
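For illustration, a minimal sketch of that kind of probe as it might appear inside the RA's monitor function (host, port and the exact check are assumptions, not the poster's actual script; the OCF_* codes come from sourcing .ocf-shellfuncs):

  # hypothetical monitor check: ask memcached for its version with a 3s timeout
  if printf 'version\r\n' | nc -w 3 127.0.0.1 11211 | grep -q '^VERSION'; then
      return $OCF_SUCCESS
  else
      return $OCF_NOT_RUNNING
  fi

A transient false positive here usually means nc hit the 3-second limit under load; raising the timeout, or requiring two consecutive failures before reporting $OCF_NOT_RUNNING, are common mitigations.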
Ah, that makes sense. Thanks for helping me wrap my head around it.
Working on setting up STONITH now to avoid this in the future.
On 07/26/2012 02:16 PM, Cal Heldenbrand wrote:
That seems very handy -- and I don't need to specify 3 clones? Once
my memcached OCF script reports a downed service, one of them will
automatically transition to the current failover node?
There are options for the clone on how many instances o
Thanks for the info Phil! I'm going to play around with my configs with
what you've recommended... but a few questions below:
can be started in any order. You don't need to specify any location
> constraints to say where memcache can run, or to keep the memcache
> instances from running multiple
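To make that concrete, a hedged sketch of such a clone in crm shell syntax (the resource and agent names are placeholders for the poster's own memcached RA):

  primitive p_memcached ocf:custom:memcached \
      op monitor interval=30s timeout=10s
  clone cl_memcached p_memcached \
      meta clone-max=2 clone-node-max=1 interleave=true

clone-max caps the total number of instances and clone-node-max the instances per node; with three nodes and clone-max=2, the third node is left free as the hot spare without any location constraints.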
- Original Message -
> From: "Dongdong Zhou"
> To: Pacemaker@oss.clusterlabs.org
> Sent: Thursday, July 26, 2012 4:41:36 AM
> Subject: [Pacemaker] IP fail over without controlling the services
>
> Hi,
>
> I'm trying to set up IP auto fail over on a two-way master-slave
> MySQL
> replicat
On 07/26/2012 12:43 PM, Andrew Widdersheim wrote:
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-failure-migration.html
"If STONITH is not enabled, then the cluster has no way to continue
and will not try to start the resource elsewhere, but will try to stop
it a
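As a hedged illustration, that behaviour can also be made explicit per operation (the Dummy agent and timeout are just placeholders; on-fail=block is what the quoted passage describes as the non-STONITH default, while on-fail=fence requires STONITH to be configured):

  primitive p_example ocf:heartbeat:Dummy \
      op stop interval=0 timeout=60s on-fail=block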
On Thursday 26 July 2012 12:43:20 Andrew Widdersheim wrote:
> One of my resources failed to stop due to it hitting the timeout setting.
> The resource went into a failed state and froze the cluster until I
> manually fixed the problem. My question is what is pacemaker's default
> action when it enc
On 07/26/2012 12:34 PM, Cal Heldenbrand wrote:
Hi everybody,
I've read through the Clusters from Scratch document, but it doesn't
seem to help me very well with an N+1 (shared hot spare) style cluster
setup.
My test case is: I have 3 memcache servers. Two are in primary use
(hashed 50/50 b
One of my resources failed to stop due to it hitting the timeout setting. The
resource went into a failed state and froze the cluster until I manually fixed
the problem. My question is what is pacemaker's default action when it
encounters a stop failure and STONITH is not enabled? Is it what I
Hi everybody,
I've read through the Clusters from Scratch document, but it doesn't seem
to help me very well with an N+1 (shared hot spare) style cluster setup.
My test case is: I have 3 memcache servers. Two are in primary use (hashed
50/50 by the clients) and one is a hot failover.
I hacked u
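One sketch of wiring the two client-facing addresses to the clone (purely illustrative; the IPs and names, and the cl_memcached clone shown earlier in the thread, are assumptions):

  primitive p_ip1 ocf:heartbeat:IPaddr2 \
      params ip=192.168.1.101 cidr_netmask=24 op monitor interval=10s
  primitive p_ip2 ocf:heartbeat:IPaddr2 \
      params ip=192.168.1.102 cidr_netmask=24 op monitor interval=10s
  colocation col_ip1_with_memcached inf: p_ip1 cl_memcached
  colocation col_ip2_with_memcached inf: p_ip2 cl_memcached
  colocation col_keep_ips_apart -inf: p_ip1 p_ip2

The last constraint keeps the two addresses on different nodes so the 50/50 client hashing keeps pointing at two distinct memcached instances; when a node fails, its address simply moves to the spare.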
Dear members of the pacemaker mailing list,
I am using Pacemaker on Debian GNU/Linux testing (wheezy) in combination with
LIO [1] and DRBD. The setup relies heavily on [2].
After some fiddling I am able to move the iSCSI storage from one
(storage) node to the other successfully.
With this set
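For readers following along, a rough sketch of the kind of constraint chain such a setup usually needs (all names, the DRBD resource and the IQN are placeholders; [2] remains the authoritative reference):

  primitive p_drbd_iscsi ocf:linbit:drbd \
      params drbd_resource=iscsivg01 \
      op monitor interval=29s role=Master \
      op monitor interval=31s role=Slave
  ms ms_drbd_iscsi p_drbd_iscsi \
      meta master-max=1 clone-max=2 notify=true
  primitive p_target ocf:heartbeat:iSCSITarget \
      params implementation=lio iqn=iqn.2012-07.com.example:storage.disk1
  colocation col_target_on_drbd inf: p_target ms_drbd_iscsi:Master
  order ord_drbd_before_target inf: ms_drbd_iscsi:promote p_target:start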
Hello!
No, I didn't solve my problem; instead, I removed the non-essential
OCFS2 functionality from my cluster. Nevertheless, from what I remember,
this likely comes from a bug, so your answer seems consistent.
It could prove helpful in the future.
Thank you!
Kind regards.
On 26/07/2012 14:39
Hi,
I don't know if you solved your problem, but I have the same
behavior on my freshly installed pacemaker.
With these 2 lines:
p_o2cb:1_monitor_0 (node=nas1, call=10, rc=5, status=complete): not installed
p_o2cb:0_monitor_0 (node=nas2, call=10, rc=5, status=complete): not
installed
Hi,
I'm trying to set up IP auto fail over on a two-way master-slave MySQL
replication cluster. What I'm trying to achieve is to only fail over the
IP when there's a problem on the mysqld service. I don't want pacemaker
to control the mysqld service.
With my current configuration, pacemaker will
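One hedged way to do that without ever putting mysqld under cluster control is to drive a transient node attribute from an external health check and gate the IP on it (all names, addresses and the check itself are assumptions, not part of the original configuration):

  # run from each node's own health check (cron, monit, ...):
  crm_attribute -N $(uname -n) -l reboot -n mysqld_ok -v 1   # mysqld answers
  crm_attribute -N $(uname -n) -l reboot -n mysqld_ok -v 0   # mysqld does not

  # cluster side (crm shell):
  primitive p_vip ocf:heartbeat:IPaddr2 \
      params ip=192.168.0.50 cidr_netmask=24 \
      op monitor interval=10s
  location loc_vip_needs_mysql p_vip \
      rule -inf: not_defined mysqld_ok or mysqld_ok eq 0

Pacemaker then only manages the address; mysqld stays entirely outside the cluster.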
On 07/24/2012 10:23 PM, l...@netcourrier.com wrote:
> I have mysql in master/slave replication on 2 nodes
> (centreon-failover-dbmaster and centreon-failover-dbslave). In normal state,
> role 'master' is on 'centreon-failover-dbmaster' and 'slave' role is on
> 'centreon-failover-dbslave'. The p
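In setups like this the usual pattern is to tie the writer address to the Master role (a sketch only; the ms_mysql and p_vip_db names are placeholders for whatever the actual configuration uses):

  colocation col_vip_with_mysql_master inf: p_vip_db ms_mysql:Master
  order ord_promote_before_vip inf: ms_mysql:promote p_vip_db:start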