On 2019-04-07T12:06:40, Andrei Borzenkov wrote:
> After reading the sources and experimenting I still do not see how it can
> help in a two-node cluster. In this case SBD will assume both nodes are
> out of quorum and both nodes will commit suicide.
It helps by not making a single SBD device a single point of failure.
On 2016-05-11T08:02:56, Ulrich Windl wrote:
> > $ crm help Checks
> > $ crm options help check-frequency
> > $ crm options help check-mode
> >
> > If none of these settings match Ulrich's preferences, maybe he
> > could pledge his case to introduce more.
>
> Why do we need this? IMHO a Boolean
On 2016-04-27T12:10:10, Klaus Wenninger wrote:
> > Having things in ARGV[] is always risky due to them being exposed more
> > easily via ps. Environment variables or stdin appear better.
> What made you assume the recipient is being passed as argument?
>
> The environment variable CRM_alert_reci
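To illustrate the point about argument visibility: anything passed in ARGV[] can be read by any local user via `ps`, whereas a plain (non-exported-to-other-processes) value handed over in the environment or on stdin is not shown there. A minimal sketch of an alert-agent-style script reading its recipient from the environment — the `notify` helper name is illustrative; `CRM_alert_recipient` is the variable Pacemaker's alert mechanism provides:

```shell
# Hypothetical sketch: read the recipient from the environment
# (Pacemaker exports CRM_alert_recipient to alert agents) instead
# of taking it as a positional argument, so the value never shows
# up in `ps` output.
notify() {
    # Fall back to "unknown" if the variable is unset.
    recipient="${CRM_alert_recipient:-unknown}"
    echo "alerting ${recipient}"
}

CRM_alert_recipient="ops@example.com"
notify
```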
On 2016-04-21T12:50:43, Ken Gaillot wrote:
Hi all,
Awesome to see such a cool new feature land! I do have some
questions/feedback though.
> The alerts section can have any number of alerts, which look like:
>
> path="/srv/pacemaker/pcmk_alert_sample.sh">
>
>
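For reference, the alerts section quoted above appears to follow this general shape — a sketch assuming the syntax from the feature announcement; the `id` values and the recipient value are illustrative, only the `path` comes from the quote:

```xml
<alerts>
  <!-- id values and the recipient value are illustrative -->
  <alert id="my-alert" path="/srv/pacemaker/pcmk_alert_sample.sh">
    <recipient id="my-alert-recipient" value="/var/log/cluster-alerts.log"/>
  </alert>
</alerts>
```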
On 2016-04-25T12:40:31, Ulrich Windl wrote:
> As we've had good experience with MD-RAID, I really thought about having an
> MD-RAID on one node and export that RAID via iSCSI to all the nodes that need
> access. Unfortunately I cannot compare performance ahead of time 8-(
The additional IO hop
On 2016-04-25T10:10:38, Ulrich Windl wrote:
Hi Ulrich,
I can't really comment on why cLVM2 is slow here (which is somewhat
surprising, because flock() is metadata-only and thus shouldn't even be
affected by cLVM2, anyway ...).
But on the subject of performance, you're quite right - we know that
cLVM2 is
On 2016-04-12T19:39:25, Digimer wrote:
Alas, I won't make it to the Summit, but if anyone else is at Vault (the
week before in Raleigh), I'd be happy to meet!
Regards,
Lars
--
Architect SDS
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
"Exp
On 2016-04-08T12:37:45, Robert Dahlem wrote:
Hi Robert,
> The original source for this was:
>
> rsc_template template-HAPROXY ocf:KORDOBA:haproxy \
> op start timeout="20s" interval="0" on-fail="stop" \
> op stop timeout="20s" interval="0" on-fail="block" \
> op monitor
On 2015-07-09T17:13:01, Ulrich Windl wrote:
> I was watching our Xen-cluster when there were problems, and I found this:
> Name        ID   Mem      VCPUs  State  Time(s)
> Domain-0     0   1340124         r-       560.6
> [...othe
On 2015-07-07T12:23:44, Ulrich Windl wrote:
> The advantage depends on the alternatives: If two nodes both want to access
> the same filesystem, you can use OCFS2, NFS, or CIFS (list not complete). If
> only one node can access a filesystem, you could try any journaled filesystem
> (a fsck is
On 2015-07-07T14:15:14, Muhammad Sharfuddin wrote:
> Now the msgwait timeout is set to 10s, yet a delay/inaccessibility of 15 seconds
> was observed. If a service (app, DB, file server) is installed and running
> from the OCFS2 file system on the surviving/online node, then
> wouldn't that service get
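The numbers in the question already show the mismatch: msgwait (10s) is shorter than the worst observed storage delay (15s), so the fencing node may assume its poison-pill message was delivered before the target node could even read it. A hedged sanity check using exactly those figures:

```shell
# Hedged sanity check with the numbers from the report: msgwait
# should exceed the worst observed storage delay, otherwise the
# fencing node can declare the message delivered while the target
# never saw it.
msgwait=10
observed_delay=15
if [ "$msgwait" -le "$observed_delay" ]; then
    echo "msgwait too short: ${msgwait}s <= ${observed_delay}s"
fi
```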