Hi all,
The release process for Pacemaker 1.1.17 will start soon! The most
significant new feature is container bundles, developed by Andrew Beekhof.
Pacemaker's container story has previously been muddled.
For the simplest case, the ocf:heartbeat:docker agent allows you to
launch a docker instance.
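For the curious, a bundle definition in the CIB looks roughly like the sketch below. This is a hedged illustration modeled on the documented 1.1.17 bundle syntax; the image name, address range, ports, and paths are made up:

```xml
<bundle id="httpd-bundle">
  <!-- Launch 3 replicas of a (hypothetical) container image -->
  <docker image="pcmk:http" replicas="3"/>
  <!-- Each replica gets its own IP, starting from this address -->
  <network ip-range-start="192.168.122.131" host-netmask="24">
    <port-mapping id="httpd-port" port="80"/>
  </network>
  <storage>
    <!-- Map a host directory into the container -->
    <storage-mapping id="httpd-root" source-dir="/srv/www"
                     target-dir="/var/www/html" options="rw"/>
  </storage>
  <!-- An ordinary resource managed inside each container -->
  <primitive id="httpd" class="ocf" provider="heartbeat" type="apache"/>
</bundle>
```

The appeal over the plain docker agent is that Pacemaker handles replica addressing, storage mapping, and the resource inside the container as one unit.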
We are only using one mount, and that mount has nothing on it currently.
I have fixed the problem. Our OS is Ubuntu 16.04 LTS (Xenial). I added the
17.04 (Zesty) repo to get a newer version of Corosync. I upgraded
Corosync, which upgraded a long list of other related packages (Pacemaker
and
I can confirm that doing an ifdown is not the source of my corosync issues.
My cluster is in another state, so I can't pull a cable, but I can down a
port on a switch. That had the exact same effect as doing an ifdown. Two
machines got fenced when it should have only been one.
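When more nodes get fenced than expected after a link drop, the votequorum settings are one thing worth checking. A hedged corosync.conf fragment along those lines (the values are illustrative, not a recommendation for this particular cluster):

```
quorum {
    provider: corosync_votequorum
    # Total expected votes in the cluster (3 nodes here)
    expected_votes: 3
    # Prevent a freshly booted node from forming its own
    # quorate partition before it has seen all the others
    wait_for_all: 1
}
```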
---
Seth Reid
We are seeing this log in pacemaker.log continuously.
Mar 31 17:13:01 [6372] 0005B932ED72 cib: info:
crm_compress_string: Compressed 436756 bytes into 14635 (ratio 29:1) in
284ms
This looks to be the reason for high CPU. What does this log indicate?
-Regards
Nikhil
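That log line is the CIB daemon bzip2-compressing the CIB before shipping it, and compressing ~430 KB repeatedly is genuine CPU work, so a large, frequently updated CIB would explain the load. A minimal Python sketch of the same kind of operation (the payload here is synthetic, not a real CIB):

```python
import bz2
import time

# Synthetic stand-in for a large CIB XML payload (~400 KB),
# roughly the size reported in the log message above.
payload = b"<cib><status>" + b"<node_state id='1'/>" * 20000 + b"</status></cib>"

start = time.monotonic()
compressed = bz2.compress(payload)  # Pacemaker compresses with libbz2 as well
elapsed_ms = (time.monotonic() - start) * 1000

ratio = len(payload) // len(compressed)
print(f"Compressed {len(payload)} bytes into {len(compressed)} "
      f"(ratio {ratio}:1) in {elapsed_ms:.0f}ms")
```

If this fires continuously, the real question is why the CIB is that large or why it is being rewritten so often (e.g. status churn), rather than the compression itself.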
Hi,
On Fri, Mar 31, 2017 at 02:39:02AM -0400, Digimer wrote:
> On 31/03/17 02:32 AM, Jan Friesse wrote:
> >> The original message has the logs from nodes 1 and 3. Node 2, the one
> >> that
> >> got fenced in this test, doesn't really show much. Here are the logs from
> >> it:
> >>
> >> Mar 24 16:3
Kristoffer Grönlund writes:
The only solution I know which allows for a configuration like this is
using separate clusters in each data center, and using booth for
transferring ticket ownership between them. Booth requires a data
center-level quorum (meaning at least 3 locations), though the third
location can be a small arbitrator rather than a full cluster.
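For reference, a minimal booth.conf in that spirit might look like the following sketch (addresses and the ticket name are hypothetical; two full sites plus one arbitrator make up the three locations):

```
transport = UDP
port = 9929
# Lightweight third location: votes on ticket ownership, runs no resources
arbitrator = 192.168.100.10
# The two data centers, each a full Pacemaker cluster
site = 192.168.101.10
site = 192.168.102.10
# The ticket whose ownership booth arbitrates between the sites
ticket = "ticket-db"
```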
Ulrich Windl writes:
> I thought the hierarchy is like this:
> 1) default timeout
> 2) RA's default timeout
> 3) user-specified timeout
>
> So crm would go from 1) to 3) taking the last value it finds. Isn't it like
> that?
No, step 2) is not taken by crm.
> I mean if there's no timeout in the
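So in practice the effective timeout is either the one set explicitly on the operation, or the cluster-wide default; the RA's advertised default from its meta-data is not consulted. A hedged crmsh sketch (the resource and the values are hypothetical):

```
# Cluster-wide fallback, used when an op has no explicit timeout
op_defaults timeout=60s

# Explicit per-op timeouts always win over op_defaults
primitive www ocf:heartbeat:apache \
    op start timeout=40s interval=0 \
    op monitor interval=10s timeout=20s
```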