Hi, all
I've encountered an issue with corosync on a bonded IB adapter. It seems to work if
the adapter is not bonded,
but it fails to show as an option when the adapter is part of a bonded interface.
corosync[21584]: [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Se
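For reference, corosync 1.x binds each ring to a subnet via bindnetaddr, so the address has to match the subnet the bond lives on; a minimal totem stanza might look like the sketch below (the 192.168.1.0 subnet and multicast settings are assumptions, adjust them to the actual bond0 addressing):

```
totem {
    version: 2
    interface {
        ringnumber: 0
        # network address of the subnet bond0 is on (assumed)
        bindnetaddr: 192.168.1.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
```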
On 21/03/16 09:46 -0500, Ken Gaillot wrote:
> You need more attributes, such as "devices" to specify which SCSI
> devices to cut off, and either "key" or "nodename" to specify the
> node key for SCSI reservations.
Hmm, I keep lamenting that by extending agents metadata with inline
RelaxNG grammar
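For the archives, a fence_scsi configuration along the lines Ken describes might look like this sketch (the device path and node names are placeholders; `meta provides=unfencing` is needed so nodes re-register their SCSI reservation keys when they start):

```shell
pcs stonith create scsi-fence fence_scsi \
    devices=/dev/mapper/shared-lun \
    pcmk_host_list="node01 node03" \
    meta provides=unfencing
```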
Hi guys,
I'm trying to create a new cluster using Pacemaker, but I'm having a problem with
the fence mechanism. Up to this point I have performed the following tasks:
+ I have installed two different machines with CentOS 7 (infrastructure mode).
+ Each machine has been configured with two network cards.
-- On Monday, March 21, 2016 09:22:40 AM -0500, Ken Gaillot wrote:
It's actually newer pacemaker versions rather than pcs itself. Fence
agents do not need to be cloned, or even running -- as long as they're
configured and enabled, any node can use the resource.
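In other words, a single uncloned stonith resource per fenced node is enough; a sketch with a hypothetical IPMI BMC address and credentials (all values are placeholders):

```shell
# One stonith resource per node to be fenced, no clone required;
# any surviving node can execute it when fencing is needed.
pcs stonith create fence-node1 fence_ipmilan \
    ipaddr=10.0.0.11 login=admin passwd=secret \
    pcmk_host_list=node1
```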
Excellent. Thanks for the confi
On 03/21/2016 09:34 AM, Ulrich Windl wrote:
Ken Gaillot wrote on 21.03.2016 at 15:22 in
message
> <56f003b0.4020...@redhat.com>:
>
> [...]
>> It's actually newer pacemaker versions rather than pcs itself. Fence
>> agents do not need to be cloned, or even running -- as long as they'
On 03/21/2016 08:39 AM, marvin wrote:
>
>
> On 03/15/2016 03:39 PM, Ken Gaillot wrote:
>> On 03/15/2016 09:10 AM, marvin wrote:
>>> Hi,
>>>
>>> I'm trying to get fence_scsi working, but i get "no such device" error.
>>> It's a two node cluster with nodes called "node01" and "node03". The OS
>>> i
>>> Ken Gaillot wrote on 21.03.2016 at 15:22 in
>>> message
<56f003b0.4020...@redhat.com>:
[...]
> It's actually newer pacemaker versions rather than pcs itself. Fence
> agents do not need to be cloned, or even running -- as long as they're
> configured and enabled, any node can use the reso
On 03/20/2016 06:20 PM, Devin Reade wrote:
> I'm looking at a new pcs-style two node cluster running on CentOS 7
> (pacemaker 1.1.13, corosync 2.3.4) and crm_mon shows this line
> for my fencing resource, that is the resource running on only one of
> the two nodes:
>
>fence_cl2 (stonith:fence_
On 03/19/2016 03:35 AM, Michael Lychkov wrote:
> Hello everyone,
>
> Is there a way to initiate a reload operation on the master instance of
> a multi-state resource agent?
>
> I have an OCF multi-state resource agent for a daemon service and I
> added a reload op to this resource agent:
>
> * two pa
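If it helps: for Pacemaker of that era to call reload instead of restart, the agent metadata has to advertise a reload action, and the changed parameter must be a non-unique one. A sketch of the relevant metadata fragments (the parameter name and timeout are illustrative, not from the original agent):

```
<parameter name="config_file" unique="0">
  <content type="string" default="/etc/daemon.conf"/>
</parameter>
...
<actions>
  <action name="reload" timeout="20s"/>
</actions>
```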
On 03/15/2016 03:39 PM, Ken Gaillot wrote:
On 03/15/2016 09:10 AM, marvin wrote:
Hi,
I'm trying to get fence_scsi working, but i get "no such device" error.
It's a two node cluster with nodes called "node01" and "node03". The OS
is RHEL 7.2.
here is some relevant info:
# pcs status
Cluster
Digimer writes:
> On 16/03/16 04:04 PM, Christopher Harvey wrote:
>> is there some log I can enable that would say
>> "ERROR: hey, I would use stonith here, but you have it disabled! your
>> warranty is void past this point! do not pass go, do not file a bug"?
>
> If I had it my way, that would b
Of course, to catch you up:
>> Still experiencing the same behaviour: killing amavisd returns an rc=7 for
>> the monitoring operation on the "victim" node, which sounds logical, but the
>> logs contain the same: amavisd and virtualip cannot run anywhere.
>>
>> I made sure systemd is clean (amavisd
>>> Lorand Kelemen wrote on 21.03.2016 at 10:08 in
message
:
> Reproduced it again:
>
> Last updated: Mon Mar 21 10:01:18 2016
> Last change: Mon Mar 21 09:59:27 2016 by root via crm_attribute on mail1
> Stack: corosync
> Current DC: mail2 (version 1.1.13-10.el7_2.2-44eb2dd) - parti
>>> Dennis Jacobfeuerborn wrote on 19.03.2016 at
>>> 15:10 in
message <56ed5dc4.9080...@conversis.de>:
[...]
> I think the key issue here is that when people think about corosync they
> believe there can only be two states for membership (true or false) when
> in reality there are three possible s
>>> Dennis Jacobfeuerborn wrote on 19.03.2016 at
>>> 14:32 in
message <56ed5507.7070...@conversis.de>:
> On 17.03.2016 08:45, Andrei Borzenkov wrote:
>> On Wed, Mar 16, 2016 at 9:35 PM, Mike Bernhardt wrote:
>>> I guess I have to say "never mind!" I don't know what the problem was
>>> yester
Reproduced it again:
Last updated: Mon Mar 21 10:01:18 2016
Last change: Mon Mar 21 09:59:27 2016 by root via crm_attribute on mail1
Stack: corosync
Current DC: mail2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with
quorum
2 nodes and 10 resources configured
Online: [ mail1 mail2 ]
>>> Lorand Kelemen wrote on 18.03.2016 at 16:42 in
message
:
> I reviewed all the logs but found nothing out of the ordinary besides the
> "resource cannot run anywhere" line; however, after the cluster-recheck
> interval expired, the services started fine without any suspicious log
> entries.
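One observation on that behaviour: recovery happening exactly when cluster-recheck-interval expires usually means a failure-timeout on the resource that is only re-evaluated at the recheck. Both knobs are tunable, e.g. (the values and the resource name `amavisd` are taken as examples from this thread):

```shell
# Re-evaluate cluster state (and expire old failures) more often.
pcs property set cluster-recheck-interval=2min

# Let failures of the amavisd resource expire after 5 minutes.
pcs resource update amavisd meta failure-timeout=5min
```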
On 19/03/16 15:43, Digimer wrote:
> On 19/03/16 10:10 AM, Dennis Jacobfeuerborn wrote:
>> On 18.03.2016 00:50, Digimer wrote:
>>> On 17/03/16 07:30 PM, Christopher Harvey wrote:
On Thu, Mar 17, 2016, at 06:24 PM, Ken Gaillot wrote:
> On 03/17/2016 05:10 PM, Christopher Harvey wrote:
>>