Re: [ClusterLabs] Fwd: dlm not starting

2016-02-08 Thread G Spot
Thanks Ken, appreciate your help.

On Mon, Feb 8, 2016 at 11:52 AM, Ken Gaillot  wrote:

> On 02/07/2016 12:21 AM, G Spot wrote:
> > Hi Ken,
> >
> > Thanks for your response. I am using the ocf:pacemaker:controld resource
> > agent with stonith-enabled=false. Do I need to configure a stonith device
> > to make this work?
>
> Correct. DLM requires access to fencing.
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Fwd: dlm not starting

2016-02-08 Thread Ken Gaillot
On 02/07/2016 12:21 AM, G Spot wrote:
> Hi Ken,
> 
> Thanks for your response. I am using the ocf:pacemaker:controld resource
> agent with stonith-enabled=false. Do I need to configure a stonith device
> to make this work?

Correct. DLM requires access to fencing.
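
In practice that means stonith-enabled must be true and at least one working
fence device must be defined before dlm will start. A minimal sketch with pcs;
the fence agent, addresses, and credentials below are placeholders -- substitute
whatever matches your hardware (`pcs stonith list` shows the available agents):

```shell
# Re-enable fencing cluster-wide; ocf:pacemaker:controld refuses to start
# ("not configured") while this is false.
pcs property set stonith-enabled=true

# Register one fence device per node. fence_ipmilan and all parameter
# values here are illustrative placeholders.
pcs stonith create fence-libcompute1 fence_ipmilan \
    pcmk_host_list=libcompute1 ipaddr=10.0.0.101 login=admin passwd=secret
pcs stonith create fence-libcompute2 fence_ipmilan \
    pcmk_host_list=libcompute2 ipaddr=10.0.0.102 login=admin passwd=secret
```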


Re: [ClusterLabs] Fwd: dlm not starting

2016-02-06 Thread G Spot
Hi Ken,

Thanks for your response. I am using the ocf:pacemaker:controld resource
agent with stonith-enabled=false. Do I need to configure a stonith device
to make this work?

Regards,
afdb

On Fri, Feb 5, 2016 at 5:46 PM, Ken Gaillot  wrote:

> > I am configuring shared storage for 2 nodes (CentOS 7) with pcs,
> > gfs2-utils, and lvm2-cluster installed. When creating the resource,
> > I am unable to start dlm:
> >
> >  crm_verify -LV
> >    error: unpack_rsc_op: Preventing dlm from re-starting anywhere: operation start failed 'not configured' (6)
>
> Are you using the ocf:pacemaker:controld resource agent for dlm?
> Normally it logs what the problem is when returning 'not configured',
> but I don't see it below. As far as I know, it will return 'not
> configured' if stonith-enabled=false or globally-unique=true, as those
> are incompatible with DLM.

Re: [ClusterLabs] Fwd: dlm not starting

2016-02-05 Thread Ken Gaillot
> I am configuring shared storage for 2 nodes (CentOS 7) with pcs,
> gfs2-utils, and lvm2-cluster installed. When creating the resource,
> I am unable to start dlm:
> 
>  crm_verify -LV
>    error: unpack_rsc_op: Preventing dlm from re-starting anywhere: operation start failed 'not configured' (6)

Are you using the ocf:pacemaker:controld resource agent for dlm?
Normally it logs what the problem is when returning 'not configured',
but I don't see it below. As far as I know, it will return 'not
configured' if stonith-enabled=false or globally-unique=true, as those
are incompatible with DLM.

There is also a rare cluster error condition that will be reported as
'not configured', but it will always be accompanied by "Invalid resource
definition" in the logs.
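
The two settings named above can be inspected with pcs, and dlm is
conventionally defined as an interleaved clone. A sketch; the resource name,
monitor interval, and clone options follow the usual GFS2-on-Pacemaker recipe
and are illustrative, not taken from this poster's configuration:

```shell
# Check the settings that make ocf:pacemaker:controld return
# "not configured" (OCF_ERR_CONFIGURED, exit status 6).
pcs property list --all | grep stonith-enabled
pcs resource show dlm   # globally-unique would appear in the clone meta attributes

# The conventional definition: a cloned, interleaved controld resource.
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
```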

> 
> Feb 05 13:34:26 [24262] libcompute1 pengine:     info: determine_online_status: Node libcompute1 is online
> Feb 05 13:34:26 [24262] libcompute1 pengine:     info: determine_online_status: Node libcompute2 is online
> Feb 05 13:34:26 [24262] libcompute1 pengine:  warning: unpack_rsc_op_failure: Processing failed op start for dlm on libcompute1: not configured (6)
> Feb 05 13:34:26 [24262] libcompute1 pengine:    error: unpack_rsc_op: Preventing dlm from re-starting anywhere: operation start failed 'not configured' (6)
> Feb 05 13:34:26 [24262] libcompute1 pengine:  warning: unpack_rsc_op_failure: Processing failed op start for dlm on libcompute1: not configured (6)
> Feb 05 13:34:26 [24262] libcompute1 pengine:    error: unpack_rsc_op: Preventing dlm from re-starting anywhere: operation start failed 'not configured' (6)
> Feb 05 13:34:26 [24262] libcompute1 pengine:     info: native_print: dlm (ocf::pacemaker:controld): FAILED libcompute1
> Feb 05 13:34:26 [24262] libcompute1 pengine:     info: get_failcount_full: dlm has failed INFINITY times on libcompute1
> Feb 05 13:34:26 [24262] libcompute1 pengine:  warning: common_apply_stickiness: Forcing dlm away from libcompute1 after 100 failures (max=100)
> Feb 05 13:34:26 [24262] libcompute1 pengine:     info: native_color: Resource dlm cannot run anywhere
> Feb 05 13:34:26 [24262] libcompute1 pengine:   notice: LogActions: Stop dlm (libcompute1)
> Feb 05 13:34:26 [24262] libcompute1 pengine:   notice: process_pe_message: Calculated Transition 59: /var/lib/pacemaker/pengine/pe-input-176.bz2
> Feb 05 13:34:26 [24263] libcompute1    crmd:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> Feb 05 13:34:26 [24263] libcompute1    crmd:     info: do_te_invoke: Processing graph 59 (ref=pe_calc-dc-1454697266-177) derived from /var/lib/pacemaker/pengine/pe-input-176.bz2
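
Note that once the underlying problem is fixed, the INFINITY fail count
recorded in the log above still bans dlm from libcompute1; clearing it lets
the cluster retry the start:

```shell
# Clear the recorded start failures for dlm on all nodes.
pcs resource cleanup dlm

# Equivalent lower-level command:
crm_resource --cleanup --resource dlm
```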



Re: [ClusterLabs] Fwd: dlm not starting

2016-02-05 Thread Jan Pokorný
On 05/02/16 14:22 -0500, nameless wrote:
> Hi

Hello >nameless<,

Technical aspects aside, it goes without saying that engaging in
a community assumes some level of cultural and social compatibility.
Otherwise there is a danger the cluster will partition, and
that would certainly be unhelpful.

Maybe this is a misunderstanding on my side, but so far, you don't
appear compatible, maturity-wise.

Happy Friday!

-- 
Jan (Poki)

