On Fri, Jan 09, 2009 at 02:26:37PM +0100, Arndt Roth wrote:
> Well, the clone runs on all nodes (see the crm_mon output). That's fine, and I 
> guess it isn't globally unique then. It is just not being verified correctly by 
> crm_verify (I suppose).
> 
> Here's the clone-config:
> 
> 
>        <clone id="clone_ldirectord">
> 
>          <instance_attributes id="clone_ldirectord_inst_attr">
>            <attributes>
>              <nvpair id="clone_ldirector_conf_meta_attr_clone_max" name="clone_max" value="3"/>
>              <nvpair id="clone_ldirector_conf_meta_attr_clone_node_max" name="clone_node_max" value="1"/>

Hence, you also have to set globally_unique to false here. BTW,
shouldn't this be meta_attributes rather than instance_attributes?

Thanks,

Dejan

>            </attributes>
>          </instance_attributes>
> 
>          <group id="group_clone_ldirectord_lvs-monitor">
> 
>            <primitive id="resource_ldirectord" class="ocf" type="ldirectord" provider="heartbeat">
>              <operations>
>                <op id="resource_ldirectord_operation_op" name="monitor" description="ldirectord-monitor" interval="10" timeout="3" start_delay="10s" disabled="false" role="Started" prereq="nothing" on_fail="restart"/>
>              </operations>
>              <instance_attributes id="ldirectord_inst_attributes">
>                <attributes>
>                  <nvpair id="ldirector_attr_configfile" name="configfile" value="/etc/ha.d/ldirectord.cf"/>
>                  <nvpair id="ldirector_attr_binary" name="ldirectord" value="/usr/sbin/ldirectord"/>
>                </attributes>
>              </instance_attributes>
>            </primitive>
>            <meta_attributes id="clone_ldirectord-meta-options">
>              <attributes>
>                <nvpair id="clone_ldirectord-meta-options-target-role" name="target-role" value="Started"/>
>                <nvpair id="clone_ldirectord-meta-options-is-managed" name="is-managed" value="true"/>
>                <nvpair id="clone_ldirectord-meta-options-notify" name="notify" value="true"/>
>              </attributes>
>            </meta_attributes>
> 
>            <primitive class="lsb" id="lvs-monitor" type="lvs-monitor" restart_type="restart">
>              <operations>
>                <op id="lvs-monitor_op_start" name="start" timeout="2s"/>
>                <op id="lvs-monitor_op_stop" name="stop" timeout="2s"/>
>                <op id="lvs-monitor_op_status" name="monitor" interval="5s" timeout="2s"/>
>              </operations>
>            </primitive>
> 
>          </group>
> 
>        </clone> 
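For illustration, here is a sketch of how the quoted clone could look with the options moved into a meta_attributes block and globally_unique disabled, per the suggestion above. This follows the heartbeat 2.x-era CIB syntax used in the quoted config; the nvpair ids here are made up, and depending on the CRM version the attribute may be spelled globally_unique or globally-unique, so check your local documentation:

```xml
<clone id="clone_ldirectord">
  <meta_attributes id="clone_ldirectord_meta_attr">
    <attributes>
      <!-- run at most three copies cluster-wide, one per node -->
      <nvpair id="clone_ldirectord_meta_attr_clone_max" name="clone_max" value="3"/>
      <nvpair id="clone_ldirectord_meta_attr_clone_node_max" name="clone_node_max" value="1"/>
      <!-- anonymous clone: instances are interchangeable, so a probe on one
           node should no longer report every instance as active there -->
      <nvpair id="clone_ldirectord_meta_attr_globally_unique" name="globally_unique" value="false"/>
    </attributes>
  </meta_attributes>
  <!-- group_clone_ldirectord_lvs-monitor definition unchanged -->
</clone>
```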
> 
> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Dejan Muhamedagic
> Sent: Friday, January 9, 2009 13:04
> To: General Linux-HA mailing list
> Subject: Re: [Linux-HA] crm_verify bug?
> 
> Hi,
> 
> On Fri, Jan 09, 2009 at 12:39:34PM +0100, Arndt Roth wrote:
> > Hi *,
> > 
> >  
> > 
> > I have a group with one OCF and one LSB resource defined inside a clone
> > (an OCF clone plus an LSB primitive in a group didn't work). 
> > 
> >  
> > 
> > Clone Set: clone_ldirectord
> > 
> >     Resource Group: group_clone_ldirectord_lvs-monitor:0
> > 
> >         resource_ldirectord:0   (ocf::heartbeat:ldirectord):    Started server2
> > 
> >         lvs-monitor:0   (lsb:lvs-monitor):      Started server2
> > 
> >     Resource Group: group_clone_ldirectord_lvs-monitor:1
> > 
> >         resource_ldirectord:1   (ocf::heartbeat:ldirectord):    Started server1
> > 
> >         lvs-monitor:1   (lsb:lvs-monitor):      Started server1
> > 
> >     Resource Group: group_clone_ldirectord_lvs-monitor:2
> > 
> >         resource_ldirectord:2   (ocf::heartbeat:ldirectord):    Started server3
> > 
> >         lvs-monitor:2   (lsb:lvs-monitor):      Started server3
> > 
> >  
> > 
> > Now I get this confusing error-message on all 3 nodes when verifying
> > with crm_verify:
> > 
> >  
> > 
> > [r...@server2:~]$ crm_verify -LV
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: resource_ldirectord:0_monitor_0 found active resource_ldirectord:0 on server2
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: resource_ldirectord:1_monitor_0 found active resource_ldirectord:1 on server2
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: resource_ldirectord:2_monitor_0 found active resource_ldirectord:2 on server2
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: lvs-monitor:2_monitor_0 found active lvs-monitor:2 on server2
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: lvs-monitor:0_monitor_0 found active lvs-monitor:0 on server2
> > 
> > crm_verify[20534]: 2009/01/09_12:28:27 WARN: unpack_rsc_op: lvs-monitor:1_monitor_0 found active lvs-monitor:1 on server2
> > 
> >  
> > 
> > Why does crm_verify report on all three nodes that every resource instance
> > is active on server2?
> > 
> > Apart from the error messages everything looks OK, but how can I confirm
> > that this is only a crm_verify problem or bug?
> 
> Perhaps you can show the configuration too. Maybe this is due to
> globally-unique being set to true (the default) for the clone?
> 
> Thanks,
> 
> Dejan
> 
> >  
> > 
> > Thanks for advice,
> > 
> >  
> > 
> > Arndt
> > 
> >  
> > 
> > 
> > 
> > 
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems