Hi,
I've got another issue with clones; the attached cib.xml shows the
configuration. The clone_ip200 resource has clone_max and
clone_node_max both set to 2, and crm_verify reports no errors or
warnings, but the cluster will not start 2 clone instances on the same machine.
This is the output from crm_mon -1:
Clone Set: clone_named1
    resource_named1:0 (ocf::dns:named): Started slave1
    resource_named1:1 (ocf::dns:named): Stopped
Clone Set: clone_ip200
    resource_ip200:0 (ocf::heartbeat:IPaddr2): Started slave1
    resource_ip200:1 (ocf::heartbeat:IPaddr2): Stopped
I see this in the logs:
slave1 pengine: [3552]: WARN: native_color: Resource resource_ip200:1
cannot run anywhere
Since I don't have any constraints, I'd expect both clone instances to
start. This all worked fine on a Heartbeat 2.1/Pacemaker 0.6
cluster. I'm currently testing with pacemaker-1.0.1-3.1 and
heartbeat-2.99.2-6.1.
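One guess on my side (purely an assumption from reading the Pacemaker 1.0 documentation, not something I've verified): an anonymous clone may only be allowed one active instance per node, and running two copies on the same machine might require marking the clone as globally unique. If so, the clone section would look something like this:

```xml
<!-- Sketch only: the globally-unique meta attribute here is my
     assumption from the 1.0 docs; the ids match my existing config -->
<clone id="clone_ip200">
  <meta_attributes id="clone2_meta_attrs">
    <nvpair id="clone2_metaattr_clone_max" name="clone_max" value="2"/>
    <nvpair id="clone2_metaattr_clone_node_max" name="clone_node_max" value="2"/>
    <nvpair id="clone2_metaattr_globally_unique" name="globally-unique" value="true"/>
  </meta_attributes>
  <!-- primitive unchanged from the attached cib.xml -->
</clone>
```

Is something like this now needed in 1.0, where it wasn't in 0.6?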
Any ideas?
Thanks,
Tim
--
Tim Verhoeven - [EMAIL PROTECTED] - 0479 / 88 11 83
Hoping the problem magically goes away by ignoring it is the
"microsoft approach to programming" and should never be allowed.
(Linus Torvalds)
<cib admin_epoch="1" epoch="104" num_updates="0" validate-with="pacemaker-1.0" have-quorum="1" crm_feature_set="3.0" dc-uuid="849a018c-874f-4413-a218-81aee69e1ff3" cib-last-written="Wed Dec 10 14:46:31 2008">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="0"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.1-node: 6fc5ce8302abf145a02891ec41e5a492efbe8efe"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="849a018c-874f-4413-a218-81aee69e1ff3" uname="slave1" type="normal"/>
      <node id="1d15f533-e302-4776-bf88-9f14b0a91efd" uname="slave2" type="normal"/>
    </nodes>
    <resources>
      <clone id="clone_named1">
        <meta_attributes id="clone1_meta_attrs">
          <nvpair id="clone1_metaattr_clone_max" name="clone_max" value="2"/>
          <nvpair id="clone1_metaattr_clone_node_max" name="clone_node_max" value="1"/>
        </meta_attributes>
        <primitive id="resource_named1" class="ocf" type="named" provider="local">
          <meta_attributes id="primitive-resource_named1.meta"/>
          <instance_attributes id="resource_named1_instance_attrs"/>
          <meta_attributes id="resource_named1_meta_attrs"/>
        </primitive>
      </clone>
      <clone id="clone_ip200">
        <meta_attributes id="clone2_meta_attrs">
          <nvpair id="clone2_metaattr_clone_max" name="clone_max" value="2"/>
          <nvpair id="clone2_metaattr_clone_node_max" name="clone_node_max" value="2"/>
          <nvpair id="clone_ip200_metaattr_target_role" name="target_role" value="started"/>
        </meta_attributes>
        <primitive id="resource_ip200" class="ocf" type="IPaddr2" provider="heartbeat">
          <instance_attributes id="resource_ip200_instance_attrs">
            <nvpair id="resource_ip200_ip" name="ip" value="192.168.1.100"/>
            <nvpair id="resource_ip200_hash" name="clusterip_hash" value="sourceip-sourceport"/>
          </instance_attributes>
          <meta_attributes id="resource_ip200_meta_attrs"/>
        </primitive>
      </clone>
    </resources>
    <constraints>
    </constraints>
  </configuration>
</cib>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems