On Thursday, 7 February 2008 at 21:39, Thomas Glanzmann wrote:
> Hello,
> thank you a lot for the feedback! Now I understand how the failover
> works. Does anyone have a ready-to-use cib.xml that I could use for
> testing? I am going to try my luck right now and will come back in an
> hour or so with my findings. It would be nice if someone could comment
> on them.
>
>         Thomas

<clone id="clone_ClusterIP">
 <meta_attributes id="clone_ClusterIP_meta_attrs">
  <attributes>
   <nvpair id="clone_ClusterIP_metaattr_target_role" name="target_role" value="stopped"/>
   <nvpair id="clone_ClusterIP_metaattr_clone_max" name="clone_max" value="2"/>
   <nvpair id="clone_ClusterIP_metaattr_clone_node_max" name="clone_node_max" value="2"/>
   <nvpair id="clone_ClusterIP_metaattr_resource_stickiness" name="resource_stickiness" value="0"/>
  </attributes>
 </meta_attributes>
 <primitive id="resource_ClusterIP" class="ocf" type="IPaddr2" provider="heartbeat">
  <instance_attributes id="resource_ClusterIP_instance_attrs">
   <attributes>
    <nvpair id="0227c4ba-5799-45df-a3d6-34709e77a0aa" name="ip" value="1.2.3.4"/>
    <nvpair id="b74f32a9-0326-413f-8f51-f6239752ce15" name="clusterip_hash" value="sourceip-sourceport"/>
   </attributes>
  </instance_attributes>
 </primitive>
</clone>
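
Note that the fragment above sets target_role to "stopped", so the clone
will not be started after the fragment is loaded into the CIB. To have the
cluster bring it up, change that one nvpair (same id) to:

 <nvpair id="clone_ClusterIP_metaattr_target_role" name="target_role" value="started"/>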

- Set clone_max to the number of nodes in your cluster.
- Set clone_node_max to the same number. In clusters with more than two
nodes, 2 may be enough to provide backup for one failed node.
- Set resource_stickiness to "0" so that the resources can be distributed
equally.
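
Applying those rules to a hypothetical three-node cluster (clone_max equal
to the node count, clone_node_max of 2 as backup for one failed node), the
meta attributes would look something like this, with the same ids as in
the fragment above:

 <meta_attributes id="clone_ClusterIP_meta_attrs">
  <attributes>
   <nvpair id="clone_ClusterIP_metaattr_clone_max" name="clone_max" value="3"/>
   <nvpair id="clone_ClusterIP_metaattr_clone_node_max" name="clone_node_max" value="2"/>
   <nvpair id="clone_ClusterIP_metaattr_resource_stickiness" name="resource_stickiness" value="0"/>
  </attributes>
 </meta_attributes>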


-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Address: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: [EMAIL PROTECTED]
web: www.multinet.de

Registered office: 85630 Grasbrunn
Commercial register: Amtsgericht München HRB 114375
Managing directors: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
