Hi Tom,

Thank you very much for sharing your knowledge; this will help not only me! As I am still working on the MySQL part: you mentioned multi-master, which I indeed have running, but I suppose there will still need to be some kind of entry point into the cluster. How did you manage this?
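(Editor's note: one common way to provide a single entry point to a multi-master MySQL cluster is a TCP load balancer such as HAProxy, itself placed behind a Pacemaker-managed virtual IP for redundancy. A minimal sketch follows — the addresses, server names, and the `haproxy_check` health-check user are placeholders, and `option mysql-check` requires a reasonably recent HAProxy build:)

```
# /etc/haproxy/haproxy.cfg -- sketch only; addresses and the
# haproxy_check user are hypothetical and must exist in MySQL
listen mysql-cluster 192.168.0.50:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server db1 192.168.0.11:3306 check
    server db2 192.168.0.12:3306 check
```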
Best regards,

Tristan van Bokkem
Datacenter Operations

E-mail Personal: tristanvanbok...@i3d.net
E-mail Support: i...@i3d.net
E-mail NOC: n...@i3d.net
Website: http://www.i3d.net
Office: Interactive 3D B.V., Meent 93b, 3011 JG Rotterdam, The Netherlands

_____
From: Tom Ellis [mailto:tom.el...@canonical.com]
To: openstack@lists.launchpad.net
Sent: Thu, 16 Feb 2012 15:09:15 +0100
Subject: Re: [Openstack] Howto Nova setup with HA?

On 16/02/12 08:46, i3D.net - Tristan van Bokkem wrote:
> Any more thoughts about this subject? Ask? Vish?

The team I work in has looked at a number of methods for high availability within the Diablo release, and I've included some notes below. Hope this helps.

Front-end API servers
* load balanced with a h/w load balancer
* use a s/w LB for smaller deployments
* run nova-scheduler on each

MySQL DB
* multi-master configuration
* alternative: DRBD + Pacemaker in active/passive

RabbitMQ service
* Pacemaker with an active/passive configuration
* based on the example from the RabbitMQ site (already mentioned by someone else)
* virtual IP for the service - used for the rabbitmq config in nova.conf
* I think there are some corner cases where messages could be dropped during failover. I believe later RabbitMQ versions support full multi-master but require some client-side changes - are there any plans to support this?

nova-volume service
* current weakness in the HA setup, unless you are willing to use iSCSI tgtd with DRBD.
I believe this would still have some problems when failing over with initiators that are logged in.
* I'm hoping this will pan out for Essex, with some of the storage vendors committing nova-volume support via their APIs.

Glance
* run on multiple servers
* use another VIP in your Pacemaker setup or load balancer
* use Swift as backend storage

Compute servers
* each runs its own copy of nova-api (only instances running on the node use this)
* nova-network (multi-host configuration) with a private network

Swift
* run swift-proxy across all swift-storage nodes on a small setup

Regards,
Tom

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
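(Editor's note: to illustrate the RabbitMQ active/passive setup described above, a Pacemaker configuration in crm shell syntax might look roughly like the sketch below. The virtual IP and the mnesia path are placeholders, and the `ocf:rabbitmq:rabbitmq-server` resource agent name follows the example published on the RabbitMQ site — agent names and parameters vary by distribution, so treat this as a starting point, not a working setup:)

```
# crm configure sketch -- VIP, paths, and agent names are assumptions
primitive p_vip_rabbitmq ocf:heartbeat:IPaddr2 \
    params ip="192.168.0.100" cidr_netmask="24" \
    op monitor interval="10s"
primitive p_rabbitmq ocf:rabbitmq:rabbitmq-server \
    params mnesia_base="/drbd/rabbitmq/mnesia" \
    op monitor interval="30s"
# colocate the VIP with the broker so clients always reach the active node
group g_rabbitmq p_vip_rabbitmq p_rabbitmq
```

The rabbitmq entry in nova.conf would then point at the virtual IP rather than at any individual broker host.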