Hello Nelson,

Sure, it would be very interesting and useful to read a few words about your solutions.
On Sat, May 28, 2011 at 12:30 AM, Nelson Nahum <[email protected]> wrote:

> At Zadara Storage, we are working on a block storage system for the
> cloud. We haven't published much info yet, but if somebody is
> interested I will be happy to get on a call and explain what we are
> doing.
>
> Nelson Nahum
> CTO
> [email protected]
>
> 2011/5/27 Oleg Gelbukh <[email protected]>
>
>> Hi,
>> Our approach was defined by the need to combine storage and compute
>> on the same hosts.
>> Our configuration is dual-primary, so we can run nova-compute and
>> virtual servers on both nodes and give them write access to volumes.
>> DRBD supports this mode out of the box now, but it requires a
>> clustered file system, or great caution when running LVM on top of it.
>> However, nova-volume must run on one node of the DRBD-connected pair,
>> while the second node receives a copy of the LVM data via DRBD. The
>> tricky part is that we apparently must activate the volumes and
>> volume groups on the peer node, but automating this is relatively
>> easy.
>> For now, we are not going to share volumes outside a pair of DRBD
>> peers for live migration or as attachable volumes, except in some
>> special cases like migrating VMs between DRBD pairs.
>> Looking forward to reading a couple of words on your approach.
>>
>> 2011/5/26 Diego Parrilla Santamaría <[email protected]>
>>
>>> Hi Oleg,
>>>
>>> thank you very much for your post, it's really instructive. We are
>>> taking a different approach to HA at the storage level, but I have
>>> worked with DRBD before and I think it's a very good choice.
>>>
>>> I'm curious about how you have deployed the nova-volume nodes in
>>> your architecture. You don't specify whether the two nodes of the
>>> DRBD cluster run one or two instances of nova-volume.
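The peer-node activation step Oleg mentions (promote the node to DRBD primary, then activate the replicated volume group) could be sketched roughly as below. This is a hypothetical sketch, not code from the thread: the resource name `r0` and volume group `nova-volumes` are assumptions, and the commands are only assembled as argument lists (a dry run), not executed.

```python
# Hedged sketch: the commands that bring replicated LVM volumes online
# on the peer node of a dual-primary DRBD pair. Resource and VG names
# are hypothetical; commands are built, not run.

def peer_activation_commands(resource="r0", vg="nova-volumes"):
    """Return the commands to activate LVM volumes on a DRBD peer."""
    return [
        # Promote this node to primary; dual-primary mode requires
        # 'allow-two-primaries' in the DRBD resource configuration.
        ["drbdadm", "primary", resource],
        # Rescan so LVM notices metadata replicated over DRBD.
        ["vgscan"],
        # Activate the volume group and its logical volumes.
        ["vgchange", "-ay", vg],
    ]

if __name__ == "__main__":
    for cmd in peer_activation_commands():
        print(" ".join(cmd))
```

Wrapping these in a script run on node failover is one way the "automation is relatively easy" remark could be realized in practice.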
>>> If you run one instance, you have probably implemented some kind of
>>> fault-tolerant active-passive service in case the nova-volume
>>> process fails on the active node, but I would like to know whether
>>> you can run two active-active nova-volume instances on two different
>>> physical nodes on top of the DRBD shared resource.
>>>
>>> Regards
>>> Diego
>>>
>>> --
>>> Diego Parrilla
>>> CEO
>>> www.stackops.com | [email protected] | +34 649 94 43 29 |
>>> skype:diegoparrilla
>>> <http://www.stackops.com>
>>>
>>> On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh <[email protected]> wrote:
>>>
>>>> Hi,
>>>> We have been researching OpenStack for our private cloud, and we
>>>> want to share our experience and get tips from the community as we
>>>> go.
>>>>
>>>> We have settled on DRBD as the shared storage platform for our
>>>> installation. LVM is used on top of the DRBD device to manage
>>>> logical volumes. An OCFS2 file system is created on one of the
>>>> volumes, mounted, and set as *image_path* and *instance_path* in
>>>> *nova.conf*; the remaining space is reserved for storage volumes
>>>> (managed by nova-volume).
>>>>
>>>> As a result, we have shared storage suitable for features such as
>>>> live migration and snapshots. We also have some level of fault
>>>> tolerance thanks to DRBD I/O error handling, which automatically
>>>> redirects I/O requests to the peer node over the network in case of
>>>> a primary node failure. We created a script
>>>> <https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py>
>>>> for bootstrapping lost VMs in two crash scenarios:
>>>> * dom0 host restart/domU failure: restore the VMs on the same host
>>>> * dom0 host failure: restore the VMs on the peer node
>>>> We consider such a pair of servers with shared storage to be a
>>>> basic building block of the cloud structure.
>>>>
>>>> For those whom it may interest, the details of the DRBD
>>>> installation are here
>>>> <http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html>.
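The layout Oleg describes (images and instance directories on the shared OCFS2 volume, the rest of the space left to nova-volume) would correspond to a flagfile-style nova.conf of roughly this shape. This is a sketch only: the mount point and volume group name are invented, and the exact flag spellings should be checked against the Nova release in use.

```
# Hypothetical nova.conf fragment (Cactus-era flagfile syntax).
# /mnt/ocfs2 stands in for the OCFS2 volume mounted on both nodes,
# so image and instance directories are visible cluster-wide.
--images_path=/mnt/ocfs2/images
--instances_path=/mnt/ocfs2/instances
# Hypothetical LVM volume group (on the DRBD device) used by nova-volume.
--volume_group=nova-volumes
```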
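The two crash scenarios above reduce to one decision: restore a VM on the same host if that host came back, otherwise on the DRBD peer, which holds a replica of its storage. A minimal sketch of that decision (this is not the actual Mirantis script linked above; all names here are hypothetical):

```python
# Hedged illustration of the restore-target choice for the two crash
# scenarios; NOT the Mirantis recovery script, just a sketch.

def recovery_target(vm_host, peer_host, host_is_alive):
    """Return the host on which a lost VM should be bootstrapped.

    - dom0 restart / domU failure: vm_host is reachable again,
      so restore the VM in place;
    - dom0 failure: vm_host stays down, so restore on the DRBD peer.
    """
    return vm_host if host_is_alive(vm_host) else peer_host

# Example: node1 is down, so its VMs are restored on node2.
target = recovery_target("node1", "node2", lambda h: h != "node1")
print(target)  # -> node2
```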
>>>> I'll be glad to answer any questions and would highly appreciate
>>>> feedback on this.
>>>>
>>>> Oleg S. Gelbukh,
>>>> Mirantis Inc.
>>>> www.mirantis.com
>>>>
>>>> _______________________________________________
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to     : [email protected]
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help   : https://help.launchpad.net/ListHelp
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

