Hey Saverio,

We currently implement it by setting images_type=lvm under [libvirt] in 
nova.conf on the hypervisors that have the LVM-on-RAID 0 setup, and then 
providing different flavors (e1.* versus the default m1.* flavors) that 
schedule instances onto a host aggregate containing the LVM-hosting 
hypervisors. I suspect this is similar to what you use.
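
For reference, the relevant piece of nova.conf on those hypervisors is 
roughly the following (the volume group name here is just a placeholder, 
not our actual one):

    [libvirt]
    images_type = lvm
    # VG created on top of the RAID 0; "nova-lvm" is an example name
    images_volume_group = nova-lvm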

The advantage is that it was very simple to implement and that it 
guarantees the storage will be on the same hypervisor as the instance. The 
disadvantages are probably things you've also experienced:

- no quota management, because Nova considers it local storage (Warren Wang 
and I have complained about this in separate postings to this ML)
- no way to create additional volumes in the VG after instance launch, 
because the LVs aren't managed by Cinder

Our users like it because they've figured out these LVM volumes are exempt 
from quota management, and because it's fast; our most active hypervisors on 
any given cluster are invariably the LVM ones. Users have also been lucky so 
far: not a single RAID 0 has failed in the six months since we began 
deploying this solution, so there's probably a gap between the reliability 
they've experienced and the reliability they should actually expect.

I have begun thinking about ways to improve this system and bring these 
volumes under the control of Cinder, but have not come up with anything that 
I think would actually work. We ruled out iSCSI because of the 
administrative overhead (who really wants to manage iSCSI?) and because it 
would negate the automatic forced locality; the whole point of the design is 
to provide the maximum possible block storage speed, and if iSCSI traffic 
goes over the storage network and competes with Ceph traffic, you get 
network latency, Ceph performance degrades, and nobody's happy. I could 
possibly run cinder-volume on each of the LVM hypervisors and register each 
one as its own Cinder AZ, but I'm not sure whether Nova would create the 
volume in the matching AZ when scheduling an instance, and it would also 
expose to users which hypervisor is hosting their instance.
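
For what it's worth, that per-hypervisor cinder-volume idea would look very 
roughly like this in cinder.conf on each LVM host (untested sketch; the 
backend, VG, and AZ names are made up, and the iSCSI helper option name 
varies by release):

    [DEFAULT]
    enabled_backends = lvm-local
    # one AZ per hypervisor so volumes can be pinned to a specific host
    storage_availability_zone = hv01

    [lvm-local]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = nova-lvm
    volume_backend_name = LVM_LOCAL
    iscsi_helper = tgtadm

combined with something like this on the Nova side to keep instance and 
volume AZs aligned:

    [cinder]
    # refuse attachments where the instance and volume AZs differ
    cross_az_attach = False

Even then, I'd expect the data path to go through the local iSCSI target 
rather than straight to the LV, so it might not buy much over the current 
setup.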

From: ziopr...@gmail.com 
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for reliability and protection against
> physical disk failures, or they can launch instances that are realized as
> LVs on an LVM VG that we create on top of a RAID 0 spanning all but the OS
> disk on the hypervisor. This lets the users elect to go all-in on speed and
[..CUT..]

Hello Ned,

How do you implement this? What is the user experience like with two types 
of storage?

We generally have Ceph/RBD as the storage backend; however, we have a use 
case where we need LVM because latency is important.

To cover this use case we have different flavors; by setting a flavor key 
on a specific flavor you can force the VM to be scheduled onto a specific 
host aggregate. We have one host aggregate for hypervisors offering the LVM 
storage and another host aggregate for hypervisors running the default 
Ceph/RBD backend.
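
Concretely, the wiring is roughly the following (aggregate, property, and 
flavor names are only examples), with AggregateInstanceExtraSpecsFilter 
enabled in the Nova scheduler filters:

    # tag the aggregate and add the LVM hypervisors to it
    openstack aggregate create --property storage=lvm lvm-hosts
    openstack aggregate add host lvm-hosts compute-lvm-01

    # flavors carrying this extra spec only land on hosts in that aggregate
    openstack flavor set --property aggregate_instance_extra_specs:storage=lvm lvm.medium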

However, say a user simply creates a Cinder volume in Horizon. In that case 
the volume is created on Ceph/RBD. Is there a way to support multiple 
storage backends at the same time and let the user decide in Horizon which 
one to use?

Thanks.

Saverio

