On 10 September 2013 15:15, Diego Parrilla Santamaría <
[email protected]> wrote:

> You are describing the problems of using a shared filesystem backend for
> cinder, instead of using a driver with direct connection at block-device
> level.
>
>
This is how it is implemented in Grizzly right now. You are right about
the block-device level, but that is not where things are headed. Take a look at
https://blueprints.launchpad.net/cinder/+spec/qemu-assisted-snapshots


> It has improved a lot in the last 18 months or so, especially if you want
> to use it as shared storage for your VMs.
>
> It seems the snapshotting feature is on the way:
>
> https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py
>
Exactly, using qcow.
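For anyone who wants to try that driver, it is enabled through cinder.conf;
a minimal sketch, with placeholder paths and hostnames:

```ini
# /etc/cinder/cinder.conf -- minimal GlusterFS backend sketch
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = /var/lib/cinder/volumes
```

The shares file then lists one Gluster share per line, e.g.
`gluster-host:/cinder-vol` (again a made-up name).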


> But the killer feature is the direct access from QEMU to Gluster using
> libgfapi. It seems it has been added in Havana and it's in master branch
> since mid August:
> https://review.openstack.org/#/c/39498/
>
> If I had to consider a scalable storage solution for an Openstack
> deployment for the next 10 years, I would consider Gluster.
>
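For what it's worth, the libgfapi path shows up in the guest definition as a
libvirt network disk rather than a file on a FUSE mount; roughly like this,
with the hostname and volume/image names invented for illustration:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='cinder-vol/volume-1234'>
    <host name='gluster-host' port='24007'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Bypassing the FUSE mount is what removes the extra kernel/userspace copies.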

Just out of curiosity, have you tested any other Cinder backends?

regards
-- 
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, [email protected]
KRS: 0000440358 REGON: 101504426
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
