Hi Julien, 
we've discussed that topic many times. With Havana, things are a bit different, 
but in our previous discussions we compared a couple of technologies for 
shared storage.
In the end, it boils down to:
- the resources you have
- what you are trying to achieve

• A Ceph cluster IS production-ready; it's CephFS that is not, and CephFS is 
the shared filesystem part. In my testing (confirmed by others), the FS kept 
hanging under high load, so I considered it too unstable for OpenStack. 
• iSCSI gave me the best performance so far. What you need to do is first 
create the iSCSI LUN on your SAN and map it as a block device; libvirt is 
able to use that as storage.
• NFS was too slow, and I ended up with locks and a stalled FS.
• MooseFS will give you good performance, but it's not advised for storing 
and manipulating big images. Make sure to have a solid network backend for 
that cluster as well :)
• GlusterFS is easy to manage, but I have only had bad experiences with 
Windows instances (aka big instances :D): the replication process was eating 
all the CPU, and the I/O was very slow. 
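For the iSCSI approach above, a minimal sketch of the libvirt side, assuming 
the LUN has already been created on the SAN and the compute node has logged 
into the target with iscsiadm — the IQN, IP, and device path below are 
hypothetical placeholders, not taken from any real setup:

```xml
<!-- Disk definition in the guest's libvirt domain XML: once the initiator
     is logged in, the LUN appears as a local block device libvirt can use -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- hypothetical path; /dev/disk/by-path/ names are stable across reboots,
       unlike /dev/sdX -->
  <source dev='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2013-01.com.example:lun0-lun-0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Using the by-path name (or a multipath device) avoids surprises when the 
kernel reorders /dev/sdX devices after a reboot.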

Here is a previous topic:
http://lists.openstack.org/pipermail/openstack-operators/2013-July/003310.html


regards,
Razique


On November 3, 2013 at 7:17:19, Julien De Freitas ([email protected]) wrote:

Hi guys,
I know that question has been treated hundreds of times, but I cannot find 
one good answer.
Moreover, since Havana extends support for Cinder and Gluster, it could be 
nice to revisit the question.

What I currently use on my platform:

I configured Nexenta to provide NFS and iSCSI targets:
NFS for instance disks: I mounted a volume on each compute node and 
configured NFS in nova.conf. 
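For reference, the usual shape of that setup — the server name and export 
path below are hypothetical, not taken from this deployment — is to mount the 
NFS export over the shared instances directory on every compute node and 
persist it in /etc/fstab:

```shell
# Mount the NFS export over nova's instances directory
# (nexenta.example.com and /export/instances are placeholders)
mount -t nfs -o vers=3,hard,intr nexenta.example.com:/export/instances /var/lib/nova/instances

# Persist the mount across reboots
echo "nexenta.example.com:/export/instances /var/lib/nova/instances nfs vers=3,hard,intr 0 0" >> /etc/fstab
```

If you relocate the directory instead, nova.conf's instances_path option must 
point at the shared mount on every compute node, or live migration will not 
find the disks.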
iSCSI for the Cinder back end: I configured iSCSI so that when I create a 
volume, an iSCSI volume is created, and I'm then able to mount it inside an 
instance.
But the problem is that the replication module for Nexenta, needed to get an 
HA storage system, is expensive, and it's not a distributed file system.
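For context, a cinder.conf sketch for a Nexenta iSCSI back end looks roughly 
like this. The option names and driver path have moved between releases, so 
treat all of them as assumptions to verify against the Nexenta driver 
documentation for your release; the host and credentials are placeholders:

```ini
[DEFAULT]
# Nexenta iSCSI driver (module path varies by release; verify for Havana)
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
nexenta_host = 192.0.2.20      ; management address of the appliance (placeholder)
nexenta_user = admin           ; placeholder credentials
nexenta_password = secret
nexenta_volume = cinder        ; parent volume to carve zvol-backed LUNs from
```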

My goal: store instances' ephemeral storage on a performant, highly 
available, and cheap storage system configured for live migration :D

To achieve this, I read about CephFS and GlusterFS.
But Ceph is marked as not ready for production, and GlusterFS seems to have 
some performance concerns.

What do you think? Does anyone have production experience with GlusterFS or 
Ceph?

Thanks

_______________________________________________  
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack  
Post to : [email protected]  
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack  
-- 
Razique Mahroua
