I am sure you can find a lot of different articles about Gluster/Ceph 
performance. Here is one of them:

http://iopscience.iop.org/1742-6596/513/4/042014/pdf/1742-6596_513_4_042014.pdf

I think it is not a good idea to use CephFS with CloudStack, as the CephFS 
client is FUSE-based and adds an extra layer that costs performance.  As 
Andrija mentioned, it is also not recommended because it is NOT 
production-ready.

Ceph exposes RBD devices, and CloudStack understands RBD storage natively, so 
I see no need for CephFS when you can use RBD directly. If you want unified 
storage, you are probably interested in the S3/Swift interface that Ceph also 
offers through the RADOS Gateway. That way you can have primary storage backed 
by RBD and secondary storage backed by S3/Swift.
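For illustration, here is roughly what both sides look like from client code. 
First, a minimal sketch of creating an RBD image with the 
python-rados/python-rbd bindings; the pool name "cloudstack" and the image 
name are my own placeholders, not anything CloudStack mandates:

    import rados
    import rbd

    # Connect to the cluster with the standard config file and keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Open an I/O context on the pool that would back primary storage.
        ioctx = cluster.open_ioctx('cloudstack')  # placeholder pool name
        try:
            # Create a 10 GiB image, similar to what the hypervisor does
            # when it provisions a data disk (size is in bytes).
            rbd.RBD().create(ioctx, 'test-volume', 10 * 1024**3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

And the S3 side of the RADOS Gateway can be exercised with any S3 client, for 
example boto; the endpoint and credentials below are placeholders for your own 
radosgw setup:

    import boto
    import boto.s3.connection

    # Point the S3 client at the RADOS Gateway instead of Amazon.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',          # placeholder
        aws_secret_access_key='SECRET_KEY',      # placeholder
        host='radosgw.example.com',              # placeholder endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.create_bucket('cloudstack-secondary')
    print(bucket.name)

If both of these work against your cluster, you have the two interfaces this 
setup relies on: RBD for primary storage and an S3 endpoint for secondary.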

Vadim.

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com] 
Sent: Tuesday, November 04, 2014 10:10 AM
To: users@cloudstack.apache.org
Subject: Re: CephFS vs. GFS2

Officially, CephFS is NOT production-ready - that being said by Inktank
themselves - so you have your answer :)

On 4 November 2014 08:21, Stephan Seitz <s.se...@secretresearchfacility.com>
wrote:

> On Tuesday, 04.11.2014, at 09:07 +0200, Vladimir Melnik wrote:
> > Dear colleagues,
> >
> > Has anyone compared CephFS and GFS2, or read any articles comparing 
> > their performance? Alas, I haven't found anything on this topic.
> >
> > Right now I'm using GFS2 as the primary storage filesystem (it's 
> > accessible via iSCSI+multipath), but, of course, I'd like to gain 
> > more performance. Should I try CephFS as an alternative?
>
> The two technologies are quite different. In short, you should skip 
> CephFS in favor of Ceph's native RBD.
>
> I'll try to follow up later today - currently in a hurry.
>
> >
> > Thanks!
> >
>
>


-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------
