Thanks Mark and Scottix for the helpful comments.

Cheers.

- jupiter

On Wed, Aug 5, 2015 at 3:06 AM, Scottix <[email protected]> wrote:

> I'll be the third-party voice here and try to be factual. =)
>
> I wouldn't write off Gluster just yet.
> Beyond the object vs. file storage difference you described, it follows
> the Amazon Dynamo paper's eventually consistent approach to organizing
> data. Gluster also has a different feature set, so I would look into
> that as well.
>
> My experience with Lustre is that it is geared more towards supercomputing
> and tuning the storage to your workload. Scaling Lustre with HA is fairly
> difficult; not impossible, but be careful what you wish for.
>
> It depends on what you are trying to accomplish as the end result. I'm not
> saying Ceph isn't a great option, but make smart choices and test them
> out. Testing is how I learned about the differences, and that is how we
> ended up with our choice.
>
> Disclaimer: I run a Ceph cluster, so I am more familiar with it but
> Gluster was a big contender for us.
>
>
> On Tue, Aug 4, 2015 at 8:12 AM Mark Nelson <[email protected]> wrote:
>
>> So despite the performance overhead of replication (or EC + cache
>> tiering) I think CephFS is still a really good solution going forward.
>> We still have a lot of testing/tuning to do, but as you said there are
>> definitely advantages.
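>>
>> (For anyone curious, the EC + cache tiering setup is roughly: create an
>> EC pool, create a replicated cache pool, then tier the two together; a
>> rough sketch, where pool names and profile values are just placeholders:
>>
>>    ceph osd erasure-code-profile set myprofile k=4 m=2
>>    ceph osd pool create ecpool 128 128 erasure myprofile
>>    ceph osd pool create cachepool 128
>>    ceph osd tier add ecpool cachepool
>>    ceph osd tier cache-mode cachepool writeback
>>    ceph osd tier set-overlay ecpool cachepool )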
>>
>> I haven't looked closely at either Lustre or Gluster for several years,
>> so I'd prefer not to comment on the state of either these days. :)
>>
>> Hope that helps!
>>
>> Mark
>>
>> On 08/04/2015 05:38 AM, jupiter wrote:
>> > Hi Mark,
>> >
>> > Thanks for the comments; those are the same concerns people raise here
>> > about CephFS performance. But one thing I like about Ceph is that it
>> > can run everything, including replication, directly on XFS on
>> > commodity hardware disks. I am not clear whether Lustre can do that as
>> > well, or did you mean that Lustre has to run on top of RAID for
>> > replication and fault tolerance?
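>> >
>> > (As far as I understand, the replica count in Ceph is just a per-pool
>> > setting, e.g. something like "ceph osd pool set <pool> size 3", with
>> > the OSDs sitting on plain XFS-backed disks; please correct me if that
>> > is wrong.)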
>> >
>> > We are also comparing CephFS and Gluster. Apart from the main
>> > difference that Gluster is file-based while CephFS is built on object
>> > storage, Ceph certainly has much better scalability. Any insight on
>> > the pros, cons and performance of CephFS versus Gluster?
>> >
>> > Thank you and appreciate it.
>> >
>> > - jupiter
>> >
>> > On 8/3/15, Mark Nelson <[email protected]> wrote:
>> >> On 08/03/2015 06:31 AM, jupiter wrote:
>> >>> Hi,
>> >>>
>> >>> I'd like to deploy CephFS in a cluster, but I need to have a
>> >>> performance report compared with Lustre and Gluster. Could anyone
>> >>> point me to documents / links on performance between CephFS, Gluster
>> >>> and Lustre?
>> >>>
>> >>> Thank you.
>> >>>
>> >>> Kind regards,
>> >>>
>> >>> - j
>> >>
>> >> Hi,
>> >>
>> >> I don't know that anything like this really exists yet to be honest.
>> >> We wrote a paper with ORNL several years ago looking at Ceph
>> >> performance on a DDN SFA10K and basically saw that we could hit about
>> >> 6GB/s with CephFS while Lustre could do closer to 11GB/s.  Primarily
>> >> that was due to the journal on the write side (using local SSDs for
>> >> the journal would have improved things dramatically, as the
>> >> limitation was the IB connections between the SFA10K and the OSD
>> >> nodes rather than the disks).  On the read side we ended up running
>> >> out of time to figure it out.  We could do about 8GB/s with RADOS but
>> >> CephFS was again limited to about 6GB/s.  This was several years ago
>> >> now so things may have changed.
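>> >>
>> >> (For what it's worth, moving the FileStore journal to a local SSD is
>> >> mostly a matter of pointing "osd journal" at an SSD partition in
>> >> ceph.conf before the OSD is created; a rough sketch, where the device
>> >> path and size are just illustrative:
>> >>
>> >>    [osd]
>> >>        osd journal = /dev/disk/by-partlabel/journal-$id
>> >>        osd journal size = 10240
>> >>
>> >> We just didn't have local SSDs on those nodes.)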
>> >>
>> >> In general you should expect that Lustre will probably be faster for
>> >> large sequential writes (especially if you use Ceph replication vs
>> >> RAID6 for Lustre) and may be faster for large sequential reads.  For
>> >> small IO I suspect that Ceph may do better, and for metadata I would
>> >> expect the situation to be mixed, with Ceph faster at some things but
>> >> possibly slower at others, since afaik we haven't done a lot of
>> >> tuning of the MDS yet.
>> >>
>> >> Mark
>> >>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
