Hi Udo,

On Thu, 19 May 2011, Udo Waechter wrote:
> Hi there.
> I'm new to the list, but have been following the development of ceph for 
> a while now. Great project. It really looks like the next big thing.
> 
> After researching parallel/shared filesystems for the last year, we tried 
> to implement lustre. That project basically died for us with Oracle's 
> acquisition of Sun and all the confusion that came afterwards. Anyway, we 
> then decided on another FS, which now turns out to have been a big 
> mistake. It's unstable and unreliable.

Just out of curiosity, what was the other fs?

> Now, I am in the bad position of having to switch quickly to another FS. 
> Before I start implementing another FS, I would like to ask about the 
> status of ceph. I know that it is not supposed to be used in a 
> production environment. Nevertheless, I am inclined to give ceph a try. 
> Our plan was to switch to ceph anyway as soon as it is considered 
> stable. Our usage scenarios are these:
> 
> * multiple OSDs, multiple MDSes.
> * shared storage for KVM and Xen (does not need to be RADOS block devices; can 
> be images as we have it now)
> ** the VM servers are interconnected via internal 2x1Gbit NICs and also have a 
> 1Gbit NIC to the external LAN.
> ** traffic for VM images or RBD should go over the internal NICs (can ceph 
> handle such a scenario?).
> * shared storage for user data. Workloads are not very high, but potentially 
> many clients (currently 100-200)
> ** POSIX compatibility is a must (directory permissions, primary and secondary 
> group access, for personal files and working group files)
> ** speed is not of great importance; our LAN is 1Gbit.
> ** Workstations are Debian/Ubuntu Linux (most of them) and OS X; only a few 
> are Windows. Is it possible (without great hassle) to re-export ceph volumes 
> via nfs/samba (for OS X and Windows)?

Based on your requirements, it sounds like Ceph is a good fit.  As for 
status, here is the latest update:

- Ceph is not yet ready for production; however, we are confident that we 
  are only a few months away
- We have not experienced any data loss on our pseudo-production 
  test clusters for quite some time now.
- We (DreamHost) have expanded the core Ceph team to 7 people, who are
  primarily working on stability and performance
- Each month, more and more people are digging in and getting involved in 
  the project, which we hope will accelerate development further
- We are very close to launching a hosted beta of the object store layer, 
  which will help us identify foundational issues

In addition to the technical efforts, DreamHost is working on formalizing 
its Enterprise Support services. We hope to have this support 
infrastructure in place by the end of Q2 or the beginning of Q3.

A few other specific points:
- The single MDS configuration is much more stable than a clustered one.  
  Given your current scale, I suspect a single MDS will be sufficient.
- We haven't done much work on getting the fuse client running under 
  OS X.  You'll probably be best off re-exporting via cifs or nfs (see the 
  sketches below).  There is ongoing work to allow samba to talk directly 
  to libceph.
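
If it helps, here is roughly what a minimal single-MDS layout looks like 
in ceph.conf.  The hostnames, IPs, and paths below are made up for 
illustration, and the exact set of options you need depends on your 
version, so treat this as a sketch rather than a working config:

    [mon.a]
            host = mon1
            mon addr = 192.168.1.10:6789
    [mds.a]
            host = mds1
    [osd.0]
            host = osd1
    [osd.1]
            host = osd2

Re-exporting is just the usual NFS/Samba setup on a gateway box that has 
the kernel client mounted; again only a sketch, with hypothetical paths 
and addresses:

    # mount the ceph file system on the gateway
    mount -t ceph 192.168.1.10:6789:/ /mnt/ceph

    # /etc/exports entry on the gateway
    /mnt/ceph  192.168.1.0/24(rw,no_subtree_check,fsid=1234)

    # smb.conf share on the gateway
    [cephfs]
            path = /mnt/ceph
            read only = no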

I hope that gives you a bit more information to base your decision on!

sage



> Our storage servers currently are: 2x Dell R510 with 12x1TB (one of 
> them) and 12x2TB (the second), each with 16GB RAM and an i7 quad-core. A 
> third (with 12x2TB) and a fourth are on their way. These would become 
> OSDs and MDSes, I guess.
> 
> What are your opinions? Is ceph considered unstable in all use cases, or 
> would a moderate workload work even in production environments? Are 
> there any compatibility-breaking updates on their way?
> 
> I would rather start using ceph now, even if it is not 100% performant, 
> as long as it is 99% stable and data loss or corruption is not really 
> expected....
> 
> Thanks for your thoughts,
> udo.
> -- 
> :: udo waechter - r...@zoide.net :: N 52º16'30.5" E 8º3'10.1"
> :: genuine input for your ears: http://auriculabovinari.de 
> ::                          your eyes: http://ezag.zoide.net
> ::                          your brain: http://zoide.net
