When starting this, you should be aware that the filesystem (CephFS) is not yet fully supported.
On Thursday, March 20, 2014, Jordi Sion <[email protected]> wrote:

> Hello,
>
> I plan to set up a Ceph cluster for a small hosting company. The aim is to
> keep customer data (websites and mail folders) in a distributed cluster, and
> then to set up different servers (web, SMTP, POP and IMAP) that access the
> cluster data.
>
> The goals are:
>
> * Store all data replicated across different nodes.
> * Have all data accessible from every server (e.g. the www servers). This
>   way we can easily move a website from one server to another, or from,
>   say, Apache to nginx, or have every email account accessible from every
>   POP/IMAP server.
>
> I am about to build a 3-node cluster to start tests: 1 MDS with a 240 GB SSD
> and 2 OSD+monitor nodes with 2x2 TB disks, each with 32 GB of RAM,
> interconnected over a 1 Gbit private LAN.

The MDS doesn't need any local storage beyond a few config files. :)

> Mainly, the servers using the cluster will provide web serving, FTP access
> and email (SMTP, POP and IMAP). I also need to provide MySQL databases, and
> I am not sure how that data fits in a Ceph cluster.
>
> I have some questions:
>
> 1) The plan is to keep the MDS node dedicated. Will the OSDs be able to act
> as web servers (Apache and ProFTPD) or mail servers (Postfix, Dovecot,
> Amavis and SpamAssassin)?

That will depend on how much CPU they have and what clients you're using
(you don't want to loopback-mount with a kernel client).

> 2) How can I manage to have MySQL data stored in Ceph? Is that a good idea?
> Any suggestions?

I'd recommend just using RBD rather than CephFS. That'll give you a block
device which you can mount anywhere (but only on one host at a time). A short
sketch of that setup follows at the end of this message.

> 3) To prevent major disasters, what is a good practice/strategy to back up
> or replicate data in the cluster?

Hmm, there's not a good tailored answer for CephFS. With RBD there are some
options around snapshots and incremental diffs (see the second sketch below).
-Greg

> Thanks in advance,
> Jordi

-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
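
For reference, a minimal sketch of the RBD-backed MySQL setup described above,
assuming a pool named "rbd", an image named "mysql-data" and a 100 GB size
(all placeholders, not from the original thread):

    # Create a 100 GB image (rbd sizes are given in MB), map it with the
    # kernel client, format it and mount it on the one host that runs MySQL.
    rbd create mysql-data --pool rbd --size 102400
    rbd map rbd/mysql-data            # udev exposes it as /dev/rbd/rbd/mysql-data
    mkfs.ext4 /dev/rbd/rbd/mysql-data
    mount /dev/rbd/rbd/mysql-data /var/lib/mysql

Only one host should have the image mounted at a time, and, per the caveat
above, the kernel RBD client shouldn't be mapped on a machine that is also
running an OSD.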
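
And a sketch of the snapshot/incremental-diff backup workflow mentioned for
question 3, again with made-up image, snapshot and file names; the destination
image must already exist (same size) before the first import-diff:

    # First backup: snapshot, then export everything up to that snapshot.
    rbd snap create rbd/mysql-data@base
    rbd export-diff rbd/mysql-data@base base.diff

    # Later backups: snapshot again and export only the changes since "base".
    rbd snap create rbd/mysql-data@daily-1
    rbd export-diff --from-snap base rbd/mysql-data@daily-1 daily-1.diff

    # On the backup cluster, replay the diffs onto an image of the same name.
    # Each import-diff also recreates the snapshot, so the next diff applies cleanly.
    rbd import-diff base.diff mysql-data
    rbd import-diff daily-1.diff mysql-data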
