On Tue, Mar 22, 2016 at 9:37 AM, John Spray <[email protected]> wrote:
> On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta <[email protected]> wrote:
>> Hello All,
>>
>> I have experience using Lustre but I am new to the Ceph world, and I
>> have some questions for the Ceph users out there.
>>
>> I am thinking about deploying a Ceph storage cluster that lives in
>> multiple locations, "Building A" and "Building B". This cluster will
>> be comprised of two Dell servers with 10TB (5 * 2TB disks) of JBOD
>> storage and an MDS server, over a 10Gb network. We will be using
>> CephFS to serve multiple operating systems (Windows, Linux, OS X).
>
> A two-node Ceph cluster is rarely wise. If one of your servers goes
> down, you're going to be down to a single copy of the data (unless
> you've got a whopping 4 replicas to begin with), and so you'd be ill
> advised to write anything to the cluster while it's in a degraded
> state. If you've only got one MDS server, your system is going to
> have a single point of failure anyway.
>
> You should probably look again at what levels of resilience and
> availability you're trying to achieve here, and think about whether
> what you really want might be two NFS servers backing up to each
> other.
>
>> My main question is: how well does CephFS work in a
>> multi-operating-system environment, and how well does it support
>> NFS/CIFS?
>
> Exporting CephFS over NFS works (either kernel NFS or nfs-ganesha);
> beyond that, CephFS doesn't care too much. The Samba integration is
> less advanced and less tested.
Well, Samba support is probably less advanced, but all of those
combinations get run in our nightly tests and do pretty well.

> Bug reports are welcome if you try it out.

*thumbs up*
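
For anyone weighing the degraded-write point above: the relevant knobs
are the pool's "size" (replica count) and "min_size" (the fewest
replicas at which the pool still accepts I/O). A minimal sketch,
assuming a data pool named "cephfs_data" (the pool name and values are
illustrative, not from this thread):

# With two OSD hosts and the default CRUSH rule placing one replica
# per host, only two copies can exist, so size=3 is unsatisfiable.
ceph osd pool set cephfs_data size 2

# min_size=2 makes the pool block I/O as soon as one host is down,
# enforcing "don't write while degraded" at the cost of availability.
ceph osd pool set cephfs_data min_size 2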
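
On the kernel NFS route mentioned above, the usual pattern is to mount
CephFS on a gateway host and re-export the mount. A rough sketch; the
monitor address, mount point, and export options are placeholders:

# Mount CephFS with the kernel client.
mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Re-export it over kernel NFS. An explicit fsid= is required because
# a network filesystem has no block device to derive one from.
echo '/mnt/cephfs *(rw,sync,fsid=100,no_subtree_check)' >> /etc/exports
exportfs -ra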
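
And for the Samba side, the vfs_ceph module lets smbd talk to the
cluster directly rather than re-exporting a kernel mount. A sketch
under the assumption that your Samba build ships vfs_ceph and that a
cephx user "samba" exists (both assumptions, not from this thread):

# Append a CephFS share backed by vfs_ceph to the default smb.conf.
cat >> /etc/samba/smb.conf <<'EOF'

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
EOF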
