Mirror the OS disks (RAID1) and use the remaining 10 disks for 10 OSDs.
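
To put rough numbers on the trade-off (a minimal sketch in Python; the
12x4TB-per-host layout, 10 hosts and 3x replication are assumptions taken
from your description and Ceph's default, not anything measured here):

# Back-of-the-envelope capacity math for the layout above.
# Assumptions: 12 x 4 TB drives per host, 2 mirrored for the OS,
# 10 left as OSDs, 10 hosts, default 3x replication.
DRIVE_TB = 4
DRIVES_PER_HOST = 12
OS_MIRROR_DRIVES = 2
HOSTS = 10
REPLICATION = 3

osd_drives = DRIVES_PER_HOST - OS_MIRROR_DRIVES     # 10 OSDs per host
raw_tb = HOSTS * osd_drives * DRIVE_TB              # raw OSD capacity
usable_tb = raw_tb / REPLICATION                    # after 3x replication
os_tb = HOSTS * OS_MIRROR_DRIVES * DRIVE_TB         # capacity spent on OS mirrors

print(f"Raw OSD capacity:     {raw_tb} TB")         # 400 TB
print(f"Usable at 3x replica: {usable_tb:.0f} TB")  # ~133 TB
print(f"Spent on OS mirrors:  {os_tb} TB")          # 80 TB across the cluster

The extra 4 TB per host that the second mirror disk costs is small next to
what replication already takes, and it means losing an OS disk doesn't take
all of that host's OSDs out of the cluster and trigger a large rebalance.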
> On Aug 12, 2016, at 7:41 AM, Félix Barbeira <fbarbe...@gmail.com> wrote:
> 
> Hi,
> 
> I'm planning to build a Ceph cluster but I have a serious doubt. At this 
> moment we have ~10 DELL R730xd servers with 12x4TB SATA disks each. The 
> official Ceph docs say:
> 
> "We recommend using a dedicated drive for the operating system and software, 
> and one drive for each Ceph OSD Daemon you run on the host."
> 
> I could use, for example, 1 disk for the OS and 11 for OSD data. On the 
> operating system I would run 11 daemons to control the OSDs. But what 
> happens to the cluster if the disk with the OS fails? Maybe the cluster 
> thinks that 11 OSDs failed and tries to replicate all that data across the 
> cluster...that doesn't sound good.
> 
> Should I use 2 disks for the OS in a RAID1? In that case I'm "wasting" 8TB 
> for the ~10GB that the OS actually needs.
> 
> All the docs I've been reading say Ceph has no single point of failure, so 
> I think this scenario must have an optimal solution; maybe somebody could 
> help me.
> 
> Thanks in advance.
> 
> -- 
> Félix Barbeira.

Rick Stehno


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
