Re: [ceph-users] Ceph and RAID

2013-10-03 Thread John-Paul Robinson
What is the take on such a configuration? Is it worth the effort of tracking rebalancing at two layers (RAID mirror, and possibly Ceph if the pool has a redundancy policy)? Or is it better to just let Ceph rebalance itself when you lose a non-mirrored disk? If following the RAID mirror approach, ...
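
If Ceph alone handles the redundancy, recovery after a lost disk can be watched with the standard status commands; a minimal sketch (the OSD id is hypothetical):

    ceph health detail    # lists degraded/recovering placement groups
    ceph -w               # stream the cluster log while recovery runs
    ceph osd out 7        # mark the dead OSD out if it hasn't been already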

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Scott Devoid
An additional side to the RAID question: when you have a box with more drives than you can front with OSDs due to memory or CPU constraints, is some form of RAID advisable? At the moment one OSD per drive is the recommendation, but from my perspective this does not scale at high drive densities ...
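
One way to front several drives with a single OSD is software RAID underneath it; a sketch assuming mdadm is available, with hypothetical device names:

    # four drives become one md device, fronted by a single OSD
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0
    mount /dev/md0 /var/lib/ceph/osd/ceph-0

This cuts the per-drive RAM and CPU cost of OSD daemons, at the price of rebuilds also happening at the md layer.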

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Aronesty, Erik
From: John-Paul Robinson. Sent: Thursday, October 03, 2013 12:08 PM. To: ceph-users@lists.ceph.com. Subject: Re: [ceph-users] Ceph and RAID. What is the take on such a configuration? Is it worth the effort of tracking rebalancing at two layers (RAID mirror, and possibly Ceph if the pool has a redundancy policy) ...

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Mike Dawson
Sent: Thursday, October 03, 2013 12:08 PM. To: ceph-users@lists.ceph.com. Subject: Re: [ceph-users] Ceph and RAID. What is the take on such a configuration? Is it worth the effort of tracking rebalancing at two layers (RAID mirror, and possibly Ceph if the pool has a redundancy policy)? Or is it better to just let Ceph ...

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Dimitri Maziuk
On 10/03/2013 12:40 PM, Andy Paluch wrote: Don't you have to take down a Ceph node to replace a defective drive? If I have a Ceph node with 12 disks and one goes bad, would I not have to take the entire node down to replace it and then reformat? If I have a hotswap chassis but am using just an ...
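
With a hotswap chassis only the affected OSD needs to come down; the usual sequence to retire a failed disk (osd.7 is hypothetical) looks roughly like:

    ceph osd out 7               # drain data off the failed OSD
    service ceph stop osd.7      # stop just that daemon; the node stays up
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7
    # swap the disk, then provision a fresh OSD on the new drive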

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread James Harper
If following the RAID mirror approach, would you then skip redundancy at the Ceph layer to keep your total overhead the same? That seems risky in the event you lose the storage server holding the RAID-1'd drives: no Ceph-level redundancy would then be fatal. But if you do RAID-1 ...
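
A rough comparison of raw-space overhead for the same usable data makes the tradeoff concrete:

    RAID-1 under each OSD + Ceph size 1:  2x raw, but losing one server is fatal
    RAID-1 under each OSD + Ceph size 2:  4x raw, survives a server loss
    no RAID               + Ceph size 3:  3x raw, survives a server loss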

[ceph-users] Ceph and RAID

2013-10-02 Thread shacky
Hi. I am going to create my first Ceph cluster using 3 physical servers and the Ubuntu distribution. Each server will have three 3 TB hard drives, connected with or without a physical RAID controller. I need to be protected against the failure of one of these three servers while keeping as much usable space as possible, ...
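
With that hardware, usable space follows directly from the replication factor, since copies must sit on different servers to survive a server fault; a rough calculation:

    raw capacity:  3 servers x 3 drives x 3 TB = 27 TB
    size=2:  27 / 2 = 13.5 TB usable, survives one server failure
    size=3:  27 / 3 =  9.0 TB usable, survives one failure with a spare copy left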

Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Loic Dachary
Hi, I would not use RAID5 since it would be redundant with what Ceph provides. My 2cts ;-) On 02/10/2013 13:50, shacky wrote: Hi. I am going to create my first Ceph cluster using 3 physical servers and the Ubuntu distribution. Each server will have three 3 TB hard drives, connected with or ...
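
Ceph's replication factor is a per-pool setting, which is why RAID5 underneath duplicates the protection; for example, with the default rbd pool:

    ceph osd pool set rbd size 3        # three copies of every object
    ceph osd pool set rbd min_size 2    # stay writeable with one copy down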

Re: [ceph-users] Ceph and RAID

2013-10-02 Thread shacky
Thank you very much for your answer! So I can skip hardware RAID controllers on the storage servers. Good news. I see in the Ceph documentation that I will have to manually configure the datastore to be efficient, reliable and fully fault tolerant. Is there a particular way to configure ...
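
Most of the fault-tolerance configuration referred to lives in ceph.conf; a minimal sketch, with illustrative values rather than a recommendation:

    [global]
    osd pool default size = 3        # replicas for newly created pools
    osd pool default min size = 2    # keep serving I/O with one replica down
    osd crush chooseleaf type = 1    # place replicas on distinct hosts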

Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Loic Dachary
I successfully installed a new cluster recently following the instructions here: http://ceph.com/docs/master/rados/deployment/ Cheers On 02/10/2013 16:32, shacky wrote: Thank you very much for your answer! So I can skip hardware RAID controllers on the storage servers. Good news. ...
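
Those instructions boil down to a handful of ceph-deploy steps; a sketch, using shacky's three servers as hypothetical hostnames:

    ceph-deploy new server1 server2 server3
    ceph-deploy install server1 server2 server3
    ceph-deploy mon create server1 server2 server3
    ceph-deploy gatherkeys server1
    ceph-deploy osd create server1:sdb server1:sdc server1:sdd   # repeat per host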

Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Dimitri Maziuk
On 2013-10-02 07:35, Loic Dachary wrote: Hi, I would not use RAID5 since it would be redundant with what Ceph provides. I would not use RAID-5 (or 6) because its safety on modern drives is questionable, and because I haven't seen anyone comment on Ceph's performance on it, e.g. the OpenStack docs ...
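
The safety concern is usually framed in terms of unrecoverable read errors (UREs) during a rebuild; a back-of-the-envelope version, assuming the common consumer-drive spec of 1 URE per 10^14 bits:

    RAID-5 of 4 x 3 TB drives, one failed:
      data read during rebuild      ~ 9 TB = 7.2e13 bits
      expected UREs                 ~ 7.2e13 / 1e14 = 0.72
      P(rebuild hits at least one)  ~ 1 - e^(-0.72) = ~0.51

In other words, roughly even odds that a single rebuild encounters an error the array can no longer correct.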