Re: [ceph-users] Placement groups on a 216 OSD cluster with multiple pools

2013-11-15 Thread Andrey Korolyov
Of course, but it means that in case of failure you can no longer trust your data's consistency and would have to recheck it against separately stored checksums or similar. I'm leaving aside the fact that Ceph will probably not recover a pool properly with a replication number lower than 2 in many cases. So general
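[Not part of the original thread: a minimal sketch of the "separately stored checksums" idea mentioned above, i.e. keeping a SHA-256 manifest outside the cluster and re-verifying object payloads against it after a failure. The directory layout and file names are hypothetical, purely for illustration.]

# Build and verify an external checksum manifest for a set of object files.
import hashlib
import json
from pathlib import Path

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(obj_dir: Path, manifest_path: Path) -> None:
    """Record a checksum for every object file under obj_dir."""
    manifest = {p.name: sha256_of(p.read_bytes())
                for p in obj_dir.iterdir() if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify(obj_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of objects whose current checksum no longer matches."""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, digest in manifest.items()
            if sha256_of((obj_dir / name).read_bytes()) != digest]

if __name__ == "__main__":
    objects = Path("objects")          # hypothetical local copies of the pool's objects
    manifest = Path("checksums.json")  # stored outside the cluster
    build_manifest(objects, manifest)
    print("mismatched:", verify(objects, manifest))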

Re: [ceph-users] Placement groups on a 216 OSD cluster with multiple pools

2013-11-14 Thread Nigel Williams
On 15/11/2013 8:57 AM, Dane Elwell wrote: [2] - I realise the dangers/stupidity of a replica size of 0, but some of the data we wish to store just isn't /that/ important. We've been thinking of this too. The application is storing boot images, ISOs, local repository mirrors, etc., where recovery
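[Not part of the original thread: a hedged sketch of how such expendable data (ISOs, mirrors) could be put in its own dedicated pool with a low replica count, rather than lowering the replica size anywhere else. The pool name and PG count are made up for illustration; the ceph CLI subcommands used (osd pool create / osd pool set) are standard.]

# Create a dedicated single-copy pool for re-creatable data by shelling out to the ceph CLI.
import subprocess

def ceph(*args: str) -> None:
    """Run a ceph CLI command and fail loudly if it errors."""
    subprocess.run(["ceph", *args], check=True)

if __name__ == "__main__":
    pool = "scratch-images"   # hypothetical pool for re-creatable data
    pg_num = "2048"           # illustrative PG count
    ceph("osd", "pool", "create", pool, pg_num, pg_num)
    ceph("osd", "pool", "set", pool, "size", "1")      # single copy: no redundancy
    ceph("osd", "pool", "set", pool, "min_size", "1")  # keep serving I/O with that one copy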

[ceph-users] Placement groups on a 216 OSD cluster with multiple pools

2013-11-14 Thread Dane Elwell
Hello, We’ll be going into production with our Ceph cluster shortly and I’m looking for some advice on the number of PGs per pool we should be using. We have 216 OSDs totalling 588TB of storage. We’re intending to have several pools, with not all pools sharing the same replica count - so, som
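[Not part of the original post: a rough sketch of the usual rule of thumb from the Ceph documentation, i.e. total PGs per pool of roughly (OSD count * 100) / replica size, rounded up to a power of two, with the total then split across pools in proportion to the data each is expected to hold. The per-pool names, sizes and data shares below are illustrative assumptions, not figures from the thread.]

# Estimate per-pool PG counts for a 216-OSD cluster with mixed replica sizes.
import math

def pg_target(osds: int, replica_size: int, per_osd: int = 100) -> int:
    """Nearest power of two at or above (osds * per_osd) / replica_size."""
    raw = osds * per_osd / replica_size
    return 2 ** math.ceil(math.log2(raw))

osds = 216
pools = [("rbd", 3, 0.6), ("backups", 2, 0.3), ("scratch", 1, 0.1)]  # (name, size, data share)
for name, size, share in pools:
    total = pg_target(osds, size)
    print(f"{name:8s} size={size} share={share:.0%} -> ~{int(total * share)} PGs")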