Thank you for your help. While reading your answer, I realized that I had
totally misunderstood how the CRUSH map algorithm and data placement work in Ceph.
I fixed my issue with this new rule:
"rules": [
{
"rule_id": 0,
"rule_name": "replicated_ruleset",
"ruleset":
Thank you for your answer, but I don't really understand what you mean.
I use this map to distribute replicas into 2 different datacenters, but I don't
know where the mistake is.
On Dec 14, 2015 7:56 PM, "Samuel Just" wrote:
> 2 datacenters.
> -Sam
>
> On Mon, Dec 14, 2015 at
Hi,
I have a functional and operational Ceph cluster (version 0.94.5),
with 3 nodes (each acting as MON and OSD); everything was fine.
I added a 4th OSD node (same configuration as the 3 others) and now the cluster
status is HEALTH_WARN (active+remapped).
cluster
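The usual read-only commands for looking at this kind of state, for anyone following
along (nothing here is specific to this cluster), are:

    ceph -s                      # overall health and PG state counts
    ceph health detail           # lists the PGs stuck in active+remapped
    ceph osd tree                # shows the new host and its place in the CRUSH hierarchy
    ceph pg dump_stuck unclean   # details of the PGs that cannot reach a clean state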
2 datacenters.
-Sam
On Mon, Dec 14, 2015 at 10:17 AM, Reno Rainz wrote:
> Hi,
>
> I have a functional and operational Ceph cluster (version 0.94.5), with
> 3 nodes (each acting as MON and OSD); everything was fine.
>
> I added a 4th OSD node (same configuration as the 3
You most likely have pool size set to 3, but your CRUSH rule requires
replicas to be separated across DCs, of which you have only 2.
-Sam
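A quick way to check this diagnosis, as a sketch; the pool name "rbd" below is only
an example, substitute the affected pool:

    ceph osd pool get rbd size                     # replica count the pool wants (e.g. size: 3)
    ceph osd crush rule dump replicated_ruleset    # which bucket type the rule separates replicas on
    ceph osd tree | grep datacenter                # how many datacenter buckets actually exist

If size is 3, the rule's chooseleaf type is "datacenter", and only 2 datacenter
buckets exist, CRUSH can only place 2 of the 3 replicas, which is why the PGs stay
active+remapped.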
On Mon, Dec 14, 2015 at 11:12 AM, Reno Rainz wrote:
> Thank you for your answer, but I don't really understand what you mean.
>
> I