Hello,

We will need to see a full export of your CRUSH map rules.
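For reference, one common way to pull a decompiled copy of the CRUSH map (the file names here are just placeholders):

    # grab the compiled CRUSH map from the cluster
    ceph osd getcrushmap -o crushmap.bin

    # decompile it to readable text with crushtool
    crushtool -d crushmap.bin -o crushmap.txt

    # or dump just the rules as JSON
    ceph osd crush rule dump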
Depends what the failure domain is set to.

Ash

Sent from my iPhone

On 26 Jun 2017, at 4:11 PM, Stéphane Klein <[email protected]> wrote:

Hi,

I have these OSDs:

root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT   TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216     host ceph-storage-rbx-1
 0  3.61739         osd.0                         up  1.00000          1.00000
 2  3.61739         osd.2                         up  1.00000          1.00000
 4  3.61739         osd.4                         up  1.00000          1.00000
-3 10.85216     host ceph-storage-rbx-2
 1  3.61739         osd.1                         up  1.00000          1.00000
 3  3.61739         osd.3                         up  1.00000          1.00000
 5  3.61739         osd.5                         up  1.00000          1.00000

with:

osd_pool_default_size: 2
osd_pool_default_min_size: 1

Question: does Ceph always write the data to one OSD on host1 and the replica to an OSD on host2? I fear that Ceph sometimes writes the data to osd.0 and the replica to osd.2.

Best regards,
Stéphane

--
Stéphane Klein <[email protected]>
blog: http://stephane-klein.info
cv: http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
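For context, on a stock setup the default replicated rule usually has a host-level failure domain; a decompiled map would typically contain something like the following (a sketch of the common default, not necessarily this cluster's actual rule):

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        # pick each replica from a different bucket of type "host"
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

With "step chooseleaf firstn 0 type host", CRUSH places each replica on a different host, so with size 2 the two copies would never land on osd.0 and osd.2 together (both on ceph-storage-rbx-1). If the rule instead said "type osd", both copies could end up on the same host.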
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
