Found what looks like weird behavior with ceph 0.67.3. I have 5 servers. The monitor runs on server 1, and servers 2 to 5 each run one OSD (osd.0 - osd.3).
I did a 'ceph pg dump' and could see the PGs distributed more or less randomly across all 4 OSDs, which is the expected behavior. However, after I brought up a new OSD (osd.4) on the same server that runs the monitor, 'ceph pg dump' shows the acting set as [4,x] for every PG, i.e. every PG now has osd.4 as its primary OSD. Is this expected behavior??

Regards,

Chen Ching-Cheng Chen
CREDIT SUISSE | Information Technology | MDS - New York
One Madison Avenue | New York, NY 10010 | United States
Phone +1 212 538 8031 | Mobile +1 732 216 7939
[email protected] | www.credit-suisse.com
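In case it helps anyone reproduce what I'm seeing: below is a small sketch of how I tallied primaries per OSD from 'ceph pg dump --format json' instead of eyeballing the plain dump. The field names ("pg_stats", "acting") are assumptions based on the JSON output of this ceph release and may differ; the sample data at the bottom is fabricated just to show the shape.

```python
# Sketch: count how many PGs each OSD is primary for, given the JSON
# output of `ceph pg dump --format json`. Field names are assumed.
import json
from collections import Counter

def primary_counts(pg_dump_json: str) -> Counter:
    """Return a Counter mapping OSD id -> number of PGs it is primary for.

    The first entry in each PG's "acting" list is treated as the primary.
    """
    dump = json.loads(pg_dump_json)
    counts = Counter()
    for pg in dump["pg_stats"]:
        acting = pg["acting"]
        if acting:  # skip PGs with an empty acting set
            counts[acting[0]] += 1
    return counts

# Fabricated sample mimicking the assumed structure:
sample = json.dumps({
    "pg_stats": [
        {"pgid": "0.0", "acting": [4, 1]},
        {"pgid": "0.1", "acting": [4, 2]},
        {"pgid": "0.2", "acting": [3, 0]},
    ]
})

print(primary_counts(sample))  # prints Counter({4: 2, 3: 1})
```

In my cluster, a tally like this shows osd.4 as primary for essentially every PG, which is the skew I'm asking about.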
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
