> I have almost the same problem.
> My cluster before: 1 mon, 2 mds, 3 osds (osd-00, osd-01, osd-02).
> When I wanted to add the fourth osd, osd-03, I did:
>
> mon-00 # ceph mon getmap -o /opt/cluster_debug/monmap
> osd-03 # cosd -c /etc/ceph/ceph.conf -i 3 --mkfs --monmap /root/monmap
> mon-00 # ceph osd setmaxosd 4
> osd-03 # service ceph start osd3
> ceph osd getcrushmap -o /opt/cluster_debug/crushmap
> crushtool -d /opt/cluster_debug/crushmap -o /opt/cluster_debug/crushmap.txt
> vim /opt/cluster_debug/crushmap.txt
> crushtool -c /opt/cluster_debug/crushmap.txt -o /opt/cluster_debug/crushmap.new
> ceph osd setcrushmap -i /opt/cluster_debug/crushmap.new
>
> [root@mon-00 ~]# ceph osd stat
> 2011-04-29 03:25:31.039361 mon <- [osd,stat]
> 2011-04-29 03:25:31.039790 mon0 -> 'e8: 4 osds: 4 up, 4 in' (0)
>
> But no data was migrated to osd-03, and in addition the cosd processes on all 4 osds disappeared!
>
> The attachment is the new crushmap.
Your crushmap is not correct. You should add an entry "item device3 weight 1.000" to the root bucket ("domain root") of your crushmap.

--
Henry Chang
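
For illustration, here is a minimal sketch of what the relevant part of crushmap.txt might look like after that change, assuming the default device and bucket names of that Ceph release (the ids, alg, and hash values shown are typical defaults; your actual map, which is in the attachment and not reproduced here, may differ):

# devices
device 0 device0
device 1 device1
device 2 device2
device 3 device3	# the new osd-03 (add this line too if it is missing)

# buckets
domain root {
	id -1		# do not change unnecessarily
	alg straw
	hash 0	# rjenkins1
	item device0 weight 1.000
	item device1 weight 1.000
	item device2 weight 1.000
	item device3 weight 1.000	# the missing entry
}

After recompiling the map with crushtool -c and injecting it with ceph osd setcrushmap (as in the quoted steps), data should start rebalancing onto osd-03.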
