Thanks, *Lewis*.
I removed the OSD as follows, then re-added it (re-add steps sketched below). That solved it.
ceph osd out 26                  # mark the OSD out so data migrates off it
/etc/init.d/ceph stop osd.26     # stop the daemon
ceph osd crush remove osd.26     # remove it from the CRUSH map
ceph auth del osd.26             # delete its cephx key
ceph osd down 26                 # make sure it is marked down
ceph osd rm 26                   # remove it from the OSD map
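For reference, the re-add side went roughly like this. This is only a sketch of the standard manual procedure, not an exact transcript of my commands; it assumes the replacement disk is already formatted and mounted at /var/lib/ceph/osd/ceph-26, and the CRUSH weight and hostname below are placeholders:

ceph osd create                  # allocates the next free id (26 again, since it was just freed)
ceph-osd -i 26 --mkfs --mkkey    # write a fresh data directory and cephx key onto the new disk
ceph auth add osd.26 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-26/keyring
ceph osd crush add osd.26 1.0 host=HOSTNAME   # placeholder weight and host bucket
/etc/init.d/ceph start osd.26    # bring the daemon up

After that, "ceph osd tree" should show osd.26 up again, and "ceph -w" lets you watch the backfill.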
On 05/31/2014 04:16 AM, Craig Lewis wrote:
On 5/30/14 03:08, Ta Ba Tuan wrote:
Dear all,
I'm using Firefly. One disk failed, so I replaced the failed disk and
started that OSD.
But that OSD is still down.
Help me,
Thank you
You need to re-initialize the disk after replacing it. Ceph stores
cluster information on the disk, and ceph-osd needs that information
to start. The process is essentially removing the OSD, then adding it
again.
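As a concrete illustration (assuming the default cluster name "ceph" and a filestore OSD, which was the standard in Firefly), that on-disk cluster information lives in the OSD's data directory, so you can see at a glance why a blank replacement disk won't start:

ls /var/lib/ceph/osd/ceph-26/
# a healthy OSD typically holds fsid, ceph_fsid, whoami, keyring, superblock
# and the current/ object store; a freshly swapped disk has none of these,
# so ceph-osd has nothing to start from

ceph osd tree | grep osd.26      # the OSD stays marked down until re-initialized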
This blog walks you through the details:
http://karan-mj.blogspot.com/2014/03/admin-guide-replacing-failed-disk-in.html
Or you can search the mailing list archives for "replace osd" to find
more discussion.
--
*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email [email protected] <mailto:[email protected]>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com