Procedure when an OSD is down or an error is encountered during Ceph status checks:
Ceph version 0.67.4
1) Has the cluster just started and not yet completed starting its OSDs?
2) Ensure continuous hard access to the Ceph node:
- either via a HW serial console server and serial console redirect.
- by
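As a rough sketch, the usual ceph status checks for a down OSD look like this (run from a node with an admin keyring):
# Overall health and any OSDs reported down:
ceph health detail
ceph -s
# Which OSDs are up/down and in/out, and where they sit in the CRUSH map:
ceph osd stat
ceph osd tree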
Thanks Kyle,
-- I'll look into and try out udev and upstart.
-- Yes on set noout; definitely a good idea until it's certain that the OSD is gone for good.
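For reference, a minimal sketch of setting and later clearing the noout flag (standard ceph CLI, run with admin privileges):
# Keep down OSDs from being marked out, so no rebalancing starts while you investigate:
ceph osd set noout
# Once the OSD is confirmed dead (or back up), restore normal behavior:
ceph osd unset noout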
If the OSD disk is totally gone:
Then mark it down and out.
Remove it from the CRUSH map / update the CRUSH map.
Verify the CRUSH map.
Then use ceph-deploy to add a replacement.
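As a rough sketch of that removal/replacement sequence (osd.12, host ceph01, and disk sdb are hypothetical placeholders; journal placement is omitted):
# Take the dead OSD out of the data distribution:
ceph osd out osd.12
# Remove it from the CRUSH map, delete its auth key, and delete the OSD entry:
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12
# Verify the CRUSH map no longer lists it:
ceph osd tree
# Add the replacement disk with ceph-deploy:
ceph-deploy osd create ceph01:sdb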
This just happened to me yesterday:
90 32GB VM root images stored in one 3TB RBD/XFS volume.
Mounting this volume gave an error, "structure needs cleaning", and it wouldn't mount.
Ran xfs_check.
Ran xfs_repair.
Mounted the RBD volume, no error.
Booted all 90 VM root images.
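The rough command sequence, assuming a kernel-mapped RBD device (the pool/image name, device node, and mount point below are placeholders):
# Map the RBD image that holds the XFS filesystem:
rbd map rbd/vm-root-volume
# Check and repair the unmounted filesystem:
xfs_check /dev/rbd0
xfs_repair /dev/rbd0
# Remount and verify the data:
mount /dev/rbd0 /mnt/vm-root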
Have not used EXT4 on OSDs or
kernel-3.4.59-8.el6.centos.alt.x86_64 is what you want; it has the ceph rbd.ko driver.
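A small sketch of confirming that the running kernel actually provides the rbd client module (package name taken from above; repo configuration not shown):
# Check the running kernel and load/inspect the rbd module:
uname -r
modprobe rbd
modinfo rbd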
Regards,
-Ben
From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on
behalf of John-Paul Robinson [j...@uab.edu]
Sent: Wednesday, September 25,
Thanks Raj,
Which of these rpm versions have you used on production machines?
Thanks again in advance.
Regards,
-ben
From: raj kumar [mailto:rajkumar600...@gmail.com]
Sent: Wednesday, September 18, 2013 6:09 AM
To: Aquino, BenX O
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rbd
To: Aquino, BenX O
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 61.7
On Fri, Aug 9, 2013 at 12:05 PM, Aquino, BenX O benx.o.aqu...@intel.com wrote:
CEPH-DEPLOY EVALUATION ON CEPH VERSION 0.61.7
ADMINNODE:
root@ubuntuceph900athf1:~# ceph -v
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
root@ubuntuceph900athf1:~#
SERVERNODE:
root@ubuntuceph700athf1:/etc/ceph# ceph -v
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)