Slight correction: I removed and added back only the OSDs that were crashing.
It seemed to be only certain OSDs that were affected; once they were rebuilt,
they stopped crashing.

Further info: we originally deployed Luminous, upgraded to Mimic, and then
upgraded to Nautilus.
Perhaps there were issues with the OSDs related to the upgrades? I don’t know.
Perhaps a clean install of 14.2.1 would not have done this? I don’t know.
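
For anyone hitting the same thing, the remove-and-rebuild procedure from my
earlier message (quoted below) looks roughly like this. This is only a sketch,
not the exact contents of my attached doc; it assumes ceph-volume/LVM OSDs,
and osd 7 and /dev/sdX are placeholders:

  # Mark the OSD out and let its data drain off; watch recovery with ceph -s
  ceph osd out 7
  ceph -s

  # Once the cluster is healthy again, stop the daemon and remove the old OSD
  systemctl stop ceph-osd@7                  # on the OSD host
  ceph osd purge 7 --yes-i-really-mean-it    # removes it from CRUSH, auth, and the OSD map

  # Wipe the disk and let ceph-volume build a fresh OSD on it
  ceph-volume lvm zap /dev/sdX --destroy     # on the OSD host
  ceph-volume lvm create --data /dev/sdX     # on the OSD host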

-Ed

> On Jul 12, 2019, at 11:32 AM, Edward Kalk <ek...@socket.net> wrote:
> 
> It seems I have been able to work around my issues.
> I’ve attempted to reproduce the problem by rebooting nodes and by stopping
> all OSDs, waiting a bit, and starting them again.
> At this time, no OSDs are crashing like before, and they seem to have no
> problems starting either.
> What I did was completely remove the OSDs, one at a time, and re-create
> them, letting Ceph 14.2.1 rebuild them from scratch.
> <remove and reuse a disk.txt> I have attached the doc I use to accomplish
> this. *Before I do it, I mark the OSD as “out” via the GUI or CLI and allow
> it to reweight to 0%, monitoring this via ceph -s. I do this so that an
> actual disk failure while I’m rebuilding an OSD doesn’t put me into a
> dual-disk failure.
> 
> -Edward Kalk
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
