I am using a replicated pool with min_size=1. I do not have any disk failure, so I did not expect incomplete PGs, but they appeared after OSDs flapped.

[email protected]
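With min_size=1, a write can be acknowledged by a single replica; if that OSD then flaps out and a different one serves the PG in the meantime, peering may later be unable to assemble a complete copy of the most recent history, so a PG can go incomplete even without any disk failure. A minimal inspection sketch for this situation; the pool name "rbd" and the PG id "2.19" are placeholders, not values from this thread:

    # list PGs stuck in the incomplete state
    ceph health detail | grep incomplete
    ceph pg ls incomplete

    # confirm the pool's replication settings (size and min_size)
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

    # the recovery_state section of the query output explains why peering is blocked
    ceph pg 2.19 query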
From: Eugen Block
Date: 2020-08-15 09:39
To: huxiaoyu
CC: ceph-users
Subject: Re: [ceph-users] how to handle incomplete PGs

Hi,

did you wait for the backfill to complete before removing the old drives? What is your environment? Are the affected PGs from an EC pool? Does [1] apply to you?

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035743.html

Quoting [email protected]:

> Dear Ceph folks,
>
> Recently I encountered incomplete PGs when replacing an OSD node
> with new hardware. I noticed multiple OSD ups and downs, and
> eventually a few PGs got stuck in the incomplete state.
>
> Question 1: Is there a reliable way to avoid the occurrence of
> incomplete PGs?
> Question 2: Is there a good tool or script to handle incomplete
> PGs without losing data?
>
> Best regards,
>
> samuel
>
> [email protected]

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
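A hedged sketch of the pre-removal check behind Eugen's first question, following the drive-replacement pattern from the Ceph documentation; osd.12 is a placeholder id, and ceph osd safe-to-destroy requires Luminous or later:

    # make sure recovery/backfill has drained and nothing is stuck
    ceph -s
    ceph pg dump_stuck

    # only pull the drive once the cluster says it is safe
    while ! ceph osd safe-to-destroy osd.12 ; do sleep 60 ; done

For PGs that are already incomplete, operators sometimes fall back on the osd_find_best_info_ignore_history_les option to let peering complete; that risks discarding the most recent writes, so it should be treated as a last resort after exporting the affected PG with ceph-objectstore-tool, not a routine fix.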
