The cause of the stale pg is fs_data.r1, a pool with replica size 1.
It should be empty, but ceph df shows 128 KiB used. With size 1 there
was no second copy for Ceph to recover from, which is why it could
not self-heal.
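For reference, the pool's replica count and usage can be
double-checked like this (a sketch using the standard ceph CLI;
adjust the pool name if yours differs):

[@c01 ~]# ceph osd pool get fs_data.r1 size
[@c01 ~]# ceph df detail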
I have already marked the OSD as lost and removed it from the CRUSH
map.
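Roughly with these commands (a sketch; osd.31 is the failed OSD, and
the lost operation requires the --yes-i-really-mean-it guard):

[@c01 ~]# ceph osd lost 31 --yes-i-really-mean-it
[@c01 ~]# ceph osd crush remove osd.31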
PG_AVAILABILITY Reduced data availability: 1 pg stale
pg 30.4 is stuck stale for 407878.113092, current state
stale+active+clean, last acting [31]
[@c01 ~]# ceph pg map 30.4
osdmap e72814 pg 30.4 (30.4) -> up [29] acting [29]
[@c01 ~]# ceph pg 30.4 query
Error ENOENT: i don't have pgid 30.4
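Since the pool has only one replica and the single copy lived on the
lost OSD, there is nothing left for the PG to peer with. If the data
is expendable, my understanding is that the stale state can be
cleared by recreating the PG as empty; note this permanently discards
whatever was in it:

[@c01 ~]# ceph osd force-create-pg 30.4 --yes-i-really-mean-it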
-----Original Message-----
To: ceph-users
Subject: [ceph-users] Re: How to fix 1 pg stale+active+clean
I had just one OSD go down (31); why is Ceph not auto-healing in this
'simple' case?
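To dig into this: the pool id is the part of the pg id before the dot
(30.4 -> pool 30), so the pool behind it and its replica size can be
looked up with something like:

[@c01 ~]# ceph osd pool ls detail | grep 'pool 30 '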
-----Original Message-----
To: ceph-users
Subject: [ceph-users] How to fix 1 pg stale+active+clean
How do I fix 1 pg marked as stale+active+clean?
pg 30.4 is stuck stale for 175342.419261, current state
stale+active+clean, last acting [31]
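The output above is from ceph health detail; stuck PGs can also be
listed directly, e.g.:

[@c01 ~]# ceph pg dump_stuck stale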
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]