On 6/11/19 9:48 PM, J. Eric Ivancich wrote:
> Hi Wido,
> 
> Interleaving below....
> 
> On 6/11/19 3:10 AM, Wido den Hollander wrote:
>>
>> I thought it was resolved, but it isn't.
>>
>> I counted all the OMAP values for the GC objects and I got back:
>>
>> gc.0: 0
>> gc.11: 0
>> gc.14: 0
>> gc.15: 0
>> gc.16: 0
>> gc.18: 0
>> gc.19: 0
>> gc.1: 0
>> gc.20: 0
>> gc.21: 0
>> gc.22: 0
>> gc.23: 0
>> gc.24: 0
>> gc.25: 0
>> gc.27: 0
>> gc.29: 0
>> gc.2: 0
>> gc.30: 0
>> gc.3: 0
>> gc.4: 0
>> gc.5: 0
>> gc.6: 0
>> gc.7: 0
>> gc.8: 0
>> gc.9: 0
>> gc.13: 110996
>> gc.10: 111104
>> gc.26: 111142
>> gc.28: 111292
>> gc.17: 111314
>> gc.12: 111534
>> gc.31: 111956
> 
> Casey Bodley mentioned to me that he's seen similar behavior to what
> you're describing when RGWs are upgraded but not all OSDs are upgraded
> as well. Is it possible that the OSDs hosting gc.13, gc.10, and so forth
> are running a different version of ceph?
> 

Yes, the OSDs are still on 13.2.5. As this is a large (2500 OSD)
production environment, we only created a temporary machine with 13.2.6
(built just a few hours before its release) to run the GC.

We have not upgraded the cluster itself, as we first have to validate
the release on our testing cluster.
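
Side note: on Luminous and later, "ceph versions" reports which version
each daemon type is running, which makes a mismatch like this easy to
spot. A minimal sketch of checking for mixed versions programmatically,
assuming the JSON layout {"mon": {...}, "osd": {...}, "overall": {...}};
illustrative only:

#!/usr/bin/env python3
# Minimal sketch: flag mixed daemon versions via "ceph versions".
# Assumes the JSON layout used by Luminous/Mimic; illustrative only.
import json
import subprocess

out = subprocess.check_output(["ceph", "versions", "--format", "json"],
                              universal_newlines=True)
for daemon_type, versions in json.loads(out).items():
    if daemon_type == "overall":
        continue
    if len(versions) > 1:
        print("%s daemons run mixed versions:" % daemon_type)
        for version, count in versions.items():
            print("  %d x %s" % (count, version))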

Wido

> Eric
> 
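
For anyone wanting to reproduce the per-shard counts quoted above:
"rados listomapkeys" lists the OMAP keys of an object, and the RGW GC
objects gc.0 through gc.31 (rgw_gc_max_objs defaults to 32) live in the
zone's log pool under the "gc" namespace. A minimal sketch, assuming
the default pool name default.rgw.log; adjust for your deployment:

#!/usr/bin/env python3
# Minimal sketch: count OMAP entries per RGW GC shard by shelling out
# to "rados listomapkeys". Pool and namespace are assumptions based on
# the default zone layout ("gc_pool": "default.rgw.log:gc").
import subprocess

POOL = "default.rgw.log"   # assumed default zone log pool
NAMESPACE = "gc"           # GC objects sit in this namespace by default
SHARDS = 32                # rgw_gc_max_objs defaults to 32

counts = {}
for shard in range(SHARDS):
    obj = "gc.%d" % shard
    keys = subprocess.check_output(
        ["rados", "-p", POOL, "--namespace", NAMESPACE,
         "listomapkeys", obj],
        universal_newlines=True)
    counts[obj] = len(keys.splitlines())

# Print the shards sorted by entry count, as in the listing above.
for obj, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print("%s: %d" % (obj, n))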