Re: [ceph-users] PGs inconsistents because of "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
This is what I see in the OSD.54 log file:

2020-01-14 10:35:04.986 7f0c20dca700 -1 log_channel(cluster) log [ERR] : 13.4 soid 13:20fbec66:::%2fhbWPh36KajAKcJUlCjG9XdqLGQMzkwn3NDrrLDi_mTM%2ffile2:head : size 385888256 > 134217728 is too large
2020-01-14 10:35:08.534 7f0c20dca700 -1 log_channel(clust
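(The 134217728-byte threshold in that log line is 128 MiB, which is the default osd_max_object_size in Nautilus. A minimal sketch of how one might check the limit the OSD is running with and re-scrub the affected PG, assuming the PG id 13.4 from the log above; the 512 MiB value is only an illustration, not a recommendation:

    # run on the host where osd.54 lives: show the limit currently in effect
    ceph daemon osd.54 config get osd_max_object_size

    # raise the limit cluster-wide (example value: 512 MiB), then re-check the PG
    ceph config set osd osd_max_object_size 536870912
    ceph pg deep-scrub 13.4
)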

[ceph-users] PGs inconsistents because of "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
I have just finished the update of a ceph cluster from luminous to nautilus. Everything seems to be running, but I keep receiving notifications (about 10 so far, involving different PGs and different OSDs) of PGs in an inconsistent state. rados list-inconsistent-obj pg-id --format=json-pretty (an exampl
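(For context, a minimal sketch of how such inconsistency reports are usually pulled up with the standard Ceph CLI; the pool name and PG id below are placeholders:

    # list PGs currently flagged inconsistent
    ceph health detail
    rados list-inconsistent-pg <pool-name>

    # dump the per-object detail for one of them
    rados list-inconsistent-obj <pg-id> --format=json-pretty
)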