Hello,

For several weeks now, some of my OSDs have been flapping before being
marked out of the cluster by Ceph…
I was hoping for some Ceph magic and just gave it some time to heal
itself (so I could handle other work in the meantime…), but that was a
bad idea (what a surprise :D). I also got some inconsistent PGs, but I
was waiting for a healthy, quiet cluster before trying to fix them.

Now that I have more time, and since I have 6 OSDs down+out on my
5-node cluster plus 1~2 OSDs still flapping from time to time, I am
wondering whether these inconsistent PGs might be the (only?) source of
my problem.
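
For reference, this is how I track the down/out and flapping OSDs
(standard CLI, nothing exotic; the grep patterns may need adjusting):

# Which OSDs are down or out
sudo ceph osd stat
sudo ceph osd tree | grep -i down

# Flapping OSDs, as reported by the OSDs themselves
sudo grep 'wrongly marked me down' /var/log/ceph/ceph-osd.*.log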

The last OSD crash, on osd.28, produced these logs:
    -2> 2019-10-28 12:57:47.346460 7fefbdc4d700  5 -- 129.20.177.2:6811/47803 
>> 129.20.177.3:6808/4141402 conn(0x55de8211a000 :-1 
s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=2058 cs=1 l=0). rx osd.25 seq 
169 0x55dea57b3600 MOSDPGPush(2.1d9 191810/191810 
[PushOp(2:9b97b818:::rbd_data.0c16b76b8b4567.000000000001426e:5926, version: 
127481'7241006, data_included: [], data_size: 0, omap_header_size: 0, 
omap_entries_size: 0, attrset_size: 1, recovery_info: 
ObjectRecoveryInfo(2:9b97b818:::rbd_data.0c16b76b8b4567.000000000001426e:5926@127481'7241006,
 size: 4194304, copy_subset: [], clone_subset: {}, snapset: 0=[]:[]), 
after_progress: ObjectRecoveryProgress(!first, data_recovered_to:0, 
data_complete:true, omap_recovered_to:, omap_complete:true, error:false), 
before_progress: ObjectRecoveryProgress(first, data_recovered_to:0, 
data_complete:false, omap_recovered_to:, omap_complete:false, error:false))]) v3
    -1> 2019-10-28 12:57:47.346517 7fefbdc4d700  1 -- 129.20.177.2:6811/47803 
<== osd.25 129.20.177.3:6808/4141402 169 ==== MOSDPGPush(2.1d9 191810/191810 
[PushOp(2:9b97b818:::rbd_data.0c16b76b8b4567.000000000001426e:5926, version: 
127481'7241006, data_included: [], data_size: 0, omap_header_size: 0, 
omap_entries_size: 0, attrset_size: 1, recovery_info: 
ObjectRecoveryInfo(2:9b97b818:::rbd_data.c16b76b8b4567.000000000001426e:5926@127481'7241006,
 size: 4194304, copy_subset: [], clone_subset: {}, snapset: 0=[]:[]), 
after_progress: ObjectRecoveryProgress(!first, data_recovered_to:0, 
data_complete:true, omap_recovered_to:, omap_complete:true, error:false), 
before_progress: ObjectRecoveryProgress(first, data_recovered_to:0, 
data_complete:false, omap_recovered_to:, omap_complete:false, error:false))]) 
v3 ==== 909+0+0 (1239474936 0 0) 0x55dea57b3600 con 0x55de8211a000
     0> 2019-10-28 12:57:47.353680 7fef99441700 -1 
/build/ceph-12.2.12/src/osd/PrimaryLogPG.cc: In function 'virtual void 
PrimaryLogPG::on_local_recover(const hobject_t&, const ObjectRecoveryInfo&, 
ObjectContextRef, bool, ObjectStore::Transaction*)' thread 7fef99441700 time 
2019-10-28 12:57:47.347132
/build/ceph-12.2.12/src/osd/PrimaryLogPG.cc: 354: FAILED 
assert(recovery_info.oi.legacy_snaps.size())

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous 
(stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char 
const*)+0x102) [0x55de72039f32]
 2: (PrimaryLogPG::on_local_recover(hobject_t const&, ObjectRecoveryInfo 
const&, std::shared_ptr<ObjectContext>, bool, 
ObjectStore::Transaction*)+0x135b) [0x55de71be330b]
 3: (ReplicatedBackend::handle_push(pg_shard_t, PushOp const&, PushReplyOp*, 
ObjectStore::Transaction*)+0x31d) [0x55de71d4fadd]
 4: (ReplicatedBackend::_do_push(boost::intrusive_ptr<OpRequest>)+0x18f) 
[0x55de71d4fd7f]
 5: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x2d1) 
[0x55de71d5ff11]
 6: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50) 
[0x55de71c7d030]
 7: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, 
ThreadPool::TPHandle&)+0x5f1) [0x55de71be87b1]
 8: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, 
ThreadPool::TPHandle&)+0x3f7) [0x55de71a63e97]
 9: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> 
const&)+0x57) [0x55de71cf5077]
 10: (OSD::ShardedOpWQ::_process(unsigned int, 
ceph::heartbeat_handle_d*)+0x108c) [0x55de71a94e1c]
 11: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x88d) 
[0x55de7203fbbd]
 12: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55de72041b80]
 13: (()+0x8064) [0x7fefc12b5064]
 14: (clone()+0x6d) [0x7fefc03a962d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.28.log
--- end dump of recent events ---
2019-10-28 12:57:47.374262 7fefaf9a6700  1 leveldb: Generated table #1516991: 
52007 keys, 2143094 bytes
2019-10-28 12:57:47.409924 7fef99441700 -1 *** Caught signal (Aborted) **
 in thread 7fef99441700 thread_name:tp_osd_tp
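
The assert fires while osd.28 receives a push for
rbd_data.0c16b76b8b4567.000000000001426e (snap 5926, PG 2.1d9), so I
was thinking of inspecting the on-disk copies of that object with
ceph-objectstore-tool. A sketch of what I had in mind, assuming the
default FileStore paths for osd.28 (I have not run anything
destructive yet):

# osd.28 must be stopped first
sudo systemctl stop ceph-osd@28

# List every copy/clone of the offending object in PG 2.1d9
sudo ceph-objectstore-tool \
  --data-path /var/lib/ceph/osd/ceph-28 \
  --journal-path /var/lib/ceph/osd/ceph-28/journal \
  --pgid 2.1d9 --op list rbd_data.0c16b76b8b4567.000000000001426e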


And the inconsistent PGs:
# Health
sudo ceph health detail
PG_DAMAGED Possible data damage: 3 pgs inconsistent
    pg 2.2ba is active+clean+inconsistent, acting [42,29,30]
    pg 2.2bb is active+clean+inconsistent, acting [25,42,18]
    pg 2.371 is active+clean+inconsistent, acting [42,9,27]

# Deep-scrub results
sudo ceph -w | grep -E '(2.2ba|2.2bb|2.371)'
2019-10-28 08:37:29.524437 osd.42 [ERR] 2.2ba soid 
2:5d7a2754:::rbd_data.b4537a2ae8944a.000000000000425f:58f4 : data_digest 
0xeca13d4c != data_digest 0x43d61c5d from shard 42
2019-10-28 08:37:29.524441 osd.42 [ERR] 2.2ba shard 30 
2:5d7a2754:::rbd_data.b4537a2ae8944a.000000000000425f:58f4 : missing
2019-10-28 08:37:29.524444 osd.42 [ERR] 2.2ba shard 42 soid 
2:5d7a2754:::rbd_data.b4537a2ae8944a.000000000000425f:58f4 : data_digest 
0x43d61c5d != data_digest 0xeca13d4c from auth oi 
2:5d7a2754:::rbd_data.b4537a2ae8944a.000000000000425f:58f4(94043'5341152 
osd.12.0:2768751 dirty|data_digest|omap_digest s 4194304 uv 5336383 dd eca13d4c 
od ffffffff alloc_hint [0 0 0])
2019-10-28 08:37:29.524565 osd.42 [ERR] deep-scrub 2.2ba 
2:5d7a2754:::rbd_data.b4537a2ae8944a.000000000000425f:58f4 : is an unexpected 
clone

2019-10-28 08:42:09.409287 osd.25 [ERR] 2.2bb soid 
2:dd5b8bb8:::rbd_data.b4537a2ae8944a.0000000000012110:58f4 : object info 
inconsistent
2019-10-28 08:47:26.944926 osd.25 [ERR] 2.2bb deep-scrub 0 missing, 1 
inconsistent objects
2019-10-28 08:47:26.944933 osd.25 [ERR] 2.2bb deep-scrub 1 errors

2019-10-28 09:16:01.484473 osd.42 [ERR] 2.371 shard 9 
2:8ef7ca53:::rbd_data.0c16b76b8b4567.00000000000420bb:5926 : missing
2019-10-28 09:16:01.484478 osd.42 [ERR] 2.371 shard 42 
2:8ef7ca53:::rbd_data.0c16b76b8b4567.00000000000420bb:5926 : missing
2019-10-28 09:16:02.734468 osd.42 [ERR] deep-scrub 2.371 
2:8ef7ca53:::rbd_data.0c16b76b8b4567.00000000000420bb:5926 : is an unexpected 
clone
2019-10-28 09:17:18.728256 osd.42 [ERR] 2.371 deep-scrub 1 missing, 0 
inconsistent objects
2019-10-28 09:17:18.728260 osd.42 [ERR] 2.371 deep-scrub 3 errors
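
For what it's worth, these are the commands I planned to use to get
more detail on the scrub errors (as far as I understand they only read
the last scrub results, without touching anything):

# Detailed inconsistency reports for the damaged PGs
sudo rados list-inconsistent-obj 2.2ba --format=json-pretty
sudo rados list-inconsistent-snapset 2.2ba --format=json-pretty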


I have never seen such inconsistency errors before and cannot find any
thread related to my case. Does anyone have a clue? Should I try to fix
the PGs first? And with which method?
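
The only method I found in the documentation so far is the generic one
below, but I doubt it is enough for "unexpected clone" errors, hence my
question:

# Generic repair procedure from the docs; is it safe/enough here ?
sudo ceph pg deep-scrub 2.2ba
sudo ceph pg repair 2.2ba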

All nodes run the same Ceph version: Luminous 12.2.12.

More logs and information are also available on an OwnCloud share (due
to the size of the logs):
https://cloud.ipr.univ-rennes1.fr/index.php/s/BYtuAURnC7YOAQG


Many thanks.

--
Gardais Jérémy
Institut de Physique de Rennes
Université Rennes 1
Telephone: 02-23-23-68-60
Mail & good practices: http://fr.wikipedia.org/wiki/Nétiquette