[ceph-users] OSD failed, rocksdb: Corruption: missing start of fragmented record

2018-08-09 Thread shrey chauhan
Hi, My OSD failed 2018-08-09 16:31:11.848457 7f49951ddd80 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.7/rpm/el7/BUILD/ceph-12.2.7/src/rocksdb/db/version_set.cc:2859] Recovered
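
A common first diagnostic for a rocksdb corruption like the one in this thread is an offline consistency check of the OSD's BlueStore store with ceph-bluestore-tool while the daemon is stopped. A minimal sketch in Python, assuming a failed OSD with id 12 and the default data path; both are placeholders:

import subprocess

OSD_ID = 12  # placeholder; replace with the id of the failed OSD
DATA_PATH = "/var/lib/ceph/osd/ceph-{}".format(OSD_ID)  # default layout

# Make sure the daemon is stopped so the store is not in use, then fsck it.
subprocess.run(["systemctl", "stop", "ceph-osd@{}".format(OSD_ID)], check=True)
result = subprocess.run(
    ["ceph-bluestore-tool", "fsck", "--path", DATA_PATH],
    capture_output=True, text=True,
)
print(result.stdout)
print(result.stderr)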

[ceph-users] PG went to Down state on OSD failure

2018-08-01 Thread shrey chauhan
Hi, I am trying to understand what happens when an OSD fails. A few days back I wanted to check what happens when an OSD goes down, so I went to the node and stopped one of the OSD services. When the OSD went into the down and out state, PGs started recovering, and after some time
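
The experiment described here can be observed from the command line; below is a minimal sketch that stops one OSD service (id 3 is a placeholder, run on that OSD's host) and polls the PG state summary while the cluster marks the OSD down/out and recovers:

import json
import subprocess
import time

OSD_ID = 3  # placeholder; the OSD whose failure we want to simulate

# Stop the OSD service on its host to simulate the failure.
subprocess.run(["systemctl", "stop", "ceph-osd@{}".format(OSD_ID)], check=True)

# Poll the PG state summary while the OSD is marked down/out and
# recovery or backfill proceeds.
for _ in range(10):
    status = json.loads(
        subprocess.check_output(["ceph", "status", "--format", "json"])
    )
    pg_states = status["pgmap"].get("pgs_by_state", [])
    print({s["state_name"]: s["count"] for s in pg_states})
    time.sleep(30)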

[ceph-users] PGs go to down state when OSD fails

2018-07-20 Thread shrey chauhan
Hi, I am trying to understand what happens when an OSD fails. A few days back I wanted to check what happens when an OSD goes down, so I went to the node and stopped one of the OSD services. When the OSD went into the down state, PGs started recovering, and after some time everything
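
If some PGs stay in the down state after the OSD failure instead of merely becoming degraded, "ceph pg <pgid> query" usually explains what peering is blocked on. A minimal sketch, using the placeholder PG id 9.f:

import json
import subprocess

PGID = "9.f"  # placeholder; substitute a PG that is reported as down

query = json.loads(
    subprocess.check_output(["ceph", "pg", PGID, "query", "--format", "json"])
)

# The recovery_state section explains why the PG cannot go active,
# e.g. peering blocked by a down OSD.
for entry in query.get("recovery_state", []):
    print(entry.get("name"), entry.get("comment", ""))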

[ceph-users] OSD failed, wont come up

2018-07-19 Thread shrey chauhan
Hi all, I am facing a major issue where my OSD is down and not coming up after a reboot. These are the last OSD log lines: 2018-07-20 10:43:00.701904 7f02f1b53d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1532063580701900, "job": 1, "event": "recovery_finished"} 2018-07-20 10:43:00.735978 7f02f1b53d80
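
A small sketch of the usual first checks when an OSD will not start after a reboot: the systemd unit status, the journal for the last start attempt, and the tail of the OSD log where rocksdb/BlueStore errors end up. The OSD id and log path below are assumptions based on default layouts:

import subprocess

OSD_ID = 5  # placeholder id of the OSD that will not start
LOG = "/var/log/ceph/ceph-osd.{}.log".format(OSD_ID)  # default log location

# Is the unit failed, and what did it report on its last start attempt?
subprocess.run(["systemctl", "status", "ceph-osd@{}".format(OSD_ID), "--no-pager"])
subprocess.run(["journalctl", "-u", "ceph-osd@{}".format(OSD_ID), "-n", "50", "--no-pager"])

# The OSD log usually ends with the actual error, e.g. a rocksdb
# "Corruption: missing start of fragmented record" line.
subprocess.run(["tail", "-n", "100", LOG])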

[ceph-users] whiteouts mismatch

2018-06-05 Thread shrey chauhan
I am consistently getting whiteout mismatches, due to which PGs are going into an inconsistent state, and I am not able to figure out why this is happening. Though it was explained before that whiteouts don't really exist and it's nothing, it is still painful to see my PGs in an inconsistent state. Can anyone
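
The scrub errors behind these inconsistencies can be listed per PG and per object; a minimal sketch, assuming a pool named "cachepool" (placeholder) that holds the inconsistent PGs:

import json
import subprocess

POOL = "cachepool"  # placeholder name of the pool with inconsistent PGs

# PGs in this pool that scrub has flagged as inconsistent.
pgs = json.loads(subprocess.check_output(["rados", "list-inconsistent-pg", POOL]))
print("inconsistent PGs:", pgs)

# Per-object detail for each inconsistent PG; PG-level stat mismatches
# such as the whiteout count show up in 'ceph health detail' and the
# cluster log, while per-object errors are listed here.
for pgid in pgs:
    detail = subprocess.check_output(
        ["rados", "list-inconsistent-obj", pgid, "--format=json-pretty"]
    )
    print(detail.decode())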

Re: [ceph-users] inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread shrey chauhan
Yes, it is a cache pool. Moreover, what are these whiteouts, and when does this mismatch occur? Thanks. On Fri, Jun 1, 2018 at 3:51 PM, Brad Hubbard wrote: > On Fri, Jun 1, 2018 at 6:41 PM, shrey chauhan > wrote: > > Hi, > > > > I keep getting inconsistent placement g
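
Since whiteouts are bookkeeping entries used by cache tiering (they mark objects deleted in the cache but still present in the base pool), the mismatch can only show up on pools acting as a cache tier. A minimal sketch that lists which pools are cache tiers and their cache mode, assuming the "pool_name", "tier_of" and "cache_mode" fields of "ceph osd dump --format json":

import json
import subprocess

osd_dump = json.loads(
    subprocess.check_output(["ceph", "osd", "dump", "--format", "json"])
)

# Report every pool configured as a cache tier and how it is caching.
for pool in osd_dump["pools"]:
    if pool.get("cache_mode", "none") != "none":
        print(pool["pool_name"], "tier_of", pool.get("tier_of"),
              "cache_mode", pool.get("cache_mode"))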

[ceph-users] inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread shrey chauhan
Hi, I keep getting inconsistent placement groups, and every time it's the whiteouts. cluster [ERR] 9.f repair stat mismatch, got 1563/1563 objects, 0/0 clones, 1551/1551 dirty, 78/78 omap, 0/0 pinned, 12/12 hit_set_archive, 0/-9 whiteouts, 28802382/28802382 bytes, 16107/16107 hit_set_archive
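
The usual follow-up for a scrub stat mismatch like this is to repair the PG and then deep-scrub it again to confirm the inconsistency is gone; a minimal sketch, reusing the PG id 9.f from the log line above:

import subprocess

PGID = "9.f"  # the PG reported with the whiteout stat mismatch above

# Show the scrub errors the cluster currently knows about...
subprocess.run(["ceph", "health", "detail"])

# ...then ask the primary OSD to repair the PG, and deep-scrub it
# afterwards to verify the mismatch has been cleared.
subprocess.run(["ceph", "pg", "repair", PGID], check=True)
subprocess.run(["ceph", "pg", "deep-scrub", PGID], check=True)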