Sure, please let me know where to get and run the binaries. Thanks for the fast response!

-- 
Efficiency is Intelligent Laziness
On 2/2/18, 10:31 AM, "Sage Weil" <s...@newdream.net> wrote:

    On Fri, 2 Feb 2018, Frank Li wrote:
    > Yes, I was dealing with an issue where OSDs are not peering, and I was 
    > trying to see if force-create-pg can help recover the peering.
    > Data loss is an accepted possibility.
    > 
    > I hope this is what you are looking for?
    > 
    >     -3> 2018-01-31 22:47:22.942394 7fc641d0b700  5 mon.dl1-kaf101@0(electing) e6 _ms_dispatch setting monitor caps on this connection
    >     -2> 2018-01-31 22:47:22.942405 7fc641d0b700  5 mon.dl1-kaf101@0(electing).paxos(paxos recovering c 28110997..28111530) is_readable = 0 - now=2018-01-31 22:47:22.942405 lease_expire=0.000000 has v0 lc 28111530
    >     -1> 2018-01-31 22:47:22.942422 7fc641d0b700  5 mon.dl1-kaf101@0(electing).paxos(paxos recovering c 28110997..28111530) is_readable = 0 - now=2018-01-31 22:47:22.942422 lease_expire=0.000000 has v0 lc 28111530
    >      0> 2018-01-31 22:47:22.955415 7fc64350e700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.2/rpm/el7/BUILD/ceph-12.2.2/src/osd/OSDMapMapping.h: In function 'void OSDMapMapping::get(pg_t, std::vector<int>*, int*, std::vector<int>*, int*) const' thread 7fc64350e700 time 2018-01-31 22:47:22.952877
    > /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.2/rpm/el7/BUILD/ceph-12.2.2/src/osd/OSDMapMapping.h: 288: FAILED assert(pgid.ps() < p->second.pg_num)
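    [For context: the failed assertion checks that the placement seed of the PG
    being mapped (pgid.ps()) is below the pool's current pg_num. A minimal Python
    sketch of that invariant follows; the names are illustrative only, the real
    check is C++ in OSDMapMapping::get().]

```python
def map_pg(pgid_ps: int, pg_num: int) -> None:
    """Illustrative mirror of the bounds check in OSDMapMapping::get():
    a PG's placement seed must be smaller than the pool's pg_num."""
    assert pgid_ps < pg_num, "FAILED assert(pgid.ps() < p->second.pg_num)"

map_pg(5, 8)       # fine: seed 5 is within a pool of pg_num 8
try:
    map_pg(12, 8)  # seed beyond pg_num -> AssertionError, as in the crash
except AssertionError:
    print("assert tripped")
```

    [A PG id outside the pool's current range would violate exactly this bound,
    which would explain the mon aborting here rather than mapping the PG.]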
    
    Perfect, thanks!  I have a test fix for this pushed to wip-22847-luminous 
    which should appear on shaman.ceph.com in an hour or so; can you give that 
    a try?  (Only need to install the updated package on the mons.)
    
    Thanks!
    sage
    

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
