So I had two ideas here:

1. Use find, as Jan suggested. You can probably bound it by the expected
object naming and limit it to the OSDs that were impacted. This is probably
the best way.
2. Use the osdmaptool against a copy of the osdmap that you pre-grab from
the cluster, as in:
https://www.hastexo.com/resources/hints-and-kinks/which-osd-stores-specific-rados-object
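
Rough sketches of both, in case it helps. The OSD id, pool id and object
prefix below are placeholders, and the filestore path assumes the default
/var/lib/ceph/osd/ceph-<N>/current layout:

  # idea 1: bound find by the object name prefix, only on the impacted OSDs
  # (object names are escaped on disk, so match loosely)
  find /var/lib/ceph/osd/ceph-<N>/current -name '*<prefix>*'

  # idea 2: grab the osdmap once, then map objects offline with osdmaptool
  ceph osd getmap -o /tmp/osdmap
  osdmaptool /tmp/osdmap --test-map-object <objectname> --pool <poolid>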

--David

On Fri, Sep 25, 2015 at 10:11 AM, Jan Schermer <[email protected]> wrote:

> Ouch
> 1) I should have read it completely
> 2) I should have tested it :)
> Sorry about that...
>
> You could get the object name prefix for each RBD from 'rbd info', then list
> all objects (run find on the OSDs?), and then you just need to grep that
> listing for each prefix... Should be much faster?
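>
> Untested sketch (image name, OSD id and the filestore path are just
> examples, assuming the default /var/lib/ceph/osd layout):
>
> # per image: get the object name prefix
> rbd -p <pool> info <image> | grep block_name_prefix
> # list the objects once per OSD, then grep that listing per prefix
> find /var/lib/ceph/osd/ceph-<N>/current -type f > /tmp/osd-<N>-objects.txt
> grep '<prefix>' /tmp/osd-<N>-objects.txt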
>
> Jan
>
>
>
> > On 25 Sep 2015, at 15:07, Межов Игорь Александрович <[email protected]> wrote:
> >
> > Hi!
> >
> > Last week I wrote that one PG in our Firefly cluster is stuck in a degraded
> > state with 2 replicas instead of 3 and does not try to backfill or recover.
> > We are trying to find out which RBD volumes are affected.
> >
> > The plan is inspired by Sébastien Han's snippet
> > (http://www.sebastien-han.fr/blog/2013/11/19/ceph-rbd-objects-placement/)
> > and consists of the following steps:
> >
> > 1. 'rbd -p <pool> ls' to list all RBD volumes in the pool
> > 2. Get the RBD prefix corresponding to the volume
> > 3. Get the list of objects which belong to our RBD volume
> > 4. Issue 'ceph osd map <pool> <objectname>' to get the PG and OSD placement
> > for each object
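> >
> > The scripts boil down to roughly this for steps 3 and 4 (pool and prefix
> > are placeholders, and 'rados ls' here is just one way to list the objects):
> >
> > for obj in $(rados -p <pool> ls | grep '<prefix>'); do
> >     ceph osd map <pool> $obj
> > done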
> >
> > After writing some scripts we hit a difficulty: each 'ceph osd map ...' call
> > takes about 0.5 seconds, so iterating over all 15 million objects will take
> > forever.
> >
> > Is there any other way to find which PGs the specified RBD volume maps to,
> > or maybe a much faster way to do step 4 than calling 'ceph osd map' in a
> > loop for every object?
> >
> >
> > Thanks!
> >
> > Megov Igor
> > CIO, Yuterra
>



-- 
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media

e: [email protected]
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
