Assuming I understand it correctly:

"pg_upmap_items 6.0 [40,20]" means osd.40 is replaced (upmapped) by 
osd.20 in the acting set of placement group '6.0'. Assuming it's a 
3-replica PG, the other two OSDs in the set remain unchanged from the 
CRUSH calculation.

"pg_upmap_items 6.6 [45,46,59,56]" describes two upmap replacements for PG 
6.6: 45 is replaced by 46, and 59 by 56. The list is a flat sequence of 
[from, to] pairs, so its length is always even and says nothing about the 
replica count.

Hope that helps.

Cheers,
Tom

> -----Original Message-----
> From: ceph-users <ceph-users-boun...@lists.ceph.com> On Behalf Of
> jes...@krogh.cc
> Sent: 30 December 2018 22:04
> To: Konstantin Shalygin <k0...@k0ste.ru>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Balancing cluster with large disks - 10TB HHD
> 
> >> I would still like to have a log somewhere to grep and inspect what
> >> balancer/upmap actually does - when in my cluster. Or some ceph
> >> commands that deliver some monitoring capabilities .. any
> >> suggestions?
> > Yes, on ceph-mgr log, when log level is DEBUG.
> 
> Tried the docs .. something like:
> 
> ceph tell mds ... does not seem to work.
> http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/
> 
> > You can get your cluster upmap's in via `ceph osd dump | grep upmap`.
> 
> Got it -- but I really need the README .. it shows the map ..
> ...
> pg_upmap_items 6.0 [40,20]
> pg_upmap_items 6.1 [59,57,47,48]
> pg_upmap_items 6.2 [59,55,75,9]
> pg_upmap_items 6.3 [22,13,40,39]
> pg_upmap_items 6.4 [23,9]
> pg_upmap_items 6.5 [25,17]
> pg_upmap_items 6.6 [45,46,59,56]
> pg_upmap_items 6.8 [60,54,16,68]
> pg_upmap_items 6.9 [61,69]
> pg_upmap_items 6.a [51,48]
> pg_upmap_items 6.b [43,71,41,29]
> pg_upmap_items 6.c [22,13]
> 
> ..
> 
> But .. I don't have any PGs that should have only 2 replicas .. nor any 
> with 4
> .. how should this be interpreted?
> 
> Thanks.
> 
> --
> Jesper
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com