On Sun, Jul 5, 2015 at 5:37 AM, Michael Metz-Martini | SpeedPartner
GmbH <[email protected]> wrote:
> Hi,
>
> after larger moves of several placement groups, we tried to empty 3 of
> our 66 osds by slowly setting their weight to 0 in the crushmap.
>
> After the move completed, we still see a large number of files
> left on those osd devices.
>
> For example pg 5.117:
> osdmap e56712 pg 5.117 (5.117) -> up [7,59] acting [7,59]
>
> But there is still a populated current/5.117_head directory on osd 22.
>
> Neither scrubbing nor deep scrubbing seems to clean up the osds (I think
> that's expected, because scrubbing cleans "within pgs").
>
> Is there a "scrub osd" command to automatically (and safely) get rid
> of these leftover files? Is it otherwise safe to delete them manually?

If the PG is active+clean and the OSD in question isn't listed in its
mapping, you're definitely safe to delete the files. I suspect the OSD
is simply throttling its deletes slowly enough that removing all the
files takes a while (it throttles them in order to avoid interrupting
client IO, though there's not much point to that in this case).
-Greg
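
A minimal sketch of the safety check Greg describes: before deleting a
leftover current/<pgid>_head directory, confirm the OSD no longer appears
in the PG's up or acting set. This parses a `ceph pg map` output line in
the format shown above; the function name and parsing approach are
illustrative, not part of any Ceph tooling.

```python
import re

def osd_in_pg_mapping(pg_map_line: str, osd_id: int) -> bool:
    """Return True if osd_id appears in the 'up' or 'acting' set of a
    `ceph pg map` output line such as:
    osdmap e56712 pg 5.117 (5.117) -> up [7,59] acting [7,59]
    """
    m = re.search(r"up \[([\d,\s]*)\] acting \[([\d,\s]*)\]", pg_map_line)
    if not m:
        raise ValueError("unrecognized pg map line: %r" % pg_map_line)
    # Collect every OSD id from both the up and acting sets.
    osds = {int(x) for grp in m.groups() for x in grp.split(",") if x.strip()}
    return osd_id in osds

line = "osdmap e56712 pg 5.117 (5.117) -> up [7,59] acting [7,59]"
print(osd_in_pg_mapping(line, 22))  # False: osd 22 is not in the mapping
print(osd_in_pg_mapping(line, 7))   # True: osd 7 still serves pg 5.117
```

Only if this returns False for the OSD in question (and `ceph pg <pgid>
query` reports the PG active+clean) would deleting the leftover directory
be safe per the reply above.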

>
> --
> Kind regards
>  Michael Metz-Martini
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com