Re: [ceph-users] norecover and nobackfill

2015-04-14 Thread Robert LeBlanc
Hmmm... I've been deleting the OSD (ceph osd rm X; ceph osd crush rm osd.X) along with removing the auth key. This has caused data movement, but reading your reply and thinking about it made me think it should be done differently. I should just remove the auth key and leave the OSD in the CRUSH
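For readers following along: the full removal sequence Robert describes, and the lighter alternative he is considering, might be sketched as below. `X` stands for the failed OSD's id; the exact command order can vary between Ceph releases, so treat this as a sketch rather than a definitive procedure:

```shell
# Full removal -- taking the OSD out of the CRUSH map changes CRUSH weights,
# so the cluster remaps placement groups and data movement begins:
ceph osd crush rm osd.X   # remove the OSD from the CRUSH map
ceph auth del osd.X       # delete its authentication key
ceph osd rm X             # remove the OSD from the cluster

# Lighter alternative for a simple disk swap: delete only the auth key and
# leave the OSD entry in the CRUSH map, so placement does not change:
ceph auth del osd.X
```

The point of the second variant is that a replacement disk can later be brought up under the same OSD id with a fresh key, and CRUSH placement never changes in between.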

Re: [ceph-users] norecover and nobackfill

2015-04-14 Thread Francois Lafont
Robert LeBlanc wrote: Hmmm... I've been deleting the OSD (ceph osd rm X; ceph osd crush rm osd.X) along with removing the auth key. This has caused data movement, Maybe, but if the noout flag is set, removing an OSD from the cluster doesn't trigger any data movement at all (I have tested with

[ceph-users] norecover and nobackfill

2015-04-13 Thread Robert LeBlanc
I'm looking for documentation about what exactly each of these flags does, and I can't find it. Can someone point me in the right direction? The names seem too ambiguous to come to any conclusion about what exactly they do. Thanks, Robert
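For anyone finding this thread later: both flags are managed with `ceph osd set` / `ceph osd unset`. As I understand the Ceph documentation, `norecover` suspends recovery I/O (re-replicating degraded objects back to full replication), while `nobackfill` suspends backfill (copying whole PG contents to new locations after a CRUSH or membership change). A minimal sketch:

```shell
# Suspend data-movement operations before maintenance:
ceph osd set nobackfill   # PGs needing backfill stay in "backfill" state, no copying
ceph osd set norecover    # degraded objects are not re-replicated

# ... perform maintenance ...

# Re-enable; the cluster resumes recovery/backfill where it left off:
ceph osd unset norecover
ceph osd unset nobackfill
```

Note these flags only pause the work; they do not prevent OSDs from being marked out, which is what actually triggers remapping (that is the job of `noout`).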

Re: [ceph-users] norecover and nobackfill

2015-04-13 Thread Robert LeBlanc
After doing some testing, I'm even more confused. What I'm trying to achieve is minimal data movement when I have to service a node to replace a failed drive. Since these nodes don't have hot-swap bays, I'll need to power down the box to replace the failed drive. I don't want Ceph to

Re: [ceph-users] norecover and nobackfill

2015-04-13 Thread Francois Lafont
Hi, Robert LeBlanc wrote: What I'm trying to achieve is minimal data movement when I have to service a node to replace a failed drive. [...] I will perhaps say something stupid, but it seems to me that this is the goal of the noout flag, isn't it? 1. ceph osd set noout 2. an old OSD disk failed,
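The numbered steps Francois starts here can be sketched as a maintenance procedure. The service-management command is an assumption (on 2015-era systems it might be `service ceph stop osd.X` or an `/etc/init.d/ceph` invocation rather than systemd), and `X` is the failed OSD's id:

```shell
ceph osd set noout            # 1. down OSDs will NOT be marked "out" automatically
# 2. the disk behind osd.X fails: the OSD goes "down" but stays "in",
#    so CRUSH placement is unchanged and no data movement starts
systemctl stop ceph-osd@X     # 3. stop the daemon (init command varies by distro)

# ... power down the node, replace the disk, recreate osd.X on the new disk ...

ceph osd unset noout          # finally, restore normal out-marking behavior
```

With `noout` set, the cluster serves I/O in a degraded state for the duration, so the trade-off is reduced redundancy while the flag is in place.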