Hi!

Hmm.

"ceph osd crush reweight" and "ceph osd reweight" - do the same or not?
I use "ceph osd crush reweight"

----- Original Message -----
From: "Paul Emmerich" <paul.emmer...@croit.io>
To: "Fyodor Ustinov" <u...@ufm.su>
Cc: "David Turner" <drakonst...@gmail.com>, "ceph-users" 
<ceph-users@lists.ceph.com>
Sent: Friday, 1 March, 2019 12:24:37
Subject: Re: [ceph-users] Right way to delete OSD from cluster?

On Fri, Mar 1, 2019 at 11:17 AM Fyodor Ustinov <u...@ufm.su> wrote:
> Maybe. But "ceph osd out + ceph osd purge" causes double relocation, while
> "ceph osd reweight 0 + ceph osd purge" causes only one.

No, the commands "ceph osd out X" and "ceph osd reweight X 0" do the
exact same thing: both set reweight to 0.
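
For illustration, a minimal sketch (osd id 7 is a made-up example):
either command has the same effect, which you can check in the REWEIGHT
column of "ceph osd tree":

$ ceph osd out 7          # marks osd.7 out (override reweight -> 0)
$ ceph osd reweight 7 0   # sets the same override reweight directly
$ ceph osd tree | grep osd.7

"ceph osd crush reweight", by contrast, changes the CRUSH weight stored
in the crush map, which is a separate value from the override reweight.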

Paul

>
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmer...@croit.io>
> To: "Fyodor Ustinov" <u...@ufm.su>
> Cc: "David Turner" <drakonst...@gmail.com>, "ceph-users" 
> <ceph-users@lists.ceph.com>
> Sent: Friday, 1 March, 2019 11:54:20
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> "out" is internally implemented as "reweight 0"
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov <u...@ufm.su> wrote:
> >
> > Hi!
> >
> > As far as I understand, reweight also does not lead to "a period where
> > one copy/shard is missing".
> >
> > ----- Original Message -----
> > From: "Paul Emmerich" <paul.emmer...@croit.io>
> > To: "Fyodor Ustinov" <u...@ufm.su>
> > Cc: "David Turner" <drakonst...@gmail.com>, "ceph-users" 
> > <ceph-users@lists.ceph.com>
> > Sent: Friday, 1 March, 2019 11:32:54
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov <u...@ufm.su> wrote:
> > >
> > > Hi!
> > >
> > > Yes. But I am a little surprised by what is written in the documentation:
> >
> > The point of this is that you don't have a period where one copy/shard
> > is missing: mark it out, wait for the rebalance to finish, and only
> > then remove it.
> > Yes, there'll be a small unnecessary data movement afterwards, but
> > you are never missing a copy.
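> >
> > A rough sketch of that ordering (osd id 7 is a made-up example):
> >
> > $ ceph osd out 7    # re-replication starts while osd.7 is still up
> > $ ceph pg stat      # repeat until all PGs are active+clean again
> > $ ceph osd purge 7 --yes-i-really-mean-it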
> >
> >
> > Paul
> >
> > > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> > >
> > > ---
> > > Before you remove an OSD, it is usually up and in. You need to take it 
> > > out of the cluster so that Ceph can begin rebalancing and copying its 
> > > data to other OSDs.
> > > ceph osd out {osd-num}
> > > [...]
> > > ---
> > >
> > > That is, the documentation presents this as the correct way (otherwise
> > > it would not have been written there).
> > >
> > >
> > >
> > > ----- Original Message -----
> > > From: "David Turner" <drakonst...@gmail.com>
> > > To: "Fyodor Ustinov" <u...@ufm.su>
> > > Cc: "Scottix" <scot...@gmail.com>, "ceph-users" 
> > > <ceph-users@lists.ceph.com>
> > > Sent: Friday, 1 March, 2019 05:13:27
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > The reason is that an osd still contributes to the host weight in the
> > > crush map even while it is marked out. When you out and then purge, the
> > > purging operation removes the osd from the map and changes the weight
> > > of the host, which changes the crush map, and data moves. By weighting
> > > the osd to 0.0, the host's weight is already the same as it will be
> > > when you purge the osd. Weighting to 0.0 is definitely the best option
> > > for removing storage if you can trust the data on the osd being
> > > removed.
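> > >
> > > A rough sketch of that approach (osd.7 and host node1 are made-up
> > > names):
> > >
> > > $ ceph osd crush reweight osd.7 0   # host weight drops here, once
> > > $ ceph osd tree    # node1's WEIGHT already excludes osd.7; wait for
> > >                    # the rebalance to finish
> > > $ ceph osd out 7
> > > $ ceph osd purge 7 --yes-i-really-mean-it   # no further weight change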
> > >
> > > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov <u...@ufm.su> wrote:
> > >
> > > > Hi!
> > > >
> > > > Thank you so much!
> > > >
> > > > I do not understand why, but your variant really does cause only one
> > > > rebalance, compared to "osd out".
> > > >
> > > > ----- Original Message -----
> > > > From: "Scottix" <scot...@gmail.com>
> > > > To: "Fyodor Ustinov" <u...@ufm.su>
> > > > Cc: "ceph-users" <ceph-users@lists.ceph.com>
> > > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > I have generally gone the crush reweight 0 route. This way the drive
> > > > can participate in the rebalance, and the rebalance only happens once.
> > > > Then you can take it out and purge.
> > > >
> > > > If I am not mistaken, this is the safest:
> > > >
> > > > ceph osd crush reweight <id> 0
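> > > >
> > > > Then, once "ceph -s" reports that the rebalance has finished, the
> > > > take-out-and-purge step would be something like:
> > > >
> > > > ceph osd out <id>
> > > > ceph osd purge <id> --yes-i-really-mean-it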
> > > >
> > > > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov <u...@ufm.su> wrote:
> > > > >
> > > > > Hi!
> > > > >
> > > > > But won't I get undersized objects after "ceph osd crush remove"?
> > > > > That is, isn't this the same thing as simply turning off the OSD
> > > > > and waiting for the cluster to recover?
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Wido den Hollander" <w...@42on.com>
> > > > > To: "Fyodor Ustinov" <u...@ufm.su>, "ceph-users" <
> > > > ceph-users@lists.ceph.com>
> > > > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > > >
> > > > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > > > Hi!
> > > > > >
> > > > > > I thought I should first do "ceph osd out", wait for the
> > > > > > relocation of the misplaced objects to finish, and after that do
> > > > > > "ceph osd purge". But after "purge" the cluster starts relocating
> > > > > > again.
> > > > > >
> > > > > > Maybe I'm doing something wrong? Then what is the correct way to
> > > > > > delete the OSD from the cluster?
> > > > > >
> > > > >
> > > > > You are not doing anything wrong, this is the expected behavior. There
> > > > > are two CRUSH changes:
> > > > >
> > > > > - Marking it out
> > > > > - Purging it
> > > > >
> > > > > You could do:
> > > > >
> > > > > $ ceph osd crush remove osd.X
> > > > >
> > > > > Wait until everything is healthy again
> > > > >
> > > > > $ ceph osd purge X
> > > > >
> > > > > The last step should then not initiate any data movement.
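> > > > >
> > > > > Spelled out with the health check in between (a sketch; X is a
> > > > > placeholder for the osd id):
> > > > >
> > > > > $ ceph osd crush remove osd.X
> > > > > $ ceph pg stat    # repeat until all PGs are active+clean
> > > > > $ ceph osd purge X --yes-i-really-mean-it
> > > > > $ ceph pg stat    # should stay active+clean, no new movement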
> > > > >
> > > > > Wido
> > > > >
> > > > > > WBR,
> > > > > >     Fyodor.
> > > >
> > > >
> > > >
> > > > --
> > > > T: @Thaumion
> > > > IG: Thaumion
> > > > scot...@gmail.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
