Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Vitaliy Filippov
+1, I also think it's strange that deleting an OSD by "osd out -> osd
purge" causes two rebalances instead of one.


--
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Alexandru Cucu
More on the subject can be found here:
https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/

On Fri, Mar 1, 2019 at 2:22 PM Darius Kasparavičius  wrote:
>
> Hi,
>
> Setting the crush weight to 0 removes the OSD's weight from the crush map
> by modifying the host's total weight, which forces a rebalancing of data
> across the whole cluster. Setting an OSD to out only modifies its
> "REWEIGHT" status, which rebalances data inside the same host.
>
> On Fri, Mar 1, 2019 at 12:25 PM Paul Emmerich  wrote:
> >
> > On Fri, Mar 1, 2019 at 11:17 AM Fyodor Ustinov  wrote:
> > > Maybe. But "ceph out + ceph osd purge" causes a double relocation, and
> > > "ceph reweight 0 + ceph osd purge" causes only one.
> >
> > No, the commands "ceph osd out X" and "ceph osd reweight X 0" do the
> > exact same thing: both set reweight to 0.
> >
> > Paul
> >
> > >
> > > - Original Message -
> > > From: "Paul Emmerich" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "David Turner" , "ceph-users" 
> > > 
> > > Sent: Friday, 1 March, 2019 11:54:20
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > "out" is internally implemented as "reweight 0"
> > >
> > > Paul
> > >
> > > --
> > > Paul Emmerich
> > >
> > > Looking for help with your Ceph cluster? Contact us at https://croit.io
> > >
> > > croit GmbH
> > > Freseniusstr. 31h
> > > 81247 München
> > > www.croit.io
> > > Tel: +49 89 1896585 90
> > >
> > > On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
> > > >
> > > > Hi!
> > > >
> > > > As far as I understand, reweight also does not lead to the situation "a 
> > > > period where one copy / shard
> > > > is missing".
> > > >
> > > > - Original Message -
> > > > From: "Paul Emmerich" 
> > > > To: "Fyodor Ustinov" 
> > > > Cc: "David Turner" , "ceph-users" 
> > > > 
> > > > Sent: Friday, 1 March, 2019 11:32:54
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> > > > >
> > > > > Hi!
> > > > >
> > > > > Yes. But I am a little surprised by what is written in the 
> > > > > documentation:
> > > >
> > > > the point of this is that you don't have a period where one copy/shard
> > > > is missing, if you wait for rebalancing to finish before taking it out.
> > > > Yeah, there'll be an unnecessary small data movement afterwards, but
> > > > you are never missing a copy.
> > > >
> > > >
> > > > Paul
> > > >
> > > > > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> > > > >
> > > > > ---
> > > > > Before you remove an OSD, it is usually up and in. You need to take 
> > > > > it out of the cluster so that Ceph can begin rebalancing and copying 
> > > > > its data to other OSDs.
> > > > > ceph osd out {osd-num}
> > > > > [...]
> > > > > ---
> > > > >
> > > > > That is, it is argued that this is the most correct way (otherwise it 
> > > > > would not have been written in the documentation).
> > > > >
> > > > >
> > > > >
> > > > > - Original Message -
> > > > > From: "David Turner" 
> > > > > To: "Fyodor Ustinov" 
> > > > > Cc: "Scottix" , "ceph-users" 
> > > > > 
> > > > > Sent: Friday, 1 March, 2019 05:13:27
> > > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > > >
> > > > > The reason is that an osd still contributes to the host weight in the
> > > > > crush map even while it is marked out. When you out and then purge, the
> > > > > purging operation removes the osd from the map and changes the weight of
> > > > > the host, which changes the crush map and data moves. By weighting the
> > > > > osd to 0.0, the host's weight is already the same as it will be when you
> > > > > purge the osd. Weighting to 0.0 is definitely the best option for
> > > > > removing storage if you can trust the data on the osd being removed.

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Darius Kasparavičius
Hi,

Setting the crush weight to 0 removes the OSD's weight from the crush map
by modifying the host's total weight, which forces a rebalancing of data
across the whole cluster. Setting an OSD to out only modifies its
"REWEIGHT" status, which rebalances data inside the same host.
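As a toy illustration of this bookkeeping, the difference in the number of data movements can be sketched in Python. Host and OSD names are hypothetical, and this mimics only the weight accounting, not real CRUSH placement:

```python
class OSD:
    def __init__(self, crush_weight):
        self.crush_weight = crush_weight  # permanent weight in the CRUSH map
        self.reweight = 1.0               # temporary override; "out" sets it to 0.0

def host_weight(osds):
    # A host's CRUSH weight is the sum of its OSDs' crush weights;
    # the temporary reweight does not contribute to it.
    return sum(o.crush_weight for o in osds.values())

def data_movements(use_crush_reweight):
    host = {f"osd.{i}": OSD(1.0) for i in range(3)}
    moves = 0
    # Step 1: drain osd.2 one way or the other (always moves data once).
    if use_crush_reweight:
        host["osd.2"].crush_weight = 0.0  # like: ceph osd crush reweight osd.2 0
    else:
        host["osd.2"].reweight = 0.0      # like: ceph osd out 2
    moves += 1
    # Step 2: purge. If the host's total weight changes, CRUSH placements
    # change cluster-wide and data moves a second time.
    before = host_weight(host)
    del host["osd.2"]
    if host_weight(host) != before:
        moves += 1
    return moves

print(data_movements(False), data_movements(True))  # → 2 1
```

With "out", the purge still changes the host weight, so data moves twice; with "crush reweight 0", the host weight is already final before the purge.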

On Fri, Mar 1, 2019 at 12:25 PM Paul Emmerich  wrote:
>
> On Fri, Mar 1, 2019 at 11:17 AM Fyodor Ustinov  wrote:
> > Maybe. But "ceph out + ceph osd purge" causes a double relocation, and "ceph
> > reweight 0 + ceph osd purge" causes only one.
>
> No, the commands "ceph osd out X" and "ceph osd reweight X 0" do the
> exact same thing: both set reweight to 0.
>
> Paul
>
> >
> > - Original Message -
> > From: "Paul Emmerich" 
> > To: "Fyodor Ustinov" 
> > Cc: "David Turner" , "ceph-users" 
> > 
> > Sent: Friday, 1 March, 2019 11:54:20
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > "out" is internally implemented as "reweight 0"
> >
> > Paul
> >
> >
> > On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
> > >
> > > Hi!
> > >
> > > As far as I understand, reweight also does not lead to the situation "a 
> > > period where one copy / shard
> > > is missing".
> > >
> > > - Original Message -
> > > From: "Paul Emmerich" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "David Turner" , "ceph-users" 
> > > 
> > > Sent: Friday, 1 March, 2019 11:32:54
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> > > >
> > > > Hi!
> > > >
> > > > Yes. But I am a little surprised by what is written in the 
> > > > documentation:
> > >
> > > the point of this is that you don't have a period where one copy/shard
> > > is missing, if you wait for rebalancing to finish before taking it out.
> > > Yeah, there'll be an unnecessary small data movement afterwards, but
> > > you are never missing a copy.
> > >
> > >
> > > Paul
> > >
> > > > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> > > >
> > > > ---
> > > > Before you remove an OSD, it is usually up and in. You need to take it 
> > > > out of the cluster so that Ceph can begin rebalancing and copying its 
> > > > data to other OSDs.
> > > > ceph osd out {osd-num}
> > > > [...]
> > > > ---
> > > >
> > > > That is, it is argued that this is the most correct way (otherwise it 
> > > > would not have been written in the documentation).
> > > >
> > > >
> > > >
> > > > - Original Message -
> > > > From: "David Turner" 
> > > > To: "Fyodor Ustinov" 
> > > > Cc: "Scottix" , "ceph-users" 
> > > > 
> > > > Sent: Friday, 1 March, 2019 05:13:27
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > The reason is that an osd still contributes to the host weight in the
> > > > crush map even while it is marked out. When you out and then purge, the
> > > > purging operation removes the osd from the map and changes the weight of
> > > > the host, which changes the crush map and data moves. By weighting the
> > > > osd to 0.0, the host's weight is already the same as it will be when you
> > > > purge the osd. Weighting to 0.0 is definitely the best option for
> > > > removing storage if you can trust the data on the osd being removed.
> > > >
> > > > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
> > > >
> > > > > Hi!
> > > > >
> > > > > Thank you so much!
> > > > >
> > > > > I do not understand why, but your variant really causes only one 
> > > > > rebalance
> > > > > compared to the "osd out".
> > > > >

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Fyodor Ustinov
Hi!

Hmm.

Do "ceph osd crush reweight" and "ceph osd reweight" do the same thing or not?
I use "ceph osd crush reweight".
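As other replies in the thread explain, these are not the same. A quick CLI-fragment sketch of the two commands (OSD id 12 is a placeholder; not runnable without a live cluster):

```shell
# Temporary override in [0, 1]; stored outside the CRUSH hierarchy, so the
# host's total weight is unchanged. "ceph osd out 12" has the same effect
# as setting it to 0.
ceph osd reweight 12 0

# Permanent CRUSH map change; lowers the host's total weight and therefore
# triggers a cluster-wide rebalance.
ceph osd crush reweight osd.12 0
```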

- Original Message -
From: "Paul Emmerich" 
To: "Fyodor Ustinov" 
Cc: "David Turner" , "ceph-users" 

Sent: Friday, 1 March, 2019 12:24:37
Subject: Re: [ceph-users] Right way to delete OSD from cluster?

On Fri, Mar 1, 2019 at 11:17 AM Fyodor Ustinov  wrote:
> Maybe. But "ceph out + ceph osd purge" causes a double relocation, and "ceph
> reweight 0 + ceph osd purge" causes only one.

No, the commands "ceph osd out X" and "ceph osd reweight X 0" do the
exact same thing: both set reweight to 0.

Paul

>
> - Original Message -
> From: "Paul Emmerich" 
> To: "Fyodor Ustinov" 
> Cc: "David Turner" , "ceph-users" 
> 
> Sent: Friday, 1 March, 2019 11:54:20
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> "out" is internally implemented as "reweight 0"
>
> Paul
>
>
> On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
> >
> > Hi!
> >
> > As far as I understand, reweight also does not lead to the situation "a 
> > period where one copy / shard
> > is missing".
> >
> > - Original Message -
> > From: "Paul Emmerich" 
> > To: "Fyodor Ustinov" 
> > Cc: "David Turner" , "ceph-users" 
> > 
> > Sent: Friday, 1 March, 2019 11:32:54
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> > >
> > > Hi!
> > >
> > > Yes. But I am a little surprised by what is written in the documentation:
> >
> > the point of this is that you don't have a period where one copy/shard
> > is missing, if you wait for rebalancing to finish before taking it out.
> > Yeah, there'll be an unnecessary small data movement afterwards, but
> > you are never missing a copy.
> >
> >
> > Paul
> >
> > > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> > >
> > > ---
> > > Before you remove an OSD, it is usually up and in. You need to take it 
> > > out of the cluster so that Ceph can begin rebalancing and copying its 
> > > data to other OSDs.
> > > ceph osd out {osd-num}
> > > [...]
> > > ---
> > >
> > > That is, it is argued that this is the most correct way (otherwise it 
> > > would not have been written in the documentation).
> > >
> > >
> > >
> > > - Original Message -
> > > From: "David Turner" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "Scottix" , "ceph-users" 
> > > 
> > > Sent: Friday, 1 March, 2019 05:13:27
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > The reason is that an osd still contributes to the host weight in the
> > > crush map even while it is marked out. When you out and then purge, the
> > > purging operation removes the osd from the map and changes the weight of
> > > the host, which changes the crush map and data moves. By weighting the
> > > osd to 0.0, the host's weight is already the same as it will be when you
> > > purge the osd. Weighting to 0.0 is definitely the best option for
> > > removing storage if you can trust the data on the osd being removed.
> > >
> > > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
> > >
> > > > Hi!
> > > >
> > > > Thank you so much!
> > > >
> > > > I do not understand why, but your variant really causes only one 
> > > > rebalance
> > > > compared to the "osd out".
> > > >
> > > > - Original Message -
> > > > From: "Scottix" 
> > > > To: "Fyodor Ustinov" 
> > > > Cc: "ceph-users" 
> > > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > I generally have gone the crush reweight 0 route
> > > > This way the drive can participate in the rebalance, and the rebalance
> > > > only happens once. Then you can take it out and purge.

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Paul Emmerich
On Fri, Mar 1, 2019 at 11:17 AM Fyodor Ustinov  wrote:
> Maybe. But "ceph out + ceph osd purge" causes a double relocation, and "ceph
> reweight 0 + ceph osd purge" causes only one.

No, the commands "ceph osd out X" and "ceph osd reweight X 0" do the
exact same thing: both set reweight to 0.

Paul

>
> - Original Message -
> From: "Paul Emmerich" 
> To: "Fyodor Ustinov" 
> Cc: "David Turner" , "ceph-users" 
> 
> Sent: Friday, 1 March, 2019 11:54:20
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> "out" is internally implemented as "reweight 0"
>
> Paul
>
>
> On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
> >
> > Hi!
> >
> > As far as I understand, reweight also does not lead to the situation "a 
> > period where one copy / shard
> > is missing".
> >
> > - Original Message -
> > From: "Paul Emmerich" 
> > To: "Fyodor Ustinov" 
> > Cc: "David Turner" , "ceph-users" 
> > 
> > Sent: Friday, 1 March, 2019 11:32:54
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> > >
> > > Hi!
> > >
> > > Yes. But I am a little surprised by what is written in the documentation:
> >
> > the point of this is that you don't have a period where one copy/shard
> > is missing, if you wait for rebalancing to finish before taking it out.
> > Yeah, there'll be an unnecessary small data movement afterwards, but
> > you are never missing a copy.
> >
> >
> > Paul
> >
> > > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> > >
> > > ---
> > > Before you remove an OSD, it is usually up and in. You need to take it 
> > > out of the cluster so that Ceph can begin rebalancing and copying its 
> > > data to other OSDs.
> > > ceph osd out {osd-num}
> > > [...]
> > > ---
> > >
> > > That is, it is argued that this is the most correct way (otherwise it 
> > > would not have been written in the documentation).
> > >
> > >
> > >
> > > - Original Message -
> > > From: "David Turner" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "Scottix" , "ceph-users" 
> > > 
> > > Sent: Friday, 1 March, 2019 05:13:27
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > The reason is that an osd still contributes to the host weight in the
> > > crush map even while it is marked out. When you out and then purge, the
> > > purging operation removes the osd from the map and changes the weight of
> > > the host, which changes the crush map and data moves. By weighting the
> > > osd to 0.0, the host's weight is already the same as it will be when you
> > > purge the osd. Weighting to 0.0 is definitely the best option for
> > > removing storage if you can trust the data on the osd being removed.
> > >
> > > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
> > >
> > > > Hi!
> > > >
> > > > Thank you so much!
> > > >
> > > > I do not understand why, but your variant really causes only one 
> > > > rebalance
> > > > compared to the "osd out".
> > > >
> > > > - Original Message -
> > > > From: "Scottix" 
> > > > To: "Fyodor Ustinov" 
> > > > Cc: "ceph-users" 
> > > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > I generally have gone the crush reweight 0 route
> > > > This way the drive can participate in the rebalance, and the rebalance
> > > > only happens once. Then you can take it out and purge.
> > > >
> > > > If I am not mistaken this is the safest.
> > > >
> > > > ceph osd crush reweight  0
> > > >
> > > > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov  wrote:
> > > > >
> > > > > Hi!
> > > > >
> > > >

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Fyodor Ustinov
Hi!

Maybe. But "ceph out + ceph osd purge" causes a double relocation, and "ceph
reweight 0 + ceph osd purge" causes only one.

- Original Message -
From: "Paul Emmerich" 
To: "Fyodor Ustinov" 
Cc: "David Turner" , "ceph-users" 

Sent: Friday, 1 March, 2019 11:54:20
Subject: Re: [ceph-users] Right way to delete OSD from cluster?

"out" is internally implemented as "reweight 0"

Paul


On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
>
> Hi!
>
> As far as I understand, reweight also does not lead to the situation "a 
> period where one copy / shard
> is missing".
>
> - Original Message -
> From: "Paul Emmerich" 
> To: "Fyodor Ustinov" 
> Cc: "David Turner" , "ceph-users" 
> 
> Sent: Friday, 1 March, 2019 11:32:54
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> >
> > Hi!
> >
> > Yes. But I am a little surprised by what is written in the documentation:
>
> the point of this is that you don't have a period where one copy/shard
> is missing, if you wait for rebalancing to finish before taking it out.
> Yeah, there'll be an unnecessary small data movement afterwards, but
> you are never missing a copy.
>
>
> Paul
>
> > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> >
> > ---
> > Before you remove an OSD, it is usually up and in. You need to take it out 
> > of the cluster so that Ceph can begin rebalancing and copying its data to 
> > other OSDs.
> > ceph osd out {osd-num}
> > [...]
> > ---
> >
> > That is, it is argued that this is the most correct way (otherwise it would 
> > not have been written in the documentation).
> >
> >
> >
> > - Original Message -
> > From: "David Turner" 
> > To: "Fyodor Ustinov" 
> > Cc: "Scottix" , "ceph-users" 
> > Sent: Friday, 1 March, 2019 05:13:27
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > The reason is that an osd still contributes to the host weight in the
> > crush map even while it is marked out. When you out and then purge, the
> > purging operation removes the osd from the map and changes the weight of
> > the host, which changes the crush map and data moves. By weighting the
> > osd to 0.0, the host's weight is already the same as it will be when you
> > purge the osd. Weighting to 0.0 is definitely the best option for
> > removing storage if you can trust the data on the osd being removed.
> >
> > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
> >
> > > Hi!
> > >
> > > Thank you so much!
> > >
> > > I do not understand why, but your variant really causes only one rebalance
> > > compared to the "osd out".
> > >
> > > - Original Message -
> > > From: "Scottix" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "ceph-users" 
> > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > I generally have gone the crush reweight 0 route
> > > This way the drive can participate in the rebalance, and the rebalance
> > > only happens once. Then you can take it out and purge.
> > >
> > > If I am not mistaken this is the safest.
> > >
> > > ceph osd crush reweight  0
> > >
> > > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov  wrote:
> > > >
> > > > Hi!
> > > >
> > > > But won't I get undersized objects after "ceph osd crush remove"? That
> > > > is, isn't this the same as simply turning off the OSD and waiting for
> > > > the cluster to recover?
> > > >
> > > > - Original Message -
> > > > From: "Wido den Hollander" 
> > > > To: "Fyodor Ustinov" , "ceph-users" <
> > > ceph-users@lists.ceph.com>
> > > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > > Hi!
> > > > >
> > > > > I thought I should first do "ceph osd out", wait for the relocation of
> > > > > the misplaced objects to finish, and after that do "ceph osd purge".
> > > > > But after "purge" the cluster starts relocating again.

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Paul Emmerich
"out" is internally implemented as "reweight 0"

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov  wrote:
>
> Hi!
>
> As far as I understand, reweight also does not lead to the situation "a 
> period where one copy / shard
> is missing".
>
> - Original Message -
> From: "Paul Emmerich" 
> To: "Fyodor Ustinov" 
> Cc: "David Turner" , "ceph-users" 
> 
> Sent: Friday, 1 March, 2019 11:32:54
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
> >
> > Hi!
> >
> > Yes. But I am a little surprised by what is written in the documentation:
>
> the point of this is that you don't have a period where one copy/shard
> is missing, if you wait for rebalancing to finish before taking it out.
> Yeah, there'll be an unnecessary small data movement afterwards, but
> you are never missing a copy.
>
>
> Paul
>
> > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> >
> > ---
> > Before you remove an OSD, it is usually up and in. You need to take it out 
> > of the cluster so that Ceph can begin rebalancing and copying its data to 
> > other OSDs.
> > ceph osd out {osd-num}
> > [...]
> > ---
> >
> > That is, it is argued that this is the most correct way (otherwise it would 
> > not have been written in the documentation).
> >
> >
> >
> > - Original Message -
> > From: "David Turner" 
> > To: "Fyodor Ustinov" 
> > Cc: "Scottix" , "ceph-users" 
> > Sent: Friday, 1 March, 2019 05:13:27
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > The reason is that an osd still contributes to the host weight in the
> > crush map even while it is marked out. When you out and then purge, the
> > purging operation removes the osd from the map and changes the weight of
> > the host, which changes the crush map and data moves. By weighting the
> > osd to 0.0, the host's weight is already the same as it will be when you
> > purge the osd. Weighting to 0.0 is definitely the best option for
> > removing storage if you can trust the data on the osd being removed.
> >
> > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
> >
> > > Hi!
> > >
> > > Thank you so much!
> > >
> > > I do not understand why, but your variant really causes only one rebalance
> > > compared to the "osd out".
> > >
> > > - Original Message -
> > > From: "Scottix" 
> > > To: "Fyodor Ustinov" 
> > > Cc: "ceph-users" 
> > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > I generally have gone the crush reweight 0 route
> > > This way the drive can participate in the rebalance, and the rebalance
> > > only happens once. Then you can take it out and purge.
> > >
> > > If I am not mistaken this is the safest.
> > >
> > > ceph osd crush reweight  0
> > >
> > > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov  wrote:
> > > >
> > > > Hi!
> > > >
> > > > But won't I get undersized objects after "ceph osd crush remove"? That
> > > > is, isn't this the same as simply turning off the OSD and waiting for
> > > > the cluster to recover?
> > > >
> > > > - Original Message -
> > > > From: "Wido den Hollander" 
> > > > To: "Fyodor Ustinov" , "ceph-users" <
> > > ceph-users@lists.ceph.com>
> > > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > > >
> > > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > > Hi!
> > > > >
> > > > > I thought I should first do "ceph osd out", wait for the relocation of
> > > > > the misplaced objects to finish, and after that do "ceph osd purge".
> > > > > But after "purge" the cluster starts relocating again.
> > > > >
> > > > > Maybe I'm doing something wrong? Then what is the correct way to
> > > delete the OSD from the cluster?
> 

Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Fyodor Ustinov
Hi!

As far as I understand, reweight also does not lead to the situation "a period 
where one copy / shard
is missing".

- Original Message -
From: "Paul Emmerich" 
To: "Fyodor Ustinov" 
Cc: "David Turner" , "ceph-users" 

Sent: Friday, 1 March, 2019 11:32:54
Subject: Re: [ceph-users] Right way to delete OSD from cluster?

On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
>
> Hi!
>
> Yes. But I am a little surprised by what is written in the documentation:

the point of this is that you don't have a period where one copy/shard
is missing, if you wait for rebalancing to finish before taking it out.
Yeah, there'll be an unnecessary small data movement afterwards, but
you are never missing a copy.


Paul

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
>
> ---
> Before you remove an OSD, it is usually up and in. You need to take it out of 
> the cluster so that Ceph can begin rebalancing and copying its data to other 
> OSDs.
> ceph osd out {osd-num}
> [...]
> ---
>
> That is, it is argued that this is the most correct way (otherwise it would 
> not have been written in the documentation).
>
>
>
> - Original Message -
> From: "David Turner" 
> To: "Fyodor Ustinov" 
> Cc: "Scottix" , "ceph-users" 
> Sent: Friday, 1 March, 2019 05:13:27
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> The reason is that an osd still contributes to the host weight in the
> crush map even while it is marked out. When you out and then purge, the
> purging operation removes the osd from the map and changes the weight of
> the host, which changes the crush map and data moves. By weighting the
> osd to 0.0, the host's weight is already the same as it will be when you
> purge the osd. Weighting to 0.0 is definitely the best option for
> removing storage if you can trust the data on the osd being removed.
>
> On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
>
> > Hi!
> >
> > Thank you so much!
> >
> > I do not understand why, but your variant really causes only one rebalance
> > compared to the "osd out".
> >
> > - Original Message -
> > From: "Scottix" 
> > To: "Fyodor Ustinov" 
> > Cc: "ceph-users" 
> > Sent: Wednesday, 30 January, 2019 20:31:32
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > I generally have gone the crush reweight 0 route
> > This way the drive can participate in the rebalance, and the rebalance
> > only happens once. Then you can take it out and purge.
> >
> > If I am not mistaken this is the safest.
> >
> > ceph osd crush reweight  0
> >
> > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov  wrote:
> > >
> > > Hi!
> > >
> > > But won't I get undersized objects after "ceph osd crush remove"? That
> > > is, isn't this the same as simply turning off the OSD and waiting for
> > > the cluster to recover?
> > >
> > > - Original Message -
> > > From: "Wido den Hollander" 
> > > To: "Fyodor Ustinov" , "ceph-users" <
> > ceph-users@lists.ceph.com>
> > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > Hi!
> > > >
> > > > I thought I should first do "ceph osd out", wait for the relocation of
> > > > the misplaced objects to finish, and after that do "ceph osd purge".
> > > > But after "purge" the cluster starts relocating again.
> > > >
> > > > Maybe I'm doing something wrong? Then what is the correct way to
> > delete the OSD from the cluster?
> > > >
> > >
> > > You are not doing anything wrong, this is the expected behavior. There
> > > are two CRUSH changes:
> > >
> > > - Marking it out
> > > - Purging it
> > >
> > > You could do:
> > >
> > > $ ceph osd crush remove osd.X
> > >
> > > Wait for all good
> > >
> > > $ ceph osd purge X
> > >
> > > The last step should then not initiate any data movement.
> > >
> > > Wido
> > >
> > > > WBR,
> > > > Fyodor.
> >
> >
> >
> > --
> > T: @Thaumion
> > IG: Thaumion
> > scot...@gmail.com
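Pulling the advice in this thread together, a single-rebalance removal sequence might look like the following sketch. OSD id 12 is a placeholder, and it assumes a Ceph release with "ceph osd purge" and systemd-managed OSDs; it is a CLI fragment, not runnable without a live cluster:

```shell
# 1) Permanently drain the OSD; this triggers the one and only rebalance.
ceph osd crush reweight osd.12 0

# 2) Wait until backfill finishes and the cluster is healthy again.
ceph -s

# 3) Mark it out and stop the daemon; the host weight is already final,
#    so no further data movement is expected.
ceph osd out 12
systemctl stop ceph-osd@12

# 4) Remove the OSD from the CRUSH map, auth database, and OSD map.
ceph osd purge 12 --yes-i-really-mean-it
```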


Re: [ceph-users] Right way to delete OSD from cluster?

2019-03-01 Thread Paul Emmerich
On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov  wrote:
>
> Hi!
>
> Yes. But I am a little surprised by what is written in the documentation:

the point of this is that you don't have a period where one copy/shard
is missing, if you wait for rebalancing to finish before taking it out.
Yeah, there'll be an unnecessary small data movement afterwards, but
you are never missing a copy.


Paul

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
>
> ---
> Before you remove an OSD, it is usually up and in. You need to take it out of 
> the cluster so that Ceph can begin rebalancing and copying its data to other 
> OSDs.
> ceph osd out {osd-num}
> [...]
> ---
>
> That is, it is argued that this is the most correct way (otherwise it would 
> not have been written in the documentation).
>
>
>
> - Original Message -
> From: "David Turner" 
> To: "Fyodor Ustinov" 
> Cc: "Scottix" , "ceph-users" 
> Sent: Friday, 1 March, 2019 05:13:27
> Subject: Re: [ceph-users] Right way to delete OSD from cluster?
>
> The reason is that an osd still contributes to the host weight in the
> crush map even while it is marked out. When you out and then purge, the
> purging operation removes the osd from the map and changes the weight of
> the host, which changes the crush map and data moves. By weighting the
> osd to 0.0, the host's weight is already the same as it will be when you
> purge the osd. Weighting to 0.0 is definitely the best option for
> removing storage if you can trust the data on the osd being removed.
>
> On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov  wrote:
>
> > Hi!
> >
> > Thank you so much!
> >
> > I do not understand why, but your variant really causes only one rebalance
> > compared to the "osd out".
> >
> > - Original Message -
> > From: "Scottix" 
> > To: "Fyodor Ustinov" 
> > Cc: "ceph-users" 
> > Sent: Wednesday, 30 January, 2019 20:31:32
> > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> >
> > I generally have gone the crush reweight 0 route
> > This way the drive can participate in the rebalance, and the rebalance
> > only happens once. Then you can take it out and purge.
> >
> > If I am not mistaken this is the safest.
> >
> > ceph osd crush reweight  0
> >
> > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov  wrote:
> > >
> > > Hi!
> > >
> > > But won't I get undersized objects after "ceph osd crush remove"? That
> > > is, isn't this the same as simply turning off the OSD and waiting for
> > > the cluster to recover?
> > >
> > > - Original Message -
> > > From: "Wido den Hollander" 
> > > To: "Fyodor Ustinov" , "ceph-users" <
> > ceph-users@lists.ceph.com>
> > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > Subject: Re: [ceph-users] Right way to delete OSD from cluster?
> > >
> > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > Hi!
> > > >
> > > > I thought I should first do "ceph osd out", wait for the relocation of
> > > > the misplaced objects to finish, and after that do "ceph osd purge".
> > > > But after "purge" the cluster starts relocating again.
> > > >
> > > > Maybe I'm doing something wrong? Then what is the correct way to
> > delete the OSD from the cluster?
> > > >
> > >
> > > You are not doing anything wrong, this is the expected behavior. There
> > > are two CRUSH changes:
> > >
> > > - Marking it out
> > > - Purging it
> > >
> > > You could do:
> > >
> > > $ ceph osd crush remove osd.X
> > >
> > > Wait for all good
> > >
> > > $ ceph osd purge X
> > >
> > > The last step should then not initiate any data movement.
> > >
> > > Wido
> > >
> > > > WBR,
> > > > Fyodor.
> > > > ___
> > > > ceph-users mailing list
> > > > ceph-users@lists.ceph.com
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > >
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> > --
> > T: @Thaumion
> > IG: Thaumion
> > scot...@gmail.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Right way to delete OSD from cluster?

2019-02-28 Thread Fyodor Ustinov
Hi!

Yes. But I am a little surprised by what is written in the documentation:
http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/

---
Before you remove an OSD, it is usually up and in. You need to take it out of 
the cluster so that Ceph can begin rebalancing and copying its data to other 
OSDs.
ceph osd out {osd-num}
[...]
---

That is, the documentation presents this as the correct way (otherwise it would
not have been written there).



- Original Message -
From: "David Turner" 
To: "Fyodor Ustinov" 
Cc: "Scottix" , "ceph-users" 
Sent: Friday, 1 March, 2019 05:13:27
Subject: Re: [ceph-users] Right way to delete OSD from cluster?



Re: [ceph-users] Right way to delete OSD from cluster?

2019-02-28 Thread David Turner
The reason is that an OSD still contributes to the host's weight in the CRUSH
map even while it is marked out. When you out and then purge, the purge
operation removes the OSD from the map and changes the weight of the host,
which changes the CRUSH map and data moves. By weighting the OSD to 0.0,
the host's weight is already the same as it will be when you purge the OSD.
Weighting to 0.0 is definitely the best option for removing storage if you
can trust the data on the OSD being removed.
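
The host-weight effect described above can be illustrated with a toy
straw2-style placement model. This is a minimal sketch in plain Python, not
real CRUSH; the hosts, weights, and PG count are made up for the example:

```python
import hashlib
import math

def straw(pg, host, weight):
    # Straw2-style draw: a deterministic pseudo-random u in (0, 1],
    # scored as ln(u)/w; the host with the largest score wins.
    h = int(hashlib.md5(f"{pg}:{host}".encode()).hexdigest(), 16)
    u = (h + 1) / 2**128
    return math.log(u) / weight if weight > 0 else float("-inf")

def place(pgs, host_weights):
    # Map every PG to its winning host.
    return {pg: max(host_weights, key=lambda h: straw(pg, h, host_weights[h]))
            for pg in pgs}

pgs = range(256)
# Three hosts with two 1.0-weight OSDs each; we retire one OSD on host A.
full    = {"A": 2.0, "B": 2.0, "C": 2.0}
drained = {"A": 1.0, "B": 2.0, "C": 2.0}  # after `crush reweight osd.X 0`
purged  = {"A": 1.0, "B": 2.0, "C": 2.0}  # after `osd purge`: same weights

def moved(a, b):
    return sum(1 for pg in pgs if a[pg] != b[pg])

first  = moved(place(pgs, full), place(pgs, drained))   # the one rebalance
second = moved(place(pgs, drained), place(pgs, purged)) # purge moves nothing
print(first, second)  # first > 0, second == 0
# With plain `osd out`, host A's CRUSH weight stays 2.0 until the purge,
# so all of that movement would happen at purge time instead.
```

Because the host weights after draining already equal the post-purge weights,
the second step maps every PG identically and nothing moves.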



Re: [ceph-users] Right way to delete OSD from cluster?

2019-02-26 Thread Fyodor Ustinov
Hi!

Thank you so much!

I do not understand why, but your variant really does cause only one rebalance, 
compared to "osd out".

- Original Message -
From: "Scottix" 
To: "Fyodor Ustinov" 
Cc: "ceph-users" 
Sent: Wednesday, 30 January, 2019 20:31:32
Subject: Re: [ceph-users] Right way to delete OSD from cluster?



Re: [ceph-users] Right way to delete OSD from cluster?

2019-01-30 Thread Scottix
I have generally gone the crush reweight 0 route.
This way the drive can participate in the rebalance, and the rebalance
only happens once. Then you can take it out and purge.

If I am not mistaken, this is the safest.

ceph osd crush reweight osd.<id> 0



-- 
T: @Thaumion
IG: Thaumion
scot...@gmail.com


Re: [ceph-users] Right way to delete OSD from cluster?

2019-01-30 Thread Fyodor Ustinov
Hi!

But won't I get undersized objects after "ceph osd crush remove"? That is, 
isn't this the same thing as simply turning off the OSD and waiting for the 
cluster to recover?

- Original Message -
From: "Wido den Hollander" 
To: "Fyodor Ustinov" , "ceph-users" 
Sent: Wednesday, 30 January, 2019 15:05:35
Subject: Re: [ceph-users] Right way to delete OSD from cluster?



Re: [ceph-users] Right way to delete OSD from cluster?

2019-01-30 Thread Wido den Hollander




You are not doing anything wrong, this is the expected behavior. There
are two CRUSH changes:

- Marking it out
- Purging it

You could do:

$ ceph osd crush remove osd.X

Wait until the cluster is healthy again (HEALTH_OK)

$ ceph osd purge X

The last step should then not initiate any data movement.

Wido
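
The sequence above can be put together as a small script. The version below is
a dry-run sketch: it only echoes the commands (a real run needs a live cluster
and an admin keyring), and the OSD id is hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of the single-rebalance removal flow from this thread.
# `run` only echoes each command; remove it to execute for real.
ID=12                                      # hypothetical OSD id
run() { echo "+ $*"; }

run ceph osd crush reweight "osd.$ID" 0    # or: ceph osd crush remove osd.$ID
# In a real run, wait here until the cluster is healthy again, e.g.:
#   until ceph health | grep -q HEALTH_OK; do sleep 30; done
run ceph osd out "$ID"
run systemctl stop "ceph-osd@$ID"          # on the host that carries the OSD
run ceph osd purge "$ID" --yes-i-really-mean-it
```

Because the crush reweight already brought the host's CRUSH weight down to its
post-removal value, the final purge should not trigger a second rebalance.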



[ceph-users] Right way to delete OSD from cluster?

2019-01-30 Thread Fyodor Ustinov
Hi!

I thought I should first do "ceph osd out", wait for the relocation of the 
misplaced objects to finish, and after that do "ceph osd purge".
But after "purge" the cluster starts a relocation again.

Am I doing something wrong? If so, what is the correct way to delete an OSD 
from the cluster?

WBR,
Fyodor.