Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-29 Thread Marco Gaiarin
I reply to myself.

> I've added a new node and slowly added 4 new OSDs, but in the meantime an
> OSD (not one of the new ones, not on the node to remove) died. My situation
> now is:
>
> root@blackpanther:~# ceph osd df tree
> ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR TYPE NAME
> -1
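A minimal set of commands for inspecting a cluster in this state (a sketch, assuming a standard ceph CLI; nothing here comes from the original mail beyond the quoted header line):

    # Per-OSD utilisation and CRUSH placement (the output quoted above)
    ceph osd df tree

    # Overall health, including degraded object counts and recovery progress
    ceph -s
    ceph health detail

    # Confirm which OSD is down before deciding how to rebalance
    ceph osd tree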

Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-26 Thread Gregory Farnum
On Mon, Nov 26, 2018 at 3:30 AM Janne Johansson wrote:
> On Sun, 25 Nov 2018 at 22:10, Stefan Kooman wrote:
> >
> > Hi List,
> >
> > Another interesting and unexpected thing we observed during cluster
> > expansion is the following. After we added extra disks to the cluster,
> > while

Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-26 Thread Marco Gaiarin
Hi! Janne Johansson wrote:
> It is a slight mistake in reporting it in the same way as an error, even if
> it looks to the cluster just as if it was in error and needs fixing.

I think I've hit a similar situation, and I also feel that something has to be 'fixed'.

Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-26 Thread Janne Johansson
On Mon, 26 Nov 2018 at 09:39, Stefan Kooman wrote:
> > It is a slight mistake in reporting it in the same way as an error,
> > even if it looks to the cluster just as if it was in error and needs
> > fixing. This gives the new ceph admins a sense of urgency or danger
> > whereas it should be

Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-26 Thread Stefan Kooman
Quoting Janne Johansson (icepic...@gmail.com):
> Yes, when you add a drive (or 10), some PGs decide they should have one or
> more replicas on the new drives, a new empty PG is created there, and
> _then_ that replica will make that PG get into the "degraded" mode,
> meaning if it had 3 fine
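To see the distinction Janne describes on a live cluster, the counters can be watched while the new OSDs are marked in (a sketch, assuming a Luminous-or-later CLI; the state filters shown are standard PG states, not commands from the thread):

    # Cluster-wide summary: degraded and misplaced objects are reported separately
    ceph -s

    # PGs that currently have fewer complete replicas than the pool size,
    # typically the ones whose new, still-empty replica sits on the added OSDs
    ceph pg ls degraded
    ceph pg ls undersized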

Re: [ceph-users] Degraded objects after: ceph osd in $osd

2018-11-26 Thread Janne Johansson
On Sun, 25 Nov 2018 at 22:10, Stefan Kooman wrote:
>
> Hi List,
>
> Another interesting and unexpected thing we observed during cluster
> expansion is the following. After we added extra disks to the cluster,
> while "norebalance" flag was set, we put the new OSDs "IN". As soon as
> we did that

[ceph-users] Degraded objects after: ceph osd in $osd

2018-11-25 Thread Stefan Kooman
Hi List,

Another interesting and unexpected thing we observed during cluster expansion is the following. After we added extra disks to the cluster, while the "norebalance" flag was set, we put the new OSDs "IN". As soon as we did that, a couple of hundred objects would become degraded. During that
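The expansion sequence being described is roughly the following (a sketch reconstructed from the prose above, not the poster's actual command history; the OSD id is illustrative):

    # Block rebalancing while the new disks are brought up
    ceph osd set norebalance

    # ...deploy the new OSDs; they join the cluster empty...

    # Mark each new OSD "in" -- this is the moment the degraded count jumps
    ceph osd in $osd

    # When ready, allow backfill to move data onto the new OSDs
    ceph osd unset norebalance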