Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Dan van der Ster
On Mon, Dec 3, 2018 at 5:00 PM Jan Kasprzak  wrote:
>
> Dan van der Ster wrote:
> : It's not that simple -- see http://tracker.ceph.com/issues/21672
> :
> : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
> : updated -- so the rpms restart the ceph.target.
> : What's worse is that this seems to happen before all the new updated
> : files are in place.
> :
> : Our 12.2.8 to 12.2.10 upgrade procedure is:
> :
> : systemctl stop ceph.target
> : yum update
> : systemctl start ceph.target
>
> Yes, this looks reasonable. Except that when upgrading
> from Jewel, even after the restart the OSDs do not work until
> _all_ mons are upgraded. So effectively, if a PG happens to be placed
> on the mon hosts only, there will be a service outage during the upgrade
> from Jewel.
>
> So I guess the upgrade procedure described here:
>
> http://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken
>
> is misleading - the mons and osds get restarted anyway by the package
> upgrade itself. The user should be warned that, for this reason, the package
> upgrades should be run sequentially (one host at a time), and that the upgrade
> is not possible without a service outage when there are OSDs on the mon hosts
> and the cluster is running under SELinux.

Note that ceph-selinux will only restart ceph.target if selinux is enabled.

So probably you could set SELINUX=disabled in /etc/selinux/config,
reboot, then upgrade the rpms and restart the daemons selectively.

And BTW, setenforce 0 apparently doesn't disable enough of selinux --
you really do need to reboot.

# setenforce 0
# /usr/sbin/selinuxenabled
# echo $?
0
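
Roughly, that selective-restart sequence would look something like the
following (untested sketch for a combined mon+osd host, assuming the reboot
is acceptable and that you go host by host, mons before OSDs):

# set SELINUX=disabled in /etc/selinux/config, then:
reboot
yum --enablerepo Ceph update
systemctl restart ceph-mon.target   # on the mon hosts, one at a time
# ...and only once all mons are running luminous:
systemctl restart ceph-osd.target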

-- dan

>
> Also, there is another important thing omitted by the above upgrade
> procedure: after "ceph osd require-osd-release luminous"
> I got a HEALTH_WARN saying "application not enabled on X pool(s)".
> I fixed this by running the following scriptlet:
>
> ceph osd pool ls | while read pool; do ceph osd pool application enable $pool rbd; done
>
> (yes, all of my pools are used for rbd for now). Maybe this should be mentioned
> in the release notes as well. Thanks,
>
> -Yenya
>
> : On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich wrote:
> : >
> : > Upgrading Ceph packages does not restart the services -- exactly for
> : > this reason.
> : >
> : > This means there's something broken with your yum setup if the
> : > services are restarted when only installing the new version.
> : >
> : >
> : > Paul
> : >
> : > --
> : > Paul Emmerich
> : >
> : > Looking for help with your Ceph cluster? Contact us at https://croit.io
> : >
> : > croit GmbH
> : > Freseniusstr. 31h
> : > 81247 München
> : > www.croit.io
> : > Tel: +49 89 1896585 90
> : >
> : > On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:
> : > >
> : > > Hello, ceph users,
> : > >
> : > > I have a small(-ish) Ceph cluster, where there are osds on each host,
> : > > and in addition to that, there are mons on the first three hosts.
> : > > Is it possible to upgrade the cluster to Luminous without service
> : > > interruption?
> : > >
> : > > I have tested that when I run "yum --enablerepo Ceph update" on a
> : > > mon host, the osds on that host remain down until all three mons
> : > > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
> : > > and keep ceph-osd running the old version (Jewel in my case) as long
> : > > as possible? It seems RPM dependencies forbid this, but with --nodeps
> : > > it could be done.
> : > >
> : > > Is there a supported way to upgrade a host running both a mon and an osd
> : > > to Luminous?
> : > >
> : > > Thanks,
> : > >
> : > > -Yenya
> : > >
> : > > --
> : > > | Jan "Yenya" Kasprzak  private}> |
> : > > | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
> : > >  This is the world we live in: the way to deal with computers is to google
> : > >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
>
> --
> | Jan "Yenya" Kasprzak  |
> | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
>  This is the world we live in: the way to deal with computers is to google
>  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev


Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Jan Kasprzak
Dan van der Ster wrote:
: It's not that simple -- see http://tracker.ceph.com/issues/21672
: 
: For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
: updated -- so the rpms restart the ceph.target.
: What's worse is that this seems to happen before all the new updated
: files are in place.
: 
: Our 12.2.8 to 12.2.10 upgrade procedure is:
: 
: systemctl stop ceph.target
: yum update
: systemctl start ceph.target

Yes, this looks reasonable. Except that when upgrading
from Jewel, even after the restart the OSDs do not work until
_all_ mons are upgraded. So effectively, if a PG happens to be placed
on the mon hosts only, there will be a service outage during the upgrade
from Jewel.

So I guess the upgrade procedure described here:

http://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken

is misleading - the mons and osds get restarted anyway by the package
upgrade itself. The user should be warned that, for this reason, the package
upgrades should be run sequentially (one host at a time), and that the upgrade
is not possible without a service outage when there are OSDs on the mon hosts
and the cluster is running under SELinux.
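
To be sure that all mons are already on luminous before restarting the OSDs,
something like this should do (assuming the ceph CLI on that host is already
the new version):

ceph mon versions    # should list only 12.2.x once the mon upgrade is done
ceph mon feature ls  # "luminous" should show up among the persistent features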

Also, there is another important thing omitted by the above upgrade
procedure: after "ceph osd require-osd-release luminous"
I got a HEALTH_WARN saying "application not enabled on X pool(s)".
I fixed this by running the following scriptlet:

ceph osd pool ls | while read pool; do ceph osd pool application enable $pool rbd; done

(yes, all of my pools are used for rbd for now). Maybe this should be mentioned
in the release notes as well. Thanks,
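
BTW, to verify the result afterwards, something like this should do (on
luminous the assigned application shows up in the pool listing):

ceph osd pool ls detail | grep application
ceph health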

-Yenya

: On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich  wrote:
: >
: > Upgrading Ceph packages does not restart the services -- exactly for
: > this reason.
: >
: > This means there's something broken with your yum setup if the
: > services are restarted when only installing the new version.
: >
: >
: > Paul
: >
: > --
: > Paul Emmerich
: >
: > Looking for help with your Ceph cluster? Contact us at https://croit.io
: >
: > croit GmbH
: > Freseniusstr. 31h
: > 81247 München
: > www.croit.io
: > Tel: +49 89 1896585 90
: >
: > On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:
: > >
: > > Hello, ceph users,
: > >
: > > I have a small(-ish) Ceph cluster, where there are osds on each host,
: > > and in addition to that, there are mons on the first three hosts.
: > > Is it possible to upgrade the cluster to Luminous without service
: > > interruption?
: > >
: > > I have tested that when I run "yum --enablerepo Ceph update" on a
: > > mon host, the osds on that host remain down until all three mons
: > > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
: > > and keep ceph-osd running the old version (Jewel in my case) as long
: > > as possible? It seems RPM dependencies forbid this, but with --nodeps
: > > it could be done.
: > >
: > > Is there a supported way to upgrade a host running both a mon and an osd
: > > to Luminous?
: > >
: > > Thanks,
: > >
: > > -Yenya
: > >
: > > --
: > > | Jan "Yenya" Kasprzak  |
: > > | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
: > >  This is the world we live in: the way to deal with computers is to google
: > >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev

-- 
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev


Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Oliver Freyermuth

There's also an additional issue which made us activate
CEPH_AUTO_RESTART_ON_UPGRADE=yes
(and of course, not have automatic updates of Ceph):
  When using compression, e.g. with Snappy, it seems that already-running OSDs
which try to dlopen() the snappy library can become unhappy after some version
upgrades if the library version does not match their expectation (i.e. symbols
don't match).

So effectively, it seems that in some cases you cannot get around restarting
the OSDs when updating the corresponding packages.
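
A rough way to spot OSDs that still map a replaced snappy library (just a
sketch -- it assumes the old library file was unlinked by the package upgrade
and that the OSD had already dlopen()ed it):

for pid in $(pidof ceph-osd); do
    if grep -q 'libsnappy.*(deleted)' /proc/$pid/maps; then
        echo "ceph-osd pid $pid still maps the old snappy library"
    fi
done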

Cheers,
Oliver

On 03.12.18 at 15:51, Dan van der Ster wrote:

It's not that simple -- see http://tracker.ceph.com/issues/21672

For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
updated -- so the rpms restart the ceph.target.
What's worse is that this seems to happen before all the new updated
files are in place.

Our 12.2.8 to 12.2.10 upgrade procedure is:

systemctl stop ceph.target
yum update
systemctl start ceph.target

-- Dan

On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich  wrote:


Upgrading Ceph packages does not restart the services -- exactly for
this reason.

This means there's something broken with your yum setup if the
services are restarted when only installing the new version.


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:


 Hello, ceph users,

I have a small(-ish) Ceph cluster, where there are osds on each host,
and in addition to that, there are mons on the first three hosts.
Is it possible to upgrade the cluster to Luminous without service
interruption?

I have tested that when I run "yum --enablerepo Ceph update" on a
mon host, the osds on that host remain down until all three mons
are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
and keep ceph-osd running the old version (Jewel in my case) as long
as possible? It seems RPM dependencies forbid this, but with --nodeps
it could be done.

Is there a supported way to upgrade a host running both a mon and an osd
to Luminous?

Thanks,

-Yenya

--
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
  This is the world we live in: the way to deal with computers is to google
  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev







Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Dan van der Ster
It's not that simple -- see http://tracker.ceph.com/issues/21672

For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
updated -- so the rpms restart the ceph.target.
What's worse is that this seems to happen before all the new updated
files are in place.

Our 12.2.8 to 12.2.10 upgrade procedure is:

systemctl stop ceph.target
yum update
systemctl start ceph.target
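
If you want to avoid unnecessary data movement while a host's daemons are
down, you can additionally wrap that in noout (assuming you upgrade one host
at a time), e.g.:

ceph osd set noout
systemctl stop ceph.target
yum update
systemctl start ceph.target
# once the OSDs on this host are back up and in:
ceph osd unset noout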

-- Dan

On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich  wrote:
>
> Upgrading Ceph packages does not restart the services -- exactly for
> this reason.
>
> This means there's something broken with your yum setup if the
> services are restarted when only installing the new version.
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:
> >
> > Hello, ceph users,
> >
> > I have a small(-ish) Ceph cluster, where there are osds on each host,
> > and in addition to that, there are mons on the first three hosts.
> > Is it possible to upgrade the cluster to Luminous without service
> > interruption?
> >
> > I have tested that when I run "yum --enablerepo Ceph update" on a
> > mon host, the osds on that host remain down until all three mons
> > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
> > and keep ceph-osd running the old version (Jewel in my case) as long
> > as possible? It seems RPM dependencies forbid this, but with --nodeps
> > it could be done.
> >
> > Is there a supported way to upgrade a host running both a mon and an osd
> > to Luminous?
> >
> > Thanks,
> >
> > -Yenya
> >
> > --
> > | Jan "Yenya" Kasprzak  |
> > | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
> >  This is the world we live in: the way to deal with computers is to google
> >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev


Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Jan Kasprzak
Paul Emmerich wrote:
: Upgrading Ceph packages does not restart the services -- exactly for
: this reason.
: 
: This means there's something broken with your yum setup if the
: services are restarted when only installing the new version.

Interesting. I have verified that I have

CEPH_AUTO_RESTART_ON_UPGRADE=no

in my /etc/sysconfig/ceph, yet my ceph-osd daemons get restarted on upgrade.
I have watched the "ps ax|grep ceph-osd" output during
"yum --enablerepo Ceph update", and it seems the OSDs were restarted
around the time ceph-selinux was upgraded:

  Updating   : 2:ceph-base-12.2.10-0.el7.x86_64  74/248 
  Updating   : 2:ceph-selinux-12.2.10-0.el7.x86_64   75/248
  Updating   : 2:ceph-mon-12.2.10-0.el7.x86_64   76/248 

And indeed, rpm -q --scripts ceph-selinux shows that this package restarts
the whole ceph.target when the labels get changed:

[...]
# Check whether the daemons are running
/usr/bin/systemctl status ceph.target > /dev/null 2>&1
STATUS=$?

# Stop the daemons if they were running
if test $STATUS -eq 0; then
/usr/bin/systemctl stop ceph.target > /dev/null 2>&1
fi
[...]

So maybe ceph-selinux should also honor CEPH_AUTO_RESTART_ON_UPGRADE=no
in /etc/sysconfig/ceph? But I am not sure whether that is possible at all
when the labels have to be changed.
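
Something like the following guard in that scriptlet might be enough (just a
sketch of the idea, not the actual packaging change -- whether skipping the
restart is safe after a relabel is exactly the open question):

# hypothetical addition to the ceph-selinux %post script:
[ -f /etc/sysconfig/ceph ] && . /etc/sysconfig/ceph

# Check whether the daemons are running
/usr/bin/systemctl status ceph.target > /dev/null 2>&1
STATUS=$?

# Stop the daemons only if they were running and the admin allows restarts
if test $STATUS -eq 0 -a "$CEPH_AUTO_RESTART_ON_UPGRADE" = "yes"; then
    /usr/bin/systemctl stop ceph.target > /dev/null 2>&1
fi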

-Yenya

: On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:
: >
: > I have a small(-ish) Ceph cluster, where there are osds on each host,
: > and in addition to that, there are mons on the first three hosts.
: > Is it possible to upgrade the cluster to Luminous without service
: > interruption?
: >
: > I have tested that when I run "yum --enablerepo Ceph update" on a
: > mon host, the osds on that host remain down until all three mons
: > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
: > and keep ceph-osd running the old version (Jewel in my case) as long
: > as possible? It seems RPM dependencies forbid this, but with --nodeps
: > it could be done.
: >
: > Is there a supported way to upgrade a host running both a mon and an osd
: > to Luminous?

-- 
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev


Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Paul Emmerich
Upgrading Ceph packages does not restart the services -- exactly for
this reason.

This means there's something broken with your yum setup if the
services are restarted when only installing the new version.


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, 3 Dec 2018 at 11:56, Jan Kasprzak wrote:
>
> Hello, ceph users,
>
> I have a small(-ish) Ceph cluster, where there are osds on each host,
> and in addition to that, there are mons on the first three hosts.
> Is it possible to upgrade the cluster to Luminous without service
> interruption?
>
> I have tested that when I run "yum --enablerepo Ceph update" on a
> mon host, the osds on that host remain down until all three mons
> are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
> and keep ceph-osd running the old version (Jewel in my case) as long
> as possible? It seems RPM dependencies forbid this, but with --nodeps
> it could be done.
>
> Is there a supported way to upgrade a host running both a mon and an osd
> to Luminous?
>
> Thanks,
>
> -Yenya
>
> --
> | Jan "Yenya" Kasprzak  |
> | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
>  This is the world we live in: the way to deal with computers is to google
>  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev