> On 12.02.2019 at 00:03, Patrick Donnelly <pdonn...@redhat.com> wrote:
>
> On Mon, Feb 11, 2019 at 12:10 PM Götz Reinicke
> <goetz.reini...@filmakademie.de> wrote:
>> As 12.2.11 has been out for some days and no panic mails have shown up on the
>> list, I was planning to update too.
>>
>> I know there are recommended orders in which to update/upgrade the cluster,
>> but I don't know how rpm packages handle restarting services after a
>> yum update, e.g. when MDS and MONs are on the same server.
>
> This should be fine. The MDS only uses a new executable file if you
> explicitly restart it via systemd (or the MDS fails and systemd
> restarts it).
>
> More info: when the MDS respawns under normal circumstances, it passes
> /proc/self/exe to execve. An intended side effect is that the
> MDS continues using the same executable file across execs.
>
>> And regarding an MDS cluster, I would like to ask whether the upgrade
>> instructions about running only one MDS during an upgrade also apply to a
>> point update.
>>
>> http://docs.ceph.com/docs/mimic/cephfs/upgrading/
>
> If you upgrade an MDS, it may update the compatibility bits in the
> Monitor's MDSMap. Other MDSs will abort when they see this change. The
> upgrade process is intended to help you avoid seeing those errors so you
> don't inadvertently think something went wrong.
>
> If you don't mind seeing those errors and you're using 1 active MDS,
> then don't worry about it.
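For anyone curious, the mechanism Patrick describes can be seen outside Ceph: on Linux, a running process keeps its original executable file open, so replacing or deleting the file on disk (as an rpm upgrade does) does not change what the running daemon executes, and /proc/<pid>/exe still resolves to the original file. Here is a minimal, Linux-only sketch; the copied `sleep` binary just stands in for a daemon like ceph-mds, and the paths are illustrative:

```python
import os
import shutil
import subprocess
import tempfile

# Copy a long-running binary to a scratch path, as a stand-in daemon.
tmpdir = tempfile.mkdtemp()
copy = os.path.join(tmpdir, "sleep-copy")
shutil.copy(shutil.which("sleep"), copy)
os.chmod(copy, 0o755)

# Start the "daemon" ...
proc = subprocess.Popen([copy, "30"])

# ... then remove its binary, simulating a package upgrade replacing it.
os.remove(copy)

# /proc/<pid>/exe still points at the original (now deleted) file, which
# the kernel keeps open; an execve of /proc/self/exe from inside the
# process would therefore reuse the same binary, not the upgraded one.
link = os.readlink(f"/proc/{proc.pid}/exe")
print(link)  # e.g. "/tmp/.../sleep-copy (deleted)"

proc.terminate()
proc.wait()
shutil.rmtree(tmpdir)
```

This is why an unrestarted MDS keeps running the old version after a yum update until it is explicitly restarted via systemd.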
Thanks for your feedback and clarification! I have one active MDS and one standby, both on the same version. So I might see some errors during the upgrade, but I don't have to stop the standby MDS? Or should I stop the standby to be safe?

Thanks if you can comment on that.

Regards, Götz
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com