I filed this bug specifically for hyperconverged environments. Upgrading
monitor nodes first and then the separate OSD nodes is probably doable,
but in a hyperconverged environment the two cannot be separated.

I tried do-release-upgrade (a couple of times) without rebooting at the end,
but found that the monitors and OSDs had been upgraded and were deadlocked.
I tried shutting down all Ceph services first and then running
do-release-upgrade, which started my Ceph services anyway and destroyed my
cluster. I tried manually upgrading Ceph, which is thwarted by the package
dependencies: it's all or nothing.
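For the record, the "shut everything down first" attempt looked roughly like this (a sketch with commands echoed as a dry run, since they need a live cluster; the noout/norebalance flags are the usual precaution so a stopped node does not trigger recovery):

```shell
# Dry-run helper; replace the body with "$@" to actually execute on a node.
run() { echo "$@"; }

run ceph osd set noout           # keep CRUSH from marking stopped OSDs out
run ceph osd set norebalance     # avoid data movement while daemons are down
run systemctl stop ceph.target   # stop the mon/mgr/osd units on this node
```

Even with this done first, the release upgrade brought the daemons back up mid-upgrade.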

I finally accomplished the upgrade by marking all Ceph packages as held,
then working my way through the dependency jungle to upgrade the packages
in the right sequence. This was an absolute nightmare and took me more
than an hour per node. It is obviously not a production-ready procedure,
but at least Ceph Octopus is running on 20.04 now.
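For anyone hitting the same wall, the hold-then-sequence workaround looked roughly like this. A sketch only: the package list below is hypothetical and will differ per install (check yours with dpkg -l), and the apt commands are echoed as a dry run:

```shell
# Hypothetical package list; check yours with: dpkg -l 'ceph*' 'librados*' 'librbd*'
CEPH_PKGS="librados2 librbd1 ceph-common ceph-base ceph-mon ceph-mgr ceph-osd"

# 1. Hold everything so do-release-upgrade cannot touch Ceph.
for p in $CEPH_PKGS; do
    echo "apt-mark hold $p"      # echoed as a dry run; pipe to sh to apply
done

# 2. After the release upgrade, unhold and upgrade one package at a time,
#    libraries first, then mon -> mgr -> osd (the usual Ceph upgrade order).
for p in $CEPH_PKGS; do
    echo "apt-mark unhold $p && apt-get install --only-upgrade $p"
done
```

The painful part is step 2: with the current packaging, apt keeps trying to drag the whole dependency tree along.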

There are two asks here: 

Separate the dependencies so that ceph-mon, ceph-mgr and ceph-osd can be
installed separately (with the appropriate dependencies, but in a way
that upgrading ceph-mon does not also try to upgrade ceph-osd). There is
no good reason why an upgrade of ceph-mon should walk down and back up
the dependency tree and try to upgrade ceph-osd too. In fact, I would not
want monitor packages on my OSD nodes, and vice versa, in a traditional
deployment.

And fix do-release-upgrade, so that a Ceph cluster does not get restarted
when the upgrade procedure ends. I can vouch for the services being
restarted; I tried it several times, once even with the services shut
down before do-release-upgrade was started.
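One possible mitigation, assuming the restarts come from the packages' maintainer scripts: Debian's policy-rc.d hook. invoke-rc.d consults /usr/sbin/policy-rc.d, and exit code 101 means "action forbidden", so maintainer scripts cannot start or restart services while it is in place. This would not help if the upgrader restarts units via systemctl directly, so treat it as a sketch:

```shell
# Create the policy hook in the current directory for illustration; on a
# real node it must be installed as /usr/sbin/policy-rc.d before
# do-release-upgrade is started, and removed again afterwards.
cat > policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x policy-rc.d

rc=0; ./policy-rc.d || rc=$?
echo "policy-rc.d exit code: $rc"   # prints: policy-rc.d exit code: 101
```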

An upgrade procedure that breaks customer data should be fixed.

  ceph-osd can't connect after upgrade to focal
