Hello,

I've manually installed a 3-node Ceph cluster to test this fix, similar to
the one we've got in production.

Starting from version 19.2.0~git20240301.4c76c50-0ubuntu6, with "ceph -s"
displaying HEALTH_OK, I used the following commands to upgrade the cluster:

From one of the monitors:
~# ceph osd set noout; ceph osd set norecover; ceph osd set norebalance; \
   ceph osd set nobackfill; ceph osd set nodown; ceph osd set pause
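
For the record, the flags can be double-checked before going any further, for
example:
~# ceph osd dump | grep flags
which should list all six flags (noout, norecover, norebalance, nobackfill,
nodown, pause).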

On each node (each one hosting MON, MGR, OSD and MDS daemons):
- systemctl stop ceph.target

- cat /etc/apt/sources.list.d/ubuntu-proposed.sources
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble-proposed
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

- cat /etc/apt/preferences.d/proposed-updates
# Configure apt to allow selective installs of packages from proposed
Package: *
Pin: release a=noble-proposed
Pin-Priority: -1

- apt update
- apt-mark unhold ceph-base ceph-common ceph-mds ceph-mgr-modules-core ceph-mgr \
  ceph-mon ceph-osd ceph-volume ceph libcephfs2 librados2 libradosstriper1 \
  librbd1 libsqlite3-mod-ceph python3-ceph-argparse python3-ceph-common \
  python3-cephfs python3-rados python3-rbd
- apt-get -s install ceph/noble-proposed libradosstriper1/noble-proposed
  (simulation only; the real install happens below)

At this point, all the nodes have to be in the same state (Ceph stopped and
the system configured with the proposed repository as above).
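
For anyone reproducing this, the state of each node can be double-checked with a
few generic commands (nothing specific to this bug, just the usual checks):
~# systemctl list-units 'ceph*' --state=running
~# apt-mark showhold
~# apt-cache policy ceph
The first should return no running Ceph units, the second no remaining holds on
the Ceph packages, and the third should list the 19.2.1 version from
noble-proposed pinned at -1 (which is why the explicit "/noble-proposed" target
is needed on the install command below).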

- apt-get install ceph/noble-proposed libradosstriper1/noble-proposed
- ceph --version
ceph version 19.2.1 (9efac4a81335940925dd17dbf407bfd6d3860d28) squid (stable)
- ceph osd unset noout; ceph osd unset norecover; ceph osd unset norebalance; \
  ceph osd unset nobackfill; ceph osd unset nodown; ceph osd unset pause
- ceph -s
  cluster:
    id:     319b894e-5d7a-4730-aa8c-964c5517fbbc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ugfsicpd01,ugfsicpd02,ugfsicpd03 (age 35m)
    mgr: ugfsicpd01(active, since 35m), standbys: ugfsicpd02
    mds: 2/2 daemons up, 1 standby
    osd: 3 osds: 3 up (since 35m), 3 in (since 4M)

  data:
    volumes: 2/2 healthy
    pools:   5 pools, 209 pgs
    objects: 581 objects, 2.0 GiB
    usage:   6.1 GiB used, 84 GiB / 90 GiB avail
    pgs:     209 active+clean
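
One additional check that can be useful here, although it is not part of the run
above:
~# ceph versions
reports the running version per daemon type, confirming that all MON/MGR/OSD/MDS
daemons, and not only the client package shown by "ceph --version", are on 19.2.1.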


In the end, the Ceph services were restarted by the package installation, and
"ceph -s" displayed a HEALTH_OK cluster status. I was able to access the
previously created test data on a CephFS volume.
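
For anyone wanting to redo that last check, a minimal way to do it from one of
the nodes (the mount point is a placeholder and the client name/keyring are
assumed to be the defaults already present on the node, so adjust to your setup):
~# ceph fs status
~# mount -t ceph ugfsicpd01:/ /mnt/cephfs-test -o name=admin
~# ls -l /mnt/cephfs-test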

That looks good. I'm going to retest this two or three times, hoping to get the
same results.
Since this fix was published as a newer version (19.2.1, compared to the previous
19.2.0), does this mean that this version will remain compatible with future
package security upgrades, with no regression?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2089565

Title:
  MON and MDS crash upgrading  CEPH  on ubuntu 24.04 LTS
