Hello Matthew and Chris,

I've replied to the questions from the original message below. To wit:

- All of our Ceph charms will be tested. These are: ceph-mon, ceph-osd,
ceph-radosgw, ceph-fs, ceph-nfs, ceph-nvme, ceph-proxy, ceph-rbd-mirror,
ceph-dashboard.

- Our test suite includes upgrade testing from the current stable
version (19.2.1) to the proposed one (19.2.3); a sketch of such a check
follows this list. We also perform integration testing with the
OpenStack charms.

- We will test this manually: deploy a Ceph cluster with 19.2.3, enable
tracing and make sure it works, then disable it and make sure no more
tracing output is produced and that everything still works as normal
(tracing is toggled via a simple config option; see the sketch below).

- It's not possible to test this reliably, as the data loss occurred
under very hard-to-reproduce circumstances. We can leave radosgw running
for a while and verify that no data was lost (see the write-then-reread
sketch below), but that would not prove the bug is no longer present
(FWIW, the ceph-radosgw tests already do something along these lines).
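
On the upgrade point, here is a minimal sketch of how the check can be
driven, assuming a Juju-deployed cluster where the charms' `source`
option selects the archive the Ceph packages come from. The
"distro-proposed" value is a placeholder for the real pocket or
cloud-archive string, and waiting for the charms to settle is left out:

  #!/usr/bin/env python3
  """Sketch: move a Juju-deployed Ceph cluster to the proposed packages
  (19.2.3) and confirm the running daemons pick them up."""

  import subprocess

  PROPOSED_SOURCE = "distro-proposed"   # placeholder pocket/UCA string
  EXPECTED_VERSION = "19.2.3"


  def juju(*args: str) -> str:
      """Run a juju CLI command and return its stdout."""
      return subprocess.run(["juju", *args], check=True,
                            capture_output=True, text=True).stdout


  def upgrade_and_verify() -> None:
      # Switch the Ceph charms to the proposed archive; the charms then
      # upgrade the Ceph packages on their units.
      for app in ("ceph-mon", "ceph-osd", "ceph-radosgw"):
          juju("config", app, f"source={PROPOSED_SOURCE}")

      # (waiting for the model to settle after the config change is
      # omitted in this sketch)

      # Ask the cluster which versions its daemons actually run; a
      # stricter check would parse the JSON and assert no 19.2.1 remains.
      out = juju("ssh", "ceph-mon/0", "--", "sudo", "ceph", "versions")
      assert EXPECTED_VERSION in out, out
      print("cluster reports", EXPECTED_VERSION)


  if __name__ == "__main__":
      upgrade_and_verify()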
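
On the tracing point, a rough sketch of the manual toggle test. The
`enable-tracing` option name is hypothetical (the real charm option may
differ), and traces_observed() stands in for whatever site-specific
check confirms spans are, or are not, being emitted by the tracing
backend in use:

  #!/usr/bin/env python3
  """Sketch: enable tracing, confirm output appears and the cluster is
  healthy; disable it, confirm output stops and the cluster stays healthy."""

  import subprocess


  def juju(*args: str) -> str:
      """Run a juju CLI command and return its stdout."""
      return subprocess.run(["juju", *args], check=True,
                            capture_output=True, text=True).stdout


  def cluster_healthy() -> bool:
      # `ceph health` on a mon unit tells us the cluster still behaves
      # as normal.
      return "HEALTH_OK" in juju("ssh", "ceph-mon/0", "--",
                                 "sudo", "ceph", "health")


  def traces_observed() -> bool:
      # Placeholder: query the tracing backend for recently emitted spans.
      raise NotImplementedError("site-specific check of the tracing backend")


  def check_tracing_toggle() -> None:
      # Enable tracing via the (hypothetical) charm option.
      juju("config", "ceph-mon", "enable-tracing=true")
      assert traces_observed(), "expected trace output with tracing enabled"
      assert cluster_healthy()

      # Disable it again and confirm no further traces are produced.
      juju("config", "ceph-mon", "enable-tracing=false")
      assert not traces_observed(), "trace output still produced after disabling"
      assert cluster_healthy()


  if __name__ == "__main__":
      check_tracing_toggle()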
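
On the radosgw point, a sketch of the write-then-reread soak check
mentioned above, using boto3 against radosgw's S3-compatible API. The
endpoint URL, credentials, object count and soak time are all
placeholders, and as said above a pass only shows nothing was lost
during the window:

  #!/usr/bin/env python3
  """Sketch: store objects with known checksums, let the cluster run for
  a while, then read everything back and compare."""

  import hashlib
  import os
  import time

  import boto3  # S3 client used to talk to radosgw

  ENDPOINT = "http://radosgw.example:80"        # placeholder endpoint
  s3 = boto3.client("s3", endpoint_url=ENDPOINT,
                    aws_access_key_id="ACCESS",      # placeholder creds
                    aws_secret_access_key="SECRET")

  BUCKET = "sru-verify"
  OBJECTS = 100
  SOAK_SECONDS = 3600  # how long to leave radosgw running before re-reading


  def write_objects() -> dict[str, str]:
      """Upload random objects and remember their SHA-256 digests."""
      s3.create_bucket(Bucket=BUCKET)
      digests = {}
      for i in range(OBJECTS):
          body = os.urandom(1 << 20)  # 1 MiB of random data
          key = f"obj-{i:04d}"
          s3.put_object(Bucket=BUCKET, Key=key, Body=body)
          digests[key] = hashlib.sha256(body).hexdigest()
      return digests


  def verify_objects(digests: dict[str, str]) -> None:
      """Read every object back and compare against the stored digest."""
      for key, expected in digests.items():
          body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
          assert hashlib.sha256(body).hexdigest() == expected, \
              f"{key} changed or lost"
      print(f"{len(digests)} objects intact")


  if __name__ == "__main__":
      digests = write_objects()
      time.sleep(SOAK_SECONDS)  # leave radosgw running for a while
      verify_objects(digests)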

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2119024

Title:
  [SRU] Squid: Ceph new point release 19.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2119024/+subscriptions

