[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Ken Dreyer
How much more time would we need to get PR 50549 in if we delayed v17.2.6?

- Ken

On Thu, Mar 23, 2023 at 2:44 PM Laura Flores wrote:
> We are all good on the Core end of things.
> https://github.com/ceph/ceph/pull/50549 is needed for downstream, but it should not block upstream.
>
> On Thu, Mar

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Laura Flores
I will have a review of the rados suite ready soon.

On Thu, Mar 23, 2023 at 1:44 PM Laura Flores wrote:
> We are all good on the Core end of things.
> https://github.com/ceph/ceph/pull/50549 is needed for downstream, but it should not block upstream.
>
> On Thu, Mar 23, 2023 at 12:59 PM Laura

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Laura Flores
We are all good on the Core end of things. https://github.com/ceph/ceph/pull/50549 is needed for downstream, but it should not block upstream.

On Thu, Mar 23, 2023 at 12:59 PM Laura Flores wrote:
> https://github.com/ceph/ceph/pull/50575 was also merged.
>
> On Thu, Mar 23, 2023 at 12:36 PM

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Laura Flores
https://github.com/ceph/ceph/pull/50575 was also merged.

On Thu, Mar 23, 2023 at 12:36 PM Yuri Weinstein wrote:
> We are still working on core PRs:
>
> https://github.com/ceph/ceph/pull/50549
> https://github.com/ceph/ceph/pull/50625 - merged
> https://github.com/ceph/ceph/pull/50575
>
> Will

[ceph-users] Re: ln: failed to create hard link 'file name': Read-only file system

2023-03-23 Thread Frank Schilder
Hi Xiubo and Gregory,

sorry for the slow reply; I did some more debugging and didn't have much time. First, some questions about collecting logs, but please see also below for reproducing the issue yourselves. I can reproduce it reliably but need some input on these:

> enabling the kclient
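(For anyone wanting to capture kernel-client logs while reproducing: the following is only a sketch of the usual dynamic-debug approach, not necessarily what was meant above; it assumes debugfs is available and the ceph kernel module is loaded.)

$ mount -t debugfs none /sys/kernel/debug                            # skip if already mounted
$ echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control    # enable kclient debug messages
$ dmesg -w                                                           # watch the log while reproducing
$ echo 'module ceph -p' > /sys/kernel/debug/dynamic_debug/control    # turn it off again afterwards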

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Yuri Weinstein
We are still working on core PRs:

https://github.com/ceph/ceph/pull/50549
https://github.com/ceph/ceph/pull/50625 - merged
https://github.com/ceph/ceph/pull/50575

Will update as soon as we are ready for the next steps.

On Thu, Mar 23, 2023 at 10:34 AM Casey Bodley wrote:
>
> On Wed, Mar 22,

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Casey Bodley
On Wed, Mar 22, 2023 at 9:27 AM Casey Bodley wrote:
>
> On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/59070#note-1
> > Release Notes - TBD
> >
> > The reruns were in the queue for 4 days because

[ceph-users] With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-23 Thread Geert Kloosterman
Hi all,

Until Ceph Pacific, installing just the "ceph" package was enough to get everything needed to deploy Ceph. However, with Quincy, ceph-volume was split off into its own package, and it is not automatically installed anymore. Here we can see it is not listed as a dependency:

$ rpm
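(A quick way to confirm this on an RPM-based system; a sketch only, assuming the stock upstream package names:)

$ rpm -q --requires ceph | grep ceph-volume    # prints nothing on Quincy: not a dependency
$ dnf install ceph-volume                      # has to be installed explicitly now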

[ceph-users] Re: Ceph Mgr/Dashboard Python dependencies: a new approach

2023-03-23 Thread Casey Bodley
hi Ernesto and lists,

> [1] https://github.com/ceph/ceph/pull/47501

are we planning to backport this to quincy so we can support centos 9 there? enabling that upgrade path on centos 9 was one of the conditions for dropping centos 8 support in reef, which i'm still keen to do.

if not, can we find

[ceph-users] Re: Unexpected ceph pool creation error with Ceph Quincy

2023-03-23 Thread Geert Kloosterman
Hi,

Thanks again for your input. The value of mon_max_pool_pg_num was at its default. It turns out I had missed a few steps in my earlier effort: after I removed the old default settings for osd_pool_default_pg_num and osd_pool_default_pgp_num from ceph.conf on *all* nodes, and restarted all
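(For reference, a sketch of how the effective values can be checked against the mon config database instead of ceph.conf; assumes a Quincy cluster:)

$ ceph config get mon mon_max_pool_pg_num
$ ceph config get osd osd_pool_default_pg_num
$ ceph config get osd osd_pool_default_pgp_num
$ ceph osd pool create testpool 128 128    # explicit pg_num/pgp_num, bypassing the defaults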

[ceph-users] Re: Almalinux 9

2023-03-23 Thread Dario Graña
I ran some tests in a virtual environment with mons, MDS and OSDs. The OSDs were 3 VMs with 3 disks each. Now I'm testing Ceph Quincy on AlmaLinux 9 without problems in a test environment. I'm using VMs for the mons (3) and MDS (2), but the OSDs (8) are all physical nodes with 24 HDDs. The installation