[ceph-users] Re: Ceph debian/ubuntu packages build

2022-08-10 Thread David Galloway
Well, it sounds like you ended up with actual .debs, so you should have been able to just run reprepro after installing it. Were you then trying to create a repo? BTW, it may be worthwhile to update the docs to mention that reprepro is a prerequisite.
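For anyone following this thread, a minimal sketch of the repo-creation step being discussed (the repo path, codename, and the location of the built .debs are assumptions, not details from the thread):

```shell
set -e
REPO="${REPO:-$HOME/ceph-repo}"
mkdir -p "$REPO/conf"

# Minimal reprepro distribution config; codename and components are assumptions
cat > "$REPO/conf/distributions" <<'EOF'
Codename: focal
Components: main
Architectures: amd64 source
EOF

# Ingest the .debs produced by the build (placeholder path), if reprepro is available
if command -v reprepro >/dev/null 2>&1 && ls /path/to/build/output/*.deb >/dev/null 2>&1; then
    reprepro -b "$REPO" includedeb focal /path/to/build/output/*.deb
fi
```

After that, the repo tree under `$REPO` can be served over HTTP and added as an apt source.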

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Reed Dier
I will chime in just from my Ubuntu perspective. Comparing previous (LTS) releases of Ceph to Ubuntu, there has typically been a two-release cadence per Ubuntu release:
version: U14 U16 U18 U20 U22 U24
jewel: X X
luminous: X X
mimic: X X
nautilus: X X
octopus: X X
pacific: X X

[ceph-users] Re: Request for Info: bluestore_compression_mode?

2022-08-10 Thread Mark Nelson
On 8/10/22 10:08, Frank Schilder wrote: Hi Mark. I actually had no idea that you needed both the yaml option and the pool option configured. I guess you are referring to ceph-adm deployments, which I'm not using. In the ceph config database, both options must be enabled irrespective of how
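To make the "both must be enabled" point concrete, a hedged sketch of setting both knobs with the stock CLI (the pool name and compression mode are placeholders, not values from the thread):

```shell
# Cluster-wide OSD default: enables the BlueStore compressor
ceph config set osd bluestore_compression_mode aggressive

# Per-pool setting: must also be set for data in that pool to be compressed
ceph osd pool set mypool compression_mode aggressive

# Verify both sides of the configuration
ceph config get osd bluestore_compression_mode
ceph osd pool get mypool compression_mode
```

These commands require admin access to a running cluster, so treat this as a sketch of the shape of the configuration rather than something to paste verbatim.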

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Marc
> The immediate driver is both a switch to newer versions of python, and to newer compilers supporting more C++20 features. But this is known for 'decades', don't you incorporate this in your long-term development planning? It is not like you are making some useless temporary photo or

[ceph-users] Re: Multi-active MDS cache pressure

2022-08-10 Thread Eugen Block
Hi, This thread contains some really insightful information. Thanks Eugen for sharing the explanation by the SUSE team. The doc can definitely be updated with this; it might help a lot of people. Can you help create a tracker for this? I wish to add the info to the doc and push a PR for

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Gregory Farnum
The immediate driver is both a switch to newer versions of python and to newer compilers supporting more C++20 features. More generally, supporting multiple versions of a distribution is a lot of work, and when Reef comes out next year, CentOS 9 will be over a year old. We generally move new

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Konstantin Shalygin
Ken, can you please describe what incompatibilities or dependencies prevent building packages for c8s? It's not obvious from the first message, from the community side. Thanks, k Sent from my iPhone > On 10 Aug 2022, at 20:02, Ken Dreyer wrote: > On Wed, Aug 10, 2022 at 11:35 AM

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Ken Dreyer
On Wed, Aug 10, 2022 at 11:35 AM Konstantin Shalygin wrote: > Hi Ken, > CentOS 8 Stream will continue to receive packages or have some barriers for R? No, starting with Reef, we will no longer build or ship RPMs for CentOS 8 Stream (and debs for Ubuntu Focal) from download.ceph.com. The

[ceph-users] Re: linux distro requirements for reef

2022-08-10 Thread Konstantin Shalygin
Hi Ken, will CentOS 8 Stream continue to receive packages, or will there be some barriers for R? Thanks, k Sent from my iPhone > On 10 Aug 2022, at 18:08, Ken Dreyer wrote: > Hi folks, > In the Ceph Leadership Team meeting today we discussed dropping support for older distros in our Reef

[ceph-users] linux distro requirements for reef

2022-08-10 Thread Ken Dreyer
Hi folks, In the Ceph Leadership Team meeting today we discussed dropping support for older distros in our Reef release. CentOS 9 and Ubuntu Jammy (22.04) have been out for a while. Recent changes in Ceph's main branch will make it easier to minimally require CentOS 9 and Ubuntu Jammy

[ceph-users] Re: 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
Hi! Everything was restarted in the proper order as required by the upgrade plan, and the software was upgraded on all nodes. We are on Ubuntu 18 (all nodes). "ceph versions" output shows everything is on "16.2.9". Thank you! -- Paul Jurco On Wed, Aug 10, 2022 at 5:43 PM Eneko Lacunza wrote: > Hi Paul, > Did you

[ceph-users] 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
Hi, We have two clusters that are similar in number of hosts and disks and about the same age, both on Pacific 16.2.9. Both have a mix of hosts with 1TB and 2TB disks (disk capacities are not mixed on OSD hosts). One of the clusters has had 21 OSD process crashes in the last 7 days; the other has just 3. Full
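As a first triage step for a crash disparity like the one described above, the crash module can summarize what the cluster has recorded (a sketch; it requires admin access to the 'ceph' CLI on the affected cluster, and `<crash-id>` is a placeholder):

```shell
ceph crash ls-new           # crashes not yet acknowledged
ceph crash stat             # summary count of recorded crashes
ceph crash info <crash-id>  # full metadata and backtrace for one crash
ceph crash archive-all      # acknowledge crashes once they have been inspected
```

Comparing the backtraces from `crash info` across the two clusters is usually the quickest way to see whether the segfaults share a single cause.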

[ceph-users] Re: Multi-active MDS cache pressure

2022-08-10 Thread Dhairya Parmar
Hi there, This thread contains some really insightful information. Thanks Eugen for sharing the explanation by the SUSE team. The doc can definitely be updated with this; it might help a lot of people. Can you help create a tracker for this? I wish to add the info to the doc and push a PR

[ceph-users] Re: [Ceph-maintainers] Re: Re: v15.2.17 Octopus released

2022-08-10 Thread Ilya Dryomov
On Wed, Aug 10, 2022 at 3:03 AM Laura Flores wrote: > > Hey Satoru and others, > > Try this link: > https://ceph.io/en/news/blog/2022/v15-2-17-octopus-released/ Note that this release also includes the fix for CVE-2022-0670 [1] (same as in v16.2.10 and v17.2.2 hotfix releases). I have updated

[ceph-users] Re: ceph debian/ubuntu packages

2022-08-10 Thread Marc
find / -mtime -1 ? > I have a naive question: after I run ./make-debs.sh to build debian/ubuntu packages, where can I find those generated artifacts?
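Marc's one-liner can be narrowed a bit: a sketch that looks for freshly written package artifacts by name instead of scanning all of `/` (the search roots are guesses, since make-debs.sh writes into its working tree by default):

```shell
# Look for .deb and .changes files written in the last hour under likely locations
find . /tmp -name '*.deb' -mmin -60 2>/dev/null
find . /tmp -name '*.changes' -mmin -60 2>/dev/null
```

The `.changes` file is a convenient anchor, since it lists every binary package produced by the build.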

[ceph-users] Re: CephFS: permissions of the .snap directory do not inherit ACLs

2022-08-10 Thread Robert Sander
On 09.08.22 at 22:31, Patrick Donnelly wrote: It sounds like a bug. Please create a tracker ticket with details about your environment and an example. Just created https://tracker.ceph.com/issues/57084 Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin
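For anyone trying to reproduce the issue before the tracker is triaged, a sketch along these lines shows the mismatch (the mount point, user, and directory names are hypothetical):

```shell
# Set a default ACL on a live CephFS directory (hypothetical mount and user)
setfacl -m d:u:alice:rwx /mnt/cephfs/data

# Take a snapshot by creating a directory under .snap
mkdir /mnt/cephfs/data/.snap/snap1

# Compare the two: per the report, the snapshot side does not carry the ACLs
getfacl /mnt/cephfs/data
getfacl /mnt/cephfs/data/.snap/snap1
```

This needs a mounted CephFS with snapshots enabled, so it is a reproduction sketch rather than a standalone test.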

[ceph-users] Re: ceph drops privilege before creating /var/run/ceph

2022-08-10 Thread Marc
> I've built a ceph container image based on ubuntu and used rook to install ceph in my GKE cluster, but I found in the ceph-mon log that the run-dir is not created: > warning: unable to create /var/run/ceph: (13) Permission denied > debug 2022-08-05T00:38:06.472+ 7f0960c2c540 -1
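A common workaround sketch for this class of failure is to pre-create the run directory with the right ownership before the daemon drops privileges, e.g. in the image build or an init container (uid/gid 167 is the usual 'ceph' user in the packages; treating that as an assumption here):

```shell
# Create /var/run/ceph owned by the unprivileged ceph user ahead of daemon start
install -d -m 0770 -o 167 -g 167 /var/run/ceph
```

Run as root before the daemon starts; once the directory exists with the right owner, the privilege drop no longer matters for socket creation.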