[ceph-users] v16.2.9 Pacific released

2022-05-18 Thread David Galloway
16.2.9 is a hotfix release to address a bug in 16.2.8 that can cause the MGRs to deadlock. See https://tracker.ceph.com/issues/55687.
Getting Ceph:
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.9.tar.gz
* Containers at
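
(For a cephadm-managed cluster, picking up the hotfix is normally a single orchestrator command; the sketch below is an illustration, not part of the announcement.)

  ceph orch upgrade start --ceph-version 16.2.9   # begin the rolling upgrade
  ceph orch upgrade status                        # check progress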

[ceph-users] Re: osd_disk_thread_ioprio_class deprecated?

2022-05-18 Thread Richard Bade
> See this PR https://github.com/ceph/ceph/pull/19973
> Doing "git log -Sosd_disk_thread_ioprio_class -u src/common/options.cc" in the Ceph source indicates that they were removed in commit 3a331c8be28f59e2b9d952e5b5e864256429d9d5 which first appeared in Mimic.
Thanks Matthew and Josh
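
(For anyone wanting to repeat the archaeology quoted above, the command is run from a checkout of the Ceph source; the clone step is added here for completeness.)

  git clone https://github.com/ceph/ceph.git && cd ceph
  # list commits that added or removed the option string, with the diff
  git log -Sosd_disk_thread_ioprio_class -u src/common/options.cc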

[ceph-users] Re: Options for RADOS client-side write latency monitoring

2022-05-18 Thread stéphane chalansonnet
Hello, In my opinion the better way is to deploy a batch fio pod (with a PVC volume on your Rook Ceph cluster) on your K8s. The IO profile depends on your workload, but you can try 8 KB (the PostgreSQL default) random read/write and sequential. This way you will be as close as possible to the client side. Export to JSON the
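
(A minimal fio invocation along those lines might look like the following; only the 8k random read/write profile and the JSON export come from the message, while the file name, size, read/write mix and runtime are assumptions.)

  fio --name=pgsim --filename=/data/fio.test --size=10G \
      --rw=randrw --rwmixread=70 --bs=8k --ioengine=libaio --iodepth=16 \
      --direct=1 --runtime=300 --time_based \
      --output-format=json --output=/data/fio-result.json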

[ceph-users] Re: S3 and RBD backup

2022-05-18 Thread stéphane chalansonnet
Hello, In fact S3 should be replicated to another region or AZ, and backups should be managed with versioning on the bucket. But in our case, we needed to secure the backup of databases (on K8s) into our external backup solution (EMC Networker). We implemented Ganesha and created an NFS export link
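
(The message is cut off, but an NFS-Ganesha export over the RGW FSAL generally looks something like the block below; the export id, bucket, pseudo path, user and keys are placeholders, not the poster's actual configuration.)

  EXPORT {
      Export_Id = 1;
      Path = "backup-bucket";
      Pseudo = "/backup";
      Access_Type = RW;
      FSAL {
          Name = RGW;
          User_Id = "backup";
          Access_Key_Id = "<access-key>";
          Secret_Access_Key = "<secret-key>";
      }
  }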

[ceph-users] Re: MDS fails to start with error PurgeQueue.cc: 286: FAILED ceph_assert(readable)

2022-05-18 Thread Eugen Block
Hi, I don't know what could cause that error, but could you share more details? You seem to have multiple active MDSs, is that correct? Could they be overloaded? What happened exactly, did one MDS fail or all of them? Do the standby MDSs report anything different? Quoting Kuko Armas:
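
(The details asked for here are usually gathered with the commands below; treat this as a sketch, with <id> standing for whichever MDS daemon is affected.)

  ceph fs status              # ranks, active and standby MDSs, their states
  ceph health detail          # MDS-related warnings, if any
  ceph tell mds.<id> status   # per-daemon state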

[ceph-users] Re: S3 and RBD backup

2022-05-18 Thread Sanjeev Jha
Thanks Janne for the detailed information. We have an RHCS 4.2 non-collocated setup in one DC only. There are a few RBD volumes mapped to a MariaDB database. Also, an S3 endpoint with a bucket is being used to upload objects. No multisite zone has been implemented yet. My requirement is to take
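
(For the RBD side of such a requirement, a snapshot-plus-export flow is the usual building block; pool, image and file names below are placeholders, and this is a sketch rather than advice given in the thread.)

  rbd snap create mypool/mariadb-vol@backup-20220518
  rbd export mypool/mariadb-vol@backup-20220518 /backup/mariadb-vol-20220518.img
  rbd snap rm mypool/mariadb-vol@backup-20220518   # once the export is safely stored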

[ceph-users] Re: Best way to change disk in controller disk without affect cluster

2022-05-18 Thread Anthony D'Atri
First question: why do you want to do this? There are some deployment scenarios in which moving the drives will Just Work, and others in which it won’t. If you try, I suggest shutting the system down all the way, exchanging just two drives, then powering back on — and see if all is well
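
(A common way to make the shutdown-and-swap safe is to pause out-marking first; the noout flag below is an assumption, as the post itself does not name a flag.)

  ceph osd set noout     # keep the down OSDs from being marked out during the swap
  # power the node off, exchange the two drives, power back on
  ceph osd unset noout   # once all OSDs are back up and in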

[ceph-users] Re: osd_disk_thread_ioprio_class deprecated?

2022-05-18 Thread Matthew H
See this PR https://github.com/ceph/ceph/pull/19973
From: Josh Baergen
Sent: Wednesday, May 18, 2022 10:54 AM
To: Richard Bade
Cc: Ceph Users
Subject: [ceph-users] Re: osd_disk_thread_ioprio_class deprecated?
Hi Richard, > Could anyone confirm this? And

[ceph-users] Re: Best way to change disk in controller disk without affect cluster

2022-05-18 Thread Jorge JP
Hello, Do I have to set the same global flag for this operation? Thanks!
From: Stefan Kooman
Sent: Wednesday, May 18, 2022 14:13
To: Jorge JP
Subject: Re: [ceph-users] Best way to change disk in controller disk without affect cluster
On 5/18/22 13:06, Jorge

[ceph-users] Re: MDS upgrade to Quincy

2022-05-18 Thread Patrick Donnelly
Hi Jimmy, On Fri, Apr 22, 2022 at 11:02 AM Jimmy Spets wrote: > > Does cephadm automatically reduce ranks to 1 or does that have to be done > manually? Automatically. -- Patrick Donnelly, Ph.D. He / Him / His Principal Software Engineer Red Hat, Inc. GPG:
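
(For context, the manual step that cephadm automates here is reducing the file system to a single active rank before the MDS upgrade; a sketch, assuming a file system named cephfs.)

  ceph fs set cephfs max_mds 1
  ceph fs status cephfs   # wait until only rank 0 remains active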

[ceph-users] May Ceph Science Virtual User Group

2022-05-18 Thread Kevin Hrpcek
Hey all, We will be having a Ceph science/research/big cluster call on Tuesday May 24th. Please note we're doing this on a Tuesday not the usual Wednesday we've done in the past. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or

[ceph-users] Re: Moving data between two mounts of the same CephFS

2022-05-18 Thread Magnus HAGDORN
Hi Mathias, I have noticed in the past that moving directories within the same mount point can take a very long time using the system mv command. I use a python script to archive old user directories by moving them to a different part of the filesystem which is not exposed to the users. I use the
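
(The message is truncated; a minimal Python sketch of that kind of archive move, assuming both paths sit on the same CephFS mount, is shown below. A plain rename stays within one mount point and does not copy data, which matches the situation described.)

  import os
  import shutil

  src = "/cephfs/home/olduser"
  dst = "/cephfs/archive/olduser"

  try:
      # fast path: rename within the same mount point, no data copy
      os.rename(src, dst)
  except OSError:
      # fall back to copy-and-delete if a plain rename is refused
      shutil.move(src, dst)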

[ceph-users] Moving data between two mounts of the same CephFS

2022-05-18 Thread Kuhring, Mathias
Dear Ceph community, Let's say I want to make different sub-directories of my CephFS separately available on a client system, i.e. without exposing the parent directories (because they contain other sensitive data, for instance). I can simply mount specific different folders, as primitively
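
(Mounting only a sub-directory is done by putting the path after the monitor address for the kernel client, or with -r for ceph-fuse; the hostnames, paths and client name below are placeholders. If the goal is also to stop the client from reading the parent directories at all, path-restricted cephx caps via ceph fs authorize are the usual complement.)

  # kernel client: expose only /projects/a on this machine
  mount -t ceph mon1:6789:/projects/a /mnt/a -o name=clienta,secretfile=/etc/ceph/clienta.secret
  # ceph-fuse equivalent
  ceph-fuse -n client.clienta -r /projects/a /mnt/a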

[ceph-users] Re: Upgrade from v15.2.16 to v16.2.7 not starting

2022-05-18 Thread Eugen Block
Do you see anything suspicious in /var/log/ceph/cephadm.log? Also check the mgr logs for any hints. Quoting Lo Re Giuseppe: Hi, We have happily tested the upgrade from v15.2.16 to v16.2.7 with cephadm on a test cluster made of 3 nodes and everything went smoothly. Today we started
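
(The usual ways to look at those logs on a cephadm cluster; hosts and paths are generic.)

  less /var/log/ceph/cephadm.log   # on the host where the upgrade was started
  ceph -W cephadm                  # stream orchestrator/cephadm events live
  ceph log last cephadm            # recent cephadm log entries held by the mgr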

[ceph-users] MDS fails to start with error PurgeQueue.cc: 286: FAILED ceph_assert(readable)

2022-05-18 Thread Kuko Armas
Hello, I've been having problems with my MDS and they got stuck in the up:replay state. The journal was ok and everything seemed ok, so I reset the journal, and now all MDS fail to start with the following error: 2022-05-18 12:27:40.092 7f8748561700 -1
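
(For reference, the purge queue can be inspected with cephfs-journal-tool by selecting it explicitly; the fs name and rank are placeholders, and any actual recovery steps should follow the CephFS disaster-recovery documentation rather than this sketch.)

  cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal inspect
  cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue header get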

[ceph-users] Re: No rebalance after ceph osd crush unlink

2022-05-18 Thread Dan van der Ster
Hi, It's interesting that crushtool doesn't include the shadow tree -- I am pretty sure that used to be included. I don't suggest editing the crush map, compiling, then re-injecting -- I don't know what it will do in this case. What you could do instead is something like: * ceph osd getcrushmap
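
(The suggestion is cut off mid-list, so the commands below are not a reconstruction of it; they only show the generic dump-and-decompile flow the first sentence refers to, useful for checking whether the decompiled map contains the shadow buckets, whose names contain a tilde.)

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  grep '~' crush.txt || echo "no shadow buckets in the decompiled map"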

[ceph-users] Best way to change disk in controller disk without affect cluster

2022-05-18 Thread Jorge JP
Hello! I have a Ceph cluster with 6 nodes, with 6 HDD disks in each one. The status of my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't have any problems. I want to change the position of various disks in the disk controller of some nodes, and I don't know what the right way to do it is.

[ceph-users] Re: No rebalance after ceph osd crush unlink

2022-05-18 Thread Dan van der Ster
Hi Frank, Did you check the shadow tree (the one with tildes in the name, seen with `ceph osd crush tree --show-shadow`)? Maybe the host was removed in the outer tree, but not the one used for device-type selection. There were bugs in this area before, e.g. https://tracker.ceph.com/issues/48065

[ceph-users] Upgrade from v15.2.16 to v16.2.7 not starting

2022-05-18 Thread Lo Re Giuseppe
Hi, We have happily tested the upgrade from v15.2.16 to v16.2.7 with cephadm on a test cluster made of 3 nodes and everything went smoothly. Today we started the very same operation on the production one (20 OSD servers, 720 HDDs) and the upgrade process doesn’t do anything at all… To be more
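
(The commands usually used to see whether a cephadm upgrade is actually progressing; a generic sketch, not taken from the thread.)

  ceph orch upgrade status
  ceph -s           # shows upgrade progress when one is running
  ceph -W cephadm   # live orchestrator log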