[ceph-users] does the RBD client block write when the Watcher times out?

2024-05-22 Thread Yuma Ogami
Hello. I'm currently verifying the behavior of RBD on failure. I'm wondering about the consistency of RBD images after network failures. As a result of my investigation, I found that RBD sets a Watcher on the RBD image when a client mounts the volume, to prevent multiple mounts. In addition, I found
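
For anyone reproducing this, the watcher can be inspected from the CLI; a minimal sketch, with hypothetical pool/image names:

  # List clients currently watching the image (shown under "Watchers"):
  rbd status mypool/myimage

  # Lower-level equivalent against the image header object,
  # where <id> is the image id reported by 'rbd info':
  rados -p mypool listwatchers rbd_header.<id>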

[ceph-users] Re: cephfs-data-scan orphan objects while mds active?

2024-05-22 Thread Olli Rajala
Hmm... seems I might have been blinded and looking in the wrong place. I did some scripting and took a look at all the *. objects' "parent" xattrs on the pool. Nothing funky there and no files with a backtrace pointing to that deleted folder. No considerable amount of these inode object
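
For reference, the per-object check described above can be scripted with rados and ceph-dencoder; a minimal sketch, assuming a data pool named cephfs_data and a hypothetical object name:

  # Fetch the raw "parent" backtrace xattr of one data-pool object:
  rados -p cephfs_data getxattr 10000000000.00000000 parent > parent.bin
  # Decode it to JSON to see the ancestor dentries it points at:
  ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json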

[ceph-users] User + Dev Meetup Tomorrow!

2024-05-22 Thread Laura Flores
Hi all, The User + Dev Meetup will be held tomorrow at 10:00 AM EDT. We will be discussing the results of the latest survey, and users who attend will have the opportunity to provide additional feedback in real time. See you there! Laura Flores Meeting Details:

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-22 Thread Matthew Vernon
Hi, On 22/05/2024 12:44, Eugen Block wrote: you can specify the entire tree in the location statement, if you need to: [snip] Brilliant, that's just the ticket, thank you :) This should be made a bit clearer in the docs [0]; I added Zac. I've opened an MR to update the docs, I hope it's

[ceph-users] Re: Reef RGWs stop processing requests

2024-05-22 Thread Enrico Bocchi
Hi Iain, Can you check if it relates to this? -- https://tracker.ceph.com/issues/63373 There is a bug in bulk object deletion that can cause the RGWs to deadlock. Cheers, Enrico On 5/17/24 11:24, Iain Stott wrote: Hi, We are running 3 clusters in multisite. All 3 were running Quincy 17.2.6

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-22 Thread Eugen Block
Hi, you can specify the entire tree in the location statement, if you need to:

ceph:~ # cat host-spec.yaml
service_type: host
hostname: ceph
addr:
location:
  root: default
  rack: rack2

and after the bootstrap it looks as expected:

ceph:~ # ceph osd tree
ID CLASS WEIGHT TYPE NAME
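
For context, such a spec can be handed to cephadm at bootstrap time so the host is placed in the right CRUSH location from the start; a minimal sketch, with a placeholder monitor IP:

  # Apply the host spec (including its CRUSH location) during bootstrap:
  cephadm bootstrap --mon-ip 192.0.2.10 --apply-spec host-spec.yaml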

[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Frank Schilder
Hi Stefan, ahh OK, misunderstood your e-mail. It sounded like it was a custom profile, not a standard one shipped with tuned. Thanks for the clarification! = Frank Schilder, AIT Risø Campus, Bygning 109, rum S14

[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Stefan Bauer
Hi Frank, it's pretty straightforward. Just follow the steps:

apt install tuned
tuned-adm profile network-latency

According to [1]: network-latency - A server profile focused on lowering network latency. This profile favors performance over power savings by setting intel_pstate and
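
To double-check that the profile actually took effect, tuned-adm ships verification subcommands:

  # Show the currently active profile:
  tuned-adm active
  # Compare current system settings against the profile:
  tuned-adm verify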

[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Frank Schilder
Hi Stefan, can you provide a link to, or a copy of, the contents of the tuned profile so others can also profit from it? Thanks! = Frank Schilder, AIT Risø Campus, Bygning 109, rum S14

[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Stefan Bauer
Hi Anthony and others, thank you for your reply. To be honest, I'm not even looking for a solution, I just wanted to ask if latency affects performance at all in my case and how others handle this ;) One of our partners delivered a solution with a latency-optimized profile for
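
If the goal is just to judge whether latency matters in a given setup, a baseline measurement of node-to-node round-trip time before and after tuning already tells a lot; the hostname below is a placeholder:

  # 100 pings, quiet mode; the summary reports min/avg/max/mdev RTT:
  ping -c 100 -q osd-node-01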

[ceph-users] Re: CephFS as Offline Storage

2024-05-22 Thread Joachim Kraftmayer
I have installed multiple one-node Ceph clusters with CephFS for non-production workloads over the last few years. Had no major issues, apart from e.g. one broken HDD. The question is what kind of EC or replication you will use. Also, I only powered off the node in a clean and healthy state ;-) What
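
On the EC/replication question for a one-node cluster: the default CRUSH rule uses the host as failure domain and therefore cannot place multiple copies on a single host, so an OSD-level rule is needed. A minimal sketch, with hypothetical pool/profile names:

  # Replicated rule that spreads copies across OSDs instead of hosts:
  ceph osd crush rule create-replicated replicated-osd default osd
  ceph osd pool create cephfs_data 64 64 replicated replicated-osd

  # Or, for erasure coding, set the failure domain in the EC profile:
  ceph osd erasure-code-profile set ec-osd k=4 m=2 crush-failure-domain=osd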