[ceph-users] Re: Reef: RGW Multisite object fetch limits

2024-05-15 Thread Jayanth Reddy
Hello Community, In addition, we have 3+ Gbps links and the average object size is 200 kilobytes. So the utilization is about 300 Mbps to ~1.8 Gbps and not more than that. We sometimes seem to saturate the link when the secondary zone fetches bigger objects, but the objects per second always seem to
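For context, the quoted link utilization together with the 200 KB average object size implies a fairly modest object rate. A quick back-of-the-envelope calculation using only the figures given above (not new measurements):

    # Rough objects-per-second estimate from the numbers quoted above.
    AVG_OBJECT_BYTES = 200 * 1000          # ~200 kilobytes per object
    for label, gbps in (("low", 0.3), ("high", 1.8)):
        bytes_per_sec = gbps * 1e9 / 8     # convert link utilization to bytes/s
        objs_per_sec = bytes_per_sec / AVG_OBJECT_BYTES
        print(f"{label}: ~{objs_per_sec:,.0f} objects/s at {gbps} Gbps")
    # low: ~188 objects/s at 0.3 Gbps
    # high: ~1,125 objects/s at 1.8 Gbps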

[ceph-users] Please discuss about Slow Peering

2024-05-15 Thread 서민우
Env:
- OS: Ubuntu 20.04
- Ceph Version: Octopus 15.0.0.1
- OSD Disk: 2.9TB NVMe
- Block Storage (Replication 3)
Symptom:
- Peering when an OSD node comes back up is very slow. Peering speed varies from PG to PG, and some PGs may even take 10 seconds, yet there is no log output during those 10 seconds.
- I checked the effect
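To see which PGs are spending time in peering (and whether it is always the same ones), one option is to poll the PG stats. A minimal sketch, assuming `ceph pg dump pgs --format json` is available on a client node; the JSON layout differs slightly between releases, so both common shapes are handled:

    import json
    import subprocess

    # Dump per-PG stats as JSON; exact top-level layout varies by Ceph release.
    raw = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"], text=True
    )
    data = json.loads(raw)

    # Some releases return a bare list, others wrap the list in "pg_stats".
    pg_stats = data if isinstance(data, list) else data.get("pg_stats", [])

    peering = [pg["pgid"] for pg in pg_stats if "peering" in pg.get("state", "")]
    print(f"{len(peering)} PG(s) currently peering: {peering}")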

[ceph-users] Reef: RGW Multisite object fetch limits

2024-05-15 Thread Jayanth Reddy
Hello Community, We have two zones with Reef (v18.2.1) and are trying to sync over 2 billion RGW objects to the secondary zone. We've added a fresh secondary zone, and each zone has 2 dedicated RGW daemons (behind an LB) used only for multisite, whereas the others don't run sync threads. The strange thing is
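Keeping sync off the client-facing daemons is normally done with the rgw_run_sync_thread option. A minimal sketch of applying it with `ceph config set`; the daemon names below are hypothetical stand-ins for the non-sync RGW instances:

    import subprocess

    # Hypothetical names for the client-facing RGW instances that should NOT sync.
    NON_SYNC_RGWS = ["client.rgw.zone2-client-a", "client.rgw.zone2-client-b"]

    for who in NON_SYNC_RGWS:
        # Disable the data/metadata sync threads on this daemon only; the
        # dedicated multisite daemons keep the default (true).
        subprocess.run(
            ["ceph", "config", "set", who, "rgw_run_sync_thread", "false"],
            check=True,
        )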

[ceph-users] Reminder: User + Dev Monthly Meetup rescheduled to May 23rd

2024-05-15 Thread Laura Flores
Hi all, Those of you subscribed to the Ceph Community Calendar may have already gotten an update that the meeting was rescheduled, but I wanted to send a reminder here as well. The meeting will be held at the usual time a week from now on May 23rd. If you haven't already, please take the new

[ceph-users] Re: ceph dashboard reef 18.2.2 radosgw

2024-05-15 Thread Christopher Durham
Pierre, This is indeed the problem. I modified the line in /usr/share/ceph/mgr/dashboard/controllers/rgw.py 'port': int(re.findall(r'port=(\d+)', metadata['frontend_config#0'])[0]) to just be 'port': 443, and all works. I see in the pull request that if it cannot find port= or in one of endpoint
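For reference, a sketch of the kind of fallback parsing the hard-coded 443 works around. This is not the actual dashboard patch, just an illustration of reading the port from frontend_config#0 while tolerating ssl_port= and a missing port= entirely; the 443 default mirrors the manual edit above:

    import re

    def guess_rgw_port(frontend_config: str, default: int = 443) -> int:
        """Best-effort extraction of the RGW port from a frontend config string.

        Handles e.g. 'beast port=8080' and 'beast ssl_port=443 ...', and falls
        back to `default` when neither key is present.
        """
        for key in ("port", "ssl_port"):
            match = re.search(rf"(?:^|\s){key}=(\d+)", frontend_config)
            if match:
                return int(match.group(1))
        return default

    # Illustrative inputs only:
    print(guess_rgw_port("beast port=8080"))                         # 8080
    print(guess_rgw_port("beast ssl_port=443 ssl_certificate=..."))  # 443
    print(guess_rgw_port("beast endpoint=0.0.0.0:8443"))             # falls back to 443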

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-15 Thread Fabien Sirjean
Hi, We have the same issue. It seems to come from this bug: https://access.redhat.com/solutions/6982902 We had to disable root_squash, which of course is a huge issue... Cheers, Fabien

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-15 Thread Nicola Mori
Thank you Bailey, I'll give it a try ASAP. By the way, is this issue with the kernel driver something that will be fixed at some point? If I'm correct, the kernel driver has better performance than FUSE, so I'd like to use it. Cheers, Nicola

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-15 Thread Bailey Allison
Hey Nicola, Try mounting CephFS with FUSE instead of the kernel client; we have seen before that the kernel mount sometimes does not properly support that option while the FUSE mount does. Regards, Bailey
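For completeness, a minimal sketch of mounting with ceph-fuse instead of the kernel client, using the client and fs names from the original post; the monitor address and mount point below are hypothetical:

    import subprocess

    # Client and fs names come from the original post; the monitor address and
    # mount point are hypothetical placeholders.
    subprocess.run(
        [
            "ceph-fuse",
            "-n", "client.wizardfs_rootsquash",   # cephx user
            "-m", "mon1.example.com:6789",        # monitor address (hypothetical)
            "--client_fs", "wizardfs",            # select the CephFS file system
            "/mnt/wizardfs",                      # mount point (hypothetical)
        ],
        check=True,
    )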

[ceph-users] Write issues on CephFS mounted with root_squash

2024-05-15 Thread Nicola Mori
Dear Ceph users, I'm trying to export a CephFS with the root_squash option. This is the client configuration:
client.wizardfs_rootsquash
    key:
    caps: [mds] allow rw fsname=wizardfs root_squash
    caps: [mon] allow r fsname=wizardfs
    caps: [osd] allow rw
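For comparison, caps of this shape are normally generated with `ceph fs authorize` and the root_squash modifier. A sketch using the fs and client names from the post; the "/" path and "rw" permission are just an example:

    import subprocess

    # Create caps equivalent to the ones shown above: rw on the whole file
    # system, with root squashed. Path and permission are illustrative.
    subprocess.run(
        [
            "ceph", "fs", "authorize", "wizardfs",
            "client.wizardfs_rootsquash",
            "/", "rw", "root_squash",
        ],
        check=True,
    )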

[ceph-users] ceph tell mds.0 dirfrag split - syntax of the "frag" argument

2024-05-15 Thread Alexander E. Patrakov
Hello, In the context of https://tracker.ceph.com/issues/64298, I decided to do something manually. In the help output of "ceph tell" for an MDS, I found these possibly useful commands:
dirfrag ls <path>                  : List fragments in directory
dirfrag merge <path> <frag>        : De-fragment directory by path
dirfrag split <path> <frag> <bits> :
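One way to see what the frag strings look like before attempting a split is to list the fragments of an existing directory. A minimal sketch, assuming rank 0 holds the directory and using a hypothetical path:

    import json
    import subprocess

    # List the dirfrags of a directory on MDS rank 0; the returned entries show
    # the fragment identifiers in the form the split/merge commands expect.
    raw = subprocess.check_output(
        ["ceph", "tell", "mds.0", "dirfrag", "ls", "/some/busy/dir"],  # path is hypothetical
        text=True,
    )
    print(json.dumps(json.loads(raw), indent=2))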