[ceph-users] Exporting CephFS using Samba preferred method

2021-04-13 Thread Martin Palma
Hello, what is the currently preferred method, in terms of stability and performance, for exporting a CephFS directory with Samba?
- locally mount the CephFS directory and export it via Samba?
- use the "vfs_ceph" module of Samba?
Best, Martin
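
For reference, a minimal smb.conf sketch for the vfs_ceph approach; the share name, path and CephX user are placeholders and would need to match the local setup:

    [cephfs]
        path = /volumes/share1                 ; path inside CephFS, not a local mount
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba                   ; CephX user with caps for that subtree
        read only = no
        kernel share modes = no                ; files are not opened through the local kernel

The locally-mounted alternative is just an ordinary path export of the kernel or FUSE mount point, without the "vfs objects = ceph" line.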

[ceph-users] Re: As the cluster is filling up, write performance decreases

2021-04-13 Thread Dylan McCulloch
We noticed this degraded write performance recently too when the nearfull flag is present (CephFS kernel client, kernel 4.19.154). It appears to be due to forced synchronous writes when nearfull. https://github.com/ceph/ceph-client/blob/558b4510f622a3d96cf9d95050a04e7793d343c7/fs/ceph/file.c#L1837-L1
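
A quick way to confirm that the slowdown lines up with the nearfull condition is to check the flag and per-OSD utilisation from the cluster side; these are standard CLI calls, and 0.85 in the output is the default threshold:

    ceph health detail | grep -i nearfull    # which OSDs/pools are nearfull
    ceph osd df tree                         # per-OSD utilisation
    ceph osd dump | grep ratio               # nearfull_ratio / backfillfull_ratio / full_ratio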

[ceph-users] Swift Stat Timeout

2021-04-13 Thread Dylan Griff
Hey folks! We have a user with ~1900 buckets in our RGW service, and running this stat command results in a timeout for them: swift -A https://:443/auth/1.0 -U -K stat Running the same command but specifying one of their buckets returns promptly. Running the command for a different user wi
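
One way to narrow this down is to time the account-level stat (which has to aggregate all ~1900 buckets) against a single-bucket stat, and to refresh the cached per-user stats on the RGW side; hostname, credentials and bucket name below are placeholders:

    time swift -A https://rgw.example.com:443/auth/1.0 -U acct:user -K secret stat
    time swift -A https://rgw.example.com:443/auth/1.0 -U acct:user -K secret stat somebucket
    radosgw-admin user stats --uid=user --sync-stats    # recompute/refresh cached user stats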

[ceph-users] Re: As the cluster is filling up, write performance decreases

2021-04-13 Thread Nathan Fish
It might be more accurate to say that the default nearfull is 85% for that reason, among others. Raising it will probably not get you enough storage to be worth the hassle. On Tue, Apr 13, 2021 at 7:18 AM zp_8483 wrote: > > Backend: > > XFS for the filestore back-end. > > > In our testing, we fou
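
If the ratio is raised anyway, it can be changed at runtime; 0.90 and 0.92 below are example values only, and the nearfull ratio should stay below the backfillfull and full ratios:

    ceph osd set-nearfull-ratio 0.90
    ceph osd set-backfillfull-ratio 0.92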

[ceph-users] Re: ceph rgw why are reads faster for larger than 64kb object size

2021-04-13 Thread Ronnie Puthukkeril
After much digging, I figured out this was due to Nagle's algorithm and the fact that I had the COSBench services on the same host as the RGW daemons. The fix was to disable Nagle's algorithm by using the option tcp_nodelay=1 in the rgw_frontends configuration. This option works for both Civetweb and Beast
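
For reference, a sketch of where that option goes; the daemon name and port are placeholders, and the frontend string must otherwise match what is already configured:

    # ceph.conf
    [client.rgw.gateway1]
        rgw_frontends = beast port=8080 tcp_nodelay=1

    # or via the config database (Nautilus and later)
    ceph config set client.rgw.gateway1 rgw_frontends "beast port=8080 tcp_nodelay=1"

The RGW daemon needs a restart for new frontend options to take effect.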

[ceph-users] How to disable ceph-grafana during cephadm bootstrap

2021-04-13 Thread mabi
Hello, when bootstrapping a new Ceph Octopus cluster with "cephadm bootstrap", how can I tell cephadm bootstrap NOT to install the ceph-grafana container? Thank you very much in advance for your answer. Best regards, Mabi
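
There is no bootstrap flag for Grafana alone as far as I know, but the whole monitoring stack (Prometheus, Alertmanager, Grafana, node-exporter) can be skipped at bootstrap and the wanted pieces re-added afterwards; the MON IP is a placeholder:

    cephadm bootstrap --mon-ip 192.0.2.10 --skip-monitoring-stack

    # later, deploy only the services you do want, e.g.:
    ceph orch apply prometheus
    ceph orch apply alertmanager

Alternatively, a cluster bootstrapped with the defaults can drop just the Grafana service afterwards with "ceph orch rm grafana".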

[ceph-users] Announcing go-ceph v0.9.0

2021-04-13 Thread John Mulligan
I'm happy to announce another release of the go-ceph API bindings. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.9.0 Changes in the release are detailed in the link above. The bindings aim to play a similar role to the
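
Pulling the new tag into a Go module is the usual one-liner; note that go-ceph uses cgo, so building it requires the librados/librbd/libcephfs development headers to be installed:

    go get github.com/ceph/go-ceph@v0.9.0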

[ceph-users] Revisit Large OMAP Objects

2021-04-13 Thread DHilsbos
All; We run two Nautilus clusters with RADOSGW replication (14.2.11 --> 14.2.16). Initially our bucket grew very quickly, as I was loading old data into it, and we quickly ran into Large OMAP Object warnings. I have since done a couple of manual reshards, which fixed the warning on the primary
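
For reference, the usual manual-reshard workflow on Nautilus (where, as far as I know, dynamic resharding is not available with multisite); the bucket name and shard count below are placeholders:

    radosgw-admin bucket limit check          # per-bucket num_shards and objects-per-shard fill status
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=101

The usual rule of thumb is roughly one shard per 100k objects, rounded up to a prime number.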

[ceph-users] Re: BADAUTHORIZER in Nautilus, unknown PGs, slow peering, very slow client I/O

2021-04-13 Thread Nico Schottelius
Following up on my own message with some notes on what helped: finding OSDs with excessive bad-authorizer log messages, killing them, and restarting them. In many cases this cleared the unknown PGs and restored more normal I/O. However, some OSDs continued to log a high volume of these messages for some more hours even after resta
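
A sketch of the kind of loop used to find and bounce the noisiest OSDs; the log path, grep pattern and OSD id are assumptions based on the description above, and a systemd-managed (non-containerised) deployment is assumed:

    # count bad-authorizer messages per OSD log and list the worst offenders
    grep -c BADAUTHORIZER /var/log/ceph/ceph-osd.*.log | sort -t: -k2 -rn | head

    # restart a suspect OSD
    systemctl restart ceph-osd@12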

[ceph-users] Re: has anyone enabled bdev_enable_discard?

2021-04-13 Thread Dan van der Ster
On Tue, Apr 13, 2021 at 12:35 PM Mark Nelson wrote: > > On 4/13/21 4:07 AM, Dan van der Ster wrote: > > > On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote: > >> > >> > >> On 4/12/21 5:46 PM, Dan van der Ster wrote: > >>> Hi all, > >>> > >>> bdev_enable_discard has been in ceph for several

[ceph-users] As the cluster is filling up, write performance decreases

2021-04-13 Thread zp_8483
Backend: XFS for the filestore back-end. In our testing, we found that performance decreases when cluster usage exceeds the default nearfull ratio (85%). Is this by design?
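
As a side note, since the question mentions filestore specifically, it is easy to confirm which object store the OSDs are actually running; osd.0 below is just an example:

    ceph osd metadata 0 | grep osd_objectstore     # "filestore" or "bluestore" for one OSD
    ceph osd count-metadata osd_objectstore        # summary across all OSDs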

[ceph-users] Re: has anyone enabled bdev_enable_discard?

2021-04-13 Thread Mark Nelson
On 4/13/21 4:07 AM, Dan van der Ster wrote: On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote: On 4/12/21 5:46 PM, Dan van der Ster wrote: Hi all, bdev_enable_discard has been in ceph for several major releases now but it is still off by default. Did anyone try it recently -- is it

[ceph-users] Enable Dashboard Active Alerts

2021-04-13 Thread E Taka
Hi, this is documented with many links to other documents, which unfortunately only confused me. In our six-node Ceph cluster (Pacific) the Dashboard tells me that I should "provide the URL to the API of Prometheus' Alertmanager". We only use Grafana and Prometheus, which are deployed by cephadm. We
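
The URL the dashboard is asking for can be set from the CLI; the hostname below is a placeholder, and 9093/9095 are the ports cephadm uses by default for Alertmanager and Prometheus (they may differ in your deployment):

    ceph dashboard set-alertmanager-api-host 'http://mon1.example.com:9093'
    ceph dashboard set-prometheus-api-host 'http://mon1.example.com:9095'
    ceph dashboard get-alertmanager-api-host    # verify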

[ceph-users] Re: has anyone enabled bdev_enable_discard?

2021-04-13 Thread Dan van der Ster
On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote: > > > > On 4/12/21 5:46 PM, Dan van der Ster wrote: > > Hi all, > > > > bdev_enable_discard has been in ceph for several major releases now > > but it is still off by default. > > Did anyone try it recently -- is it safe to use? And do you

[ceph-users] Re: has anyone enabled bdev_enable_discard?

2021-04-13 Thread Wido den Hollander
On 4/12/21 5:46 PM, Dan van der Ster wrote: > Hi all, > > bdev_enable_discard has been in ceph for several major releases now > but it is still off by default. > Did anyone try it recently -- is it safe to use? And do you have perf > numbers before and after enabling? > I have done so on SATA
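
For anyone who wants to experiment with it, the option can be toggled in the config database; this only shows the mechanics of enabling it, not a claim that it is safe or beneficial on a given device, and the OSDs need a restart for it to take effect:

    ceph config set osd bdev_enable_discard true
    ceph config set osd/class:ssd bdev_enable_discard true   # or limit it to one device class
    ceph config get osd.0 bdev_enable_discard                # verify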