Hello,
Over the last week I have been trying to optimise the performance of our MDS
nodes for the large number of files and concurrent clients we have. It
turns out that despite various stability fixes in recent releases, the
default configuration still doesn't appear to be optimal for keeping the
cache
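(The message is cut off here. For context, the main knob governing MDS cache size in recent releases is mds_cache_memory_limit; the value below is purely illustrative and the right size depends on available RAM and client count.)

```shell
# Raise the MDS cache memory target (example value: 16 GiB).
# mds_cache_memory_limit is a soft target; the MDS may exceed it somewhat.
ceph config set mds mds_cache_memory_limit 17179869184

# Observe cache behaviour while tuning (run on the MDS host;
# <mds-name> is a placeholder for the daemon name):
ceph daemon mds.<mds-name> cache status
```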
Hi,
I've just upgraded my cluster from Jewel to Nautilus (still running filestore).
Since I had deep scrubs stopped for several days during this upgrade, I now get
the warning "3 pgs not deep-scrubbed in time". I tried increasing
osd_max_scrubs to 3, osd_scrub_load_threshold to 5.0 and
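(A sketch of the settings mentioned above, using Nautilus's centralized config store; the same options can also be placed under [osd] in ceph.conf. The interval value and PG id are placeholders.)

```shell
# Allow more concurrent scrubs per OSD and relax the load threshold:
ceph config set osd osd_max_scrubs 3
ceph config set osd osd_scrub_load_threshold 5.0

# Optionally widen the deep-scrub deadline while the backlog clears
# (default is one week; 1209600 s = 14 days):
ceph config set osd osd_deep_scrub_interval 1209600

# Or kick the overdue PGs manually, one at a time:
ceph pg deep-scrub <pgid>
```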
On Fri, Jan 24, 2020 at 1:43 PM Frank Schilder wrote:
>
> Dear Ilya,
>
> I had exactly the same problem with authentication of cephfs clients on a
> mimic-13.2.2 cluster. The key created with "ceph fs authorize ..." did not
> grant access to the data pool. I ended up adding "rw" access to this