Hi Burkhard, hi list,
I checked the 'mds_max_caps_per_client' setting and it turned out that it was set to the default value of 1 million. The 'mds_cache_memory_limit' setting, however, I had previously set to 40GB.
Given this, I now started to play around with the max_caps and set
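For reference, in case it helps others following along: on a cluster with centralized configuration (Nautilus or later, an assumption on my part, and 500000 is only an example value), checking and changing these settings might look something like this:

  # show the current values for the mds section
  ceph config get mds mds_max_caps_per_client
  ceph config get mds mds_cache_memory_limit

  # persist a new limit for all MDS daemons
  ceph config set mds mds_max_caps_per_client 500000

  # or apply it to the running MDS daemons at runtime
  ceph tell mds.* config set mds_max_caps_per_client 500000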
Hi Burkhard,
thanks so much for the quick reply and the explanation and suggestions.
I'll check these settings, change them if needed, and report back.
Best
Dietmar
On 1/18/21 6:00 PM, Burkhard Linke wrote:
Hi,
On 1/18/21 5:46 PM, Dietmar Rieder wrote:
Hi all,
we noticed a massive drop in requests per second a cephfs client is able
to perform when we do a recursive chown over a directory with millions
of files. As soon as we see about 170k caps on the MDS, the client
performance drops from about 660 reqs/sec to 70 reqs/sec.
When we then
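A quick sketch of how the numbers above can be observed, assuming mds.<name> stands in for the actual MDS daemon name:

  # per-client cap counts (the num_caps field of each session)
  ceph tell mds.<name> session ls

  # request rate and cache figures per MDS rank
  ceph fs status

  # live per-second counters, run on the host where the MDS runs
  ceph daemonperf mds.<name>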
Hello Cephers,
On a new cluster, I only have 2 RBD block images, and the Dashboard doesn't manage to list them correctly.
I get this message:
Warning
Displaying previously cached data for pool veeam-repos.
Sometimes it disappears, but as soon as I reload or return to the listing page,
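While the dashboard is misbehaving, the images can still be cross-checked with the rbd CLI (the pool name is taken from the warning above):

  rbd ls -l veeam-repos      # name, size, parent, format and lock status per image
  rbd du --pool veeam-repos  # provisioned vs. used space per image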
Hi,
Is there any reason why RBD image size isn't exported in the Prometheus
module?
Thanks.
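Not an answer to the why, but in case it is useful: what the module currently exposes for RBD can be checked directly, assuming a Nautilus-or-later mgr with the prometheus module enabled, <pool> standing in for a real pool name and <mgr-host> for the active mgr:

  # per-image IO statistics are only gathered for explicitly listed pools
  ceph config set mgr mgr/prometheus/rbd_stats_pools "<pool>"

  # list the RBD metrics the module exports (default port 9283)
  curl -s http://<mgr-host>:9283/metrics | grep '^ceph_rbd_'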
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io