Thank you so much for your insights, Eugen! We will definitely apply this
method next time. :-)
Best Regards,
Mary
On Sat, Apr 27, 2024 at 1:29 AM Eugen Block wrote:
> If the rest of the cluster is healthy and your resiliency is
> configured properly, for example to sustain the loss of one or
Actually should I be excluding my whole cephfs filesystem? Like, if I
mount it as /cephfs, should my stanza look something like:
{
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true,
"**/.cache/**":
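For reference, a complete version of that stanza might look like the following. This is only my completion of the truncated snippet above (the final `true` and the closing braces are assumptions based on the pattern of the other entries):

```jsonc
{
    // Keep VS Code's file watcher from scanning heavyweight trees,
    // which is especially costly on a network filesystem like CephFS.
    "files.watcherExclude": {
        "**/.git/objects/**": true,
        "**/.git/subtree-cache/**": true,
        "**/node_modules/*/**": true,
        "**/.cache/**": true
    }
}
```

As I understand it, these glob patterns are matched against paths inside each opened workspace folder, so they should apply wherever users open workspaces under /cephfs.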
Hi folks!
Thanks for a great Ceph Day event in NYC! I wanted to make sure I posted
my slides before I forget (and encourage others to do the same). Feel
free to reach out in the Ceph Slack
https://ceph.io/en/community/connect/
How we Operate Ceph at Scale (DigitalOcean):
-
I don't know why, but my replies keep ending up outside the topic they belong
to. Moderators, please delete the unnecessary topics and move my answer to the
correct one.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Colleagues, thank you for the advice to check the operability of the MGRs. It
is also strange: we checked our nodes for network issues (IP connectivity,
sockets, ACLs, DNS) and found nothing wrong, yet simply restarting all the
MGRs solved the problem with stale PGs and with ceph
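For anyone hitting the same symptom, this is roughly the restart sequence we used. It is only a sketch; `ceph orch restart` assumes a cephadm-managed cluster, while `ceph mgr fail` should work on any recent deployment:

```shell
# Check which MGR is active and which are standby.
ceph mgr stat

# Fail over the active MGR to a standby, forcing a fresh daemon to take over.
ceph mgr fail

# Or, on a cephadm-managed cluster, restart every MGR daemon in turn.
ceph orch restart mgr

# Verify that PG states are being reported again.
ceph -s
```

Since the MGR is what reports PG statistics, a wedged MGR can make PGs appear stale even when the OSDs themselves are fine, which matches what we saw.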
If the rest of the cluster is healthy and your resiliency is
configured properly, for example to sustain the loss of one or more
hosts at a time, you don’t need to worry about a single disk. Just
take it out and remove it (forcefully) so it doesn’t have any clients
anymore. Ceph will
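As a concrete sketch of "take it out and remove it", assuming the failing disk is osd.12 (a placeholder ID, substitute your own):

```shell
# Confirm the cluster can tolerate stopping this OSD right now.
ceph osd ok-to-stop osd.12

# Mark it out so its PGs remap and backfill onto the remaining OSDs.
ceph osd out osd.12

# Once backfill has finished and `ceph -s` is clean, remove it entirely:
# purge removes the OSD from the CRUSH map and deletes its auth key.
ceph osd purge osd.12 --yes-i-really-mean-it
```

The `ok-to-stop` check is a cheap safeguard: it refuses if taking the OSD down would leave any PG without enough replicas to serve clients.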
Hi, I didn’t find any config options beyond the ones you already tried.
Just wanted to note that I did read your message. :-)
Maybe one of the Devs can comment.
Quoting Stefan Kooman:
Hi,
We're testing with rbd-mirror (mode snapshot) and trying to get status
updates about snapshots as fast
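For reference, these are the commands typically used to poll mirroring status; the pool and image names below are placeholders:

```shell
# Summary of mirroring health for the whole pool, with per-image details.
rbd mirror pool status --verbose mypool

# Status of a single image, including its most recent mirror snapshots.
rbd mirror image status mypool/myimage

# List the configured mirror-snapshot schedules for the pool.
rbd mirror snapshot schedule ls --pool mypool
```

How quickly the status reflects a new snapshot depends on the rbd-mirror daemon's polling interval, which is likely where any tuning would have to happen.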
Hi Erich,
hope it helps. Let us know.
Dietmar
On April 26, 2024 at 15:52:06 CEST, Erich Weiler wrote:
>Hi Dietmar,
>
>We do in fact have a bunch of users running vscode on our HPC head node as
>well (in addition to a few of our general purpose interactive compute
>servers). I'll suggest they
Dear ceph community,
I have a Ceph cluster that was upgraded from nautilus/pacific/…to reef over
time. Now I have added two new nodes to an existing EC pool, as I did with
the previous versions of Ceph.
Now I face the fact that the previous "backfilling tuning" I've used, by
increasing values via injectargs
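In case it helps: since the mClock scheduler became the default, the classic backfill throttles are ignored unless you explicitly allow overriding them, which may be why the old injectargs tuning no longer has any effect on Reef. A hedged sketch (the values are examples, not recommendations):

```shell
# Allow the classic recovery/backfill options to override mClock's limits.
ceph config set osd osd_mclock_override_recovery_settings true

# Now the familiar knobs take effect again (persisted; no injectargs needed).
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8
```

Alternatively, mClock's built-in profiles (e.g. `osd_mclock_profile high_recovery_ops`) can be used instead of reviving the classic options.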