One side effect of using subvolumes is that you can then only take a snapshot
at the subvolume level, nothing further down the tree.
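Subvolume-level snapshots like the ones mentioned above are taken through the subvolume snapshot interface; a minimal sketch, assuming a volume named cephfs and a subvolume named my_subvol (both placeholders):

```shell
# List existing subvolumes in the volume (names here are placeholders)
ceph fs subvolume ls cephfs

# Take a snapshot of one subvolume -- snapshots can only be taken
# at this level, not for directories further down the tree
ceph fs subvolume snapshot create cephfs my_subvol snap1

# List snapshots for that subvolume
ceph fs subvolume snapshot ls cephfs my_subvol
```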
I find you can use the same path in the auth caps without the subvolume, unless
I’m missing something in this thread.
On Mon, Jan 2, 2023 at 10:21 AM Jonas Schwab <
jonas.
I use this very example, plus a few more servers. I have no outage windows for my
Ceph deployments, as they support several production environments.
The MDS is your focus; there are many knobs, but the MDS is the key to client
experience. In my environment, MDS failover takes 30-180 seconds,
depending on how mu
Host networking is used by default as the network layer (no IP forwarding
requirement), so if your OS uses jumbo frames, your containers do too.
As for the resources, I’ll let someone more knowledgeable answer that, but you can
certainly run mons and OSDs on the same box, assuming you have enough CPU
and memory. I ha
In my experience it just falls back to behaving as if it’s un-pinned.
For my use case I do the following:
/ pinned to rank 0
/env1 to rank 1
/env2 to rank 2
/env3 to rank 3
If I do an upgrade it will collapse to single rank, all access/IO continues
after what would be a normal failover type of inte
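A pinning layout like the one above is applied with the ceph.dir.pin extended attribute; a sketch, assuming the filesystem is mounted at /mnt/cephfs (the mount point and directory names are placeholders):

```shell
# Pin directories to MDS ranks via the ceph.dir.pin xattr
# (assumes the filesystem is mounted at /mnt/cephfs)
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/env1
setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/env2
setfattr -n ceph.dir.pin -v 3 /mnt/cephfs/env3

# Verify a pin
getfattr -n ceph.dir.pin /mnt/cephfs/env1
```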
>
> But this reminded me that it may be possible to apply a complete new set
> of configs via
> ceph orch apply mon --placement="..."
>
> and that worked out. I hope this creates no further problems down the line
> when I want to reintegrate a new sn01 node.
>
> Tha
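The placement approach quoted above can be sketched as follows; the hostnames and label are placeholders:

```shell
# Re-apply the mon spec with an explicit host list
# (hostnames are placeholders)
ceph orch apply mon --placement="host1 host2 host3"

# Or place by label instead, so a reintegrated node only
# needs the label to pick up a mon again
ceph orch apply mon --placement="label:mon"
```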
Did you try ceph orch daemon rm already?
On Thu, Jul 21, 2022 at 3:58 AM Dominik Baack <
dominik.ba...@cs.uni-dortmund.de> wrote:
> Hi,
>
> after removing a node from our cluster, we are currently cleaning up:
>
> OSDs are removed and the cluster is (mostly) healthy again
>
> mds were changed
>
>
> But we
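Removing leftover daemons from a decommissioned node can be sketched as below; the node and daemon names are placeholders:

```shell
# List daemons still reported on the removed node
# (hostname is a placeholder)
ceph orch ps old-node

# Remove a stale daemon by name; --force skips the safety checks
ceph orch daemon rm mds.old-node.xyzabc --force
```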
This brings up a good follow-on: rebooting in general for OS patching.
I have not been leveraging the maintenance mode function, as I found it was
really no different than just setting noout and doing the reboot. I find
if the box is the active manager, the failover happens quickly, painlessly, and
au
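The noout-and-reboot pattern described above, next to the maintenance-mode alternative, can be sketched as follows (the hostname is a placeholder):

```shell
# Before rebooting a node for OS patching: stop OSDs from being
# marked out while the box is down
ceph osd set noout

# ... reboot the box, wait for its daemons to rejoin ...

ceph osd unset noout

# The cephadm maintenance-mode alternative mentioned above
ceph orch host maintenance enter host1
ceph orch host maintenance exit host1
```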
I’d say one thing to keep in mind is the higher you have your cache, and
the more that is currently consumed, the LONGER it will take in the event
the standby-replay has to take over…
While standby-replay does help to improve takeover times, it’s not
significant if there are a lot of clients with a lot of ope
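Standby-replay as discussed above is enabled per filesystem; a sketch, with the filesystem name cephfs as a placeholder:

```shell
# Enable standby-replay so a standby MDS continuously tails the
# active MDS journal, shortening takeover time
ceph fs set cephfs allow_standby_replay true

# Check which daemon is currently in standby-replay
ceph fs status cephfs
```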
Thanks for sharing, hope I never need the info, but glad to know it’s here
and doable!
On Tue, Jun 28, 2022 at 10:36 AM Florian Jonas
wrote:
> Dear all,
>
> just when we received Eugen's message, we managed (with additional help
> via Zoom from other experts) to recover our filesystem. Thank you
I saw a major boost after setting sleep_hdd to 0. Only after that
did I start staying at around 500 MiB/s to 1.2 GiB/s and 1.5k to 2.5k
obj/sec.
Eventually it tapered back down, but for me sleep was the key, and
specifically in my case:
osd_recovery_sleep_hdd
On Mon, Jun 27, 2022 a
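The recovery-sleep change described above is a one-line config switch; a sketch:

```shell
# Check the current value (the HDD default is a small per-op sleep)
ceph config get osd osd_recovery_sleep_hdd

# Remove the inter-op sleep to speed up recovery/backfill on HDD OSDs,
# at the cost of more recovery impact on client I/O
ceph config set osd osd_recovery_sleep_hdd 0
```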
place).
If this file-name data were already in some DB somewhere that could be
queried “off line”, we wouldn’t have to waste the cycles actively capturing
that data, etc.
Appreciate the response!
On Mon, Feb 14, 2022 at 11:57 PM William Edwards
wrote:
>
> > Op 15 feb. 2022 om 02:19 hee
Had the question posed to me and couldn’t find an immediate answer.
Is there any way we can query the MDS or some other component in the Ceph
stack that would give essentially immediate access to all file names
contained in Ceph?
In HDFS we have the ability to pull the fsimage from the NameNodes