very likely there is an SLC cache in use.
Best regards,
Michael
-Original Message-
From: Marc
Sent: Tuesday, 21 February 2023 11:27
To: Michael Wodniok; ceph-users@ceph.io; Phil Regnauld
Subject: RE: [ceph-users] Re: Do not use SSDs with (small) SLC cache
What fio test would
Hi Ken,
thank you for your hint; any input is appreciated. Please note that Ceph does
highly random I/O (especially with small object sizes). AnandTech also
states:
"Some of our other tests have shown a few signs that the 870 EVO's write
performance can drop when the SLC cache runs out,
review-the-best-just-got-better
[2] https://ceph.io/en/news/blog/2022/mclock-vs-wpq-testing-with-background-ops-part1/
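If you want to reproduce that cliff outside of Ceph, a sustained sync random-write fio run should show it once more data has been written than the cache can absorb. A rough sketch (device path and runtime are placeholders, and writing to the raw device destroys its data):

  # WARNING: overwrites the target device
  fio --name=slc-cache-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --time_based --runtime=900 --group_reporting

Watching the live bandwidth output should show the throughput dropping sharply once the SLC cache is exhausted.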
Happy Storing!
Michael Wodniok
--
Michael Wodniok M.Sc.
WorNet AG
Bürgermeister-Graf-Ring 28
82538 Geretsried
Simply42 and SecuMail are trademarks of WorNet AG.
http://www.wor.net/
e was fixed and I could start
the upgrade again for the other daemons on Octopus (all OSDs and MDS daemons).
Maybe someone with a similar issue in the future will find this idea helpful.
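For reference, assuming the cephadm upgrade workflow was in use here, the relevant commands would be along these lines (the target version is only a placeholder):

  ceph orch upgrade status                        # show target image and progress
  ceph orch upgrade resume                        # continue a paused upgrade
  ceph orch upgrade start --ceph-version 15.2.13  # (re)start towards a specific Octopus release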
Regards,
Michael
-Original Message-
From: Michael Wodniok
Sent: Thursday, 12 August 2021 15:08
To: ceph-users@ceph.io
"defective" mon to not start.
However, this state is anything but sane.
Are there any hints on how to find the issue?
Kind Regards,
Michael
so check `cephadm ls` on the
MDS nodes to see whether the containers have been removed.
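For example, directly on one of the MDS hosts (the daemon name and fsid below are only placeholders):

  cephadm ls | grep -i mds    # containers cephadm still knows about on this host
  # a leftover daemon can then be removed manually:
  cephadm rm-daemon --name mds.myfs.host1.abcdef --fsid <cluster-fsid>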
Regards,
Eugen
Quoting Michael Wodniok:
> Hi,
>
> we created multiple CephFS filesystems; this involved deploying multiple
> MDS services using `ceph orch apply mds [...]`. Worked like a charm.
>
> Now the fi
Hi,
we created multiple CephFS filesystems; this involved deploying multiple MDS services using
`ceph orch apply mds [...]`. Worked like a charm.
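For reference, the deployment and the expected cleanup were roughly along these lines (filesystem name and placement count are only illustrative):

  ceph orch apply mds myfs --placement=3   # deploy MDS daemons for the filesystem
  ceph orch ls --service-type mds          # list the managed MDS services
  ceph orch rm mds.myfs                    # remove the service again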
Now the filesystem has been removed and the leftovers of the filesystem should
also be removed, but I can't delete the services as the cephadm/orchestration
module