[ceph-users] Re: About lost disk with erasure code

2023-12-26 Thread Phong Tran Thanh
Thank you for sharing your knowledge. I have a question. Which pool is affected when a PG is down, and how can I show it? When a PG is down, is only one pool affected, or are multiple pools affected? On Tue, 26 Dec 2023 at 16:15, Janne Johansson < icepic...@gmail.com> wrote: > On Tue 26
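[Editor's note: a quick way to answer this, since each PG belongs to exactly one pool and the number before the dot in the PG ID is the pool ID. The PG ID 12.1f below is only an example:

    ceph health detail    # lists problem PGs, e.g. "pg 12.1f is down"
    ceph pg ls down       # shows all PGs currently in the down state
    ceph osd lspools      # maps pool IDs to names; pool 12 owns pg 12.1f

A single down PG only affects objects in its own pool, but several PGs from different pools can be down at the same time if they map to the same failed OSDs.]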

[ceph-users] CephFS delayed deletion

2023-12-26 Thread Miroslav Svoboda
Hi, how can I increase the file deletion speed? All files were deleted from CephFS on my pool, but ceph df still shows 50% usage of the pool. I know about delayed deletion (https://docs.ceph.com/en/latest), but is there some way to speed this up a little? I significantly increased mds_max_purge_ops
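[Editor's note: a minimal sketch of raising the MDS purge throttles at runtime, assuming the centralized config database is in use; the values are illustrative, not recommendations:

    ceph config set mds mds_max_purge_ops 32768       # ceiling on in-flight purge operations
    ceph config set mds mds_max_purge_ops_per_pg 1.0  # purge ops allowed per PG
    ceph config set mds mds_max_purge_files 256       # number of files purged in parallel

Purge progress can be watched via the MDS perf counters, e.g. ceph tell mds.<id> perf dump purge_queue.]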

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-26 Thread Drew Weaver
Okay, so NVMe is the only path forward? I was simply going to replace the PERC H750s with some HBA350s, but if that will not work I will just wait until I have a pile of NVMe servers that we aren't using in a few years, I guess. Thanks, -Drew From: Anthony D'Atri Sent: Friday, December 22,

[ceph-users] Re: About lost disk with erasure code

2023-12-26 Thread Janne Johansson
On Tue, 26 Dec 2023 at 08:45, Phong Tran Thanh wrote: > > Hi community, > > I am running Ceph with RBD block devices on 6 nodes, erasure code 4+2 with > min_size of the pool set to 4. > > When three OSDs are down and a PG is in the down state, some pools can't write > data; suppose the three OSDs can't start and the pg
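[Editor's note: for context on why the PG goes down: an EC 4+2 PG stores 6 shards and needs at least k=4 of them to reconstruct data, so losing three OSDs that share a PG leaves only 3 shards and the PG cannot serve I/O regardless of min_size. A few commands to inspect the situation; the pool name rbd-ec and PG ID 12.1f are placeholders:

    ceph osd pool get rbd-ec min_size    # current min_size (recent releases default EC pools to k+1 = 5)
    ceph pg ls-by-pool rbd-ec down       # which PGs of this pool are down
    ceph pg 12.1f query                  # per-PG detail, including which OSDs are missing]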