[ceph-users] Re: Why are a lot of PGs degraded after a host(+osd) restart?

2024-03-25 Thread jaemin joo
Your answer helped me understand the mechanism better. I'm using erasure coding, and the backfilling step took quite a long time :( If it were just a lot of PG peering, I think that would be reasonable, but I was curious why there were so many PGs in backfill_wait instead of peering. (e.g. pg 9.5a is stuck undersized
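
A minimal sketch (Python) for checking how the cluster splits between peering and backfill_wait; it assumes "ceph pg ls -f json" is available and handles both JSON shapes (a bare list vs. an object holding "pg_stats"), since that differs between releases:

#!/usr/bin/env python3
# Tally PG states so peering vs. backfill_wait counts are visible.
# The JSON shape of "ceph pg ls" differs between releases (bare list vs. an
# object with a "pg_stats" key), so both forms are handled below.
import json
import subprocess
from collections import Counter

out = subprocess.run(["ceph", "pg", "ls", "-f", "json"],
                     capture_output=True, text=True, check=True).stdout
data = json.loads(out)
pgs = data.get("pg_stats", []) if isinstance(data, dict) else data

states = Counter(pg["state"] for pg in pgs)
for state, count in states.most_common():
    print(f"{count:6d}  {state}")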

[ceph-users] Why are a lot of PGs degraded after a host(+osd) restart?

2024-03-20 Thread Jaemin Joo
Hi all, While I am testing host failover, there are a lot of degraded PGs after the host(+osd) comes back up. Even though the restart only takes a short time, I don't understand why the PGs should check all objects related to the failed host(+osd). I'd like to know how to prevent PGs from becoming degraded when the OSD
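
One common way to avoid data movement during a planned restart is to set the noout flag before taking the host down and clear it afterwards; below is a minimal sketch of that pattern (the wrapper itself is illustrative, only the standard "ceph osd set/unset noout" commands are assumed):

#!/usr/bin/env python3
# Illustrative maintenance wrapper: keep OSDs from being marked "out" while the
# host reboots, then clear the flag afterwards. Objects on the down OSDs will
# still show as degraded until they rejoin, but no backfill is triggered by the
# restart itself.
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "set", "noout")
try:
    input("Restart the host now; press Enter once its OSDs are back up... ")
finally:
    ceph("osd", "unset", "noout")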

[ceph-users] RECENT_CRASH: x daemons have recently crashed

2024-02-13 Thread Jaemin Joo
Hi everyone, I'd like to ask why this message appeared. There are no symptoms other than the warning message. After archiving the message, it happened again. It would be very helpful if you could give me a hint about which part I should look at. [log]---
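
For anyone else hitting RECENT_CRASH, here is a sketch that lists the un-archived crash reports, prints their details, and archives them so the warning clears. It uses the standard "ceph crash ls-new", "ceph crash info" and "ceph crash archive-all" subcommands; the JSON field names are assumed from typical "ceph crash ls" output, so verify against your release:

#!/usr/bin/env python3
# Inspect and archive the crash reports behind a RECENT_CRASH warning.
# Field names ("crash_id", "entity_name") are assumed from typical
# "ceph crash ls" JSON output; verify against your release.
import json
import subprocess

def ceph_json(*args: str):
    out = subprocess.run(["ceph", *args, "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

for crash in ceph_json("crash", "ls-new"):
    print(crash.get("crash_id"), crash.get("entity_name", ""))
    info = subprocess.run(["ceph", "crash", "info", crash["crash_id"]],
                          capture_output=True, text=True, check=True).stdout
    print(info)

# After reviewing the backtraces, archive them so the health warning clears:
subprocess.run(["ceph", "crash", "archive-all"], check=True)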

[ceph-users] Re: Does it impact write performance when an SSD is used for block.wal (not block.db)?

2024-02-12 Thread jaemin joo
Thank you for your idea. I realize that the number of SSDs matters as much as the SSD capacity for block.wal. > Naturally the best solution is to not use HDDs at all ;) You are right! :)

[ceph-users] RGW index pool (separate SSD) tuning factor

2024-02-08 Thread Jaemin Joo
Hi everyone, I confirmed that write performance increased significantly just by using SSDs for the RGW index pool. I know that ~200 bytes per object are created in the index pool, but when I checked the index pool size, it worked out to around 300-400 bytes per object. Like me, if it uses index
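
A back-of-envelope sizing sketch using the per-object overheads mentioned above (~200 B documented, 300-400 B observed); the object count and replication factor below are illustrative assumptions, not measurements:

# Back-of-envelope sizing for an SSD-backed RGW index pool.
# bytes_per_entry uses the pessimistic ~400 B per object observed above;
# the object count and replication factor are illustrative assumptions.
objects = 500_000_000      # expected number of RGW objects (assumption)
bytes_per_entry = 400      # per-object index overhead, pessimistic
replication = 3            # replicated index pool (size=3)

raw = objects * bytes_per_entry * replication
print(f"index pool raw usage ~= {raw / 2**40:.2f} TiB")   # ~0.55 TiB here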

[ceph-users] Does it impact write performance when an SSD is used for block.wal (not block.db)?

2024-02-08 Thread Jaemin Joo
Hi everyone, I saw that BlueStore can separate block.db and block.wal. In my case, I'd like to use a hybrid layout of SSDs and HDDs to improve small-write performance, but I don't have enough SSD capacity to cover both block.db and block.wal, so I think it can still impact performance even though an SSD is applied
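
For reference, a hybrid OSD with only block.wal on the SSD (block.db left co-located with the data device) can be created with ceph-volume; a sketch of that shape, with placeholder device/LV names:

#!/usr/bin/env python3
# Create a BlueStore OSD with data on an HDD and only block.wal on an SSD
# (block.db stays co-located with the data device). The device and LV names
# are placeholders -- adjust before running.
import subprocess

hdd_data = "/dev/sdb"            # HDD data device (placeholder)
ssd_wal_lv = "ssd-vg/wal-sdb"    # pre-created LV on the shared SSD (placeholder)

subprocess.run([
    "ceph-volume", "lvm", "create",
    "--bluestore",
    "--data", hdd_data,
    "--block.wal", ssd_wal_lv,
], check=True)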

[ceph-users] Indexless bucket constraints of ceph-rgw

2024-01-18 Thread Jaemin Joo
I am considering indexless buckets to increase small-file performance and the maximum number of files using just HDDs. When I checked the constraints of indexless buckets in the Ceph docs, they indicate a possible problem with bucket listing, and I understand it also causes problems with versioning and sync. I
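
For anyone testing this, an indexless placement target is defined at the zonegroup/zone level; the sketch below follows the upstream indexless-placement documentation (placement-index-type "indexless"), with the placement ID and pool names as placeholders, so double-check the flags against your release:

#!/usr/bin/env python3
# Define an indexless placement target, following the upstream indexless-bucket
# documentation. The placement ID and pool names are placeholders; verify the
# radosgw-admin flags against your Ceph release.
import subprocess

def radosgw_admin(*args: str) -> None:
    subprocess.run(["radosgw-admin", *args], check=True)

radosgw_admin("zonegroup", "placement", "add",
              "--rgw-zonegroup", "default",
              "--placement-id", "indexless-placement")

radosgw_admin("zone", "placement", "add",
              "--rgw-zone", "default",
              "--placement-id", "indexless-placement",
              "--data-pool", "default.rgw.buckets.data",
              "--index-pool", "default.rgw.buckets.index",
              "--placement-index-type", "indexless")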