Your answer helped me understand the mechanism better.
I'm using erasure coding, and the backfill step took quite a long time :(
If there were just a lot of PGs peering, I think that would be reasonable, but
I was curious why there were so many PGs in backfill_wait instead of peering.
(e.g. pg 9.5a is stuck undersized
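
For reference, this is roughly what I ran to look at the situation; the
osd_max_backfills value is only an example I tried, not a recommendation:

    # list PGs currently waiting for backfill
    ceph pg ls backfill_wait

    # summary of PG states across the cluster
    ceph pg stat

    # raise the per-OSD backfill throttle (example value only)
    ceph config set osd osd_max_backfills 4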
Hi all,
While testing host failover, I see a lot of degraded PGs after the host (and
its OSDs) comes back up. Even though the restart only takes a short time, I
don't understand why the PGs have to check all objects related to the failed
host/OSDs. I'd like to know how to prevent PGs from becoming degraded when an OSD
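
So far the only thing I have found is to set maintenance flags before a
planned restart (sketch below, the OSD id is a placeholder); the PGs are still
reported as degraded while the daemon is down, but recovery afterwards is
log-based instead of a full backfill:

    # before the planned restart: stop out-marking and rebalancing
    ceph osd set noout
    ceph osd set norebalance

    # restart the OSD (placeholder id)
    systemctl restart ceph-osd@3

    # once the OSD is back up and the PGs have recovered
    ceph osd unset norebalance
    ceph osd unset noout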
Hi everyone,
I'd like to ask why this message appears. There are no other symptoms
accompanying the warning message, and after archiving it, the message appeared
again. It would be very helpful if you could give me a hint about which part I
should look at.
[log]---
Thank you for your idea.
I realize that the number of SSDs is as important as their capacity when they
are used for block.wal.
> Naturally the best solution is to not use HDDs at all ;)
You are right! :)
Hi everyone,
I confirmed that write performance increases significantly even if I only use
SSD for the RGW index pool.
I know that roughly ~200 bytes per object are created in the index pool, but
when I checked the actual index pool size, it worked out to around 300-400
bytes per object.
Like me, if it uses an index
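
For context, this is how I moved the index pool onto the SSDs; the rule name
is my own, and it assumes the OSDs report the "ssd" device class and that the
default zone pool names are in use:

    # CRUSH rule that only selects OSDs with device class "ssd"
    ceph osd crush rule create-replicated index-on-ssd default host ssd

    # point the RGW index pool at that rule
    ceph osd pool set default.rgw.buckets.index crush_rule index-on-ssd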
Hi everyone,
I saw that BlueStore can separate out block.db and block.wal.
In my case, I'd like to use a hybrid setup of SSD and HDD to improve
small-write performance, but I don't have enough SSD to cover both block.db
and block.wal, so I think performance may still suffer even though SSD is applied
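
If I understand the docs correctly, one option is to give each OSD only a
block.db partition on SSD, since BlueStore keeps the WAL inside the DB device
when no separate block.wal is specified; the device paths below are just
placeholders:

    # HDD holds the data, SSD partition holds block.db; the WAL ends up on the
    # DB device automatically when --block.wal is not given
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1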
I am considering indexless buckets to improve small-file performance and the
maximum number of files using just HDDs.
When I checked the constraints of indexless buckets in the Ceph docs, they
indicate a possible problem with bucket listing, and I understand it also
causes problems with versioning and sync.
I
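
If it matters, the placement target I am testing was set up roughly like this,
following the docs (pool names are the defaults; flag spellings may differ
between releases, so please correct me if this is wrong):

    # new placement target whose buckets are created without an index
    radosgw-admin zonegroup placement add \
        --rgw-zonegroup default \
        --placement-id indexless-placement

    radosgw-admin zone placement add \
        --rgw-zone default \
        --placement-id indexless-placement \
        --data-pool default.rgw.buckets.data \
        --index-pool default.rgw.buckets.index \
        --data-extra-pool default.rgw.buckets.non-ec \
        --placement-index-type indexless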