Hi George,
> May I ask if enabling pool compression helps for the future space
> amplification?
If the amplification is indeed due to min_alloc_size, then I don't
think that compression will help. My understanding is that compression
is applied post-EC (and thus probably won't even activate due to the
small chunk sizes).
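To illustrate the arithmetic behind this point, here is a small sketch (my own illustration with assumed numbers, not from the thread): since BlueStore allocates space in min_alloc_size units, a small EC chunk occupies a full allocation unit whether or not it compresses well, so compression saves nothing below that granularity.

```python
# Illustration (assumed numbers): with 64 KiB allocation units, a 16 KiB
# EC chunk occupies one full unit whether or not it compresses.

ALLOC = 64 * 1024  # bluestore_min_alloc_size_hdd = 64 KiB (pre-Pacific default)

def allocated(size, alloc=ALLOC):
    """Round a chunk's on-disk size up to whole allocation units."""
    return -(-size // alloc) * alloc  # ceiling division, then scale back up

chunk = 16 * 1024                 # a 16 KiB EC chunk
compressed = 4 * 1024             # even if it compressed 4:1...
print(allocated(chunk))           # 65536
print(allocated(compressed))      # 65536 -- same allocation, no savings
```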
Hi Marc,
Thanks for participating. At first I thought this was an incorrect report, and
that maybe I needed to upgrade for a bugfix.
But I couldn't find such a report, so I asked here.
When people shared their experiences, it appeared there may be two causes:
unbalanced OSDs or storage amplification.
As
May I ask if enabling pool compression helps for the future space amplification?
> George Yil wrote (27 Jan 2021 18:57):
>
> Thank you. This helps a lot.
>
>> Josh Baergen wrote (27 Jan 2021 17:08):
>>
>> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>>> May I ask if
On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
> May I ask if it can be dynamically changed and any disadvantages should be
> expected?
Unless there's some magic I'm unaware of, there is no way to
dynamically change this. Each OSD must be recreated with the new
min_alloc_size setting. In
Thank you. This helps a lot.
> Josh Baergen wrote (27 Jan 2021 17:08):
>
> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>> May I ask if it can be dynamically changed and any disadvantages should be
>> expected?
>
> Unless there's some magic I'm unaware of, there is no way to
>
I did not. Honestly, I was not aware of such a setting. Thanks for the
heads-up, and I hope this is not bad news.
May I ask if it can be dynamically changed and any disadvantages should be
expected?
> On 27 Jan 2021, at 01:33, Josh Baergen wrote:
>
> I created radosgw pools. The secondaryzone.rgw.buckets.data pool is
> configured as EC 8+2 (jerasure).
Did you override the default bluestore_min_alloc_size_hdd (64k in that
version, IIRC) when creating your HDD OSDs? If not, all of the small chunks
produced by that EC configuration will lead to significant space
amplification.
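To make the effect concrete, here is a worked example (my own numbers, not from the thread): under EC 8+2, each object is split into 8 data chunks plus 2 parity chunks, and every chunk is rounded up to min_alloc_size on disk. For objects whose chunks are smaller than the allocation unit, the overhead far exceeds the nominal 1.25x EC overhead.

```python
# Worked example (assumed numbers): per-chunk allocation rounding under
# EC 8+2 on BlueStore with the old 64 KiB HDD allocation unit.

ALLOC = 64 * 1024        # bluestore_min_alloc_size_hdd = 64 KiB
K, M = 8, 2              # EC 8+2 profile

def stored_bytes(obj_size, k=K, m=M, alloc=ALLOC):
    """Bytes actually allocated for one object under EC k+m,
    with every chunk rounded up to the allocation unit."""
    chunk = -(-obj_size // k)                 # ceil(obj_size / k)
    per_chunk = -(-chunk // alloc) * alloc    # round each chunk up to alloc
    return (k + m) * per_chunk

obj = 128 * 1024                              # a 128 KiB RGW object
print(stored_bytes(obj))                      # 655360 bytes (640 KiB)
print(stored_bytes(obj) / obj)                # 5.0x, vs the ideal 1.25x for 8+2
```

With a 4 KiB min_alloc_size (the later default), the same object would round each 16 KiB chunk to exactly 16 KiB, giving the expected 1.25x.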
Sorry for the late reply :( and thanks for the tips.
This is a fresh cluster, and I didn't think data distribution would be a
problem. Is this normal?
Below is the ceph osd df output. The related pool is HDD-only
(prod.rgw.buckets.data). I guess there is variance, but I couldn't get the