Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..."
________________________________________
From: Frank Schilder
Sent: 20 June 2019 19:02
To: Dan van der Ster; ceph-users
Subject: Re: [ceph-users] understanding the bluestore blob, chunk and 
compression params

Hi Dan,

This older thread
(https://www.mail-archive.com/[email protected]/msg49339.html) contains
details about:

- how to get bluestore compression working (it must be enabled on the pool as
well as on the OSD)
- what the best achievable compression ratio is, depending on the application
(if applications do not provide hints, it is
bluestore_min_alloc_size_hdd/bluestore_compression_min_blob_size_hdd, i.e.
64kB/128kB = 0.5 with the defaults, as you observe).

I doubled bluestore_min_alloc_size_hdd to get to 0.25. There are trade-offs for
random I/O performance; however, since I use EC pools, I have those anyway.
For replicated pools, the aggregate IOPS might be heavily affected, but I have
no data on that case.

Hope that helps,
Frank

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: ceph-users <[email protected]> on behalf of Dan van der 
Ster <[email protected]>
Sent: 20 June 2019 17:23:51
To: ceph-users
Subject: Re: [ceph-users] understanding the bluestore blob, chunk and 
compression params

P.S. I know this has been discussed before, but the
compression_(mode|algorithm) pool options [1] seem completely broken:
with the pool mode set to force, we see that sometimes the
compression is invoked and sometimes it isn't. AFAICT,
the only way to compress every object is to set
bluestore_compression_mode=force on the osd.

-- dan

[1] http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values


On Thu, Jun 20, 2019 at 4:33 PM Dan van der Ster <[email protected]> wrote:
>
> Hi all,
>
> I'm trying to compress an rbd pool via backfilling the existing data,
> and the allocated space doesn't match what I expect.
>
> Here is the test: I marked osd.130 out and waited for it to erase all its 
> data.
> Then I set (on the pool) compression_mode=force and 
> compression_algorithm=zstd.
> Then I marked osd.130 to get its PGs/objects back (this time compressing 
> them).
>
> After a few 10s of minutes we have:
>         "bluestore_compressed": 989250439,
>         "bluestore_compressed_allocated": 3859677184,
>         "bluestore_compressed_original": 7719354368,
>
> So, the allocated is exactly 50% of original, but we are wasting space
> because compressed is 12.8% of original.
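A quick sanity check of those percentages against the perf counters quoted
above:

```python
# Values from the osd.130 perf counters above (bytes):
compressed = 989_250_439            # bluestore_compressed
compressed_allocated = 3_859_677_184  # bluestore_compressed_allocated
compressed_original = 7_719_354_368   # bluestore_compressed_original

print(compressed_allocated / compressed_original)   # 0.5: allocated is half of original
print(round(compressed / compressed_original, 3))   # 0.128: actual compressed payload
```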
>
> I don't understand why...
>
> The rbd images all use 4MB objects, and we use the default chunk and
> blob sizes (in v13.2.6):
>    osd_recovery_max_chunk = 8MB
>    bluestore_compression_max_blob_size_hdd = 512kB
>    bluestore_compression_min_blob_size_hdd = 128kB
>    bluestore_max_blob_size_hdd = 512kB
>    bluestore_min_alloc_size_hdd = 64kB
>
> From my understanding, backfilling should read a whole 4MB object from
> the src osd, then write it to osd.130's bluestore, compressing in
> 512kB blobs. Those compress on average at 12.8% so I would expect to
> see allocated being closer to bluestore_min_alloc_size_hdd /
> bluestore_compression_max_blob_size_hdd = 12.5%.
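The rounding behind these ratios can be sketched as follows (an illustrative
model, not the actual BlueStore code; it assumes each compressed chunk is
padded up to a whole number of min_alloc_size units, and note that at 12.8% a
512kB blob just spills over one allocation unit, giving 0.25 rather than
0.125):

```python
import math

KiB = 1024
min_alloc = 64 * KiB  # bluestore_min_alloc_size_hdd

def allocated_ratio(blob_size, compression_ratio):
    """Allocated/original ratio when a blob compresses by
    `compression_ratio` and is rounded up to min_alloc units."""
    compressed = blob_size * compression_ratio
    allocated = math.ceil(compressed / min_alloc) * min_alloc
    return allocated / blob_size

# If compression ran on 512 KiB (max) blobs at 12.8%:
print(allocated_ratio(512 * KiB, 0.128))  # 0.25: 65.5 KiB -> 2 alloc units

# If it ran on 128 KiB (min) blobs, as happens without client hints:
print(allocated_ratio(128 * KiB, 0.128))  # 0.5: 16.4 KiB -> 1 alloc unit
```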
>
> Does someone understand where the 0.5 ratio is coming from?
>
> Thanks!
>
> Dan
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com