I *think* this will happen over time as each OSD’s RocksDB compacts, which 
might be incremental.
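
If you'd rather not wait, I believe you can force an explicit compaction with 
something like

    ceph tell osd.* compact

(or one OSD at a time, e.g. ceph tell osd.6 compact, to spread out the load).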
> 
> 
> Also, could anybody clarify whether this setting is `compression_algorithm` 
> from
> https://docs.ceph.com/en/squid/rados/operations/pools/#setting-pool-values
> https://docs.ceph.com/en/squid/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm
> or if that's something different (e.g. if that's "actual data" instead of 
> "metadata")?
> 
> I suspect it's different, because `ceph config show-with-defaults osd.6 | 
> grep compression` reveals:
> 
>    bluestore_compression_algorithm                             snappy
>    bluestore_compression_mode                                  none
>    bluestore_rocksdb_options                                   
> compression=kLZ4Compression,...
> 
> From this it looks like `bluestore_compression*` is the "Inline compression" 
> for data 
> (https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#inline-compression),
>  and `bluestore_rocksdb_options` is what the "RocksDB compression" is about.

I believe so.
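
For reference, the data-side knobs from those docs can be set per pool or 
cluster-wide, e.g. something like this (pool name is just a placeholder):

    ceph osd pool set mypool compression_algorithm lz4
    ceph osd pool set mypool compression_mode aggressive

or globally with

    ceph config set osd bluestore_compression_mode aggressive

That only affects object data, not the RocksDB metadata compression set via 
bluestore_rocksdb_options.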

> Still the question remains on how to bring all existing data over to be 
> compressed.

I think the only way is to rewrite the data. There are scripts out there to do 
this for CephFS; the same process used when migrating existing data to a new 
data pool should work (crude per-file sketch further down). For RBD and RGW, 
re-write from a client. With RBD you might do something like the following 
dance

    rbd export myimage - | rbd import - myimage.new; rbd rm myimage; rbd rename myimage.new myimage

with caution regarding space, names, watchers, client attachments, phase of 
moon, etc.
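
For CephFS the crude version of the per-file rewrite is a copy-and-rename in 
place, roughly (path is just an example; mind hard links, open files, xattrs, 
and snapshots):

    find /mnt/cephfs/mydir -type f \
      -exec sh -c 'cp -p "$1" "$1.tmp" && mv "$1.tmp" "$1"' _ {} \;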

With RGW, maybe use something like rclone or Chorus to copy into a new bucket, 
then rm the old one (rough rclone sketch below).
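
A rough rclone sketch, assuming an S3 remote (here called "ceph") pointed at 
the RGW endpoint and placeholder bucket names:

    rclone sync ceph:oldbucket ceph:newbucket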


> 
> Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
