On 10/18/2018 7:49 PM, Nick Fisk wrote:
Hi,

Ceph Version = 12.2.8
8TB spinner with 20G SSD partition

Perf dump shows the following:

"bluefs": {
         "gift_bytes": 0,
         "reclaim_bytes": 0,
         "db_total_bytes": 21472731136,
         "db_used_bytes": 3467640832,
         "wal_total_bytes": 0,
         "wal_used_bytes": 0,
         "slow_total_bytes": 320063143936,
         "slow_used_bytes": 4546625536,
         "num_files": 124,
         "log_bytes": 11833344,
         "log_compactions": 4,
         "logged_bytes": 316227584,
         "files_written_wal": 2,
         "files_written_sst": 4375,
         "bytes_written_wal": 204427489105,
         "bytes_written_sst": 248223463173

Am I reading that correctly: about 3.4 GB used out of 20 GB on the SSD, yet 4.5 GB
of DB is stored on the spinning disk?
Correct. Most probably the reason for this is the layered scheme RocksDB uses to keep its SST files. Each level has a maximum size threshold, determined by the level number, a base value, and a corresponding multiplier (see max_bytes_for_level_base & max_bytes_for_level_multiplier at https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide). If the next level, at its maximum size, doesn't fit into the space available on the DB volume, it is spilled over entirely to the slow device. IIRC level_base is about 250 MB and the multiplier is 10, so the third level needs ~25 GB and hence doesn't fit into your 20 GB DB volume.
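
As a rough illustration, here is a small sketch of that fit check. This is not the actual BlueFS/RocksDB code; it assumes the RocksDB defaults (max_bytes_for_level_base = 256 MB, multiplier = 10) and approximates the check cumulatively:

    # Sketch of the level-fit reasoning above; not the real BlueFS/RocksDB
    # logic. Assumes RocksDB defaults: max_bytes_for_level_base = 256 MB,
    # max_bytes_for_level_multiplier = 10.
    base = 256 * 1024 ** 2        # max_bytes_for_level_base
    multiplier = 10               # max_bytes_for_level_multiplier
    db_volume = 20 * 1024 ** 3    # the 20G SSD partition

    used = 0
    for level in range(1, 5):
        level_max = base * multiplier ** (level - 1)
        fits = used + level_max <= db_volume
        print("L%d needs up to %.2f GiB -> %s"
              % (level, level_max / 1024 ** 3,
                 "fits on DB volume" if fits else "spills to slow device"))
        used += level_max

    # Output: L1 (0.25 GiB) and L2 (2.50 GiB) fit; L3 (25 GiB) already
    # exceeds the 20G volume on its own, so its SSTs land on the spinner,
    # which matches the nonzero slow_used_bytes above.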

In fact, a 20 GB DB volume is VERY small for an 8 TB OSD - just 0.25% of the slow device. AFAIR the current recommendation is about 4%.
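
Back-of-the-envelope, using that 4% guideline (the percentages are just arithmetic on the sizes quoted above):

    # Quick sizing check, assuming the ~4% guideline mentioned above.
    osd_size = 8 * 10 ** 12    # 8 TB spinner
    db_size = 20 * 10 ** 9     # nominal 20 GB DB partition
    print("DB is %.2f%% of the OSD; 4%% would be %d GB"
          % (100.0 * db_size / osd_size, 0.04 * osd_size / 10 ** 9))
    # -> DB is 0.25% of the OSD; 4% would be 320 GB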


Am I also understanding correctly that BlueFS has reserved 300G of space on the 
spinning disk?
Right.
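For completeness, that ~300G figure is just slow_total_bytes from the perf dump above, converted (plain arithmetic, nothing Ceph-specific):

    # slow_total_bytes from the perf dump, converted for readability.
    slow_total = 320063143936
    print("%.0f GiB / %.0f GB" % (slow_total / 2.0 ** 30, slow_total / 10.0 ** 9))
    # -> 298 GiB / 320 GB, i.e. the ~300G of slow-device space visible to BlueFS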
I found a previous bug tracker entry for what looks like exactly the same case,
but it should be fixed by now:
https://tracker.ceph.com/issues/22264

Thanks,
Nick

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
