So you mean that rocksdb and the osdmap filled about 40G of disk for only 800k objects?
I think that's not reasonable; it's far too high.
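The rough arithmetic behind that number can be sketched as follows. This is only an estimate, assuming the Luminous-era default bluestore_min_alloc_size_hdd of 64 KiB and the figures from the thread (800k objects, 32G of data); the allocation slack it computes covers part of the gap, with rocksdb/WAL/osdmap accounting for the rest:

```python
# Rough estimate of BlueStore allocation overhead for many small objects.
# Assumptions (not stated in the thread): 800,000 objects totalling 32 GiB,
# and bluestore_min_alloc_size_hdd = 64 KiB (the Luminous HDD default).
MIN_ALLOC = 64 * 1024          # bytes per allocation unit
NUM_OBJECTS = 800_000
DATA_BYTES = 32 * 1024**3      # 32 GiB of actual thumbnail data

avg_obj = DATA_BYTES / NUM_OBJECTS             # average object size in bytes
# Each object consumes at least one allocation unit (ceiling division):
units_per_obj = -(-int(avg_obj) // MIN_ALLOC)
allocated = NUM_OBJECTS * units_per_obj * MIN_ALLOC

print(f"average object size: {avg_obj / 1024:.1f} KiB")        # ~41.9 KiB
print(f"allocated on disk:   {allocated / 1024**3:.1f} GiB")   # ~48.8 GiB
print(f"allocation slack:    {(allocated - DATA_BYTES) / 1024**3:.1f} GiB")  # ~16.8 GiB
```

With ~42 KiB average objects rounded up to 64 KiB allocation units, the data alone occupies roughly 48.8 GiB per OSD, i.e. about 17 GiB of pure allocation slack before counting rocksdb metadata, the WAL, and osdmap copies.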
On Mon, Feb 12, 2018 at 5:06 PM, David Turner <drakonst...@gmail.com> wrote:
> Some of your overhead is the WAL and rocksdb that are on the OSDs. The WAL
> is pretty static in size, but rocksdb grows with the number of objects you
> have. You also have copies of the osdmap on each OSD. There's just overhead
> that adds up. The biggest is going to be rocksdb, given how many objects you have.
> On Mon, Feb 12, 2018, 8:06 AM Behnam Loghmani <behnam.loghm...@gmail.com> wrote:
>> Hi there,
>> I am using Ceph Luminous 12.2.2 with:
>> 3 OSDs (each OSD is 100G) - no WAL/DB separation.
>> 3 mons
>> 1 rgw
>> cluster size 3
>> I stored lots of thumbnails with very small size on ceph with radosgw.
>> The actual size of the files is about 32G, but they fill 70G on each OSD.
>> What's the reason for this high disk usage?
>> Should I change "bluestore_min_alloc_size_hdd"? If I set it to a smaller
>> value, does it impact performance?
>> What is the best practice for storing small files on BlueStore?
>> Best regards,
>> Behnam Loghmani
>> ceph-users mailing list