Re: [ceph-users] Bluestore with so many small files

2019-04-23 Thread Frédéric Nass
Hi,

You probably forgot to recreate the OSDs after changing bluestore_min_alloc_size; the setting only takes effect when an OSD is created.

Regards,
Frédéric.

On 22 Apr 19, at 5:41, 刘 俊 wrote:
> Hi All,
> I still see this issue with the latest Ceph Luminous 12.2.11 and 12.2.12.
> I have set bluestore_min_alloc_size = 4096 before the test…

[ceph-users] Bluestore with so many small files

2019-04-21 Thread 刘 俊
Hi All,

I still see this issue with the latest Ceph Luminous 12.2.11 and 12.2.12. I have set bluestore_min_alloc_size = 4096 before the test. When I write 10 small objects of less than 64KB through rgw, the RAW USED shown in "ceph df" looks incorrect. For example, I tested three times and cleaned up…
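The arithmetic behind the complaint can be sketched. A minimal example (the object count and sizes below are illustrative, not taken from the actual test) compares the RAW USED expected under the old 64 KiB default allocation unit and the new 4 KiB one; if "ceph df" still reports figures matching the 64 KiB prediction, the OSDs are most likely still running with the allocation size they were created with:

```python
import math

def raw_used(object_sizes, min_alloc, replicas=3):
    """Sum per-object space after rounding each object up to the
    allocation unit, then multiply by the replication factor."""
    return sum(math.ceil(s / min_alloc) * min_alloc
               for s in object_sizes) * replicas

# Illustrative workload: ten 40 KiB objects written through rgw.
sizes = [40 * 1024] * 10

old = raw_used(sizes, 64 * 1024)  # 64 KiB: the Luminous HDD default
new = raw_used(sizes, 4 * 1024)   # 4 KiB: the value set before the test

print(old)  # 1966080 bytes -> each 40 KiB object occupies a full 64 KiB
print(new)  # 1228800 bytes -> objects consume only what they need (rounded to 4 KiB)
```

Comparing the reported RAW USED against these two predictions tells you which allocation unit the OSDs are actually using.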

Re: [ceph-users] Bluestore with so many small files

2018-02-13 Thread Igor Fedotov
Hi Behnam,

On 2/12/2018 4:06 PM, Behnam Loghmani wrote:
> Hi there,
> I am using Ceph Luminous 12.2.2 with: 3 OSDs (each OSD is 100 G), no WAL/DB separation, 3 mons, 1 rgw, cluster size 3.
> I stored lots of very small thumbnails on Ceph with radosgw. The actual size of the files is about 32 G…

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread Wido den Hollander
On 02/12/2018 03:16 PM, Behnam Loghmani wrote:
> So you mean that rocksdb and the osdmap filled about 40 G of disk for only 800k files? I think that's not reasonable; it's too high.

Could you check the output of the OSDs using a 'perf dump' on their admin socket? The 'bluestore' and 'bluefs' sections…
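A minimal sketch of inspecting the counters Wido points at, assuming the perf dump output has been captured from the admin socket (e.g. with `ceph daemon osd.0 perf dump`). The JSON below is an invented sample chosen to mirror the thread's figures (70 GiB allocated vs 32 GiB stored), though `bluestore_allocated`, `bluestore_stored`, and `db_used_bytes` are real BlueStore/BlueFS perf counters:

```python
import json

# Hypothetical excerpt of a `perf dump`; only the two sections
# relevant to the question are shown.
dump = json.loads("""
{
  "bluestore": {"bluestore_allocated": 75161927680,
                "bluestore_stored": 34359738368},
  "bluefs": {"db_used_bytes": 2147483648,
             "wal_used_bytes": 536870912}
}
""")

allocated = dump["bluestore"]["bluestore_allocated"]  # space consumed on disk
stored = dump["bluestore"]["bluestore_stored"]        # logical object payload
db = dump["bluefs"]["db_used_bytes"]                  # rocksdb footprint

print(f"allocation overhead: {allocated - stored:,} bytes")
print(f"rocksdb (bluefs db): {db:,} bytes")
```

A large gap between `bluestore_allocated` and `bluestore_stored` points at min_alloc_size rounding, while `db_used_bytes`/`wal_used_bytes` show how much of the disk rocksdb and the WAL actually take, settling the "did rocksdb eat 40 G?" question.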

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread Behnam Loghmani
So you mean that rocksdb and the osdmap filled about 40 G of disk for only 800k files? I think that's not reasonable; it's too high.

On Mon, Feb 12, 2018 at 5:06 PM, David Turner wrote:
> Some of your overhead is the WAL and rocksdb that are on the OSDs. The WAL
> is pretty static in size, but rocksdb grows…

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread David Turner
Some of your overhead is the WAL and rocksdb that are on the OSDs. The WAL is pretty static in size, but rocksdb grows with the number of objects you have. You also have copies of the osdmap on each OSD. There's just overhead that adds up. The biggest is going to be rocksdb, given how many objects you…

[ceph-users] Bluestore with so many small files

2018-02-12 Thread Behnam Loghmani
Hi there,

I am using Ceph Luminous 12.2.2 with:
- 3 OSDs (each OSD is 100 G), no WAL/DB separation
- 3 mons
- 1 rgw
- cluster size 3

I stored lots of very small thumbnails on Ceph with radosgw. The actual size of the files is about 32 G, but it filled 70 G of each OSD. What's the reason for this…
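A back-of-envelope check of the numbers in this question, assuming roughly 800k thumbnails (the count Behnam gives later in the thread) averaging ~40 KiB each, and the Luminous HDD default allocation unit of 64 KiB (`bluestore_min_alloc_size_hdd`); the average size is an assumption inferred from the 32 G total:

```python
# Why 32 GiB of thumbnails can plausibly fill ~70 GiB per OSD.
objects = 800_000
avg_size = 40 * 1024   # ~40 KiB average thumbnail (assumed)
min_alloc = 64 * 1024  # bluestore_min_alloc_size_hdd default in 12.2.x

payload = objects * avg_size     # bytes of actual file data
allocated = objects * min_alloc  # every object rounded up to one 64 KiB unit

print(payload / 2**30)    # ~30.5 GiB stored
print(allocated / 2**30)  # ~48.8 GiB allocated per replica
```

With pool size 3 on only three OSDs, every OSD holds a full replica, so ~49 GiB of allocated space plus rocksdb, WAL, and osdmap overhead plausibly accounts for the ~70 G observed.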