Hi Erich,
I'm also searching for improvements.
You should use the right mount options to prevent fragmentation (for XFS):
[osd]
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M
osd_op_threads = 4
osd_disk_threads = 4
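If you want to see how fragmented an existing OSD filesystem already is, xfs_db can report it read-only (the device name /dev/sdX below is just a placeholder for an OSD's data disk):

# read-only fragmentation report for one OSD data disk
xfs_db -r -c frag /dev/sdX

Keep in mind that allocsize and the other mount options only take effect at mount time, so already-running OSDs need a remount.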
With 45 OSDs per node you need a powerful
Hi, all
My cluster has been running for about 4 months now. I have about 108
OSDs, all 600G SAS disks, and their disk usage is between 70% and 85%.
It seems that Ceph cannot distribute data evenly with the default settings. Is
there any configuration that helps distribute data more evenly?
Thanks
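A quick way to quantify the imbalance is the per-OSD usage the monitors already track, and reweight-by-utilization can mitigate it; the threshold 120 below (reweight OSDs above 120% of average utilization) is just an example value:

# per-OSD space usage from the PG map
ceph pg dump osds
# reduce the weight of OSDs more than 20% above average utilization
ceph osd reweight-by-utilization 120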
Did you also increase the pgp_num?
On Saturday, June 28, 2014, Jianing Yang jianingy.y...@gmail.com wrote:
Actually, I did increase the PG number to 32768 (120 OSDs) and I also use
the optimal tunables. But the data is still not distributed evenly.
On Sun, Jun 29, 2014 at 3:42 AM, Konrad Gutkowski
Of course, both to 32768.
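For the archives: raising pg_num only splits the existing placement groups; no data actually moves until pgp_num is raised to match, so both have to be set, e.g. (the pool name "data" is just a placeholder):

ceph osd pool set data pg_num 32768
ceph osd pool set data pgp_num 32768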
Hi all,
in the last few days I have worked on improving the uWSGI plugin for rados
(its first version was released a year ago, but it was buggy and poorly
integrated with the rest of the project).
http://uwsgi-docs.readthedocs.org/en/latest/Rados.html
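For those who have not used it yet, the plugin boils down to mounting a Ceph pool under a URL prefix. A minimal sketch, with option names as given in the docs above (double-check there for your version; the pool name "photos" and the port are just examples):

[uwsgi]
plugins = rados
http-socket = :9090
; serve objects from the Ceph pool "photos" under /rad
rados-mount = mountpoint=/rad,pool=photos,config=/etc/ceph/ceph.conf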
(for those who do not know uWSGI, it is a