Of course, both pg_num and pgp_num are set to 32768.
On Sun, Jun 29, 2014 at 9:15 AM, Gregory Farnum <[email protected]> wrote:

> Did you also increase the "pgp_num"?
>
> On Saturday, June 28, 2014, Jianing Yang <[email protected]> wrote:
>
>> Actually, I did increase the PG number to 32768 (120 OSDs) and I also
>> use "tunables optimal", but the data is still not distributed evenly.
>>
>> On Sun, Jun 29, 2014 at 3:42 AM, Konrad Gutkowski
>> <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> Increasing the PG number for the pools that hold data might help, if
>>> you haven't done that already.
>>>
>>> Check out this thread:
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-January/027094.html
>>>
>>> You might find some tips there (although it was pre-Firefly).
>>>
>>> On 28.06.2014 at 14:44, Jianing Yang <[email protected]> wrote:
>>>
>>>> Hi, all
>>>>
>>>> My cluster has been running for about 4 months now. I have about 108
>>>> OSDs, all of them 600 GB SAS disks. Their disk usage is between 70%
>>>> and 85%. It seems that Ceph cannot distribute data evenly with the
>>>> default settings. Is there any configuration that helps distribute
>>>> data more evenly?
>>>>
>>>> Thanks very much
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected]
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>> --
>>> Konrad Gutkowski
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
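For reference, a minimal sketch of the sizing rule of thumb from the Ceph docs: total PGs ≈ (OSDs × 100) / replica count, rounded up to the next power of two. The OSD count below comes from this thread; the 3-replica pool size is my assumption, and <pool> is a placeholder for your actual pool name:

```shell
#!/bin/sh
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to a power of two.
osds=120        # OSD count from this thread
replicas=3      # assumed pool size; check with: ceph osd pool get <pool> size
target=$(( osds * 100 / replicas ))   # 4000
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                        # prints 4096

# Both values need to be raised: pg_num alone only creates the placement
# groups; data is not remapped until pgp_num is raised to match.
#   ceph osd pool set <pool> pg_num 4096
#   ceph osd pool set <pool> pgp_num 4096
```

By this rule, 32768 PGs is well above the guideline for 120 OSDs, though more PGs do smooth out distribution somewhat. If usage is still skewed after the PG split settles, `ceph osd reweight-by-utilization` can nudge overfull OSDs down.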
