I believe Luminous has something like this: you can specify how many
objects you expect a pool to hold when you create it.  However, if you're
creating pools in Luminous, you're probably using bluestore anyway.  For
Jewel and earlier, pre-splitting doesn't help as much as you'd think.  As
soon as a PG moves to a new OSD due to a lost drive or added storage, the
PG is rebuilt at the new location with the current filestore subfolder
settings, which undoes your pre-splitting.
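
If I'm remembering the syntax right, the Luminous pre-split at pool
creation looks roughly like this (pool name, PG counts, CRUSH rule name,
and object count below are all placeholders, and from what I recall the
docs say filestore merge threshold has to be negative for the pre-split to
actually happen at creation time):

    # create a replicated pool and pre-split its filestore subfolders
    # for an expected ~50 million objects (all values illustrative)
    ceph osd pool create mypool 1024 1024 replicated \
        replicated_rule 50000000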

On Sun, Apr 8, 2018 at 1:59 AM shadow_lin <shadow_...@163.com> wrote:

> Thank you.
> I will look into the script.
> For fixed-object-size applications (rbd, cephfs), do you think it is a
> good idea to pre-split the folders to the point where each folder contains
> about 1-2k objects once the cluster is full?  I think doing this could
> avoid the performance impact of splitting folders while clients are
> writing data into the cluster.
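>
> (Back-of-the-envelope: my understanding is that filestore splits a
> subfolder once it holds more than
> filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects,
> so with my filestore_split_multiple = 4 and the default
> filestore_merge_threshold = 10 that would be
>
>     4 * 10 * 16 = 640 objects per subfolder
>
> before a split kicks in; reaching 1-2k objects per folder would mean
> raising those values.  Please correct me if that formula is wrong.)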
> 2018-04-08
> ------------------------------
> shadow_lin
> ------------------------------
>
> *From:* David Turner <drakonst...@gmail.com>
> *Sent:* 2018-04-07 03:33
>
> *Subject:* Re: [ceph-users] Does jewel 10.2.10 support
> filestore_split_rand_factor?
> *To:* "shadow_lin" <shadow_...@163.com>
>
> *Cc:* "Pavan Rallabhandi" <prallabha...@walmartlabs.com>, "ceph-users" <
> ceph-users@lists.ceph.com>
>
>
> You could randomize your ceph.conf settings for filestore_merge_threshold
> and filestore_split_multiple.  It's not pretty, but it would spread things
> out.  You could even do this as granularly as you'd like down to the
> individual OSDs while only having a single ceph.conf file to maintain.
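>
> For example (the numbers here are only illustrative), something like this
> in a single ceph.conf gives each OSD its own thresholds:
>
>     [osd.0]
>     filestore_merge_threshold = 40
>     filestore_split_multiple = 8
>
>     [osd.1]
>     filestore_merge_threshold = 40
>     filestore_split_multiple = 12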
>
> I would probably go the route of manually splitting your subfolders,
> though.  I've been using this [1] script for some time to do just that.  I
> tried to make it fairly environment agnostic so people would have an easier
> time implementing it for their needs.
>
> [1] https://gist.github.com/drakonstein/cb76c7696e65522ab0e699b7ea1ab1c4
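>
> If memory serves, the heavy lifting for an offline split is done with
> ceph-objectstore-tool, roughly like this per OSD (stop the OSD first; the
> data path and pool name below are placeholders):
>
>     ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
>         --op apply-layout-settings --pool rbd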
>
> On Sun, Apr 1, 2018 at 10:42 AM shadow_lin <shadow_...@163.com> wrote:
>
>> Thanks.
>> Is there any workaround for 10.2.10 to avoid all OSDs starting to split
>> at the same time?
>>
>> 2018-04-01
>> ------------------------------
>> shadowlin
>>
>> ------------------------------
>>
>> *From:* Pavan Rallabhandi <prallabha...@walmartlabs.com>
>> *Sent:* 2018-04-01 22:39
>> *Subject:* Re: [ceph-users] Does jewel 10.2.10 support
>> filestore_split_rand_factor?
>> *To:* "shadow_lin" <shadow_...@163.com>, "ceph-users" <
>> ceph-users@lists.ceph.com>
>> *Cc:*
>>
>>
>>
>> No, it is supported in the next point release of Jewel:
>> http://tracker.ceph.com/issues/22658
>>
>>
>>
>> *From: *ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
>> shadow_lin <shadow_...@163.com>
>> *Date: *Sunday, April 1, 2018 at 3:53 AM
>> *To: *ceph-users <ceph-users@lists.ceph.com>
>> *Subject: *EXT: [ceph-users] Does jewel 10.2.10 support
>> filestore_split_rand_factor?
>>
>>
>>
>> Hi list,
>>
>> The Jewel documentation page lists the filestore_split_rand_factor config
>> option, but I can't find it with 'ceph daemon osd.x config show'.
>>
>>
>>
>> ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
>>
>> ceph daemon osd.0 config show|grep split
>>     "mon_osd_max_split_count": "32",
>>     "journaler_allow_split_entries": "true",
>>     "mds_bal_split_size": "10000",
>>     "mds_bal_split_rd": "25000",
>>     "mds_bal_split_wr": "10000",
>>     "mds_bal_split_bits": "3",
>>     "filestore_split_multiple": "4",
>>     "filestore_debug_verify_split": "false",
>>
>>
>>
>> 2018-04-01
>> ------------------------------
>>
>> shadow_lin
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
