Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-23 Thread David Turner
I believe Luminous has an ability like this: you can specify how many
objects you anticipate a pool to have when you create it.  However, if
you're creating pools in Luminous, you're probably using bluestore.  For
Jewel and earlier, pre-splitting PG subfolders doesn't help as much as
you'd think.  As soon as a PG moves to a new OSD due to a lost drive or
added storage, the PG is rebuilt at the new location with the current
filestore subfolder settings, which undoes your pre-splitting.
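For reference, a rough sketch of that Luminous-era invocation; the pool
name, PG counts, and object count below are purely illustrative, and if I
recall correctly filestore only pre-splits on the expected object count
when filestore_merge_threshold is set to a negative value:

# the final argument is expected_num_objects
ceph osd pool create mypool 128 128 replicated replicated_rule 1000000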



Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-08 Thread shadow_lin
Thank you.
I will look into the script.
For fixed-object-size applications (RBD, CephFS), do you think it is a good
idea to pre-split the folders to the point where each folder contains about
1-2k objects when the cluster is full? I think doing this can avoid the
performance impact of folder splitting while clients are writing data into
the cluster.
2018-04-08 

shadow_lin 
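For context on sizing that target: in filestore, a subfolder splits once it
holds more than filestore_split_multiple * abs(filestore_merge_threshold) *
16 objects.  Taking the filestore_split_multiple = 4 shown in the config
dump at the bottom of this thread, and assuming the default
filestore_merge_threshold = 10, that works out to:

4 * 10 * 16 = 640 objects per subfolder before a split

so a pre-split targeting 1-2k objects per folder at full capacity would
already be past the split point unless those two settings are raised to
match.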





Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-06 Thread David Turner
You could randomize your ceph.conf settings for filestore_merge_threshold
and filestore_split_multiple.  It's not pretty, but it would spread things
out.  You could even do this as granularly as you'd like, down to the
individual OSDs, while only having a single ceph.conf file to maintain.
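A minimal sketch of that kind of staggering, with made-up values; the point
is only that per-OSD sections override the [osd] defaults, so each OSD hits
its split threshold at a different object count:

[osd]
filestore_merge_threshold = 40
filestore_split_multiple = 8

# per-OSD overrides so the OSDs don't all split in unison
[osd.0]
filestore_split_multiple = 7

[osd.1]
filestore_split_multiple = 9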

I would probably go the route of manually splitting your subfolders,
though.  I've been using this [1] script for some time to do just that.  I
tried to make it fairly environment agnostic so people would have an easier
time implementing it for their needs.

[1] https://gist.github.com/drakonstein/cb76c7696e65522ab0e699b7ea1ab1c4
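As an offline alternative, my recollection is that later Jewel point
releases also gained an apply-layout-settings op in ceph-objectstore-tool,
which resplits a pool's directories while the OSD is stopped; treat the
invocation below as an assumption to verify against your exact version
(paths and pool name are illustrative):

# stop the OSD first
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op apply-layout-settings --pool rbd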



Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-01 Thread shadow_lin
Thanks.
Is there any workaround in 10.2.10 to avoid all OSDs starting to split at
the same time?

2018-04-01 


shadowlin






Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-01 Thread Pavan Rallabhandi
No, it is supported in the next version of Jewel:
http://tracker.ceph.com/issues/22658
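Once on a release that includes it, turning the randomization on should be
a matter of something like the following; the value is illustrative, and
it's worth confirming the option actually appears in config show first:

ceph tell osd.* injectargs '--filestore_split_rand_factor 20'
ceph daemon osd.0 config show | grep filestore_split_rand_factor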

From: ceph-users on behalf of shadow_lin
Date: Sunday, April 1, 2018 at 3:53 AM
To: ceph-users
Subject: EXT: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

Hi list,
The Jewel documentation page lists the filestore_split_rand_factor config,
but I can't find it with 'ceph daemon osd.x config show'.

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
ceph daemon osd.0 config show|grep split
"mon_osd_max_split_count": "32",
"journaler_allow_split_entries": "true",
"mds_bal_split_size": "1",
"mds_bal_split_rd": "25000",
"mds_bal_split_wr": "1",
"mds_bal_split_bits": "3",
"filestore_split_multiple": "4",
"filestore_debug_verify_split": "false",


2018-04-01

shadow_lin