On Fri, Dec 30, 2016 at 7:17 PM, Wido den Hollander <[email protected]> wrote:
>
>> Op 30 december 2016 om 11:06 schreef Kees Meijs <[email protected]>:
>>
>>
>> Hi Ashley,
>>
>> We are experiencing a similar issue (using Hammer). Not that I have a
>> perfect solution to share, but I felt like mentioning a "me too". ;-)
>>
>> On a side note: we configured the correct weight per drive as well.
>>
>
> Ceph will never balance 100% perfectly, for a few reasons:
>
> - CRUSH is not perfect nor will it ever be
To be fair, this is not unique to CRUSH: no distributed storage system
achieves a perfectly even data distribution.
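For reference, you can check how far off the distribution actually is with
'ceph osd df' (I believe this was added in Hammer, so it should be available
here):

  ceph osd df

The %USE and VAR columns show each OSD's utilization and its deviation from
the cluster average.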
> - Object sizes vary
> - The number of Placement Groups matters
>
> For this reason you can do an OSD reweight by running the 'ceph osd
> reweight-by-utilization' command, or do it manually with 'ceph osd
> reweight X 0-1'.
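For example (the threshold, OSD id and weight below are only illustrative
values, not a recommendation for this cluster):

  ceph osd reweight-by-utilization 120
  ceph osd reweight 27 0.85

Note that 'ceph osd reweight' sets an override weight between 0 and 1 that is
applied on top of the CRUSH weight; the CRUSH weight itself stays unchanged.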
>
> Wido
>
>> Regards,
>> Kees
>>
>> On 29-12-16 11:54, Ashley Merrick wrote:
>> >
>> > Hello,
>> >
>> >
>> >
>> > I currently have 5 servers within my Ceph cluster:
>> >
>> >
>> >
>> > 2 x (10 * 8TB Disks)
>> >
>> > 3 x (10 * 4TB Disks)
>> >
>> >
>> >
>> > Currently I am seeing a large difference in OSD usage across the two
>> > server types, as well as within each server.
>> >
>> >
>> >
>> > For example, on one 4TB server I have an OSD at 64% and one at 84%,
>> > while on the 8TB servers the OSDs range from 49% to 64%, so the most
>> > heavily used OSDs are on the 4TB servers.
>> >
>> >
>> >
>> > Each drive has a weight set correctly for its size and each server has
>> > the correct weight set; below is my crush map. Apart from running the
>> > reweight command, is there anything I am doing wrong or should change
>> > for a better spread of data? I am not looking for a near-perfect
>> > spread, but the 8TB drives sitting at 64% max while the 4TB drives sit
>> > around 80% causes a big imbalance.
>> >
>> >
>> >
>> > # begin crush map
>> > tunable choose_local_tries 0
>> > tunable choose_local_fallback_tries 0
>> > tunable choose_total_tries 50
>> > tunable chooseleaf_descend_once 1
>> > tunable chooseleaf_vary_r 1
>> > tunable straw_calc_version 1
>> > tunable allowed_bucket_algs 54
>> >
>> > # buckets
>> > host sn1 {
>> >     id -2    # do not change unnecessarily
>> >     # weight 72.800
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item osd.0 weight 7.280
>> >     item osd.1 weight 7.280
>> >     item osd.3 weight 7.280
>> >     item osd.4 weight 7.280
>> >     item osd.2 weight 7.280
>> >     item osd.5 weight 7.280
>> >     item osd.6 weight 7.280
>> >     item osd.7 weight 7.280
>> >     item osd.8 weight 7.280
>> >     item osd.9 weight 7.280
>> > }
>> > host sn3 {
>> >     id -6    # do not change unnecessarily
>> >     # weight 72.800
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item osd.10 weight 7.280
>> >     item osd.11 weight 7.280
>> >     item osd.12 weight 7.280
>> >     item osd.13 weight 7.280
>> >     item osd.14 weight 7.280
>> >     item osd.15 weight 7.280
>> >     item osd.16 weight 7.280
>> >     item osd.17 weight 7.280
>> >     item osd.18 weight 7.280
>> >     item osd.19 weight 7.280
>> > }
>> > host sn4 {
>> >     id -7    # do not change unnecessarily
>> >     # weight 36.060
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item osd.20 weight 3.640
>> >     item osd.21 weight 3.640
>> >     item osd.22 weight 3.640
>> >     item osd.23 weight 3.640
>> >     item osd.24 weight 3.640
>> >     item osd.25 weight 3.640
>> >     item osd.26 weight 3.640
>> >     item osd.27 weight 3.640
>> >     item osd.28 weight 3.640
>> >     item osd.29 weight 3.300
>> > }
>> > host sn5 {
>> >     id -8    # do not change unnecessarily
>> >     # weight 36.060
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item osd.30 weight 3.640
>> >     item osd.31 weight 3.640
>> >     item osd.32 weight 3.640
>> >     item osd.33 weight 3.640
>> >     item osd.34 weight 3.640
>> >     item osd.35 weight 3.640
>> >     item osd.36 weight 3.640
>> >     item osd.37 weight 3.640
>> >     item osd.38 weight 3.640
>> >     item osd.39 weight 3.640
>> > }
>> > host sn6 {
>> >     id -9    # do not change unnecessarily
>> >     # weight 36.060
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item osd.40 weight 3.640
>> >     item osd.41 weight 3.640
>> >     item osd.42 weight 3.640
>> >     item osd.43 weight 3.640
>> >     item osd.44 weight 3.640
>> >     item osd.45 weight 3.640
>> >     item osd.46 weight 3.640
>> >     item osd.47 weight 3.640
>> >     item osd.48 weight 3.640
>> >     item osd.49 weight 3.640
>> > }
>> > root default {
>> >     id -1    # do not change unnecessarily
>> >     # weight 253.780
>> >     alg straw2
>> >     hash 0    # rjenkins1
>> >     item sn1 weight 72.800
>> >     item sn3 weight 72.800
>> >     item sn4 weight 36.060
>> >     item sn5 weight 36.060
>> >     item sn6 weight 36.060
>> > }
>> >
>> > # rules
>> > rule replicated_ruleset {
>> >     ruleset 0
>> >     type replicated
>> >     min_size 1
>> >     max_size 10
>> >     step take default
>> >     step chooseleaf firstn 0 type host
>> >     step emit
>> > }
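As a side note (the file names and replica count below are just an example),
you can simulate how a compiled version of this map distributes placements
without touching the cluster:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --show-utilization --num-rep 3

The --show-utilization output gives a rough idea of how evenly placements map
onto the OSDs for a given replica count.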
>> >
>> >
>> >
>> > Thanks,
>> >
>> > Ashley
>> >
>> >
>> >
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com