On Mon, Jan 2, 2017 at 4:25 AM, Jens Dueholm Christensen wrote:
> On Friday, December 30, 2016 07:05 PM Brian Andrus wrote:
>
> > We have a set-it-and-forget-it cronjob set up once an hour to keep things
> > a bit more balanced.
> >
> > 1 * * * * /bin/bash /home/briana/reweight_osd.sh 2>&1 | /usr/bin/logger -t ceph_reweight
We have similar problems in our clusters and sometimes we do a manual
reweight. We also noticed that smaller PGs (more of them in a pool) help
with balancing.
Arvydas
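A minimal sketch of the PG-count change Arvydas describes, assuming a pool
named "rbd" and a target of 512 PGs (both are made-up example values, and
raising pg_num will trigger data movement):

# check the current PG count, then raise pg_num and pgp_num together
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512

More PGs in a pool means each PG holds less data, so usage evens out more
smoothly across OSDs.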
On Dec 30, 2016 21:01, "Shinobu Kinjo" wrote:
The best practice for reweighting OSDs is to run
test-reweight-by-utilization, which is a dry run of the reweighting, before
running reweight-by-utilization for real.
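As an illustration of that workflow (the 120 is the overload threshold as a
percentage of average utilization and is just an example value):

ceph osd test-reweight-by-utilization 120
ceph osd reweight-by-utilization 120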
On Sat, Dec 31, 2016 at 3:05 AM, Brian Andrus wrote:
We have a set-it-and-forget-it cronjob set up once an hour to keep things a
bit more balanced.
1 * * * * /bin/bash /home/briana/reweight_osd.sh 2>&1 | /usr/bin/logger -t ceph_reweight
The script checks and makes sure cluster health is OK and no other
rebalancing is going on. It will also check
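The actual reweight_osd.sh is not shown in the thread, but a minimal sketch
of a wrapper along those lines might look like this (the health/backfill
checks and the 120 threshold are assumptions, not Brian's script):

#!/bin/bash
# Hypothetical sketch, not the script from the thread.

# Only act when the cluster reports HEALTH_OK.
if ! ceph health | grep -q HEALTH_OK; then
    echo "cluster not HEALTH_OK, skipping reweight"
    exit 0
fi

# Skip if recovery or backfill is already moving data around.
if ceph -s | grep -Eq 'backfill|recover'; then
    echo "rebalancing already in progress, skipping reweight"
    exit 0
fi

# Dry run first, then nudge OSDs that sit more than 20% above average utilization.
ceph osd test-reweight-by-utilization 120
ceph osd reweight-by-utilization 120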
On Fri, Dec 30, 2016 at 7:27 PM, Kees Meijs wrote:
> Thanks, I'll try a manual reweight at first.
Great.
CRUSH would probably be able to be more clever in the future anyway.
Thanks, I'll try a manual reweight at first.
Have a happy new year's eve (yes, I know it's a day early)!
Regards,
Kees
On 30-12-16 11:17, Wido den Hollander wrote:
> For this reason you can do an OSD reweight by running the 'ceph osd
> reweight-by-utilization' command or do it manually with
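For reference, the manual route Kees is considering would look something
like this (osd.12 and 0.85 are made-up example values; the override weight
is a factor between 0 and 1):

# temporarily push less data onto one over-full OSD
ceph osd reweight 12 0.85

Unlike 'ceph osd crush reweight', this does not change the CRUSH weight
itself; it only overrides how much data that OSD takes relative to it.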
Hi Ashley,
We experience (using Hammer) a similar issue. Not that I have a perfect
solution to share, but I felt like mentioning a "me too". ;-)
On a side note: we configured the correct weight per drive as well.
Regards,
Kees
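On the per-drive weights Kees mentions: the usual convention is a CRUSH
weight equal to the drive's size in TiB, so a hedged example for the drive
sizes in this thread would be (osd.10 and osd.40 are made-up IDs):

# roughly 7.3 for an 8 TB drive, roughly 3.6 for a 4 TB drive
ceph osd crush reweight osd.10 7.28
ceph osd crush reweight osd.40 3.64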
On 29-12-16 11:54, Ashley Merrick wrote:
Hello,
I currently have 5 servers within my Ceph cluster:
2 x (10 * 8TB disks)
3 x (10 * 4TB disks)
I'm currently seeing a large difference in OSD use across the two separate
server types, as well as within each server itself.
For example, on one 4TB server I have an OSD at 64% and one at 84%,
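For anyone hitting the same imbalance, per-OSD numbers like the ones Ashley
quotes can be compared with a single command:

ceph osd df tree

It lists each OSD with its CRUSH weight, reweight override, and percentage
used, which makes the outliers easy to spot before and after any reweight.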