Re: [ceph-users] osds with different disk sizes may killing performance (宗友 姚)

2018-04-19 Thread Van Leeuwen, Robert
>> There is no way to fill up all disks evenly with the same number of bytes and then stop filling the small disks when they're full and only continue filling the larger disks. > This is possible by adjusting crush weights. Initially the smaller drives are weighted more highly than
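
A minimal sketch of that crush-weight adjustment, assuming Luminous-era tooling; the OSD IDs and weights below are illustrative, not taken from the thread:

    # Hypothetical: osd.42 is an 8TB drive (default CRUSH weight ~7.28),
    # osd.12 a 4TB drive (~3.64). Equalising the weights makes CRUSH
    # place roughly the same number of bytes on each drive.
    ceph osd crush reweight osd.42 3.64
    # Once the 4TB drives approach full, raise the 8TB weight again so
    # only the larger drives keep filling:
    ceph osd crush reweight osd.42 7.28
    # Watch per-OSD fill levels while data migrates:
    ceph osd df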

Re: [ceph-users] osds with different disk sizes may killing performance (宗友 姚)

2018-04-13 Thread David Turner
You'll find it said time and time again on the ML... avoid disks of different sizes in the same cluster. It's a headache that sucks. It's not impossible, it's not even overly hard to pull off... but it's very easy to cause a mess and a lot of headaches. It will also make it harder to diagnose

Re: [ceph-users] osds with different disk sizes may killing performance (宗友 姚)

2018-04-12 Thread Ronny Aasen
On 13 April 2018 05:32, Chad William Seys wrote: Hello, I think your observations suggest that, to a first approximation, filling drives with bytes to the same absolute level is better for performance than filling drives to the same percentage full. Assuming random distribution of PGs,

Re: [ceph-users] osds with different disk sizes may killing performance (宗友 姚)

2018-04-12 Thread Chad William Seys
Hello, I think your observations suggest that, to a first approximation, filling drives with bytes to the same absolute level is better for performance than filling drives to the same percentage full. Assuming random distribution of PGs, this would cause the smallest drives to be as active
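
A quick way to see which of the two regimes a cluster is in, assuming Luminous-era column names (SIZE, USE, AVAIL, %USE, VAR, PGS):

    # Compare absolute and relative fill per OSD:
    ceph osd df
    # Roughly equal USE across 4TB and 8TB OSDs  -> equal-bytes regime
    #   (uniform IO, but capacity stranded on the big drives)
    # Roughly equal %USE across 4TB and 8TB OSDs -> default regime
    #   (big drives store and serve proportionally more)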

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread Steve Taylor
Sent: Thursday 12 April 2018 4:36 To: ceph-users@lists.ceph.com Subject: *SPAM* [ceph-users] osds with different disk sizes may killing performance Importance: High Hi, For anybody who may be interested, here I share a process of locating the rea

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread ulembke
Hi, you can also set the primary_affinity to 0.5 on the 8TB disks to reduce the read load on them (in this case you don't waste 50% of space). Udo On 2018-04-12 04:36, 宗友 姚 wrote: Hi, For anybody who may be interested, here I share a process of locating the reason for ceph cluster
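
A sketch of the primary_affinity adjustment Udo describes; the OSD ID is illustrative:

    # Halve the chance that an 8TB OSD is chosen as PG primary, so it
    # serves fewer reads while still holding its full share of data:
    ceph osd primary-affinity osd.42 0.5
    # Pre-Luminous releases may additionally need this on the monitors:
    #   mon osd allow primary affinity = true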

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread Marc Roos
From: 宗友 姚 [mailto:yaozong...@outlook.com] Sent: Thursday 12 April 2018 4:36 To: ceph-users@lists.ceph.com Subject: *SPAM* [ceph-users] osds with different disk sizes may killing performance Importance: High Hi, For anybody who may be interested, here I share a process of locating the reason for ceph

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread Konstantin Shalygin
On 04/12/2018 11:21 AM, 宗友 姚 wrote: Currently, this can only be done by hand. Maybe we need some scripts to handle this automatically. Mixed hosts, i.e. half old disks + half new disks per host, are better than "old hosts" and "new hosts" in your case. k
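
To check whether hosts are mixed or segregated by drive size, the CRUSH tree lists each OSD's weight (which approximates size in TiB) under its host:

    # A host showing both ~3.6- and ~7.3-weighted OSDs is a mixed host;
    # hosts showing only one weight are "old" or "new" hosts:
    ceph osd tree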

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread Wido den Hollander
On 04/12/2018 04:36 AM, 宗友 姚 wrote: > Hi, > > For anybody who may be interested, here I share a process of locating the > reason for ceph cluster performance slow down in our environment. > > Internally, we have a cluster with capacity 1.1PB, used 800TB, and raw user > data is about 500TB.

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-11 Thread 宗友 姚
Sent: Thursday, April 12, 2018 12:00 To: ceph-users@lists.ceph.com Cc: 宗友 姚 Subject: Re: [ceph-users] osds with different disk sizes may killing performance On 04/12/2018 10:58 AM, 宗友 姚 wrote: > Yes, according to crush algorithms, large drives are given high weight, this > is expected. By default,

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-11 Thread Konstantin Shalygin
On 04/12/2018 10:58 AM, 宗友 姚 wrote: Yes, according to crush algorithms, large drives are given high weight; this is expected. By default, crush gives no consideration to each drive's performance, which may leave the performance distribution unbalanced. And the highest io util osd may
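
One way to confirm that the highest-utilisation OSDs sit on the larger drives; the OSD ID is illustrative:

    # On each OSD host, watch per-device utilisation; the %util column
    # flags the busiest disks:
    iostat -x 5
    # Map a busy OSD back to its backing device and drive details:
    ceph osd metadata 42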

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-11 Thread 宗友 姚
From: Konstantin Shalygin <k0...@k0ste.ru> Sent: Thursday, April 12, 2018 11:29 To: ceph-users@lists.ceph.com Cc: 宗友 姚 Subject: Re: [ceph-users] osds with different disk sizes may killing performance > After digging into our internal system stats, we fin

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-11 Thread Konstantin Shalygin
After digging into our internal system stats, we find the newly added disks' io util is about two times that of the old. This is obvious and expected. Your 8TB drives are weighted double against the 4TB drives and do *double* crush work in comparison. k
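
The arithmetic behind that, as a worked sketch with round numbers (CRUSH weight defaults to capacity in TiB, and the expected PG count per OSD is proportional to its weight):

    # Default weights: 4TB drive -> ~3.64, 8TB drive -> ~7.28
    # Share per drive = weight / sum of weights; for one of each:
    #   4TB: 3.64 / (3.64 + 7.28) = 1/3 of the PGs and IO
    #   8TB: 7.28 / (3.64 + 7.28) = 2/3 of the PGs and IO (double the 4TB)
    # The PGS column of "ceph osd df" shows the actual spread.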

[ceph-users] osds with different disk sizes may killing performance

2018-04-11 Thread 宗友 姚
Hi, For anybody who may be interested, here I share a process of locating the reason for ceph cluster performance slow down in our environment. Internally, we have a cluster with capacity 1.1PB, used 800TB, and raw user data is about 500TB. Each day, 3TB of data is uploaded and the 3TB of oldest data
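
For anyone wanting to reproduce this kind of investigation, a hedged starting point using standard commands (not necessarily the exact ones used in the thread):

    # Overall health and any slow or blocked requests:
    ceph -s
    # Per-OSD commit/apply latency; a few outliers usually mark the
    # overloaded disks:
    ceph osd perf
    # Per-OSD size, weight, utilisation and PG count, to correlate slow
    # OSDs with drive size:
    ceph osd df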