>> There is no way to fill up all disks evenly with the same number of
>> bytes and then stop filling the small disks when they're full and
>> only continue filling the larger disks.
> This is possible by adjusting crush weights. Initially the smaller
> drives are weighted more highly than
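The even-byte-filling idea discussed above can be sketched with a toy model (this is not Ceph code; data is assumed to land on each OSD in proportion to its CRUSH weight, and all OSD names, sizes, and weights below are invented for illustration):

```python
# Hypothetical sketch, not Ceph code: data lands on an OSD roughly in
# proportion to its CRUSH weight, so equal weights fill all disks with
# the same number of bytes. All names and numbers are made up.

def expected_fill(total_bytes, weights):
    """Expected bytes stored per OSD, proportional to CRUSH weight."""
    total_weight = sum(weights.values())
    return {osd: total_bytes * w / total_weight for osd, w in weights.items()}

# Default behaviour: weight ~ capacity in TiB, so the 8 TB OSD takes
# roughly twice the data (and twice the IO) of each 4 TB OSD.
default = expected_fill(600e12, {"osd.0": 3.64, "osd.1": 3.64, "osd.2": 7.28})

# Equal weights fill all three evenly by bytes; once the small disks are
# nearly full, you would lower their weight by hand (e.g. with
# `ceph osd crush reweight osd.0 0`) so only the big disk keeps filling.
even = expected_fill(600e12, {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 1.0})
```

The catch, as the rest of the thread points out, is that this reweighting has to be repeated by hand as disks fill up.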
You'll find it said time and time again on the ML... avoid disks of
different sizes in the same cluster. It's a headache that sucks. It's
not impossible, it's not even overly hard to pull off... but it's
very easy to cause a mess and a lot of headaches. It will also make
it harder to diagnose
On 13. april 2018 05:32, Chad William Seys wrote:
Hello,
I think your observations suggest that, to a first approximation,
filling drives with bytes to the same absolute level is better for
performance than filling drives to the same percentage full. Assuming
random distribution of PGs, this would cause the smallest drives to be
as active
Hi,
you can also set the primary_affinity to 0.5 on the 8TB disks to lower
the read load (this way you don't waste 50% of the space).
Udo
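Udo's suggestion can be modelled roughly as follows (a simplified simulation, not Ceph's actual primary-selection algorithm; the OSD names, the two-replica layout, and the 50/50 acting-set ordering are all assumptions):

```python
import random

# Simplified model (an assumption, not Ceph's exact algorithm): walk the
# acting set in order and accept an OSD as primary with probability equal
# to its primary_affinity (set via `ceph osd primary-affinity osd.N 0.5`).

def pick_primary(acting, affinity, rng):
    for osd in acting:
        if rng.random() < affinity.get(osd, 1.0):
            return osd
    return acting[0]  # fallback: the first OSD keeps the PG

rng = random.Random(42)
affinity = {"osd.8tb": 0.5}           # lowered affinity on the big disk
counts = {"osd.8tb": 0, "osd.4tb": 0}
for _ in range(100_000):
    # assume each PG's acting set lists both OSDs in a random order
    acting = ["osd.8tb", "osd.4tb"] if rng.random() < 0.5 else ["osd.4tb", "osd.8tb"]
    counts[pick_primary(acting, affinity, rng)] += 1

# With affinity 0.5 the 8TB OSD ends up primary for roughly 25% of PGs
# instead of 50%, so it serves fewer reads while keeping its full data share.
```

Under this model the big disk's read share drops by about half, which is the effect Udo is after: less read pressure without giving up capacity.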
On 2018-04-12 04:36, 宗友 姚 wrote:
Hi,
For anybody who may be interested, here I share a process of locating
the reason for ceph cluster
From: 宗友 姚 [mailto:yaozong...@outlook.com]
Sent: Thursday, 12 April 2018 4:36
To: ceph-users@lists.ceph.com
Subject: *SPAM* [ceph-users] osds with different disk sizes may
killing performance
Importance: High
Hi,
For anybody who may be interested, here I share a process of locating
the reason for ceph
On 04/12/2018 11:21 AM, 宗友 姚 wrote:
Currently, this can only be done by hand. Maybe we need some scripts to handle
this automatically.
Mixed hosts, i.e. half old disks + half new disks per host, are better than
"old hosts" and "new hosts" in your case.
k
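The mixed-hosts advice can be illustrated with a toy load model (the host layout, disk counts, and weights are invented; per-host IO is assumed proportional to the sum of that host's OSD weights):

```python
# Toy model: IO hitting a host is taken to be proportional to the sum of
# the CRUSH weights of its OSDs. All layouts and numbers are invented.

def host_share(hosts):
    """Fraction of cluster IO each host absorbs."""
    total = sum(sum(ws) for ws in hosts.values())
    return {name: sum(ws) / total for name, ws in hosts.items()}

# "Old hosts" full of 4 TB disks next to "new hosts" full of 8 TB disks:
segregated = host_share({
    "old1": [3.64] * 4, "old2": [3.64] * 4,
    "new1": [7.28] * 4, "new2": [7.28] * 4,
})

# The same 16 disks spread half-and-half across every host:
mixed = host_share({f"host{i}": [3.64] * 2 + [7.28] * 2 for i in range(4)})

# In the segregated layout the new hosts absorb twice the IO of the old
# ones and become the bottleneck; mixed, every host carries an equal share.
```

The point is that segregating by disk size concentrates the extra weight (and so the extra IO) on the new hosts, while mixing spreads it evenly.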
On 04/12/2018 04:36 AM, 宗友 姚 wrote:
> Hi,
>
> For anybody who may be interested, here I share a process of locating the
> reason for ceph cluster performance slow down in our environment.
>
> Internally, we have a cluster with capacity 1.1PB, used 800TB, and raw user
> data is about 500TB.
On 04/12/2018 10:58 AM, 宗友 姚 wrote:
Yes, according to crush algorithms, large drives are given high weight; this is
expected. By default, crush gives no consideration to each drive's performance,
which may leave the performance distribution unbalanced. And the osd with the
highest io util may
From: Konstantin Shalygin <k0...@k0ste.ru>
Sent: Thursday, April 12, 2018 11:29
To: ceph-users@lists.ceph.com
Cc: ? ??
Subject: Re: [ceph-users] osds with different disk sizes may killing performance
> After digging into our internal system stats, we find the newly added
> disks' io util is about two times that of the old.
This is obvious and expected. Your 8TB drives are weighted double against the
4TB ones and do *double* crush work in comparison.
k
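The factor of two falls straight out of the default weights (a back-of-envelope check; the TiB figures below are the usual defaults for these capacities, but treat them as assumptions):

```python
# Default CRUSH weight is roughly the disk's capacity in TiB, so an 8 TB
# OSD holds (and serves) about twice what a 4 TB OSD does.

weight_4tb = 3.64                     # ~4 TB expressed in TiB
weight_8tb = 7.28                     # ~8 TB expressed in TiB
io_ratio = weight_8tb / weight_4tb    # expected io util ratio: 2x
```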
Hi,
For anybody who may be interested, here I share a process of locating the
reason for ceph cluster performance slow down in our environment.
Internally, we have a cluster with a capacity of 1.1PB, 800TB used, and about
500TB of raw user data. Each day, 3TB of data is uploaded and 3TB of the oldest data