Hi again
In my last message I thought I had figured out what was happening to my
6-server Ceph cluster, but I hadn't at all!
The cluster still had slow performance, until this morning.
I reweighted the OSDs on the low-speed disks with this command:
ceph osd crush reweight osd.ID weight
Now everything is OK!
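For reference, a minimal sketch of that change (osd.3 and the weight 1.5
are made-up values here; check your own IDs and current weights first):

  # list OSDs with their current CRUSH weights, grouped by host
  ceph osd tree
  # lower the CRUSH weight of the slow OSD so it receives less data
  ceph osd crush reweight osd.3 1.5

Keep in mind that changing a CRUSH weight moves data, so expect some
rebalancing traffic afterwards.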
Thanks to all the buddies that replied to my messages.
Indeed I used
ceph osd primary-affinity
and we saw some performance improvement (see the sketch after the server
list below).
What helped here is that we have 6 Proxmox Ceph servers:
ceph01 - HDD with 5,900 rpm
ceph02 - HDD with 7,200 rpm
ceph03 - HDD with 7,200 rpm
ceph04 - HDD with 7,200 rpm
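A minimal sketch of the primary-affinity change, assuming one of the
slow 5,900 rpm disks is osd.0 (a hypothetical ID):

  # make the slow OSD less likely to be chosen as primary;
  # affinity ranges from 0 (avoid as primary) to 1 (the default)
  ceph osd primary-affinity osd.0 0.25

On older releases you may need to enable this first in the monitor
config (mon_osd_allow_primary_affinity = true); recent releases accept
it out of the box.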
Hope you did change a single disk at a time!
Be warned (if not) that moving an OSD from one server to another triggers
a rebalancing of almost all the data stored on it, in order to follow
the crush map.
For instance, exchanging two OSDs between servers results in a complete
rebalance of the data on both.
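If such a rebalance hurts client I/O, a common mitigation is to throttle
recovery; a sketch, assuming these option names (defaults vary by
release):

  # limit concurrent backfill and recovery operations per OSD
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

This makes the rebalance slower but leaves more disk bandwidth for
clients.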
Right now Ceph is very slow:
343510/2089155 objects misplaced (16.443%)
Status: HEALTH_WARN
Monitors:
pve-ceph01:
pve-ceph02:
pve-ceph03:
pve-ceph04:
pve-ceph05:
pve-ceph06:
OSDs:
          In  Out
  Up      21    0
  Down     0    0
  Total:  21
PGs:
  active+clean:                  157
  active+recovery_wait+remapped:   1
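To follow the remap as it drains, the standard status commands are
enough (nothing here is specific to this cluster):

  # one-shot summary, including misplaced object counts
  ceph -s
  # live-updating view of recovery progress
  ceph -w
  # details on what is driving HEALTH_WARN
  ceph health detail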
So, what do you guys think about this HDD distribution? (A quick way to
inspect it is sketched after the list.)
CEPH-01
1x 3 TB
1x 2 TB
CEPH-02
1x 4 TB
1x 3 TB
CEPH-03
1x 4 TB
1x 3 TB
CEPH-04
1x 4 TB
1x 3 TB
1x 2 TB
CEPH-05
1x 8 TB
1x 2 TB
CEPH-06
1x 3 TB
1x 1 TB
1x 8 TB
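For judging a layout like this, a sketch of how to inspect capacity,
CRUSH weight, and utilization per host and per OSD:

  # show size, weight, use%, and PG count for every OSD, grouped by host
  ceph osd df tree

Hosts whose total weight is far above the others (CEPH-05 and CEPH-06
here, because of the 8 TB disks) will attract proportionally more data.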
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
On 30/08/18 at 14:37, Mark Schouten wrote:
On Thu, 2018-08-30 at 09:30 -0300, Gilberto Nunes wrote:
> Any advice to, at least, mitigate the low performance?
Balance the number of spinning disks and the size per server. This will
probably be the safest.
It's not said that not balancing degrades performance, it's said that
it might.
The environment has this configuration:
CEPH-01
4x 4 TB
CEPH-02
4x 3 TB
CEPH-03
2x 3 TB
1x 2 TB
CEPH-04
4x 2 TB
CEPH-05
2x 8 TB
CEPH-06
2x 3 TB
1x 2 TB
1x 1 TB
Any advice to, at least, mitigate the low performance?
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
On Wed, 2018-08-29 at 14:04 +0200, Eneko Lacunza wrote:
> You should change the weight of the 8TB disks, so that they have the
> same as the other 4TB disks.
>
> That should fix the performance issue, but you'd waste half the space
> on those 8TB disks :)
Wouldn't it be more efficient to do
Hi there Eneko
Sorry, can you show me how I can do that? I mean, change the weight?
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
2018-08-29 9:04 GMT-03:00 Eneko Lacunza:
You should change the weight of the 8TB disks, so that they have the same
as the other 4TB disks.
That should fix the performance issue, but you'd waste half the space on
those 8TB disks :)
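A sketch of what that reweight could look like (osd.10 is a hypothetical
ID for one of the 8 TB OSDs; the default CRUSH weight is the disk size
in TiB, so about 7.28 for 8 TB and 3.64 for 4 TB):

  # pin the 8 TB OSD to the same CRUSH weight as a 4 TB disk
  ceph osd crush reweight osd.10 3.64

After this the 8 TB disk holds no more data than a 4 TB one, which evens
out the load at the cost of the unused capacity mentioned above.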
On 23/08/18 at 00:19, Brian wrote:
It's really not a great idea, because the larger drives will tend to
get more writes, so your performance won't be as good as with all the
same size, where the writes would be distributed more evenly. For
example, an 8TB OSD at twice the CRUSH weight of a 4TB OSD receives
roughly twice the writes, but the spindle delivers about the same IOPS,
so it saturates first and drags latency up for the whole cluster.
On Wed, Aug 22, 2018 at 8:05 PM Gilberto Nunes wrote:
> Hi there
> Is it possible to create a Ceph cluster with 4 servers that have
> different disk sizes?
Yes, you can mix and match drive sizes on ceph.
Caution: heterogeneous environments do provide challenges. You will want
to set your osd weight on the 8TB drives to 2x what the 4TB drives are.
In doing so, however, realize the 8TB drives will be expected to
"perform" 2x as much as the 4TB drives.
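One way to see whether the big drives are in fact becoming the
bottleneck is to watch per-OSD latency while the cluster is under load
(a generic check, not specific to this setup):

  # per-OSD commit/apply latency; consistently high numbers on the
  # 8 TB OSDs would confirm they saturate before the smaller disks
  ceph osd perf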
Hi there
Is it possible to create a Ceph cluster with 4 servers that have
different disk sizes?
Server A - 2x 4TB
Server B, C - 2x 8TB
Server D - 2x 4TB
Is this OK?
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36