[UPDATE]
I notice that under [node] -> Ceph -> Pool, the values in the Used %
column are decreasing over time! Perhaps I just need to wait for the
active+remapped+backfill_wait and active+remapped+backfilling operations
to finish and then check again...
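A minimal way to keep an eye on the backfill from the CLI (a sketch using standard Ceph commands on the PVE node, not taken from the original post; run them as root or with an admin keyring):

```shell
# Watch the overall cluster state every 10 s; backfill is finished when
# all PGs report active+clean and the misplaced count drops to 0.
watch -n 10 'ceph -s'

# A more compact view of PG states only:
ceph pg stat

# Optionally speed up recovery by allowing more concurrent backfills per
# OSD (raise with care: this trades client I/O latency for recovery speed).
ceph config set osd osd_max_backfills 2
```

Once the misplaced percentage reaches 0 and all 768 PGs are active+clean, the pool's Used % figures in the GUI should stabilize as well.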
---
Gilberto Nunes Ferreira
On Sat, Mar 28, 2020 at 11:04 AM, Gilberto Nunes <
gilberto.nune...@gmail.com> wrote:
> Help with Ceph in PVE 6
>
> Hi
>
> I have a Ceph cluster created with 3 servers:
> ServerA has 3 SAS 512 GB HDDs and 1 SAS 1.3 TB HDD
> ServerB has 3 SAS 512 GB HDDs and 1 SAS 1.3 TB HDD
> ServerS has 3 SAS 512 GB HDDs and 1 SAS 1.3 TB HDD
>
> I have one pool named VMS with size/min_size 3/2. pg_num was initially
> created with 256, but I increased it to 512 and, an hour ago, to 768;
> it seems to have had no effect...
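For reference, a pg_num increase on a pool is presumably done along these lines (standard Ceph commands, assumed rather than quoted from the post). On Nautilus (the Ceph release shipped with PVE 6), pgp_num follows pg_num automatically, and it is the pgp_num change that reshuffles data, so a large misplaced percentage immediately after the increase is expected rather than a sign that nothing happened:

```shell
# Raise the placement-group count for the pool; Nautilus adjusts pgp_num
# to match in the background, which triggers the data movement.
ceph osd pool set VMS pg_num 768

# Confirm the current values:
ceph osd pool get VMS pg_num
ceph osd pool get VMS pgp_num
```

The effect shows up as remapped PGs and backfill traffic, not as an instant change in the pool's usage figures.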
>
> Ceph health apparently is OK, but ceph -s shows this:
>
> ceph -s
>   cluster:
>     id:     93c55c6b-ce64-4e1a-92bc-0bc529d695f2
>     health: HEALTH_OK
>
>   services:
>     mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 15h)
>     mgr: pve3(active, since 15h), standbys: pve4, pve5, pve7, pve6
>     osd: 12 osds: 12 up (since 10m), 12 in (since 10m); 497 remapped pgs
>
>   data:
>     pools:   1 pools, 768 pgs
>     objects: 279.34k objects, 1.1 TiB
>     usage:   3.0 TiB used, 6.2 TiB / 9.1 TiB avail
>     pgs:     375654/838011 objects misplaced (44.827%)
>              494 active+remapped+backfill_wait
>              271 active+clean
>              3   active+remapped+backfilling
>
>   io:
>     client:   140 KiB/s rd, 397 KiB/s wr, 12 op/s rd, 64 op/s wr
>     recovery: 52 MiB/s, 14 objects/s
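The numbers in that status output are internally consistent, which suggests an ordinary rebalance rather than a fault; a quick sanity check (plain arithmetic, not a Ceph API call):

```python
# Ceph counts misplaced *object copies*: 279,337 RADOS objects (shown
# rounded as 279.34k) replicated 3x in the size-3 pool gives the 838,011
# denominator in the status line.
objects = 838_011 // 3       # 279,337 objects
replicas = 3
total_copies = objects * replicas   # 838,011 copies
misplaced = 375_654
pct = 100 * misplaced / total_copies
print(f"{pct:.3f}%")         # -> 44.827%
```

So the 44.827% figure is simply the fraction of object replicas still waiting to move to their new PG locations after the pg_num increase; at the reported 52 MiB/s recovery rate this drains over time on its own.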
>
>
> Is there any action I can take to fix this?
>
> Thanks
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user