Hello.

I am trying for the first time to set up an erasure coded pool on PVE 8.0.


The cluster is 6 hosts with 24 OSDs each (12 SSD + 12 SAS), 4x10Gbit networking.

One pool is replica 3 on ssd-class drives (2048 PGs).
Another pool is replica 3 on hdd-class drives (0.45 target ratio, 1024 PGs).
The last one is an EC pool (4+2) sharing the same hdd-class drives (0.5 target
ratio, 256 PGs).
Of course, pveceph also creates the EC metadata pool: replica 3, hdd-class
drives, 0.05 target ratio, 32 PGs.
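For reference, a sketch of how a layout like this can be created from the CLI. The pool and CRUSH rule names below are my own examples, and I'm assuming the device-class rules don't exist yet (pveceph names the EC pools `<name>-data` and `<name>-metadata` itself):

```shell
# Device-class-pinned replicated CRUSH rules (rule names are examples)
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Replicated pools, replica 3 is the pveceph default
pveceph pool create pool-ssd --crush_rule replicated_ssd --pg_num 2048
pveceph pool create pool-hdd --crush_rule replicated_hdd --pg_num 1024 \
    --target_size_ratio 0.45

# EC 4+2 data pool on hdd; pveceph also creates the replicated
# metadata pool for it automatically
pveceph pool create pool-ec --erasure-coding k=4,m=2,device-class=hdd \
    --pg_num 256
ceph osd pool set pool-ec-data target_size_ratio 0.5
```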

If I move a disk from another storage to the EC pool, I lose the sparse
setting, so I need to run "rbd sparsify" on the RBD image.
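This is the command I run; the pool/image names are examples (on PVE the image sits in the replicated metadata pool, with its data objects in the EC data pool via the storage's data-pool option):

```shell
# Re-sparsify an image after it was moved onto EC-backed storage
rbd sparsify pool-ec-metadata/vm-100-disk-0

# Optionally use a larger zero-detection unit, which sends fewer,
# bigger discard ops to the OSDs (default sparse-size is 4096)
rbd sparsify --sparse-size 65536 pool-ec-metadata/vm-100-disk-0
```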
But when I do that on an EC pool, I see ceph-osd processes using 100% CPU,
and the ceph-mgr log shows PGs in "active+laggy" state.
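This is what I watch while sparsify is running, in case it helps anyone reproduce it:

```shell
# Cluster overview and any slow-op / laggy warnings
ceph -s
ceph health detail

# Per-PG state summary, filtered for laggy PGs
ceph pg dump pgs_brief | grep laggy
```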

I have no problems when actually using the VM with its disk image on the
erasure coded pool (but I think that is only because normal use is less IO
intensive than a "sparsify").

Has anyone had the same problem, or does anyone use an EC pool for virtual
machines? I would like to have some "slow volumes" for archiving purposes
only, but I am afraid the whole cluster could be impacted.

Regards, Fabrizio 



_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
