Hi all,
we use an EC pool with a small cache tier in front of it for our
archive data (4 * 16TB VM disks).
The EC pool has k=3;m=2 because we started with 5 nodes, and we want to
migrate to a new EC pool with k=5;m=2. Therefore we migrate one VM disk
(16TB) from the Ceph cluster to an FC-RAID with the
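For reference, moving to k=5;m=2 means defining a new erasure-code profile and creating a new pool with it (an existing pool's k/m cannot be changed in place). A minimal sketch with the standard Ceph CLI; the profile name, pool name, and pg_num here are examples, not taken from the thread:

```shell
# Define a new EC profile with k=5 data chunks and m=2 coding chunks
# (failure domain "host" needs at least k+m = 7 hosts to place all chunks).
ceph osd erasure-code-profile set ec-5-2 k=5 m=2 crush-failure-domain=host

# Create the new pool using that profile (512 is an example pg_num).
ceph osd pool create ecpool-5-2 512 512 erasure ec-5-2
```

Data then has to be copied from the old pool to the new one, e.g. per RBD image.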
On 27/02/2015, at 17.04, Udo Lembke ulem...@polarzone.de wrote:
ceph health detail
HEALTH_WARN pool ssd-archiv has too few pgs
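The "too few pgs" warning means the pool's pg_num is low relative to its data or OSD count. A small sketch of the common rule of thumb (roughly 100 PGs per OSD divided by the pool's size, rounded up to a power of two); the OSD count below is a made-up example, not from the thread:

```python
def recommended_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * target) / pool size,
    rounded up to the next power of two (Ceph's usual convention)."""
    raw = num_osds * target_pgs_per_osd / pool_size
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# Hypothetical 20-OSD cluster, k=5+m=2 EC pool (size 7):
print(recommended_pg_num(20, 7))  # 20*100/7 ~ 286 -> 512
```

Raising pg_num is done with `ceph osd pool set <pool> pg_num <n>` (and pgp_num likewise); note it can only be increased on older releases, not decreased.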
On a slightly different note, I had an issue with my Ceph cluster underneath a
PVE cluster yesterday.
Had two Ceph pools for RBD virt disks: vm_images (boot hdd images) +