Hello @all,
given the following config:
* ceph.conf:
...
mon osd down out subtree limit = host
osd_pool_default_size = 3
osd_pool_default_min_size = 2
...
* each OSD has its journal on a 30GB partition on a PCIe-Flash-Card
* 3 hosts
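A quick way to double-check that these values are active on a running
monitor is the admin socket (the daemon name mon.a is just an example):
ceph daemon mon.a config show | grep -E 'mon_osd_down_out_subtree_limit|osd_pool_default_(size|min_size)'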
What
Hi,
my Ceph cluster has two pools and I reinstalled the OSDs of one complete
host. Ceph is now recovering from this.
I was expecting that using
ceph osd pool set pool_a recovery_priority 5
ceph osd pool set pool_b recovery_priority 10
would lead to pool_a being recovered first (btw. I
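For reference, the values that were actually applied can be read back
with the standard pool get command (pool names as above):
ceph osd pool get pool_a recovery_priority
ceph osd pool get pool_b recovery_priority
As far as I understand, a higher recovery_priority value means higher
priority, so with the values above pool_b would be preferred.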
Hi,
let's assume we have size=3, min_size=2, have lost some OSDs, and now
have some placement groups with only one copy left.
Is there a setting to tell Ceph to recover those PGs first, in order to
reach min_size and get the cluster back online faster?
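For illustration, assuming a Luminous or newer cluster, the undersized
PGs can at least be listed and manually bumped with the standard CLI,
e.g.:
ceph pg dump_stuck undersized
ceph pg force-recovery <pgid> [<pgid> ...]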
Regards,
Dennis
Hi,
we use PetaSAN for our VMware cluster. It provides a web interface for
management and does clustered active-active iSCSI. For us, the easy
management was the reason to choose it, so we do not need to think about
how to configure iSCSI...
Regards,
Dennis
On 28.05.2018 at 21:42, ... wrote:
Hi,
at the moment we use Ceph with one big RBD pool with size=4 and a CRUSH
rule that ensures 2 copies are placed in each of our two rooms (a sketch
of such a rule is below). This works great for VMs. But there is some
big data which should be stored online, just a bit more cheaply. We are
thinking about using CephFS for it with erasure
coding and
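For reference, a replicated CRUSH rule along the lines described above
(two copies in each of two rooms, four copies in total) could look
roughly like this; the rule name is illustrative and assumes room
buckets exist in the CRUSH map:
rule rbd_two_rooms {
        id 1
        type replicated
        min_size 4
        max_size 4
        step take default
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
}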