Hi,

My cluster is up and running. I noticed in ceph status that 1 pg was
undersized. I read up on the number of PGs and the recommended value
(OSDs * 100 / pool size => 6 * 100 / 3 = 200). Since pg_num should be raised
carefully, I raised it to 2 and ceph status was fine again, so I left it at that.
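
For reference, I raised it with the usual pool set command, roughly like this
(the pool name below is just a placeholder, I don't recall exactly which pool
it was):

root@hvs001:/# ceph osd pool set <pool-name> pg_num 2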

Then I created a new pool: libvirt-pool.
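
As far as I remember I created it with the defaults, something like:

root@hvs001:/# ceph osd pool create libvirt-pool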

Now ceph status is again in warning regarding pgs. I raised pg_num_max of
libvirt-pool to 265 and pg_num to 128.
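
In case the exact commands matter, I set those roughly like this:

root@hvs001:/# ceph osd pool set libvirt-pool pg_num_max 265
root@hvs001:/# ceph osd pool set libvirt-pool pg_num 128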

Ceph status stays in warning:
root@hvs001:/# ceph status
...
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive
            Degraded data redundancy: 68 pgs undersized
...
   pgs:     94.118% pgs not active
             4/6 objects misplaced (66.667%)   <- this has been there since the cluster was created
             64 undersized+peered
             4  active+undersized+remapped

I also get a 'Global Recovery Event (0s)' under progress, which only goes away
with 'ceph progress clear'.

My autoscale-status is the following:
root@hvs001:/# ceph osd pool autoscale-status
POOL            SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr          576.5k                3.0         1743G  0.0000                                  1.0       1              on         False
libvirt-pool       0                3.0         1743G  0.0000                                  1.0      64              on         False

(It's a 3-node cluster with 2 OSDs per node.)

The documentation doesn't help me much here. What should I do?

Greetings,

Dominique.

