On 16-12-22 13:20, Stéphane Klein wrote:


2016-12-22 12:18 GMT+01:00 Henrik Korkuc <[email protected]>:

    On 16-12-22 13:12, Stéphane Klein wrote:
    HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized;
    recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min 30);
    1/3 in osds are down;

    It says 1/3 OSDs are down. By default, Ceph pools are set up with
    size 3. If your setup is the same, it will not be able to return to
    a normal status without decreasing the pool size or adding OSDs.
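
    For example, the replication of an existing pool can be lowered per
    pool like this (a sketch; "rbd" is only a placeholder for your pool
    names):

        ceph osd pool set rbd size 2
        ceph osd pool set rbd min_size 1

    With size 2 the PGs should be able to go active+clean on the two
    remaining OSDs.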


I have this config:

ceph_conf_overrides:
   global:
      osd_pool_default_size: 2
      osd_pool_default_min_size: 1

see: https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/hosts/group_vars/all.yml#L11
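
For what it's worth, those overrides should end up in the generated ceph.conf roughly like this (exact rendering depends on the ceph-ansible version):

  [global]
  osd_pool_default_size = 2
  osd_pool_default_min_size = 1

Note that these defaults only apply to pools created after the setting is in place; pools that already exist keep the size they were created with.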

Can you please provide the output of "ceph -s", "ceph osd tree", and "ceph osd dump | grep size"?
