Hi,

On 06/22/2016 12:10 PM, min fang wrote:
Hi, I created a new ceph cluster and created a pool, but I see "stuck unclean since forever" errors (as shown below). Can you help point out the possible reasons for this? Thanks.

ceph -s
    cluster 602176c1-4937-45fc-a246-cc16f1066f65
     health HEALTH_WARN
            8 pgs degraded
            8 pgs stuck unclean
            8 pgs undersized
            too few PGs per OSD (2 < min 30)
     monmap e1: 1 mons at {ceph-01=172.0.0.11:6789/0}
            election epoch 14, quorum 0 ceph-01
     osdmap e89: 3 osds: 3 up, 3 in
            flags
      pgmap v310: 8 pgs, 1 pools, 0 bytes data, 0 objects
            60112 MB used, 5527 GB / 5586 GB avail
                   8 active+undersized+degraded

*snipsnap*

With three OSDs on a single host you need to change the CRUSH ruleset for the pool, since the default rule tries to distribute the replicas across 3 different _hosts_. With only one host available, the PGs can never be placed fully and stay undersized/degraded.
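In case it helps, a rough sketch of one way to do this by editing the CRUSH map directly (file names here are arbitrary; the rule name in your decompiled map may differ):

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# In crushmap.txt, change the failure domain of the pool's rule
# from "host" to "osd", i.e.:
#   step chooseleaf firstn 0 type host
# becomes:
#   step chooseleaf firstn 0 type osd

# Recompile and inject the modified map
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```

For a fresh single-host test cluster you can alternatively set "osd crush chooseleaf type = 0" in ceph.conf before deploying, so the default rule distributes across OSDs instead of hosts from the start.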

Regards,
Burkhard
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com