Hi.

Because the default number of replicas is 3, CRUSH wants to place the data
on three different hosts.
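
A minimal sketch of how this is often adjusted, assuming both OSDs sit on
the same host and "rbd" is one of the affected pools (check with
"ceph osd tree" and "ceph osd dump" first; the pool and file names below
are only examples):

    # check the current replica count per pool
    ceph osd dump | grep 'replicated size'

    # lower the replica count of an existing pool to match two OSDs
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

    # if both OSDs are on one host, let CRUSH choose leaves by OSD
    # instead of by host. Either set this in ceph.conf before the
    # initial deploy:
    #   [global]
    #   osd_crush_chooseleaf_type = 0
    # or edit the CRUSH map on a running cluster:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # change "step chooseleaf firstn 0 type host" to "type osd", then:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Note that osd_pool_default_size in ceph.conf only applies to pools created
after the setting is in place; pools that already exist keep their old size
until you change it with "ceph osd pool set".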

2014-10-29 10:56 GMT+03:00 Vickie CH <[email protected]>:

> Hi all,
>       I tried to use two OSDs to create a cluster. After the deploy
> finished, I found the health status is "88 active+degraded" and
> "104 active+remapped". Before using 2 OSDs to create the cluster, the
> result was OK. I'm confused why this situation happened. Do I need to set
> the CRUSH map to fix this problem?
>
>
> ----------ceph.conf---------------------------------
> [global]
> fsid = c404ded6-4086-4f0b-b479-89bc018af954
> mon_initial_members = storage0
> mon_host = 192.168.1.10
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> osd_pool_default_size = 2
> osd_pool_default_min_size = 1
> osd_pool_default_pg_num = 128
> osd_journal_size = 2048
> osd_pool_default_pgp_num = 128
> osd_mkfs_type = xfs
> ---------------------------------------------------------
>
> -----------ceph -s-----------------------------------
> cluster c404ded6-4086-4f0b-b479-89bc018af954
>      health HEALTH_WARN 88 pgs degraded; 192 pgs stuck unclean
>      monmap e1: 1 mons at {storage0=192.168.10.10:6789/0}, election epoch
> 2, quorum 0 storage0
>      osdmap e20: 2 osds: 2 up, 2 in
>       pgmap v45: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             79752 kB used, 1858 GB / 1858 GB avail
>                   88 active+degraded
>                  104 active+remapped
> --------------------------------------------------------
>
>
> Best wishes,
> Mika
>
>


-- 
Best regards, Irek Nurgayazovich Fasikhov
Mob.: +79229045757
