Hi all,
Thanks to you all.
As Mark's information suggested, this problem is related to the CRUSH map.
After creating 2 OSDs on 2 different hosts, the health check is OK.
I appreciate the information again~
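For anyone who hits the same issue on a single-host test setup, here is a
rough sketch of the CRUSH-map edit Mark's information points at (it assumes
the stock replicated ruleset; decompile your own map to confirm the rule
name). Placing the OSDs on two different hosts, as I did, avoids the need
for it:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt, inside the replicated rule, change
#     step chooseleaf firstn 0 type host
# to
#     step chooseleaf firstn 0 type osd
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new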

Best wishes,
Mika

2014-10-29 17:19 GMT+08:00 Vickie CH <[email protected]>:

> Hi:
> -----------------------------ceph osd
> tree-----------------------------------
> # id    weight  type name       up/down reweight
> -1      1.82    root default
> -2      1.82            host storage1
> 0       0.91                    osd.0   up      1
> 1       0.91                    osd.1   up      1
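>
> Both OSDs sit under the single host bucket storage1, so the default CRUSH
> rule, which separates replicas by host, cannot place a second copy of each
> PG. A rough sketch of re-homing osd.1 under a second host bucket (the name
> storage2 is only an example, and the osd.1 daemon would really have to run
> on that host):
>
> $ ceph osd crush add-bucket storage2 host
> $ ceph osd crush move storage2 root=default
> $ ceph osd crush create-or-move osd.1 0.91 root=default host=storage2
> $ ceph osd tree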
>
> Best wishes,
> Mika
>
> 2014-10-29 17:05 GMT+08:00 Irek Fasikhov <[email protected]>:
>
>> ceph osd tree please :)
>>
>> 2014-10-29 12:03 GMT+03:00 Vickie CH <[email protected]>:
>>
>>> Dear all,
>>> Thanks for the reply.
>>> The pool replicated size is 2, because the replicated size parameter was
>>> already written into ceph.conf before the deploy.
>>> Since I'm not familiar with the CRUSH map, I will follow Mark's
>>> information and run a test that changes the CRUSH map to see the result.
>>>
>>> -----------ceph.conf------------------
>>> [global]
>>> fsid = c404ded6-4086-4f0b-b479-89bc018af954
>>> mon_initial_members = storage0
>>> mon_host = 192.168.1.10
>>> auth_cluster_required = cephx
>>> auth_service_required = cephx
>>> auth_client_required = cephx
>>> filestore_xattr_use_omap = true
>>>
>>> osd_pool_default_size = 2
>>> osd_pool_default_min_size = 1
>>> osd_pool_default_pg_num = 128
>>> osd_journal_size = 2048
>>> osd_pool_default_pgp_num = 128
>>> osd_mkfs_type = xfs
>>> -------------------------------------------
>>>
>>> ----------------------ceph osd dump result -----------------------------
>>> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool
>>> crash_replay_interval 45 stripe_width 0
>>> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 64 pgp_num 64 last_change 15 flags hashpspool
>>> stripe_width 0
>>> pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
>>> max_osd 2
>>>
>>> ------------------------------------------------------------------------------
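>>>
>>> The dump above shows that the default from ceph.conf was picked up (size 2,
>>> min_size 1 on all three pools). If a cluster had been deployed without the
>>> setting, the existing pools could still be adjusted afterwards; a sketch,
>>> using the pool names from the dump:
>>>
>>> $ ceph osd pool set data size 2
>>> $ ceph osd pool set data min_size 1
>>> $ ceph osd pool set metadata size 2
>>> $ ceph osd pool set rbd size 2
>>> $ ceph osd dump | grep 'replicated size'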
>>>
>>> Best wishes,
>>> Mika
>>>
>>> 2014-10-29 16:56 GMT+08:00 Mark Kirkwood <[email protected]>:
>>>
>>>> That is not my experience:
>>>>
>>>> $ ceph -v
>>>> ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
>>>>
>>>> $ cat /etc/ceph/ceph.conf
>>>> [global]
>>>> ...
>>>> osd pool default size = 2
>>>>
>>>> $ ceph osd dump|grep size
>>>> pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>>> rjenkins pg_num 128 pgp_num 128 last_change 47 flags
>>>> hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes
>>>> 2000000000 hit_set bloom{false_positive_probability: 0.05,
>>>> target_size: 0, seed: 0} 3600s x1 stripe_width 0
>>>> pool 10 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 102 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 11 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 104 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 12 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>>> rjenkins pg_num 8 pgp_num 8 last_change 106 owner 18446744073709551615
>>>> flags hashpspool stripe_width 0
>>>> pool 13 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 107 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 14 '.users.uid' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 108 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 15 '.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset
>>>> 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 110 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 16 '.rgw.buckets' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 112 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 17 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>>> rjenkins pg_num 1024 pgp_num 1024 last_change 186 flags hashpspool
>>>> stripe_width 0
>>>>
>>>> On 29/10/14 21:46, Irek Fasikhov wrote:
>>>>
>>>>> Hi.
>>>>> This parameter is not applied to the pools by default.
>>>>> Check "ceph osd dump | grep pool" and see what size=? shows.
>>>>>
>>>>>
>>>>> 2014-10-29 11:40 GMT+03:00 Vickie CH <[email protected]>:
>>>>>
>>>>>     Dear Irek:
>>>>>
>>>>>     Thanks for your reply.
>>>>>     Even with "osd_pool_default_size = 2" already set, does the cluster
>>>>>     still need 3 different hosts?
>>>>>     Can this default number be changed by the user and written into
>>>>>     ceph.conf before the deploy?
>>>>>
>>>>>
>>>>>     Best wishes,
>>>>>     Mika
>>>>>
>>>>>     2014-10-29 16:29 GMT+08:00 Irek Fasikhov <[email protected]>:
>>>>>
>>>>>         Hi.
>>>>>
>>>>>         The default number of replicas is 3, so the data requires three
>>>>>         different hosts.
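>>>>>
>>>>>         The host requirement comes from the default CRUSH rule, which
>>>>>         picks one distinct host per replica. Roughly what that stock
>>>>>         rule looks like (decompile your own map with crushtool to
>>>>>         confirm):
>>>>>
>>>>>         rule replicated_ruleset {
>>>>>                 ruleset 0
>>>>>                 type replicated
>>>>>                 min_size 1
>>>>>                 max_size 10
>>>>>                 step take default
>>>>>                 step chooseleaf firstn 0 type host
>>>>>                 step emit
>>>>>         }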
>>>>>
>>>>>         2014-10-29 10:56 GMT+03:00 Vickie CH <[email protected]
>>>>>         <mailto:[email protected]>>:
>>>>>
>>>>>
>>>>>             Hi all,
>>>>>                    I tried to create a cluster with two OSDs. After the
>>>>>             deploy finished, I found the health status is "88
>>>>>             active+degraded" and "104 active+remapped". Before creating
>>>>>             the cluster with 2 OSDs, the result was OK. I'm confused
>>>>>             about why this happened. Do I need to set the CRUSH map to
>>>>>             fix this problem?
>>>>>
>>>>>
>>>>>             ----------ceph.conf---------------------------------
>>>>>             [global]
>>>>>             fsid = c404ded6-4086-4f0b-b479-89bc018af954
>>>>>             mon_initial_members = storage0
>>>>>             mon_host = 192.168.1.10
>>>>>             auth_cluster_required = cephx
>>>>>             auth_service_required = cephx
>>>>>             auth_client_required = cephx
>>>>>             filestore_xattr_use_omap = true
>>>>>             osd_pool_default_size = 2
>>>>>             osd_pool_default_min_size = 1
>>>>>             osd_pool_default_pg_num = 128
>>>>>             osd_journal_size = 2048
>>>>>             osd_pool_default_pgp_num = 128
>>>>>             osd_mkfs_type = xfs
>>>>>             ---------------------------------------------------------
>>>>>
>>>>>             -----------ceph -s-----------------------------------
>>>>>             cluster c404ded6-4086-4f0b-b479-89bc018af954
>>>>>                   health HEALTH_WARN 88 pgs degraded; 192 pgs stuck unclean
>>>>>                   monmap e1: 1 mons at {storage0=192.168.10.10:6789/0},
>>>>>             election epoch 2, quorum 0 storage0
>>>>>                   osdmap e20: 2 osds: 2 up, 2 in
>>>>>                    pgmap v45: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>>>>                          79752 kB used, 1858 GB / 1858 GB avail
>>>>>                                88 active+degraded
>>>>>                               104 active+remapped
>>>>>             --------------------------------------------------------
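>>>>>
>>>>>             In case it helps to narrow this down, the mapping of one of
>>>>>             the stuck PGs can be checked directly (the PG id below is
>>>>>             only a placeholder; take a real one from the dump):
>>>>>
>>>>>             $ ceph pg dump_stuck unclean | head
>>>>>             $ ceph pg map 0.3f
>>>>>             # with both OSDs under one host bucket, a degraded PG's
>>>>>             # up/acting set will typically list only a single OSD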
>>>>>
>>>>>
>>>>>             Best wishes,
>>>>>             Mika
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>         --
>>>>>         Best regards, Irek Fasikhov
>>>>>         Mobile: +79229045757
>>>>>
>>>>> --
>>>>> Best regards, Irek Fasikhov
>>>>> Mobile: +79229045757
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> Best regards, Irek Fasikhov
>> Mobile: +79229045757
>>
>
>
