I adjusted the CRUSH map and everything's OK now. Thanks for your help!
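
In case it helps anyone who finds this thread later, the usual dump/edit/re-inject
cycle for the CRUSH map looks roughly like this (file paths are just placeholders):

  # export the current CRUSH map and decompile it to editable text
  ceph osd getcrushmap -o /tmp/crushmap.bin
  crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
  # edit /tmp/crushmap.txt, then recompile and inject the result
  crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
  ceph osd setcrushmap -i /tmp/crushmap.new

After injecting the new map, ceph -s should show the affected PGs peering and
recovering.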

On Wed, 23 Mar 2016 at 23:13 Matt Conner <[email protected]> wrote:

> Hi Zhang,
>
> In a 2-copy pool, each placement group is stored on 2 OSDs, so every PG
> counts against two OSDs' totals - that is why you see such a high number
> of placement groups per OSD. There is a PG calculator at
> http://ceph.com/pgcalc/. Based on your setup, it may be worth using 2048
> instead of 4096.
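>
> As a rough back-of-the-envelope check (assuming the commonly used target of
> about 100 PGs per OSD - an assumption on my part, not something taken from
> your output):
>
>   PGs per OSD   ~ pg_num * copies / OSDs = 4096 * 2 / 20 ~ 410
>   target pg_num ~ 100 * 20 / 2 = 1000, rounded to a power of two -> 1024 or 2048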
>
> As for the stuck/degraded PGs, most of them report as being on osd.0.
> Looking at your OSD tree, you somehow have 21 OSDs reported, with 2 of them
> labeled osd.0, both up and in. I'd recommend trying to get rid of the one
> listed on host 148_96 and seeing if that clears the issues.
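>
> Something along these lines should do it - note that the two-argument form of
> "ceph osd crush remove" (item plus ancestor bucket) is what limits the removal
> to one host, so please double-check it against your Ceph version first:
>
>   # confirm the duplicate entry
>   ceph osd tree | grep 'osd\.0'
>   # drop the stray osd.0 entry from the 148_96 host bucket only
>   ceph osd crush remove osd.0 148_96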
>
>
>
> On Tue, Mar 22, 2016 at 6:28 AM, Zhang Qiang <[email protected]>
> wrote:
>
>> Hi Reddy,
>> It's over a thousand lines, I pasted it on gist:
>> https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
>>
>> On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy <[email protected]>
>> wrote:
>>
>>> Hi,
>>> Can you please share the "ceph health detail" output?
>>>
>>> Thanks
>>> Swami
>>>
>>> On Tue, Mar 22, 2016 at 3:32 PM, Zhang Qiang <[email protected]>
>>> wrote:
>>> > Hi all,
>>> >
>>> > I have 20 OSDs and 1 pool, and, as recommended by the doc
>>> > (http://docs.ceph.com/docs/master/rados/operations/placement-groups/),
>>> > I configured pg_num and pgp_num to 4096, size 2, min_size 1.
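>>> >
>>> > For reference, the pool was set up roughly like this ("mypool" below is
>>> > just a placeholder for the real pool name):
>>> >
>>> >     ceph osd pool create mypool 4096 4096
>>> >     ceph osd pool set mypool size 2
>>> >     ceph osd pool set mypool min_size 1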
>>> >
>>> > But ceph -s shows:
>>> >
>>> > HEALTH_WARN
>>> > 534 pgs degraded
>>> > 551 pgs stuck unclean
>>> > 534 pgs undersized
>>> > too many PGs per OSD (382 > max 300)
>>> >
>>> > Why doesn't the recommended value of 4096 for 10 ~ 50 OSDs work? And
>>> > what does "too many PGs per OSD (382 > max 300)" mean? If each OSD had
>>> > 382 PGs, I would have 7640 PGs in total.
>>> >
>>>
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
