Hi Bogdan,

Please paste the output of `ceph osd dump` and `ceph osd tree`.

Thanks
Sahana
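As background on the "too few pgs per osd" warning quoted below: a common rule of thumb is to target roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to the next power of two. A minimal sketch of that calculation (the helper name is hypothetical, not part of Ceph):

```python
# Rule-of-thumb PG count: ~100 PGs per OSD / replica count,
# rounded up to the next power of two. Illustrative only.
def recommended_pg_num(num_osds, pool_size, target_per_osd=100):
    raw = num_osds * target_per_osd / pool_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# 6 OSDs with 3 replicas -> 200 raw -> 256
print(recommended_pg_num(6, 3))
```

For the 6-OSD cluster in this thread that suggests 256 PGs at size 3 (matching the value Bogdan eventually used), or 512 at size 2.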

On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA <[email protected]>
wrote:

> Hello, Nick!
>
> Thank you for your reply! I have tested with the replica count set to
> both 2 and 3, using 'osd pool default size = (2|3)' in the .conf file.
> Either I'm doing something incorrectly, or both settings produce the
> same result.
>
> Can you give any troubleshooting advice? I have purged and re-created the
> cluster several times, but the result is the same.
>
> Thank you for your help!
>
> Regards,
> Bogdan
>
>
> On Thu, Mar 19, 2015 at 11:29 PM, Nick Fisk <[email protected]> wrote:
>
>>
>> > -----Original Message-----
>> > From: ceph-users [mailto:[email protected]] On Behalf
>> Of
>> > Bogdan SOLGA
>> > Sent: 19 March 2015 20:51
>> > To: [email protected]
>> > Subject: [ceph-users] PGs issue
>> >
>> > Hello, everyone!
>> > I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
>> > deploy' page, with the following setup:
>> > • 1 x admin / deploy node;
>> > • 3 x OSD and MON nodes;
>> >   o each OSD node has 2 x 8 GB HDDs;
>>
>> > The setup was made using Virtual Box images, on Ubuntu 14.04.2.
>> > After performing all the steps, the 'ceph health' output lists the
>> > cluster in the HEALTH_WARN state, with the following details:
>> > HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck
>> > unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs
>> > per osd (10 < min 20)
>> > The output of 'ceph -s':
>> >     cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
>> >      health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded;
>> >      64 pgs stuck unclean; 64 pgs stuck undersized; 64 pgs
>> >      undersized; too few pgs per osd (10 < min 20)
>> >      monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,
>> >      osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
>> >      election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
>> >      osdmap e20: 6 osds: 6 up, 6 in
>> >       pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
>> >             199 MB used, 18166 MB / 18365 MB avail
>> >                   64 active+undersized+degraded
>> >
>> > I have tried to increase the pg_num and pgp_num to 512, as advised here,
>> > but Ceph refused to do that, with the following error:
>> > Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs
>> > on ~6 OSDs exceeds per-OSD max of 32)
>> >
>> > After changing the pg*_num to 256, as advised here, the warning was
>> > changed to:
>> > health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs
>> > undersized
>> >
>> > What is the issue behind these warnings, and what do I need to do to
>> > fix them?
>>
>> It's basically telling you that your currently available OSDs don't
>> meet the requirements for the number of replicas you have requested.
>>
>> What replica size have you configured for that pool?
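To expand on Nick's point: a PG is reported "undersized" when CRUSH cannot find enough OSDs to hold all requested replicas, i.e. its acting set is shorter than the pool's size. (Note also that 'osd pool default size' in ceph.conf only affects pools created afterwards; an existing pool keeps its size until changed with 'ceph osd pool set <pool> size <n>', which may be why sizes 2 and 3 appeared to behave identically.) A deliberately simplified model of the state shown in 'ceph -s', not actual Ceph code:

```python
# Simplified model of PG state flags (illustrative only):
# a PG is undersized (and degraded) when its acting set holds
# fewer OSDs than the pool's replica count.
def pg_state(acting_set, pool_size):
    if len(acting_set) < pool_size:
        return "active+undersized+degraded"
    return "active+clean"

print(pg_state([2, 5], 3))     # one replica missing
print(pg_state([2, 5, 0], 3))  # fully replicated
```

With all 64 PGs in that first state, the cluster reports exactly the "64 active+undersized+degraded" line from the 'ceph -s' output above.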
>>
>> >
>> > I'm a newcomer to the Ceph world, so please don't shoot me if this
>> > issue has been answered / discussed countless times before :) I have
>> > searched the web and the mailing list for answers, but I couldn't
>> > find a working solution.
>> > Any help is highly appreciated. Thank you!
>> > Regards,
>> > Bogdan
>>
>>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
