Hello, Sahana!

The output of the requested commands is listed below:

admin@cp-admin:~/safedrive$ ceph osd dump
epoch 26
fsid 7db3cf23-ddcb-40d9-874b-d7434bd8463d
created 2015-03-20 07:53:37.948969
modified 2015-03-20 08:11:18.813790
flags
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 26 flags hashpspool
stripe_width 0
max_osd 6
osd.0 up   in  weight 1 up_from 4 up_thru 24 down_at 0 last_clean_interval
[0,0) 192.168.122.21:6800/10437 192.168.122.21:6801/10437
192.168.122.21:6802/10437 192.168.122.21:6803/10437 exists,up
c6f241e1-2e98-4fb5-b376-27bade093428
osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.21:6805/11079 192.168.122.21:6806/11079
192.168.122.21:6807/11079 192.168.122.21:6808/11079 exists,up
a4f2aeea-4e45-4d5f-ab9e-dff8295fb5ea
osd.2 up   in  weight 1 up_from 11 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.22:6800/9375 192.168.122.22:6801/9375
192.168.122.22:6802/9375 192.168.122.22:6803/9375 exists,up
f879ef15-7c9a-41a8-88a6-cde013dc2d07
osd.3 up   in  weight 1 up_from 14 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.22:6805/10008 192.168.122.22:6806/10008
192.168.122.22:6807/10008 192.168.122.22:6808/10008 exists,up
99b3ff05-78b9-4f9f-a8f1-dbead9baddc6
osd.4 up   in  weight 1 up_from 17 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.23:6800/9158 192.168.122.23:6801/9158
192.168.122.23:6802/9158 192.168.122.23:6803/9158 exists,up
9217fcdd-201b-47c1-badf-b352a639d122
osd.5 up   in  weight 1 up_from 20 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.23:6805/9835 192.168.122.23:6806/9835
192.168.122.23:6807/9835 192.168.122.23:6808/9835 exists,up
ec2c4764-5e30-431b-bc3e-755a7614b90d

admin@cp-admin:~/safedrive$ ceph osd tree
# id    weight    type name    up/down    reweight
-1    0    root default
-2    0        host osd-001
0    0            osd.0    up    1
1    0            osd.1    up    1
-3    0        host osd-002
2    0            osd.2    up    1
3    0            osd.3    up    1
-4    0        host osd-003
4    0            osd.4    up    1
5    0            osd.5    up    1
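
As a side note, the dump above shows 'replicated size 2' for the 'rbd'
pool, which matches what I configured, but I notice that every weight in
the tree output is 0. Could that be a side effect of the small (8 GB)
virtual disks, and the reason the PGs stay undersized? If the weights
need to be raised manually, I assume it can be done with something along
these lines (an illustrative weight of 1 per OSD):

admin@cp-admin:~/safedrive$ ceph osd crush reweight osd.0 1

and the same for osd.1 through osd.5.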

Please let me know if there's anything else I can / should do.

Thank you very much!

Regards,
Bogdan


On Fri, Mar 20, 2015 at 9:17 AM, Sahana <[email protected]> wrote:

> Hi Bogdan,
>
>
> Please paste the output of `ceph osd dump` and `ceph osd tree`.
>
> Thanks
> Sahana
>
> On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA <[email protected]>
> wrote:
>
>> Hello, Nick!
>>
>> Thank you for your reply! I have tested with the number of replicas set
>> to both 2 and 3, via 'osd pool default size = (2|3)' in the .conf file.
>> Either I'm doing something incorrectly, or both settings produce the
>> same result.
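>>
>> For reference, this is the relevant snippet from my ceph.conf (with the
>> size value being the one I varied between attempts):
>>
>> [global]
>> osd pool default size = 2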
>>
>> Can you give any troubleshooting advice? I have purged and re-created the
>> cluster several times, but the result is the same.
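>> (The purge / re-create was done with the usual ceph-deploy sequence,
>> i.e. something like:
>>
>> ceph-deploy purge osd-001 osd-002 osd-003
>> ceph-deploy purgedata osd-001 osd-002 osd-003
>> ceph-deploy forgetkeys
>>
>> followed by a fresh 'ceph-deploy new ...' run.)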
>>
>> Thank you for your help!
>>
>> Regards,
>> Bogdan
>>
>>
>> On Thu, Mar 19, 2015 at 11:29 PM, Nick Fisk <[email protected]> wrote:
>>
>>>
>>>
>>>
>>>
>>> > -----Original Message-----
>>> > From: ceph-users [mailto:[email protected]] On Behalf Of
>>> > Bogdan SOLGA
>>> > Sent: 19 March 2015 20:51
>>> > To: [email protected]
>>> > Subject: [ceph-users] PGs issue
>>> >
>>> > Hello, everyone!
>>> > I have created a Ceph cluster (v0.87.1-1) using the info from the
>>> > 'Quick deploy' page, with the following setup:
>>> > • 1 x admin / deploy node;
>>> > • 3 x OSD and MON nodes;
>>> >   o each OSD node has 2 x 8 GB HDDs;
>>> >
>>> > The setup was made using VirtualBox images, on Ubuntu 14.04.2.
>>> > After performing all the steps, the 'ceph health' output lists the
>>> > cluster in the HEALTH_WARN state, with the following details:
>>> > HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck
>>> > unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs per
>>> > osd (10 < min 20)
>>> > The output of 'ceph -s':
>>> >     cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
>>> >      health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
>>> > stuck unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs
>>> > per osd (10 < min 20)
>>> >      monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-
>>> > 002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0}, election
>>> > epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
>>> >      osdmap e20: 6 osds: 6 up, 6 in
>>> >       pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
>>> >             199 MB used, 18166 MB / 18365 MB avail
>>> >                   64 active+undersized+degraded
>>> >
>>> > I have tried to increase the pg_num and pgp_num to 512, as advised
>>> > here, but Ceph refused to do that, with the following error:
>>> > Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs
>>> > on ~6 OSDs exceeds per-OSD max of 32)
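>>> > (For reference, the exact commands I ran were, if I recall correctly:
>>> > ceph osd pool set rbd pg_num 512
>>> > ceph osd pool set rbd pgp_num 512)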
>>> >
>>> > After changing the pg*_num to 256, as advised here, the warning
>>> > changed to:
>>> > health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs
>>> > undersized
>>> >
>>> > What is the issue behind these warnings, and what do I need to do to
>>> > fix it?
>>>
>>> It's basically telling you that your currently available OSDs don't
>>> meet the requirements to suit the number of replicas you have requested.
>>>
>>> What replica size have you configured for that pool?
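>>>
>>> You can check the effective value on the pool itself with something
>>> like:
>>>
>>> ceph osd pool get rbd size
>>>
>>> (assuming the pool is still the default 'rbd' one).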
>>>
>>> >
>>> > I'm a newcomer to the Ceph world, so please don't shoot me if this
>>> > issue has been answered / discussed countless times before :) I have
>>> > searched the web and the mailing list for answers, but I couldn't
>>> > find a valid solution.
>>> > Any help is highly appreciated. Thank you!
>>> > Regards,
>>> > Bogdan
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
