Hi Bogdan,

Here is the link for hardware recommendations:
http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
As per this link, the minimum recommended size for OSDs is 1TB.
But, as Nick said, Ceph OSDs must be at least 10GB to get a weight of 0.01.
Here is the snippet from the CRUSH maps section of the Ceph docs:

Weighting Bucket Items

Ceph expresses bucket weights as doubles, which allows for fine weighting.
A weight is the relative difference between device capacities. We recommend
using 1.00 as the relative weight for a 1TB storage device. In such a
scenario, a weight of 0.5 would represent approximately 500GB, and a weight
of 3.00 would represent approximately 3TB. Higher level buckets have a
weight that is the sum total of the leaf items aggregated by the bucket.
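A rough illustration of that granularity: assuming the CRUSH weight is derived from the device capacity in TB and truncated to two decimal places (an assumption based on Nick's observation, not the exact Ceph internals), devices under ~10GB fall to a weight of zero:

```shell
# Assumption: weight ~= capacity in TB, truncated to two decimals.
# 8 GB -> 0.00 (which is why the 8GB OSDs showed zero weight); 10 GB -> 0.01
for gb in 8 10 500 1000 3000; do
  awk -v g="$gb" 'BEGIN { printf "%5d GB -> weight %.2f\n", g, int(g / 1000 * 100) / 100 }'
done
```

This also matches the docs snippet above: 500GB works out to 0.50 and 3TB to 3.00.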

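Nick's workaround can also be scripted; here is a sketch, assuming the six OSD ids (0-5) from the `ceph osd tree` output below. The echo makes it a dry run so the commands can be reviewed first; remove it to actually apply the reweight:

```shell
# Dry run of Nick's workaround: print one reweight command per OSD.
# OSD ids 0-5 are assumed from the `ceph osd tree` output in this thread.
for id in 0 1 2 3 4 5; do
  echo ceph osd crush reweight "osd.$id" 1
done
```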
Thanks

Sahana

On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA <[email protected]>
wrote:

> Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
> status has changed to '256 active+clean'.
>
> Is this information clearly stated in the documentation, and have I missed
> it? If it isn't, I think it would be worth adding, as the issue might be
> encountered by other users as well.
>
> Kind regards,
> Bogdan
>
>
> On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk <[email protected]> wrote:
>
>> I see the problem: as your OSDs are only 8GB, they have a zero weight. I
>> think the minimum size you can get away with is 10GB in Ceph, as the size
>> is measured in TB and only has two decimal places.
>> For a workaround, try running:
>>
>> ceph osd crush reweight osd.X 1
>>
>> for each OSD; this will reweight the OSDs. Assuming this is a test
>> cluster and you won't be adding any larger OSDs in the future, this
>> shouldn't cause any problems.
>>
>> >
>> > admin@cp-admin:~/safedrive$ ceph osd tree
>> > # id    weight    type name    up/down    reweight
>> > -1    0    root default
>> > -2    0        host osd-001
>> > 0    0            osd.0    up    1
>> > 1    0            osd.1    up    1
>> > -3    0        host osd-002
>> > 2    0            osd.2    up    1
>> > 3    0            osd.3    up    1
>> > -4    0        host osd-003
>> > 4    0            osd.4    up    1
>> > 5    0            osd.5    up    1
>>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
