You'll also want to change the crush weights of your OSDs to reflect
the different sizes so that the smaller disks don't get filled up
prematurely.  See "weighting bucket items" here:
http://ceph.com/docs/master/rados/operations/crush-map/
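As a sketch, reweighting the two OSDs from this thread could look like the following (osd names are taken from the output below; by convention the CRUSH weight is the disk size in TB, so the exact values here are assumptions):

```shell
# Weight each OSD in proportion to its capacity (TB by convention),
# so CRUSH fills the 750 GB disk more slowly than the 1000 GB one.
ceph osd crush reweight osd.0 1.00   # 1000 GB disk
ceph osd crush reweight osd.1 0.75   # 750 GB disk
```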

On Thu, Jun 5, 2014 at 10:14 AM, Michael <[email protected]> wrote:
> ceph osd dump | grep size
>
> Check that all pools are size 2, with min_size 1 or 2.
>
> If not you can change on the fly with:
> ceph osd pool set <poolname> size <num>
> ceph osd pool set <poolname> min_size <num>
>
> See docs http://ceph.com/docs/master/rados/operations/pools/ for alterations
> to pool attributes.
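Concretely, on a Firefly cluster that would be something like the loop below (the pool names are assumed to be the Firefly defaults; adjust to whatever `ceph osd lspools` actually shows):

```shell
# Drop replication to 2 copies on each default pool so a
# two-OSD cluster can reach active+clean.
for pool in data metadata rbd; do
    ceph osd pool set "$pool" size 2
    ceph osd pool set "$pool" min_size 1
done
```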
>
> -Michael
>
>
> On 05/06/2014 17:29, Vadim Kimlaychuk wrote:
>>
>>
>> I have
>>   osd pool default size = 2
>> in my ceph.conf. Shouldn't it tell Ceph to use 2 OSDs? Or is it set
>> somewhere in the CRUSH map?
>>
>> Vadim
>> ____________
>> From: Christian Balzer [[email protected]]
>> Sent: Thursday, June 05, 2014 18:26
>> To: Vadim Kimlaychuk
>> Cc: [email protected]
>> Subject: Re: [ceph-users] Hard drives of different sizes.
>>
>> Hello,
>>
>> On Thu, 5 Jun 2014 14:11:47 +0000 Vadim Kimlaychuk wrote:
>>
>>> Hello,
>>>
>>>              Probably this is an anti-pattern, but I need to understand how
>>> this will work / not work. Input:
>>>              I have a single test host with ceph 0.80.1 and 2 OSDs:
>>>              OSD.0 – 1000 GB
>>>              OSD.1 – 750 GB
>>>
>>>              Recompiled CRUSH map to set „step chooseleaf firstn 0 type
>>> osd“
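For reference, that step normally sits in a rule like the following in the decompiled CRUSH map (rule and bucket names here are assumptions; choosing over `type osd` rather than `type host` is what allows both replicas to land on the same host):

```
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}
```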
>>>
>> You got it half right.
>>
>> Version 0.8x, aka Firefly, has a default replication size of 3, so you
>> would need at least 3 OSDs.
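Note that `osd pool default size` in ceph.conf only takes effect for pools created after the setting is in place; if it was added after cluster creation, the three initial pools keep the size they were created with. A quick way to confirm what each pool actually uses:

```shell
# Shows the replication size each pool currently has;
# one "replicated size N" line per pool.
ceph osd dump | grep 'replicated size'
```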
>>
>> Christian
>>>
>>>              I was expecting that part of the PGs would have status
>>> „active+clean“ (covering ~750 GB) and another part would have
>>> „active+degraded“ (covering ~250 GB), because there is not enough
>>> space to replicate that data on the second OSD.
>>>
>>>              Instead ALL PGs are „active+degraded“
>>>
>>> Output:
>>>       health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>>>       monmap e1: 1 mons at {storage=172.16.3.2:6789/0}, election epoch 2,
>>> quorum 0 storage osdmap e15: 2 osds: 2 up, 2 in
>>>        pgmap v29: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>>              71496 kB used, 1619 GB / 1619 GB avail
>>>                   192 active+degraded
>>>
>>>              What is the logic behind this? Can I use hard drives of
>>> different sizes successfully? If yes – how?
>>>
>>> Thank you for explanation,
>>>
>>> Vadim
>>>
>>
>> --
>> Christian Balzer        Network/Systems Engineer
>> [email protected]           Global OnLine Japan/Fusion Communications
>> http://www.gol.com/
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
