Hello,

On Thu, 5 Jun 2014 14:11:47 +0000 Vadim Kimlaychuk wrote:

> Hello,
> 
>             Probably this is an anti-pattern, but I need to understand
> why this will or will not work. Input:
>             I have a single test host with ceph 0.80.1 and 2 OSDs:
>             OSD.0 – 1000 GB
>             OSD.1 – 750 GB
> 
>             Recompiled the CRUSH map to set "step chooseleaf firstn 0
> type osd".
> 
You got it half right.
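
The chooseleaf change is the right half: it lets CRUSH pick replicas from
OSDs within a single host instead of requiring separate hosts. For
reference, a minimal replicated rule using it would look something like
this (a sketch; the rule and root names are the stock defaults and may
differ in your decompiled map):

        rule data {
                ruleset 0
                type replicated
                min_size 1
                max_size 10
                # take the whole hierarchy, then pick N distinct
                # OSDs regardless of which host they sit in
                step take default
                step chooseleaf firstn 0 type osd
                step emit
        }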

Version 0.8x aka Firefly has a default replication size of 3, so you
would need at least 3 OSDs. With only two OSDs, CRUSH can place at most
two of the three replicas of each PG, which is why ALL of them report
active+degraded instead of just some: the degraded state is about missing
replicas, not about capacity.
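
The fix is to either add a third OSD or drop the replication size to 2 on
the existing pools. Something like this should do it (a sketch; the pool
names assume the three default Firefly pools your pgmap shows):

        # lower the replica count on each existing pool
        ceph osd pool set data size 2
        ceph osd pool set metadata size 2
        ceph osd pool set rbd size 2

        # and have new pools default to 2 copies as well
        # (ceph.conf, [global] section)
        osd pool default size = 2

Once the size matches what CRUSH can actually place, the PGs should go
active+clean. And yes, differently sized drives are fine in general:
CRUSH weights each OSD by its capacity, so the 1000 GB disk simply
receives proportionally more PGs than the 750 GB one.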

Christian
>             I expected that some PGs would have status "active+clean"
> (covering ~750 GB) and the rest would be "active+degraded" (~250 GB),
> because there is not enough space to replicate that data on the second
> OSD.
> 
>             Instead, ALL PGs are "active+degraded".
> 
> Output:
>      health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>      monmap e1: 1 mons at {storage=172.16.3.2:6789/0}, election epoch 2,
>             quorum 0 storage
>      osdmap e15: 2 osds: 2 up, 2 in
>       pgmap v29: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             71496 kB used, 1619 GB / 1619 GB avail
>                  192 active+degraded
> 
>             What is the logic behind this? Can I successfully use hard
> drives of different sizes? If yes, how?
> 
> Thank you for explanation,
> 
> Vadim
> 


-- 
Christian Balzer        Network/Systems Engineer                
[email protected]           Global OnLine Japan/Fusion Communications
http://www.gol.com/
