Hi,

72 OSDs (12 servers with 6 OSDs each) and 2000 placement groups.
The replication factor is 3.
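For reference, the usual Ceph rule of thumb from the documentation of that era is roughly (OSDs × 100) / replicas, rounded up to the nearest power of two. A minimal sketch, using the numbers above (the helper name and the per-OSD target of 100 are illustrative assumptions, not anything from this thread):

```python
# Sketch of the Ceph placement-group rule of thumb:
# total PGs ~= (OSDs * 100) / replicas, rounded up to a power of two.
# pgs_per_osd=100 is the commonly cited target, assumed here.

def recommended_pgs(num_osds, replicas, pgs_per_osd=100):
    """Suggested pg_num, rounded up to the next power of two."""
    raw = num_osds * pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# 72 OSDs, replication 3 -> raw value 2400, rounds up to 4096
print(recommended_pgs(72, 3))
```

By this rule the cluster's 2000 PGs is somewhat below the suggested value, which is why the placement-group question below is worth checking.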


2013/12/12 Pierre BLONDEAU <[email protected]>

> Hi,
>
> How many OSDs do you have?
>
> It could be a placement-group problem:
> http://article.gmane.org/gmane.comp.file-systems.ceph.user/2261/match=pierre+blondeau
>
> Regards.
>
> On 10/12/2013 23:23, Łukasz Jagiełło wrote:
>
>> Hi,
>>
>> Today my ceph cluster suffer of such problem:
>>
>> #v+
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# df -h | grep ceph-1
>> /dev/sdc1       559G  438G  122G  79% /var/lib/ceph/osd/ceph-1
>> #v-
>>
>> The disk reports 122GB of free space, which looks fine, but:
>>
>> #v+
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# touch aaa
>> touch: cannot touch `aaa': No space left on device
>> #v-
>>
>> A bit more data:
>> #v+
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# mount | grep ceph-1
>> /dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime)
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# xfs_db -r "-c freesp -s" /dev/sdc1
>>     from      to extents  blocks    pct
>>        1       1  366476  366476   1.54
>>        2       3  466928 1133786   4.76
>>        4       7  536691 2901804  12.18
>>        8      15 1554873 19423430  81.52
>> total free extents 2924968
>> total free blocks 23825496
>> average free extent size 8.14556
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# xfs_db -c frag -r /dev/sdc1
>> actual 9043587, ideal 8926438, fragmentation factor 1.30%
>> #v-
>>
>> Any possible reason for this, and how can it be avoided in the future? Someone
>> mentioned earlier that it is a fragmentation problem, but with 122GB free?
>>
>> Best Regards
>> --
>> Łukasz Jagiełło
>> lukasz<at>jagiello<dot>org
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> --
> ----------------------------------------------
> Pierre BLONDEAU
> Systems & Network Administrator
> Université de Caen
> GREYC Laboratory, Computer Science Department
>
> tel    : 02 31 56 75 42
> office : Campus 2, Science 3, 406
> ----------------------------------------------
>



-- 
Łukasz Jagiełło
lukasz<at>jagiello<dot>org
