Can you share your `ceph osd tree` and `ceph status` output as well?
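
For reference, something like the following should capture everything useful (all of these commands exist on Hammer 0.94):

    # overall cluster health and PG states
    ceph status

    # CRUSH hierarchy with weights and up/in state per OSD
    ceph osd tree

    # anything stuck, degraded, or misplaced
    ceph health detail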

On Tue, May 22, 2018, 3:05 AM Pardhiv Karri <[email protected]> wrote:

> Hi,
>
> We are using Ceph Hammer 0.94.9. Some of our OSDs never get any data or
> PGs, even though they are up and running at their full CRUSH weight. The
> rest of the OSDs are around 50% full. Is there a bug in Hammer causing
> this, and would upgrading to Jewel or Luminous fix it?
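>
> As a quick check that CRUSH really maps no PGs to the affected OSD (and
> not just no data), something like this should work on Hammer, which added
> the "ceph pg ls-by-*" commands (osd.38 is the affected one in our lab,
> shown below):
>
>     # list every PG whose acting set includes osd.38;
>     # empty output means CRUSH never selects it
>     ceph pg ls-by-osd 38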
>
> I have tried deleting and recreating this OSD N times, and the issue
> persists. I am seeing this in 3 of our 4 Ceph clusters, in different
> datacenters. We are using HDDs as OSDs and SSDs as journal drives.
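>
> To rule the CRUSH map itself in or out without touching the cluster, the
> mapping can be replayed offline (a sketch using crushtool, which ships
> with Ceph):
>
>     # export the cluster's compiled CRUSH map
>     ceph osd getcrushmap -o crushmap.bin
>
>     # simulate placement for 3 replicas over many sample inputs and
>     # report how often each OSD is chosen; an OSD that never shows up
>     # is being excluded by the map itself, not by cluster state
>     crushtool -i crushmap.bin --test --num-rep 3 --show-utilization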
>
> The output below is from our lab; osd.38 is the one that never fills.
>
>
> ID  WEIGHT   REWEIGHT SIZE   USE    AVAIL  %USE  VAR  TYPE NAME
>  -1 80.00000        -      0      0      0     0    0 root default
>  -2 40.00000        - 39812G  6190G 33521G 15.55 0.68     rack rack_A1
>  -3 20.00000        - 19852G  3718G 16134G 18.73 0.82         host or1010051251040
>   0  2.00000  1.00000  1861G   450G  1410G 24.21 1.07             osd.0
>   1  2.00000  1.00000  1999G   325G  1673G 16.29 0.72             osd.1
>   2  2.00000  1.00000  1999G   336G  1662G 16.85 0.74             osd.2
>   3  2.00000  1.00000  1999G   386G  1612G 19.35 0.85             osd.3
>   4  2.00000  1.00000  1999G   385G  1613G 19.30 0.85             osd.4
>   5  2.00000  1.00000  1999G   364G  1634G 18.21 0.80             osd.5
>   6  2.00000  1.00000  1999G   319G  1679G 15.99 0.70             osd.6
>   7  2.00000  1.00000  1999G   434G  1564G 21.73 0.96             osd.7
>   8  2.00000  1.00000  1999G   352G  1646G 17.63 0.78             osd.8
>   9  2.00000  1.00000  1999G   362G  1636G 18.12 0.80             osd.9
>  -8 20.00000        - 19959G  2472G 17387G 12.39 0.55         host or1010051251044
>  30  2.00000  1.00000  1999G   362G  1636G 18.14 0.80             osd.30
>  31  2.00000  1.00000  1999G   293G  1705G 14.66 0.65             osd.31
>  32  2.00000  1.00000  1999G   202G  1796G 10.12 0.45             osd.32
>  33  2.00000  1.00000  1999G   215G  1783G 10.76 0.47             osd.33
>  34  2.00000  1.00000  1999G   192G  1806G  9.61 0.42             osd.34
>  35  2.00000  1.00000  1999G   337G  1661G 16.90 0.74             osd.35
>  36  2.00000  1.00000  1999G   206G  1792G 10.35 0.46             osd.36
>  37  2.00000  1.00000  1999G   266G  1732G 13.33 0.59             osd.37
>  38  2.00000  1.00000  1999G 55836k  1998G  0.00    0             osd.38
>  39  2.00000  1.00000  1968G   396G  1472G 20.12 0.89             osd.39
>  -4 20.00000        -      0      0      0     0    0     rack rack_B1
>  -5 20.00000        - 19990G  5978G 14011G 29.91 1.32         host or1010051251041
>  10  2.00000  1.00000  1999G   605G  1393G 30.27 1.33             osd.10
>  11  2.00000  1.00000  1999G   592G  1406G 29.62 1.30             osd.11
>  12  2.00000  1.00000  1999G   539G  1460G 26.96 1.19             osd.12
>  13  2.00000  1.00000  1999G   684G  1314G 34.22 1.51             osd.13
>  14  2.00000  1.00000  1999G   510G  1488G 25.56 1.13             osd.14
>  15  2.00000  1.00000  1999G   590G  1408G 29.52 1.30             osd.15
>  16  2.00000  1.00000  1999G   595G  1403G 29.80 1.31             osd.16
>  17  2.00000  1.00000  1999G   652G  1346G 32.64 1.44             osd.17
>  18  2.00000  1.00000  1999G   544G  1454G 27.23 1.20             osd.18
>  19  2.00000  1.00000  1999G   665G  1333G 33.27 1.46             osd.19
>  -9        0        -      0      0      0     0    0         host or1010051251045
>  -6 20.00000        -      0      0      0     0    0     rack rack_C1
>  -7 20.00000        - 19990G  5956G 14033G 29.80 1.31         host or1010051251042
>  20  2.00000  1.00000  1999G   701G  1297G 35.11 1.55             osd.20
>  21  2.00000  1.00000  1999G   573G  1425G 28.70 1.26             osd.21
>  22  2.00000  1.00000  1999G   652G  1346G 32.64 1.44             osd.22
>  23  2.00000  1.00000  1999G   612G  1386G 30.62 1.35             osd.23
>  24  2.00000  1.00000  1999G   614G  1384G 30.74 1.35             osd.24
>  25  2.00000  1.00000  1999G   561G  1437G 28.11 1.24             osd.25
>  26  2.00000  1.00000  1999G   558G  1440G 27.93 1.23             osd.26
>  27  2.00000  1.00000  1999G   610G  1388G 30.52 1.34             osd.27
>  28  2.00000  1.00000  1999G   515G  1483G 25.81 1.14             osd.28
>  29  2.00000  1.00000  1999G   555G  1443G 27.78 1.22             osd.29
> -10        0        -      0      0      0     0    0         host or1010051251046
> -11        0        -      0      0      0     0    0         host or1010051251023
>                 TOTAL 79793G 18126G 61566G 22.72
> MIN/MAX VAR: 0/1.55  STDDEV: 8.26
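>
> The CRUSH tunables in use may also be relevant (the tunables profile and
> bucket algorithm affect how evenly data is distributed on Hammer):
>
>     # print the CRUSH tunables profile the cluster is running with
>     ceph osd crush show-tunables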
>
>
> Thanks
> Pardhiv Karri
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
