Hi,

We are observing high apply_latency(ms) and, I believe, poor write
performance. The logs contain repeated slow request warnings relating to
different OSDs and servers.

ceph version: 12.2.2 (luminous)

Cluster HW description:
9x Dell PowerEdge R730xd

1x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (10C/20T)
256 GB 2133 MHz DDR4
PERC H730 Mini 1GB cache
12x 4TB TOSHIBA MG04ACA400N (each disk configured as RAID 0 for single OSD)
2x 480GB SSDSC2BB480G7R (in RAID 1 for operating system)
1x NVMe PCIe SSD - Intel SSD DC D3700 Series for journaling; a single drive serves all 12 OSDs
1x QLogic 57800 with 2x 10Gb DAC SFP+ and 2x 1Gb ports (the two 10Gb ports
configured in 802.3ad dynamic link aggregation)


/etc/ceph/ceph.conf
[global]
  fsid = 1023c49f-3b10-42de-9f62-9b122db32a9a
  mon_initial_members = host01,host02,host03
  mon_host = 10.212.32.18,10.212.32.19,10.212.32.20
  auth_supported = cephx
  public_network = 10.212.32.0/24
  cluster_network = 10.212.14.0/24
  rgw thread pool size = 256
[client.rgw.host01]
  rgw host = host01
  rgw enable usage log = true
[client.rgw.host02]
  rgw host = host02
  rgw enable usage log = true
[client.rgw.host03]
  rgw host = host03
  rgw enable usage log = true
[osd]
  filestore xattr use omap = true
  osd journal size = 10240
  osd mount options xfs = noatime,inode64,logbsize=256k,logbufs=8
  osd crush location hook = /usr/bin/ceph-crush-location.sh
  osd pool default size = 3
[mon]
  mon compact on start = true
  mon compact on trim = true


The OSD topology is shown below. Incidentally, I wonder why only some OSDs
have the "hdd" device class assigned.
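On the device-class question: as far as I understand (hedged guess, not verified against 12.2.2 specifics), Luminous normally auto-detects and assigns the class when an OSD starts, so a blank CLASS column usually means those OSDs were created or last restarted under an older version, or detection failed. The missing classes can be set by hand; the sketch below only echoes the commands for review (the OSD ids are the host01 ones as an example, substitute the actual class-less ids):

```shell
# Print (not run) the commands that would tag the class-less OSDs as "hdd".
# OSD ids below are just the host01 examples; substitute the real blank ones.
for id in 0 2 4 6 8 11 12 14 16 18 20 22; do
    echo "ceph osd crush set-device-class hdd osd.$id"
done
# When the output looks right, append:  | sh
```

The main practical reason to fix this: if you ever create class-based CRUSH rules, unclassified OSDs are invisible to them.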

ID  CLASS WEIGHT    REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS TYPE NAME
 -1       392.73273        -   385T   236T   149T 61.24 1.00   - root default
 -6       392.73273        -   385T   236T   149T 61.24 1.00   -     region region01
 -5       392.73273        -   385T   236T   149T 61.24 1.00   -         datacenter dc01
 -4       392.73273        -   385T   236T   149T 61.24 1.00   -             room room01
 -8        43.63699        - 44684G 26573G 18111G 59.47 0.97   -                 rack rack01
 -7        43.63699        - 44684G 26573G 18111G 59.47 0.97   -                     host host01
  0         3.63599  1.00000  3723G  2405G  1318G 64.59 1.05 210                         osd.0
  2         3.63599  1.00000  3723G  2004G  1719G 53.82 0.88 190                         osd.2
  4         3.63599  1.00000  3723G  2460G  1262G 66.08 1.08 214                         osd.4
  6         3.63599  1.00000  3723G  2474G  1249G 66.45 1.09 203                         osd.6
  8         3.63599  1.00000  3723G  2308G  1415G 61.99 1.01 220                         osd.8
 11         3.63599  1.00000  3723G  2356G  1367G 63.28 1.03 214                         osd.11
 12         3.63599  1.00000  3723G  2303G  1420G 61.86 1.01 206                         osd.12
 14         3.63599  1.00000  3723G  1920G  1803G 51.57 0.84 178                         osd.14
 16         3.63599  1.00000  3723G  2236G  1486G 60.07 0.98 203                         osd.16
 18         3.63599  1.00000  3723G  2203G  1520G 59.17 0.97 193                         osd.18
 20         3.63599  1.00000  3723G  1904G  1819G 51.15 0.84 179                         osd.20
 22         3.63599  1.00000  3723G  1995G  1728G 53.58 0.88 192                         osd.22
 -3        43.63699        - 44684G 27090G 17593G 60.63 0.99   -                 rack rack02
 -2        43.63699        - 44684G 27090G 17593G 60.63 0.99   -                     host host02
  1   hdd   3.63599  1.00000  3723G  2447G  1275G 65.74 1.07 213                         osd.1
  3   hdd   3.63599  1.00000  3723G  2696G  1027G 72.41 1.18 210                         osd.3
  5   hdd   3.63599  1.00000  3723G  2290G  1433G 61.51 1.00 188                         osd.5
  7   hdd   3.63599  1.00000  3723G  2171G  1551G 58.33 0.95 194                         osd.7
  9   hdd   3.63599  1.00000  3723G  2129G  1594G 57.18 0.93 204                         osd.9
 10   hdd   3.63599  1.00000  3723G  2153G  1570G 57.82 0.94 184                         osd.10
 13   hdd   3.63599  1.00000  3723G  2142G  1580G 57.55 0.94 188                         osd.13
 15   hdd   3.63599  1.00000  3723G  2147G  1576G 57.66 0.94 192                         osd.15
 17   hdd   3.63599  1.00000  3723G  2093G  1630G 56.21 0.92 201                         osd.17
 19   hdd   3.63599  1.00000  3723G  2079G  1643G 55.86 0.91 192                         osd.19
 21   hdd   3.63599  1.00000  3723G  2266G  1457G 60.87 0.99 201                         osd.21
 23   hdd   3.63599  1.00000  3723G  2472G  1251G 66.39 1.08 197                         osd.23
-10        43.63699        - 44684G 27247G 17436G 60.98 1.00   -                 rack rack03
 -9        43.63699        - 44684G 27247G 17436G 60.98 1.00   -                     host host03
 24         3.63599  1.00000  3723G  2289G  1433G 61.49 1.00 195                         osd.24
 25         3.63599  1.00000  3723G  2584G  1138G 69.42 1.13 217                         osd.25
 26         3.63599  1.00000  3723G  2183G  1539G 58.65 0.96 198                         osd.26
 28         3.63599  1.00000  3723G  2540G  1182G 68.23 1.11 215                         osd.28
 30         3.63599  1.00000  3723G  2134G  1589G 57.31 0.94 207                         osd.30
 32         3.63599  1.00000  3723G  1715G  2008G 46.06 0.75 183                         osd.32
 34         3.63599  1.00000  3723G  2576G  1146G 69.20 1.13 219                         osd.34
 36         3.63599  1.00000  3723G  2166G  1557G 58.18 0.95 208                         osd.36
 38         3.63599  1.00000  3723G  2050G  1673G 55.06 0.90 191                         osd.38
 40         3.63599  1.00000  3723G  2542G  1181G 68.28 1.11 223                         osd.40
 42         3.63599  1.00000  3723G  2063G  1660G 55.42 0.90 181                         osd.42
 44         3.63599  1.00000  3723G  2399G  1324G 64.44 1.05 197                         osd.44
-12        43.63699        - 44684G 27152G 17531G 60.76 0.99   -                 rack rack04
-11        43.63699        - 44684G 27152G 17531G 60.76 0.99   -                     host host04
 27         3.63599  1.00000  3723G  2479G  1243G 66.60 1.09 212                         osd.27
 29         3.63599  1.00000  3723G  2646G  1077G 71.07 1.16 214                         osd.29
 31         3.63599  1.00000  3723G  2217G  1506G 59.55 0.97 202                         osd.31
 33         3.63599  1.00000  3723G  2073G  1650G 55.68 0.91 198                         osd.33
 35         3.63599  1.00000  3723G  2661G  1061G 71.48 1.17 205                         osd.35
 37         3.63599  1.00000  3723G  2097G  1626G 56.32 0.92 181                         osd.37
 39         3.63599  1.00000  3723G  1949G  1774G 52.35 0.85 196                         osd.39
 41         3.63599  1.00000  3723G  2205G  1518G 59.23 0.97 188                         osd.41
 43         3.63599  1.00000  3723G  2417G  1306G 64.92 1.06 214                         osd.43
 45         3.63599  1.00000  3723G  1925G  1798G 51.71 0.84 189                         osd.45
 46         3.63599  1.00000  3723G  2235G  1488G 60.03 0.98 208                         osd.46
 47         3.63599  1.00000  3723G  2243G  1480G 60.24 0.98 195                         osd.47
-14        43.63699        - 37236G 25721G 11515G 69.08 1.13   -                 rack rack05
-13        43.63699        - 37236G 25721G 11515G 69.08 1.13   -                     host host05
 48         3.63599  1.00000  3723G  2160G  1563G 58.01 0.95 213                         osd.48
 49         3.63599  1.00000  3723G  2603G  1120G 69.91 1.14 216                         osd.49
 50         3.63599  1.00000  3723G  2497G  1225G 67.08 1.10 220                         osd.50
 51         3.63599  1.00000  3723G  2613G  1110G 70.17 1.15 223                         osd.51
 52         3.63599        0      0      0      0     0    0   0                         osd.52
 53         3.63599  1.00000  3723G  2869G   854G 77.05 1.26 235                         osd.53
 54         3.63599  1.00000  3723G  3021G   702G 81.14 1.32 245                         osd.54
 55         3.63599  1.00000  3723G  2190G  1533G 58.83 0.96 209                         osd.55
 56         3.63599        0      0      0      0     0    0   0                         osd.56
 57         3.63599  1.00000  3723G  2651G  1072G 71.21 1.16 223                         osd.57
 58         3.63599  1.00000  3723G  2413G  1310G 64.81 1.06 198                         osd.58
 59         3.63599  1.00000  3723G  2701G  1021G 72.56 1.18 226                         osd.59
-16        43.63699        - 44684G 28445G 16238G 63.66 1.04   -                 rack rack06
-15        43.63699        - 44684G 28445G 16238G 63.66 1.04   -                     host host06
 60   hdd   3.63599  1.00000  3723G  1994G  1728G 53.57 0.87 175                         osd.60
 61   hdd   3.63599  1.00000  3723G  2531G  1192G 67.97 1.11 216                         osd.61
 62   hdd   3.63599  1.00000  3723G  2472G  1251G 66.40 1.08 217                         osd.62
 63   hdd   3.63599  1.00000  3723G  2588G  1135G 69.50 1.13 195                         osd.63
 64   hdd   3.63599  1.00000  3723G  2804G   919G 75.30 1.23 216                         osd.64
 65   hdd   3.63599  1.00000  3723G  2534G  1188G 68.08 1.11 208                         osd.65
 66   hdd   3.63599  1.00000  3723G  2362G  1361G 63.44 1.04 204                         osd.66
 67   hdd   3.63599  1.00000  3723G  2497G  1225G 67.08 1.10 214                         osd.67
 68   hdd   3.63599  1.00000  3723G  1291G  2432G 34.68 0.57 157                         osd.68
 69   hdd   3.63599  1.00000  3723G  2325G  1397G 62.46 1.02 192                         osd.69
 70   hdd   3.63599  1.00000  3723G  2590G  1133G 69.57 1.14 210                         osd.70
 71   hdd   3.63599  1.00000  3723G  2451G  1272G 65.83 1.08 207                         osd.71
-18        43.63699        - 44684G 28525G 16158G 63.84 1.04   -                 rack rack07
-17        43.63699        - 44684G 28525G 16158G 63.84 1.04   -                     host host07
 72         3.63599  1.00000  3723G  2415G  1307G 64.88 1.06 197                         osd.72
 73         3.63599  1.00000  3723G  2216G  1507G 59.52 0.97 188                         osd.73
 74         3.63599  1.00000  3723G  2421G  1301G 65.04 1.06 220                         osd.74
 75         3.63599  1.00000  3723G  2642G  1080G 70.97 1.16 207                         osd.75
 76         3.63599  1.00000  3723G  2307G  1416G 61.97 1.01 204                         osd.76
 77         3.63599  1.00000  3723G  2341G  1381G 62.89 1.03 200                         osd.77
 78         3.63599  1.00000  3723G  2312G  1410G 62.12 1.01 188                         osd.78
 79         3.63599  1.00000  3723G  2166G  1557G 58.18 0.95 183                         osd.79
 80         3.63599  1.00000  3723G  2458G  1264G 66.03 1.08 221                         osd.80
 81         3.63599  1.00000  3723G  2126G  1596G 57.12 0.93 182                         osd.81
 82         3.63599  1.00000  3723G  2442G  1281G 65.58 1.07 189                         osd.82
 83         3.63599  1.00000  3723G  2672G  1051G 71.76 1.17 214                         osd.83
-20        87.27377        - 89368G 50955G 38412G 57.02 0.93   -                 rack rack08
-19        43.63699        - 44684G 25780G 18903G 57.70 0.94   -                     host host08
 84         3.63599  1.00000  3723G  2084G  1638G 55.99 0.91 179                         osd.84
 85         3.63599  1.00000  3723G  2121G  1602G 56.96 0.93 193                         osd.85
 86         3.63599  1.00000  3723G  2131G  1591G 57.25 0.93 186                         osd.86
 87         3.63599  1.00000  3723G  2343G  1380G 62.94 1.03 200                         osd.87
 88         3.63599  1.00000  3723G  2120G  1603G 56.94 0.93 166                         osd.88
 89         3.63599  1.00000  3723G  2175G  1548G 58.43 0.95 161                         osd.89
 90         3.63599  1.00000  3723G  1974G  1749G 53.02 0.87 184                         osd.90
 91         3.63599  1.00000  3723G  1942G  1781G 52.17 0.85 171                         osd.91
 92         3.63599  1.00000  3723G  2281G  1442G 61.27 1.00 169                         osd.92
 93         3.63599  1.00000  3723G  2103G  1620G 56.48 0.92 190                         osd.93
 94         3.63599  1.00000  3723G  2326G  1397G 62.48 1.02 206                         osd.94
 95         3.63599  1.00000  3723G  2175G  1548G 58.42 0.95 186                         osd.95
-21        43.63678        - 44684G 25175G 19508G 56.34 0.92   -                     host host09
 96   hdd   3.63640  1.00000  3723G  2434G  1289G 65.38 1.07 191                         osd.96
 97   hdd   3.63640  1.00000  3723G  2175G  1547G 58.44 0.95 183                         osd.97
 98   hdd   3.63640  1.00000  3723G  1771G  1951G 47.59 0.78 171                         osd.98
 99   hdd   3.63640  1.00000  3723G  2433G  1290G 65.34 1.07 201                         osd.99
100   hdd   3.63640  1.00000  3723G  2098G  1625G 56.35 0.92 175                         osd.100
101   hdd   3.63640  1.00000  3723G  2175G  1547G 58.43 0.95 188                         osd.101
102   hdd   3.63640  1.00000  3723G  1741G  1982G 46.77 0.76 172                         osd.102
103   hdd   3.63640  1.00000  3723G  2123G  1600G 57.02 0.93 173                         osd.103
104   hdd   3.63640  1.00000  3723G  1967G  1755G 52.85 0.86 177                         osd.104
105   hdd   3.63640  1.00000  3723G  2283G  1440G 61.32 1.00 162                         osd.105
106   hdd   3.63640  1.00000  3723G  2021G  1702G 54.29 0.89 184                         osd.106
107   hdd   3.63640  1.00000  3723G  1947G  1776G 52.30 0.85 194                         osd.107
                       TOTAL   385T   236T   149T 61.24

MIN/MAX VAR: 0.57/1.32  STDDEV: 7.11


Output of "ceph osd perf", taken while "ceph status" showed traffic like
"client: 679 MB/s rd, 186 MB/s wr, 1200 op/s rd, 790 op/s wr". Most of this
traffic goes through RGW.

osd commit_latency(ms) apply_latency(ms)
107                  0                22
106                  0                 1
105                  0                 0
104                  0                22
103                  0                13
102                  0                 8
101                  0                 2
100                  0                28
 99                  0                25
 98                  0               542
 97                  0               418
 45                  0                23
 44                  0                26
 43                  0                 1
 42                  0                 9
 41                  0                27
 40                  0                 0
 39                  0              1189
 38                  0                 9
 37                  0                 4
 36                  0               625
 35                  0                43
 34                  0                 4
 33                  0               143
 32                  0                 8
 31                  0                 1
 30                  0                21
 29                  0                33
 28                  0                11
 27                  0                40
 26                  0                 7
 25                  0                 6
 24                  0                20
 23                  0                 5
 22                  0                 6
  9                  0                 2
  8                  0                71
  7                  0                 2
  6                  0                35
  5                  0               549
  4                  0                 6
  0                  0               861
  1                  0                15
  2                  0                19
  3                  0                 2
 10                  0                 4
 11                  0               378
 12                  0                19
 13                  0                 1
 14                  0                 6
 15                  0                 4
 16                  0              1040
 17                  0                 7
 18                  0                 8
 19                  0               185
 20                  0                32
 21                  0               855
 46                  0                32
 47                  0                 4
 48                  0                 7
 49                  0                 4
 50                  0                 7
 51                  0                18
 53                  0                 3
 54                  0                87
 55                  0                 4
 56                  0                 0
 57                  0                 1
 58                  0                49
 59                  0                 6
 60                  0                 5
 61                  0                19
 62                  0                67
 63                  0                 0
 64                  0               423
 65                  0                26
 66                  0                30
 67                  0                74
 68                  0                 6
 69                  0                 4
 70                  0                48
 71                  0              1900
 72                  0                24
 73                  0                 7
 74                  0               379
 75                  0                 4
 76                  0                 9
 77                  0                19
 78                  0                14
 79                  0               135
 80                  0              1736
 81                  0                 6
 82                  0              2504
 83                  0                24
 84                  0                 8
 85                  0                10
 86                  0                13
 87                  0               142
 88                  0                68
 89                  0                28
 90                  0                 3
 91                  0                14
 92                  0                14
 93                  0                12
 94                  0                47
 95                  0                39
 96                  0                 8
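Since these latency figures are point-in-time samples, a single high reading does not necessarily mean a bad disk. One way to find OSDs that are persistently slow is to sample repeatedly and count recurrences; a sketch below (the sample count, interval, and top-5 cutoff are arbitrary choices of mine):

```shell
# Sample "ceph osd perf" several times, keep the 5 worst apply_latency rows
# from each sample, then count how often each OSD id recurs. Ids appearing
# in most samples likely sit on a struggling disk; cross-check with
# iostat/smartctl on the owning host.
for i in 1 2 3 4 5 6; do
    ceph osd perf | tail -n +2 | sort -rnk3 | head -5
    sleep 10
done | awk '{print $1}' | sort -n | uniq -c | sort -rn
```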


All OSDs report slow requests:

# zgrep 'slow request' /var/log/ceph/ceph.log* | egrep -io ' osd\.[0-9]*' | sort | uniq -c
   1538  osd.0
   1292  osd.1
    752  osd.10
    640  osd.100
  10456  osd.101
   2058  osd.102
   2174  osd.103
   1898  osd.104
   1556  osd.105
   2332  osd.106
    786  osd.107
   2408  osd.11
   5222  osd.12
    224  osd.13
   2858  osd.14
    238  osd.15
   1880  osd.16
    566  osd.17
    120  osd.18
    284  osd.19
    896  osd.2
   1840  osd.20
   1466  osd.21
   1218  osd.22
   1066  osd.23
   2470  osd.24
   1640  osd.25
   3584  osd.26
  10912  osd.27
   2954  osd.28
   1736  osd.29
    570  osd.3
   1266  osd.30
    298  osd.31
     30  osd.32
   4528  osd.33
   4260  osd.34
   4542  osd.35
    374  osd.36
   2820  osd.37
    158  osd.38
    748  osd.39
  26816  osd.4
   3378  osd.40
    628  osd.41
  16358  osd.42
    454  osd.43
   2798  osd.44
  11378  osd.45
   2280  osd.46
   1374  osd.47
   1660  osd.48
   1122  osd.49
    174  osd.5
   1212  osd.50
   1542  osd.51
   2366  osd.53
   2930  osd.54
    196  osd.55
   2008  osd.57
   1796  osd.58
   1644  osd.59
  12974  osd.6
    208  osd.60
    644  osd.61
   1230  osd.62
   1018  osd.63
   1446  osd.64
    430  osd.65
    512  osd.66
   1782  osd.67
     96  osd.68
   1054  osd.69
    596  osd.7
    760  osd.70
   1160  osd.71
    648  osd.72
    638  osd.73
    752  osd.74
   2196  osd.75
    568  osd.76
   1540  osd.77
    944  osd.78
    400  osd.79
    516  osd.8
   2494  osd.80
    152  osd.81
   1528  osd.82
    846  osd.83
   1218  osd.84
    676  osd.85
   1500  osd.86
    630  osd.87
    618  osd.88
    106  osd.89
    282  osd.9
    972  osd.90
   2442  osd.91
   1056  osd.92
    996  osd.93
   1366  osd.94
   1112  osd.95
    474  osd.96
    990  osd.97
    342  osd.98
   4906  osd.99
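It may also help to group the same warnings by the state the op was stuck in, rather than by OSD. A sketch (the field pattern assumes the log format shown in the excerpts below):

```shell
# Count slow requests by blocked state instead of by OSD. If
# "currently queued_for_pg" dominates, the ops are waiting in the PG queue
# rather than on one bad disk, which points more toward journal/filestore
# flushing or op-thread saturation.
zgrep -h 'slow request' /var/log/ceph/ceph.log* \
  | grep -oE 'currently [a-z_]+' \
  | sort | uniq -c | sort -rn
```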

Example log

2018-01-30 10:31:39.865054 mon.host01 mon.0 10.212.32.18:6789/0 695886 :
cluster [WRN] Health check update: 57 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:31:44.865536 mon.host01 mon.0 10.212.32.18:6789/0 695887 :
cluster [WRN] Health check update: 56 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:31:43.408679 osd.85 osd.85 10.212.32.25:6810/142783 4357 :
cluster [WRN] 56 slow requests, 2 included below; oldest blocked for >
54.329588 secs
2018-01-30 10:31:43.408683 osd.85 osd.85 10.212.32.25:6810/142783 4358 :
cluster [WRN] slow request 30.391140 seconds old, received at 2018-01-30
10:31:13.017492: osd_op(client.612334.0:119664853 3.3cca000a 3.3cca000a
(undecoded) ack+ondisk+write+known_if_redirected e5269) currently
queued_for_pg
2018-01-30 10:31:43.408686 osd.85 osd.85 10.212.32.25:6810/142783 4359 :
cluster [WRN] slow request 30.359503 seconds old, received at 2018-01-30
10:31:13.049129: osd_op(client.612334.0:119664855 3.3cca000a 3.3cca000a
(undecoded) ack+ondisk+write+known_if_redirected e5269) currently
queued_for_pg
2018-01-30 10:31:50.891971 mon.host01 mon.0 10.212.32.18:6789/0 695888 :
cluster [INF] Health check cleared: REQUEST_SLOW (was: 56 slow requests are
blocked > 32 sec)
2018-01-30 10:31:50.892042 mon.host01 mon.0 10.212.32.18:6789/0 695889 :
cluster [INF] Cluster is now healthy
2018-01-30 10:31:52.947008 mon.host01 mon.0 10.212.32.18:6789/0 695890 :
cluster [WRN] Health check failed: 1 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:31:54.125045 mon.host01 mon.0 10.212.32.18:6789/0 695891 :
cluster [INF] mon.1 10.212.32.19:6789/0
2018-01-30 10:31:54.125098 mon.host01 mon.0 10.212.32.18:6789/0 695892 :
cluster [INF] mon.2 10.212.32.20:6789/0
2018-01-30 10:31:56.146267 mon.host01 mon.0 10.212.32.18:6789/0 695893 :
cluster [WRN] overall HEALTH_WARN 1 slow requests are blocked > 32 sec
2018-01-30 10:31:59.866730 mon.host01 mon.0 10.212.32.18:6789/0 695894 :
cluster [WRN] Health check update: 3 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:32:04.867163 mon.host01 mon.0 10.212.32.18:6789/0 695896 :
cluster [WRN] Health check update: 6 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:32:05.722529 osd.11 osd.11 10.212.32.18:6812/94831 5792 :
cluster [WRN] 3 slow requests, 3 included below; oldest blocked for >
30.713577 secs
2018-01-30 10:32:05.722536 osd.11 osd.11 10.212.32.18:6812/94831 5793 :
cluster [WRN] slow request 30.023513 seconds old, received at 2018-01-30
10:31:35.698959: osd_op(client.858110.0:109814110 3.6a6d3d03 3.6a6d3d03
(undecoded) ack+ondisk+write+known_if_redirected e5269) currently
queued_for_pg
2018-01-30 10:32:05.722539 osd.11 osd.11 10.212.32.18:6812/94831 5794 :
cluster [WRN] slow request 30.188061 seconds old, received at 2018-01-30
10:31:35.534411: osd_op(client.858110.0:109814108 3.6a6d3d03 3.6a6d3d03
(undecoded) ack+ondisk+write+known_if_redirected e5269) currently
queued_for_pg
2018-01-30 10:32:05.722542 osd.11 osd.11 10.212.32.18:6812/94831 5795 :
cluster [WRN] slow request 30.713577 seconds old, received at 2018-01-30
10:31:35.008895: osd_op(client.864319.0:1526442 20.21bs0 20.da47561b
(undecoded) ondisk+read+known_if_redirected e5269) currently queued_for_pg
2018-01-30 10:32:08.722925 osd.11 osd.11 10.212.32.18:6812/94831 5796 :
cluster [WRN] 4 slow requests, 1 included below; oldest blocked for >
33.713980 secs
2018-01-30 10:32:08.722930 osd.11 osd.11 10.212.32.18:6812/94831 5797 :
cluster [WRN] slow request 30.132129 seconds old, received at 2018-01-30
10:31:38.590746: osd_op(client.864319.0:1526720 20.21bs0 20.2af5361b
(undecoded) ondisk+read+known_if_redirected e5269) currently queued_for_pg
2018-01-30 10:32:11.723286 osd.11 osd.11 10.212.32.18:6812/94831 5798 :
cluster [WRN] 5 slow requests, 1 included below; oldest blocked for >
36.714352 secs
2018-01-30 10:32:11.723290 osd.11 osd.11 10.212.32.18:6812/94831 5799 :
cluster [WRN] slow request 30.963475 seconds old, received at 2018-01-30
10:31:40.759772: osd_op(client.834441.0:1569804 20.21bs0 20.cf3c321b
(undecoded) ondisk+read+known_if_redirected e5269) currently queued_for_pg
2018-01-30 10:32:13.723602 osd.11 osd.11 10.212.32.18:6812/94831 5800 :
cluster [WRN] 6 slow requests, 1 included below; oldest blocked for >
38.714655 secs
2018-01-30 10:32:13.723607 osd.11 osd.11 10.212.32.18:6812/94831 5801 :
cluster [WRN] slow request 30.933479 seconds old, received at 2018-01-30
10:31:42.790071: osd_op(client.612286.0:59425229 3.e2431d03 3.e2431d03
(undecoded) ack+ondisk+write+known_if_redirected e5269) currently
queued_for_pg
2018-01-30 10:32:23.857767 mon.host01 mon.0 10.212.32.18:6789/0 695897 :
cluster [INF] Health check cleared: REQUEST_SLOW (was: 6 slow requests are
blocked > 32 sec)
2018-01-30 10:32:23.857841 mon.host01 mon.0 10.212.32.18:6789/0 695898 :
cluster [INF] Cluster is now healthy
2018-01-30 10:32:25.105695 mon.host01 mon.0 10.212.32.18:6789/0 695899 :
cluster [WRN] Health check failed: 4 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:32:34.869380 mon.host01 mon.0 10.212.32.18:6789/0 695900 :
cluster [WRN] Health check update: 6 slow requests are blocked > 32 sec
(REQUEST_SLOW)
2018-01-30 10:32:30.812261 osd.97 osd.97 10.212.32.26:6822/9220 3691 :
cluster [WRN] 1 slow requests, 1 included below; oldest blocked for >
30.393065 secs
2018-01-30 10:32:30.812265 osd.97 osd.97 10.212.32.26:6822/9220 3692 :
cluster [WRN] slow request 30.393065 seconds old, received at 2018-01-30
10:32:00.419168: osd_op(client.1010840.0:1005983 20.13fs0 20.4245d13f
(undecoded) ondisk+read+known_if_redirected e5269) currently queued_for_pg


Are there any non-default ceph.conf options we should set to make better use
of this hardware?
So far we have only increased "rgw thread pool size" to 256.
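For reference, the FileStore-era [osd] knobs that usually come up in this kind of tuning discussion look like the fragment below. The values are a hedged sketch, not recommendations tested on this hardware; I would change one at a time and benchmark.

```ini
[osd]
  # Default is 5 s; a longer interval lets FileStore coalesce more writes
  # between journal flushes, which can help HDD data + NVMe journal setups.
  filestore max sync interval = 10
  # Default is 2; more threads draining the FileStore op queue.
  filestore op threads = 4
```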

What throughput can I expect from the above configuration? I currently see
peaks of around 2 GB/s reads and 100 MB/s writes.

Thanks
Jakub
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
