Public bug reported:
Hello,
I'm seeing substantially slower read performance from an SSD pool than I
expected.
I have two pools on this computer: one ('fst') is four SATA SSDs; the
other ('srv') is nine spinning-metal drives.
With a long-running ripgrep process on the fst pool, performance started
out very good and grew to astonishingly good (IIRC ~30k IOPS, as
measured by zpool iostat -v 1). After a few hours, however, performance
dropped to 30-40 IOPS, and top reports an arc_reclaim thread and many
arc_prune threads consuming most of the CPU time.
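
In case it helps with triage, here is a rough way to check whether this
is ARC metadata-limit pressure (a sketch only; the field names are the
standard ZoL kstats in /proc/spl/kstat/zfs/arcstats, and I haven't
verified that this is the right diagnostic):

    # Watch metadata cached vs. the metadata limit, plus the prune counter.
    # If arc_meta_used sits at or above arc_meta_limit while the arc_prune
    # counter climbs rapidly, the ARC is thrashing on its metadata limit.
    while sleep 1; do
        grep -E '^(arc_meta_used|arc_meta_limit|arc_prune) ' \
            /proc/spl/kstat/zfs/arcstats
    done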
I've included the output of top, zpool iostat -v 1, and arc_summary
below, with "===" marking the start of each command's output:
===
top (memory in gigabytes):
top - 16:27:53 up 70 days, 16:03, 3 users, load average: 35.67, 35.81, 35.58
Tasks: 809 total, 19 running, 612 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 58.1 sy, 0.0 ni, 39.2 id, 2.6 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 125.805 total, 0.620 free, 96.942 used, 28.243 buff/cache
GiB Swap: 5.694 total, 5.688 free, 0.006 used. 27.840 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1523 root 20 0 0.0m 0.0m 0.0m R 100.0 0.0 290:52.26 arc_reclaim
4484 root 20 0 0.0m 0.0m 0.0m R 56.2 0.0 1:18.79 arc_prune
6225 root 20 0 0.0m 0.0m 0.0m R 56.2 0.0 1:11.92 arc_prune
7601 root 20 0 0.0m 0.0m 0.0m S 56.2 0.0 2:50.25 arc_prune
30891 root 20 0 0.0m 0.0m 0.0m S 56.2 0.0 1:33.08 arc_prune
3057 root 20 0 0.0m 0.0m 0.0m S 55.9 0.0 9:00.95 arc_prune
3259 root 20 0 0.0m 0.0m 0.0m R 55.9 0.0 3:16.84 arc_prune
24008 root 20 0 0.0m 0.0m 0.0m S 55.9 0.0 1:55.71 arc_prune
1285 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 3:20.52 arc_prune
5345 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 1:15.99 arc_prune
30121 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 1:35.50 arc_prune
31192 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 6:17.16 arc_prune
32287 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 1:28.02 arc_prune
32625 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 1:27.34 arc_prune
22572 root 20 0 0.0m 0.0m 0.0m S 55.3 0.0 10:02.92 arc_prune
31989 root 20 0 0.0m 0.0m 0.0m R 55.3 0.0 1:28.03 arc_prune
3353 root 20 0 0.0m 0.0m 0.0m R 54.9 0.0 8:58.81 arc_prune
10252 root 20 0 0.0m 0.0m 0.0m R 54.9 0.0 2:36.37 arc_prune
1522 root 20 0 0.0m 0.0m 0.0m S 53.9 0.0 158:42.45 arc_prune
3694 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 1:20.79 arc_prune
13394 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 10:35.78 arc_prune
24592 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 1:54.19 arc_prune
25859 root 20 0 0.0m 0.0m 0.0m S 53.9 0.0 1:51.71 arc_prune
8194 root 20 0 0.0m 0.0m 0.0m S 53.6 0.0 0:54.51 arc_prune
18472 root 20 0 0.0m 0.0m 0.0m R 53.6 0.0 2:08.73 arc_prune
29525 root 20 0 0.0m 0.0m 0.0m R 53.6 0.0 1:35.81 arc_prune
32291 root 20 0 0.0m 0.0m 0.0m S 53.6 0.0 1:28.00 arc_prune
3156 root 20 0 0.0m 0.0m 0.0m R 53.3 0.0 3:17.68 arc_prune
6224 root 20 0 0.0m 0.0m 0.0m S 53.3 0.0 1:11.80 arc_prune
9788 root 20 0 0.0m 0.0m 0.0m S 53.3 0.0 0:46.00 arc_prune
10341 root 20 0 0.0m 0.0m 0.0m R 53.3 0.0 2:36.23 arc_prune
11881 root 20 0 0.0m 0.0m 0.0m S 53.0 0.0 2:31.57 arc_prune
24030 root 20 0 0.0m 0.0m 0.0m R 52.6 0.0 1:55.44 arc_prune
===
zpool iostat -v 1 output (several consecutive one-second samples):
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     68      0   650K      0
  mirror                                      588G   340G     31      0   331K      0
    sdj                                          -      -     20      0   179K      0
    sdk                                          -      -     10      0   152K      0
  mirror                                      588G   340G     36      0   319K      0
    sdl                                          -      -     17      0   132K      0
    sdm                                          -      -     18      0   187K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2     35   187K   144K
  mirror                                      443G  2.29T      0      0  63.8K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  63.8K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0     17      0  71.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      5      0  23.9K
  mirror                                      443G  2.29T      1     17   124K  71.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1      5   124K  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      5      0  23.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    110      0  1.07M      0
  mirror                                      588G   340G     59      0   634K      0
    sdj                                          -      -     28      0   303K      0
    sdk                                          -      -     30      0   331K      0
  mirror                                      588G   340G     50      0   459K      0
    sdl                                          -      -     28      0   303K      0
    sdm                                          -      -     21      0   155K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    229   183K  1.00M
  mirror                                      443G  2.29T      2     73   183K   335K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      2     24   183K   112K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     24      0   112K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     23      0   112K
  mirror                                      443G  2.29T      0     77      0   347K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     25      0   116K
  mirror                                      443G  2.29T      0     77      0   347K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     25      0   116K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     29      0   403K      0
  mirror                                      588G   340G     12      0   171K      0
    sdj                                          -      -      7      0  79.7K      0
    sdk                                          -      -      4      0  91.7K      0
  mirror                                      588G   340G     16      0   231K      0
    sdl                                          -      -      6      0   128K      0
    sdm                                          -      -      9      0   104K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      0     66  63.8K   359K
  mirror                                      443G  2.29T      0     21      0   120K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      7      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      6      0  39.9K
  mirror                                      443G  2.29T      0     21      0   120K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      7      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      6      0  39.9K
  mirror                                      443G  2.29T      0     22  63.8K   120K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0      7  63.8K  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      7      0  39.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     97      0   797K      0
  mirror                                      588G   340G     58      0   474K      0
    sdj                                          -      -     27      0   263K      0
    sdk                                          -      -     30      0   211K      0
  mirror                                      588G   340G     38      0   323K      0
    sdl                                          -      -     23      0   203K      0
    sdm                                          -      -     14      0   120K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    176   187K   789K
  mirror                                      443G  2.29T      0     58  59.8K   263K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0     19  59.8K  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     18      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     19      0  87.7K
  mirror                                      443G  2.29T      0     59      0   263K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     19      0  87.7K
  mirror                                      443G  2.29T      1     57   128K   263K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1     18   128K  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     18      0  87.7K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     70      0   426K      0
  mirror                                      588G   340G     38      0   263K      0
    sdj                                          -      -     21      0   128K      0
    sdk                                          -      -     16      0   135K      0
  mirror                                      588G   340G     31      0   163K      0
    sdl                                          -      -     10      0  67.7K      0
    sdm                                          -      -     20      0  95.6K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      1     46   116K  2.36M
  mirror                                      443G  2.29T      0      0  59.8K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  59.8K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0     37      0  2.31M
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     12      0   789K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     11      0   789K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     12      0   789K
  mirror                                      443G  2.29T      0      8  55.8K  47.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0      2  55.8K  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      2      0  15.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    108      0   614K      0
  mirror                                      588G   340G     50      0   299K      0
    sdj                                          -      -     32      0   203K      0
    sdk                                          -      -     17      0  95.6K      0
  mirror                                      588G   340G     57      0   315K      0
    sdl                                          -      -     30      0   155K      0
    sdm                                          -      -     26      0   159K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2     68   191K   311K
  mirror                                      443G  2.29T      0      8      0  47.8K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      2      0  15.9K
  mirror                                      443G  2.29T      0     29      0   132K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      9      0  43.8K
  mirror                                      443G  2.29T      2     29   191K   132K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      2      9   191K  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      9      0  43.8K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     66      0   379K      0
  mirror                                      588G   340G     26      0   144K      0
    sdj                                          -      -     12      0  63.8K      0
    sdk                                          -      -     13      0  79.7K      0
  mirror                                      588G   340G     39      0   235K      0
    sdl                                          -      -     19      0   120K      0
    sdm                                          -      -     19      0   116K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    166   183K   754K
  mirror                                      443G  2.29T      0     55      0   251K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0     18      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     18      0  83.7K
  mirror                                      443G  2.29T      0     54      0   251K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     18      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     17      0  83.7K
  mirror                                      443G  2.29T      2     55   183K   251K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      2     18   183K  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     18      0  83.7K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    126      0   698K      0
  mirror                                      588G   340G     64      0   335K      0
    sdj                                          -      -     37      0   195K      0
    sdk                                          -      -     26      0   140K      0
  mirror                                      588G   340G     61      0   363K      0
    sdl                                          -      -     34      0   207K      0
    sdm                                          -      -     26      0   155K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      3    274   239K  1.23M
  mirror                                      443G  2.29T      1     91   120K   418K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      1     30   120K   139K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     29      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     30      0   139K
  mirror                                      443G  2.29T      0     91      0   418K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     30      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     29      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     30      0   139K
  mirror                                      443G  2.29T      1     91   120K   418K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1     30   120K   139K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     30      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     29      0   139K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                        alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     70      0   442K      0
  mirror                                      588G   340G     36      0   215K      0
    sdj                                          -      -     18      0  95.6K      0
    sdk                                          -      -     17      0   119K      0
  mirror                                      588G   340G     33      0   227K      0
    sdl                                          -      -     15      0   123K      0
    sdm                                          -      -     17      0   104K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2      0   187K      0
  mirror                                      443G  2.29T      0      0  63.7K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  63.7K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      1      0   123K      0
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1      0   123K      0
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      0      0      0
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
===
arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Wed Feb 06 16:34:09 2019
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 142.80m
Mutex Misses: 14.89m
Evict Skips: 21.87b
ARC Size: 86.88% 54.65 GiB
Target Size: (Adaptive) 78.61% 49.45 GiB
Min Size (Hard Limit): 6.25% 3.93 GiB
Max Size (High Water): 16:1 62.90 GiB
ARC Size Breakdown:
Recently Used Cache Size: 25.05% 5.55 GiB
Frequently Used Cache Size: 74.95% 16.62 GiB
ARC Hash Breakdown:
Elements Max: 7.46m
Elements Current: 60.33% 4.50m
Collisions: 43.79m
Chain Max: 8
Chains: 504.81k
ARC Total accesses: 2.51b
Cache Hit Ratio: 95.24% 2.39b
Cache Miss Ratio: 4.76% 119.32m
Actual Hit Ratio: 92.99% 2.33b
Data Demand Efficiency: 94.16% 486.53m
Data Prefetch Efficiency: 86.47% 29.68m
CACHE HITS BY CACHE LIST:
Anonymously Used: 2.20% 52.51m
Most Recently Used: 37.35% 891.38m
Most Frequently Used: 60.29% 1.44b
Most Recently Used Ghost: 0.10% 2.32m
Most Frequently Used Ghost: 0.06% 1.45m
CACHE HITS BY DATA TYPE:
Demand Data: 19.20% 458.13m
Prefetch Data: 1.08% 25.66m
Demand Metadata: 78.22% 1.87b
Prefetch Metadata: 1.51% 36.00m
CACHE MISSES BY DATA TYPE:
Demand Data: 23.80% 28.40m
Prefetch Data: 3.37% 4.02m
Demand Metadata: 66.03% 78.79m
Prefetch Metadata: 6.80% 8.12m
L2 ARC Summary: (HEALTHY)
Low Memory Aborts: 233
Free on Write: 27.52k
R/W Clashes: 0
Bad Checksums: 0
IO Errors: 0
L2 ARC Size: (Adaptive) 364.94 GiB
Compressed: 91.59% 334.23 GiB
Header Size: 0.08% 307.98 MiB
L2 ARC Breakdown: 119.32m
Hit Ratio: 1.42% 1.69m
Miss Ratio: 98.58% 117.63m
Feeds: 6.01m
L2 ARC Writes:
Writes Sent: 100.00% 279.55k
DMU Prefetch Efficiency: 1.89b
Hit Ratio: 2.24% 42.49m
Miss Ratio: 97.76% 1.85b
ZFS Tunable:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 104857600
dbuf_cache_max_shift 5
dmu_object_alloc_chunk_shift 7
ignore_hole_birth 1
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_noprefetch 1
l2arc_norw 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 524288
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
metaslabs_per_vdev 200
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_verify_data 1
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
spa_slop_shift 5
zfetch_array_rd_sz 1048576
zfetch_max_distance 8388608
zfetch_max_streams 8
zfetch_min_sec_reap 2
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 10
zfs_admin_snapshot 1
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 0
zfs_arc_meta_adjust_restarts 4096
zfs_arc_meta_limit 0
zfs_arc_meta_limit_percent 75
zfs_arc_meta_min 0
zfs_arc_meta_prune 10000
zfs_arc_meta_strategy 1
zfs_arc_min 0
zfs_arc_min_prefetch_lifespan 0
zfs_arc_p_aggressive_disable 1
zfs_arc_p_dampener_disable 1
zfs_arc_p_min_shift 0
zfs_arc_pc_percent 0
zfs_arc_shrink_shift 0
zfs_arc_sys_free 0
zfs_autoimport_disable 1
zfs_compressed_arc_enabled 1
zfs_dbgmsg_enable 0
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_deadman_checktime_ms 5000
zfs_deadman_enabled 1
zfs_deadman_synctime_ms 1000000
zfs_dedup_prefetch 0
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync 67108864
zfs_dmu_offset_next_sync 0
zfs_expire_snapshot 300
zfs_flags 0
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_max_blocks 100000
zfs_free_min_time_ms 1000
zfs_immediate_write_sz 32768
zfs_max_recordsize 1048576
zfs_mdcomp_disable 0
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_mg_fragmentation_threshold 85
zfs_mg_noalloc_threshold 0
zfs_multihost_fail_intervals 5
zfs_multihost_history 0
zfs_multihost_import_intervals 10
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 30
zfs_prefetch_disable 0
zfs_read_chunk_size 1048576
zfs_read_history 0
zfs_read_history_hits 0
zfs_recover 0
zfs_resilver_delay 2
zfs_resilver_min_time_ms 3000
zfs_scan_idle 50
zfs_scan_min_time_ms 1000
zfs_scrub_delay 4
zfs_send_corrupt_data 0
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 5
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_top_maxinflight 32
zfs_txg_history 0
zfs_txg_timeout 5
zfs_vdev_aggregation_limit 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_cache_bshift 16
zfs_vdev_cache_max 16384
zfs_vdev_cache_size 0
zfs_vdev_max_active 1000
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl [fastest] original scalar sse2 ssse3 avx2
zfs_vdev_read_gap_limit 32768
zfs_vdev_scheduler noop
zfs_vdev_scrub_max_active 2
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_write_gap_limit 4096
zfs_zevent_cols 80
zfs_zevent_console 0
zfs_zevent_len_max 512
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zil_replay_disable 0
zil_slog_bulk 786432
zio_delay_max 30000
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_taskq_batch_pct 75
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 32
zvol_volmode 1
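
If this does turn out to be metadata-limit thrashing, a workaround I may
experiment with (a sketch only: these module parameters are the standard
ones listed in the tunables above, but the values are guesses I haven't
tested):

    # Let metadata use more of the ARC (zfs_arc_meta_limit_percent
    # currently defaults to 75, per the tunables dump above):
    echo 90 > /sys/module/zfs/parameters/zfs_arc_meta_limit_percent
    # Or reserve a floor for metadata so pruning backs off sooner
    # (value in bytes; 8 GiB here is an arbitrary guess):
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_meta_min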
Thanks
ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: zfsutils-linux 0.7.5-1ubuntu16.4
ProcVersionSignature: Ubuntu 4.15.0-39.42-generic 4.15.18
Uname: Linux 4.15.0-39-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.9-0ubuntu7.5
Architecture: amd64
Date: Wed Feb 6 16:26:17 2019
InstallationDate: Installed on 2016-04-04 (1038 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Beta amd64 (20160325)
ProcEnviron:
TERM=rxvt-unicode-256color
PATH=(custom, no user)
XDG_RUNTIME_DIR=<set>
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: zfs-linux
UpgradeStatus: Upgraded to bionic on 2018-08-16 (174 days ago)
modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']
** Affects: zfs-linux (Ubuntu)
Importance: Undecided
Status: New
** Tags: amd64 apport-bug bionic
https://bugs.launchpad.net/bugs/1814983
Title:
zfs poor sustained read performance from ssd pool