I forgot to mention that my L2ARC is attached to the second pool on this system:

$ zpool iostat -v
                                               capacity     operations     bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.79T  1.84T    611     52  5.51M   968K
  raidz1                                     1.79T  1.84T    611     52  5.51M   968K
    wwn-0x5002538e4095da39                       -      -    154     12  1.40M   242K
    wwn-0x5002538e4095bdd6                       -      -    150     12  1.36M   242K
    wwn-0x5002538e4093c6fd                       -      -    154     13  1.40M   242K
    wwn-0x5002538e4095da30                       -      -    150     13  1.36M   242K
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.69T  6.47T     45     11  4.08M   624K
  mirror                                      576G  2.16T     15      3  1.36M   206K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      5      1   464K  68.8K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      5      1   465K  68.8K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      5      1   465K  68.8K
  mirror                                      576G  2.16T     15      3  1.36M   209K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      5      1   465K  69.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      5      1   464K  69.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      5      1   464K  69.7K
  mirror                                      576G  2.16T     15      3  1.36M   208K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      5      1   464K  69.3K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      5      1   464K  69.3K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      5      1   464K  69.3K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   776K  19.9G      0      0      1      1
cache                                            -      -      -      -      -      -
  nvme0n1p2                                  91.4G  1006G     11      1  52.5K   155K
-------------------------------------------  -----  -----  -----  -----  -----  -----

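For what it's worth, a quick way to check whether that cache vdev is actually serving reads is to look at the L2ARC counters in the ARC kstats (on Linux/OpenZFS they are under /proc/spl/kstat/zfs/arcstats) and to watch the srv pool on its own; the 5-second interval below is just an example:

$ grep -E '^l2_(hits|misses|size)' /proc/spl/kstat/zfs/arcstats   # L2ARC hit/miss/size counters
$ zpool iostat -v srv 5                                           # per-vdev stats for the pool holding the cache device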