Colin, thanks for the link to https://github.com/openzfs/zfs/issues/9966;
unfortunately I think that's a different problem. My metadata use looks
less drastic than what's described in that GitHub issue:

arc_prune                       4    1859269059
arc_meta_used                   4    3590932168
arc_meta_limit                  4    94868305920
arc_dnode_limit                 4    4294967296
arc_meta_max                    4    79969748624
arc_meta_min                    4    16777216

Even at its peak (arc_meta_max), that still leaves 14898557296 bytes of
headroom below arc_meta_limit.
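For reference, that figure is just arc_meta_limit minus arc_meta_max,
computed with a quick awk one-liner over arcstats (field names as they
appear in the dump below):

$ awk '$1=="arc_meta_limit"{l=$3} $1=="arc_meta_max"{m=$3} END{print l-m}' /proc/spl/kstat/zfs/arcstats
14898557296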

Do you know offhand what the comment about the L2ARC issue refers to? My
L2ARC device is 1 TB, well above 320 GB.
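In case that comment is about L2ARC header memory being charged against the
ARC, these are the relevant counters on this box right now (assuming
l2_hdr_size is the in-ARC header overhead for the l2_asize of cached data):

$ grep -E '^l2_(size|asize|hdr_size) ' /proc/spl/kstat/zfs/arcstats
l2_size                         4    130902416896
l2_asize                        4    98157307392
l2_hdr_size                     4    97951584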

Here's where my stats are now (the system is not currently under any
real load):

$ cat /proc/spl/kstat/zfs/arcstats
13 1 0x01 96 4608 10492531405 1374854032619736
name                            type data
hits                            4    6820744874
misses                          4    953303948
demand_data_hits                4    1163033450
demand_data_misses              4    112593263
demand_metadata_hits            4    5398565264
demand_metadata_misses          4    771209639
prefetch_data_hits              4    3978361
prefetch_data_misses            4    4454061
prefetch_metadata_hits          4    255167799
prefetch_metadata_misses        4    65046985
mru_hits                        4    2596176358
mru_ghost_hits                  4    7960783
mfu_hits                        4    3972213379
mfu_ghost_hits                  4    37066000
deleted                         4    624581823
mutex_miss                      4    66283716
access_skip                     4    112
evict_skip                      4    59660318004
evict_not_enough                4    531586443
evict_l2_cached                 4    758952236544
evict_l2_eligible               4    6629993222656
evict_l2_ineligible             4    1055824726016
evict_l2_skip                   4    4093
hash_elements                   4    1382698
hash_elements_max               4    7705756
hash_collisions                 4    101258453
hash_chains                     4    54374
hash_chain_max                  4    6
p                               4    3403412095
c                               4    9636886736
c_min                           4    4221281536
c_max                           4    126491074560
size                            4    9501317832
compressed_size                 4    6533731328
uncompressed_size               4    12946349568
overhead_size                   4    1324591616
hdr_size                        4    125140504
data_size                       4    5910385664
metadata_size                   4    1947937280
dbuf_size                       4    313455312
dnode_size                      4    801582080
bonus_size                      4    304865408
anon_size                       4    69632
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    2874187264
mru_evictable_data              4    2820684800
mru_evictable_metadata          4    11149312
mru_ghost_size                  4    1934149120
mru_ghost_evictable_data        4    254414848
mru_ghost_evictable_metadata    4    1679734272
mfu_size                        4    4984066048
mfu_evictable_data              4    3089700864
mfu_evictable_metadata          4    293028864
mfu_ghost_size                  4    32707072
mfu_ghost_evictable_data        4    32707072
mfu_ghost_evictable_metadata    4    0
l2_hits                         4    16313870
l2_misses                       4    936990042
l2_feeds                        4    1342134
l2_rw_clash                     4    1
l2_read_bytes                   4    73933849600
l2_write_bytes                  4    218946362880
l2_writes_sent                  4    45529
l2_writes_done                  4    45529
l2_writes_error                 4    0
l2_writes_lock_retry            4    21
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    878
l2_abort_lowmem                 4    496
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    130902416896
l2_asize                        4    98157307392
l2_hdr_size                     4    97951584
memory_throttle_count           4    0
memory_direct_count             4    856012
memory_indirect_count           4    233680
memory_all_bytes                4    135081009152
memory_free_bytes               4    120346943488
memory_available_bytes          3    118236303360
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    1859269059
arc_meta_used                   4    3590932168
arc_meta_limit                  4    94868305920
arc_dnode_limit                 4    4294967296
arc_meta_max                    4    79969748624
arc_meta_min                    4    16777216
sync_wait_for_async             4    608494
demand_hit_predictive_prefetch  4    5716590
arc_need_free                   4    0
arc_sys_free                    4    2110640768
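
For what it's worth, arc_prune is already at 1859269059 here; something like
this rough loop should show whether it's still climbing while the slowdown
is actually happening (the 10-second interval is an arbitrary choice):

$ while sleep 10; do
    date +%T
    grep -E '^(arc_prune|arc_meta_used) ' /proc/spl/kstat/zfs/arcstats
  done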


Thanks

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1814983

Title:
  zfs poor sustained read performance from ssd pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1814983/+subscriptions
