As Dan said, it looks like ZFS is busy.
That's very odd, as the system isn't doing anything I/O heavy. It has only a
single zpool of five devices in a raidz serving a single filesystem, and that
filesystem only writes logs at a rate of about 10MB/s. ZFS compression is
turned off.
I'm cross-posting to zfs-discuss, as this is now more of a ZFS
query than a dtrace query, and I'm not sure all the ZFS
experts are listening on dtrace-discuss (although they probably
are... :^).
Can you gather some ZFS I/O statistics, e.g. run
"fsstat zfs 1" for a minute or so?
Here is a snapshot from when it is exhibiting the behavior:
 new  name   name  attr  attr lookup rddir  read read  write write
file  remov  chng   get   set    ops   ops   ops bytes    ops bytes
   0      0     0
The only thing that jumps out at me is the ARC size - 53.4GB, or most
of your 64GB of RAM. This in and of itself is not necessarily a bad
thing - if there are no other memory consumers, let ZFS cache data in
the ARC. But if something is coming along to flush dirty ARC pages
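If the ARC does need to be bounded, one common approach on Solaris-derived
systems is to cap it via zfs_arc_max in /etc/system. A sketch only - the
48GB ceiling below is a placeholder value, not a recommendation for this
workload:

```
* /etc/system fragment (hypothetical): cap the ARC at ~48GB
* 0xC00000000 bytes = 48 * 2^30 = 51539607552
set zfs:zfs_arc_max = 0xC00000000
```

The cap takes effect after a reboot; the current ARC size can be checked
at any time with "kstat -p zfs:0:arcstats:size".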
Hi Hans-Peter,
pid provider probes are created on demand. If the system is running
low on probes (there's a limit), it will start culling old and unused
probes. You can recreate them by naming them explicitly in a dtrace(1M)
invocation.
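For example, a one-liner along these lines will instantiate the named pid
probes again (the PID, module, and function here are placeholders, not
taken from your system):

```
# re-create the entry probe on main for process 12345 and trace it
dtrace -n 'pid12345:a.out:main:entry { trace(timestamp); }'
```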
Adam
On Sep 13, 2009, at 3:24 PM, Sloot, Hans-Peter wrote: