For posterity, I'd like to point out the following:
Neel's original arcstat.pl uses a crude scaling routine that results in a large
loss of precision as numbers cross from kilobytes to megabytes to gigabytes.
The 1G reported arc size described here could actually be anywhere
between
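For illustration, here is a minimal Python sketch (not Neel's actual Perl code) of the kind of truncating scaler that produces this behavior: once a value crosses a unit boundary, everything below the next whole unit is discarded, so a reported "1G" covers anything from 1.0 up to just under 2.0 gigabytes.

```python
# A hypothetical truncating scaler, for illustration only.
def crude_scale(bytes_):
    """Report whole units only, discarding the fractional part."""
    for factor, suffix in ((2**30, "G"), (2**20, "M"), (2**10, "K")):
        if bytes_ >= factor:
            return "%d%s" % (bytes_ // factor, suffix)
    return "%dB" % bytes_

# These two sizes differ by almost a full gigabyte,
# yet both are reported as "1G".
print(crude_scale(int(1.00 * 2**30)))  # 1G
print(crude_scale(int(1.99 * 2**30)))  # 1G
```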
Hello Mike,
thank you for your update.
r...@s0011 # ./arcstat.pl 3
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
11:23:38  197K  7.8K      3  5.7K    3  2.1K    4  6.1K    5   511M  1.5G
11:23:41     7     0      0     0    0     0    0     0    0   511M  1.5G
11:23:44     7     6      0     0    0
Hmm... according to
http://www.mail-archive.com/vbox-users-commun...@lists.sourceforge.net/msg00640.html
that's only needed before VirtualBox 3.2, or for IDE. >= 3.2, non-IDE should
honor flush requests, if I read that correctly.
Which is good, because I haven't seen an example of how to enable
Hi!
Hi all
I just tested dedup on this test box running OpenIndiana (147), storing Bacula
backups, and did some more testing on some datasets with ISO images. The
results so far show that removing 30GB deduped datasets is done in a matter
of minutes, which was not the case with 134
I have an SMB share on a ZFS volume.
When I copy a file with the read-only attribute to the share, the file becomes
undeletable. I can't change the Read-Only attribute from Windows.
I can only delete it from OpenSolaris.
What do I need to do to avoid this situation? (Other than not setting the
read-only attribute on the source file.)
Hello Christian,
Thanks for bringing this to my attention. I believe I've fixed the rounding
error in the latest version.
http://github.com/mharsch/arcstat
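For reference, a hedged Python sketch of what a higher-precision scaler looks like (not necessarily the actual patch in mharsch/arcstat, which is Perl): keeping one decimal place lets 1.5 GiB print as "1.5G" instead of truncating to "1G".

```python
# Hypothetical fixed scaler: keep one fractional digit when scaling.
def scale(bytes_):
    for factor, suffix in ((2**30, "G"), (2**20, "M"), (2**10, "K")):
        if bytes_ >= factor:
            return "%.1f%s" % (bytes_ / float(factor), suffix)
    return "%dB" % bytes_

print(scale(int(1.5 * 2**30)))  # 1.5G
print(scale(511 * 2**20))       # 511.0M
```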
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
przemol,
Thanks for the feedback. I had incorrectly assumed that any machine running
the script would have L2ARC implemented (which is not the case with Solaris
10). I've added a check for this that allows the script to work on non-L2ARC
machines as long as you don't specify L2ARC stats on
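A hypothetical sketch of the kind of check described, in Python for illustration (arcstat itself is Perl): before honoring a request for L2ARC columns, verify the kernel actually exports L2ARC kstats. The dict stands in for a parsed `kstat zfs:0:arcstats` snapshot, and the field names are illustrative.

```python
# Illustrative check: refuse L2ARC columns on machines without L2ARC kstats.
def usable_columns(requested, kstats):
    """Return the requested columns, or exit if L2ARC stats are missing."""
    l2_fields = [c for c in requested if c.startswith("l2")]
    missing = [c for c in l2_fields if c not in kstats]
    if missing:
        raise SystemExit("No L2ARC stats here: %s" % ", ".join(missing))
    return requested

# Stand-in for a Solaris 10 arcstats snapshot (no l2_* fields exported).
solaris10_kstats = {"hits": 197000, "misses": 7800, "size": 511 * 2**20}
print(usable_columns(["hits", "misses"], solaris10_kstats))  # works fine
# usable_columns(["l2hits"], solaris10_kstats) would exit with an error
```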
Hello
I'll first give you my setup, and then explain my problems.

NexentaOS_134f i86pc i386 i86pc Solaris - NexentaCore 3.0 Hardy 8.04 b134
2x Western Digital Caviar Black WD6402AAEX 640GB 7200 SATA 6.0Gb/s (Mirror Boot)
9x HITACHI Deskstar 7K2000 HDS722020ALA330 2TB 7200 SATA 3.0Gb/s
I'm working on a scenario in which file system activity appears to
cause the ARC to evict metadata. I would like to have a
preference to keep the metadata in cache over ZFS file data.
What I've noticed is that on import of a zpool, arc_meta_used goes up
significantly. ZFS metadata
On Fri, Oct 1, 2010 at 11:46 AM, David Blasingame Oracle
david.blasing...@oracle.com wrote:
> I'm working on this scenario in which file system activity appears to cause
> the arc cache to evict meta data. I would like to have a preference to keep
> the metadata in cache over ZFS File Data
What
Hey folks;
Running on Solaris 10 U9 here. How do most of you monitor disk usage /
capacity on your large zpools remotely via SNMP tools?
Net SNMP seems to be using a 32-bit unsigned integer (based on the MIB)
for hrStorageSize and friends, and thus we're not able to get accurate
numbers for
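The width problem can be sketched numerically. The numbers below are illustrative (a hypothetical 6 TiB pool with a 4 KiB allocation unit): a raw byte count stuffed into a 32-bit field wraps modulo 2^32, while reporting the size in allocation units, as the Host Resources MIB's hrStorageAllocationUnits allows, keeps the value in range.

```python
# Hypothetical 6 TiB pool, for illustration.
pool_bytes = 6 * 2**40

# Forced into 32 bits, the byte count wraps modulo 2**32.
wrapped = pool_bytes % 2**32
print(wrapped)  # 0 -- nothing like 6 TiB

# Reporting in allocation units (here 4 KiB, an assumed unit size)
# keeps the value within the 31-bit positive range hrStorageSize allows.
unit = 4096
in_units = pool_bytes // unit
print(in_units < 2**31)  # True
```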
Hello Ray, hello list!
> Running on Solaris 10 U9 here. How do most of you monitor disk usage /
> capacity on your large zpools remotely via SNMP tools?
> Net SNMP seems to be using a 32-bit unsigned integer (based on the MIB)
> for hrStorageSize and friends, and thus we're not able to get
On Fri, Oct 01, 2010 at 03:00:16PM -0700, Volker A. Brandt wrote:
> Hello Ray, hello list!
> > Running on Solaris 10 U9 here. How do most of you monitor disk usage /
> > capacity on your large zpools remotely via SNMP tools?
> > Net SNMP seems to be using a 32-bit unsigned integer (based on the