This is not a performance issue. The rsync hangs hard and one of the child
processes cannot be killed (I assume it's the one running on the ZFS). When I say
the commands get slower, I am referring to the output of the file system commands
(zpool, zfs, df, du, etc.) from a different shell.
I am left with 3 shelves with 2 controllers each, 48 drives per
shelf. These are Fibre Channel attached. We would like
all 144 drives added to the same large pool.
I would do either a 12 or 16 disk raidz3 vdev and spread the disks across
controllers within vdevs. You may also want to leave at least 1 spare.
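Purely as an illustration of that kind of layout (the controller and target
numbers are hypothetical placeholders, not from the original post), a 12-disk
raidz3 vdev drawing 4 disks from each of 3 controllers, plus a hot spare, could
be built like this:
# zpool create bigpool \
    raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
           c3t0d0 c3t1d0 c3t2d0 c3t3d0 \
    spare c1t4d0
Further raidz3 vdevs are then appended with zpool add as more disks are assigned.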
Why would you recommend a spare for raidz2 or raidz3?
-- richard
The spare is to minimize the reconstruction time, because remember that a vdev
cannot start resilvering until a spare disk is available. And with disks as big
as they are today, resilvering takes many hours. I would rather have
Would your opinion change if the disks you used took
7 days to resilver?
Bob
That will only make a stronger case that a hot spare is absolutely needed.
It will also make a strong case for choosing raidz3 over raidz2, as well as for
vdevs with a smaller number of disks.
Looks like I am hitting the same issue now
as in the earlier post that you responded to:
http://opensolaris.org/jive/thread.jspa?threadID=128532tstart=15
I continued my test migration with dedup=off and synced a couple more file
systems.
I decided to merge two of the file systems together by
ZVOLs 'vol01/zvol01' and
'vol01/zvol02', under COMSTAR soon.
http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration
- Jim
Przem
From: Rick McNeal
Przem,
Does anybody have an idea what I can do about it?
zfs set shareiscsi=off vol01/zvol01
zfs set shareiscsi=off vol01/zvol02
Doing this will have no impact on the LUs if configured under COMSTAR.
This will also transparently go away with b136, when ZFS ignores the shareiscsi
property.
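For anyone following along, a hedged sketch of how a zvol is typically exposed
through COMSTAR instead of shareiscsi (the GUID shown is a truncated placeholder):
# sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01
# stmfadm add-view 600144f0...
# itadm create-target
create-lu registers the zvol as a logical unit and prints its GUID, add-view
makes that LU visible to initiators, and itadm create-target provides the iSCSI
target they log in to.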
- Jim
Okay, so after some tests with dedup on snv_134, I decided we cannot use the
dedup feature for the time being.
Since I was unable to destroy a dedupped file system, I decided to migrate the
file system to another pool and then destroy the pool. (see below)
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today     0      -   407G  -
r...@filearch1:/var/adm# zfs list
I was expecting
zfs send tank/export/projects/project1...@today
would send everything up to @today. That is the only snapshot, and I am not
using the -i option.
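For reference, a minimal sketch of the difference, using generic names (tank/fs,
otherpool and the @yesterday snapshot are placeholders):
# zfs send tank/fs@today | zfs receive otherpool/fs
# zfs send -i @yesterday tank/fs@today | zfs receive otherpool/fs
The first form sends everything referenced by @today; the second, with -i, sends
only the changes between @yesterday and @today.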
The thing that worries me is that tank/export/projects/project1_nb was the first
file system that I tested with full dedup and
When I boot up without the disks in the slots, I manually bring the pool online
with
zpool clear poolname
I believe that was what you were missing from your command. However, I did not
try to change controllers.
Hopefully you have only been unplugging disks while the system is turned off. If
that's the case, you may or may not need to add the log device back.
zpool clear should bring the pool online.
Either way, it shouldn't affect the data.
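If the log device does need to be re-added, a sketch (device names are
placeholders):
# zpool add poolname log c1t9d0
or, for a mirrored log:
# zpool add poolname log mirror c1t9d0 c2t9d0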
Hi All, is there any procedure to recover a filesystem from an offline pool, or
to bring a pool online quickly?
Here is my issue.
* One 700GB zpool
* 1 filesystem with compression turned on (only using a few MB)
* Tried to migrate another filesystem from a different pool with a dedup stream,
using
zfs send
10GB of memory + 5 days later, the pool was imported.
This file server is a virtual machine. I allocated 2GB of memory and 2 CPU
cores, assuming this was enough to manage 6 TB (6x 1TB disks), while the pool I
am trying to recover is only 700 GB and not the 6TB pool I am trying to migrate.
So I decided
, but most
people may init them by packages (though zoneadm says it is copying thousands
of files), so /etc/skel might be a better example of the usecase - though
nearly useless ,)
jim
A solution to this problem would be my early Christmas present!
Here is how I lost access to an otherwise healthy mirrored pool two months ago:
Box running snv_130 with two disks in a mirror and an iRAM battery-backed
ZIL device was shut down in an orderly fashion and powered down normally. While I was away
, plus updates to files' atime attr - and that particular scale of
operation will be greatly improved by an NVRAM ZIL.
If I were to use a ZIL device again, I'd use something like the ACARD DDR-2 SATA
boxes, and not an SSD or an iRAM.
-- Jim
I have been looking at why a zfs receive operation is terribly slow, and one
observation that seemed directly linked to why it is slow is that at any one
time one of the CPUs is pegged at 100% sys while the other 5 (in my case) are
relatively quiet. I haven't dug any deeper than that, but was
Just an update, I had a ticket open with Sun regarding this and it looks like
they have a CR for what I was seeing (6975124).
, but I guess I just delayed the
freeze a little longer. I provided Oracle some explorer output and a crash
dump to analyze and this is the data they used to provide the information I
passed on.
Jim Barker
it, but I wanted to know if there are more of them.
Assuming that the ZFS filesystem in question is not degrading further (as in a
disk going bad), upon completion of a successful scrub, zpool reports the
complete status of the filesystem being reported on.
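For example (the pool name is a placeholder):
# zpool scrub mypool
# zpool status -v mypool
zpool status shows the scrub progress or completion time, along with any devices
or files that had errors.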
- Jim
Regards,
budy
I have a 20TB pool on a mount point that is made up of 42 disks from an EMC
SAN. We were running out of space and down to 40GB left (loading 8GB/day), and
we have not received disks for our SAN. Using df -h results in:
Filesystem             size   used  avail capacity  Mounted on
pool1
Yes, you're correct. There was a typo when I copied to the forum.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Yes. We run a snap in cron to a disaster recovery site.
NAME                      USED  AVAIL  REFER  MOUNTPOINT
po...@20100930-22:20:00  13.2M      -  19.5T  -
po...@20101001-01:20:00  4.35M      -  19.5T  -
po...@20101001-04:20:00      0      -  19.5T  -
po...@20101001-07:20:00
One of us found the following:
The presence of snapshots can cause some unexpected behavior when you attempt
to free space. Typically, given appropriate permissions, you can remove a file
from a full file system, and this action results in more space becoming
available in the file system.
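In other words, blocks freed by deleting files are still referenced by the
snapshots. A hedged sketch of checking and reclaiming that space (substitute the
real snapshot names from the listing above):
# zfs list -t snapshot -o name,used,refer
# zfs destroy pool@oldest-snapshot
Destroying the oldest snapshots releases the blocks that only they still
reference.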
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote:
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online
with no errors and everything looks good, but when we try to access zvols shared
out with COMSTAR, Windows reports that the devices have bad blocks.
Everything has been
c0t5000C500268C0576d0 c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0
c0t5000C500268D0821d0
- Jim
Unfortunately I get an error:
cannot open '/dev/dsk/c0t5000C500268CFA6Bd0s0': I/O error
Can anyone give me some clues as to what is wrong?
I have included the zpool status and format
There is nothing in here that requires zfs confidential.
cross-posted to zfs discuss.
On Oct 21, 2010, at 3:37 PM, Jim Nissen wrote:
Cross-posting.
Original Message
Subject: Performance problems due to smaller ZFS recordsize
Date: Thu, 21 Oct 2010 14:00:42 -0500
Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the least,
compelling.
Obviously, it would be awesome if we had the opportunity to whittle down which
of the changes made this fly, or whether it was a combination of the changes.
Looking at them individually
set
Jim,
They are running Solaris 10 11/06 (u3) with kernel patch 142900-12. See
inline for the rest...
On 10/25/10 11:19 AM, Jim Mauro wrote:
Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the
least, compelling.
Obviously, it would be awesome if we had the opportunity
.
Also, I have observed that zpool import took some time to complete successfully.
Is there a way to minimize the zpool import -f operation time?
No.
- Jim
Regards,
sridhar.
/mpxio/mpath whatever your OS
calls multi-pathing.
MC/S (Multiple Connections per Session) support was added to the iSCSI Target
in COMSTAR, now available in Oracle Solaris 11 Express.
- Jim
-Ross
Tim,
On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham james.dun...@oracle.com wrote:
sridhar,
I have done the following (which is required for my case)
Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1, and
created an array-level snapshot of the device using dscli.
on the manual
formatting done above)
NOTE: Omitting the slice designator ('s0' or 's1' above) will cause ZFS to
(re)format the whole device, undoing any manual partitioning done with
format.
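To illustrate the note (the device name is hypothetical):
# zpool create smpool c2t0d0s0
uses only slice 0 and preserves the manual label, while
# zpool create smpool c2t0d0
hands ZFS the whole disk, which it relabels (EFI) and reformats.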
Jim
So - is it local to a pool, or global?
If it's global, will I need to do something
of dedup, I found that the latest version of
VDBench supports dedup, and is helpful in narrowing in on specific issues
related to the size of the DDT, the ARC and L2ARC.
http://blogs.sun.com/henk/entry/first_beta_version_of_vdbench
Jim
Thanks for the help,
Janice
:
dd of=/dev/rdsk/c?t?d?s0 if=/dev/rdsk/c?t?d?s0 seek=4294967296 count=1
Note: Make sure that both devices specified (/dev/rdsk/c?t?d?s0) are identical
so that the data written is identical to the data read.
- Jim
On the initiator or the target? I tried to set up a new server
measurable write I/O
performance, although how much is unclear.
For those interested, one can trace back the ZFS code starting here:
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#276
Jim
3. Assuming I want to do such an allocation
With ZFS, Solaris 10 Update 9, is it possible to
detach configured log devices from a zpool?
I have a zpool with 3 F20 mirrors for the ZIL. They're
coming up corrupted. I want to detach them, remake
the devices and reattach them to the zpool.
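A rough sketch of that sequence on Solaris 10 U9, which supports log device
removal (the device names, and the mirror-1 label reported by zpool status, are
placeholders):
# zpool remove tank mirror-1
# zpool add tank log mirror c4t0d0 c5t0d0
Checking zpool status afterward should show the rebuilt logs section as healthy.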
Thanks
/jim
, especially with Oracle, that using the psargs string is much more
informative - curpsinfo->pr_psargs.
Jim
---
- Original Message -
From: przemol...@poczta.fm
To: zfs-discuss@opensolaris.org
Sent: Tuesday, May 10, 2011 10:27:55 AM GMT -08:00 US/Canada Pacific
Subject: Re: [zfs-discuss] DTrace IO
Well, as I wrote in other threads - I have a pool named pool on physical
disks, and a compressed volume in this pool which I loopback-mount over iSCSI
to make another pool named dcpool.
When files in dcpool are deleted, blocks are not zeroed out by current ZFS
and they are still allocated for
In a recent post r-mexico wrote that they had to parse system messages and
manually fail the drives on a similar, though different, occasion:
http://opensolaris.org/jive/message.jspa?messageID=515815#515815
at the command set associated with stmfadm, and you should see that
it has taken on all sbdadm options, and more. I believe you are looking for the
functionality associated with stmfadm offline-lu, ... online-lu.
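Roughly, the flow looks like this (the GUID is a placeholder):
# stmfadm offline-lu 600144F0C73ABF000000000000000001
# stmfadm online-lu 600144F0C73ABF000000000000000001
# stmfadm list-lu -v
offline-lu takes the logical unit out of service while you make the change,
online-lu brings it back, and list-lu -v confirms its state.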
- Jim
Is it possible to change the GUID of the newly imported volume to match the
old
.
It is also possible that device names changed (e.g. on x86, when the SATA HDD
access mode in the BIOS is changed from IDE to AHCI) and the boot device name saved in
eeprom or its GRUB emulator is no longer valid. But this has different error
strings ;)
Good luck,
//Jim
like this clone was always here with this
original naming, and your current newer dataset is a cloned deviation.
Hopefully this will fool STMF into using this data instead of new data, with
existing GUID...
5) Enable stmf and iscsi/* services
*) Tell us if it works ;)
HTH,
//Jim
come up with an idea of a dtrace for your situation.
I have a small but non-zero hope that the experts would also come to the web
forums, review the past month's posts, and give their comments on my, your and
others' questions and findings ;)
//Jim Klimov
or used by another system with a solution as simple as that
you'd have to do a forced import (zpool import -f tank) - if it is indeed a
local non-networked pool and no other machine really uses it.
HTH,
//Jim
tweaked this on the fly.
One key indicator is if your disk queues hover around 10.
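A quick way to watch that (a sketch; 5-second samples):
# iostat -xn 5
The actv column is the number of commands being actively serviced per device;
values parked near 10 suggest the per-vdev queue (zfs_vdev_max_pending) is the
limiting factor.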
Jim
---
- Original Message -
From: jimkli...@cos.ru
To: zfs-discuss@opensolaris.org
Sent: Wednesday, May 11, 2011 3:22:19 AM GMT -08:00 US/Canada Pacific
Subject: Re: [zfs-discuss] Performance problem
-m
verbose from eeprom or via reboot -- -m verbose from single-user), just to
maybe get some more insight into what fails?
//Jim
different mirrors), or RAIDZ1 when we need more space available.
HTH,
//Jim Klimov
exceed
6GB by itself (and your ETL software uses a separate dataset), you can reserve
6GB for only the root FS (and hopefully its descendants - but better see the
manpages):
# zfs set reservation=6G rpool/ROOT/myBeName
HTH,
//Jim
a sufficiently
empty pool... Hopefully the Illumos team or some other developers would push
this idea into reality ;)
There was a good tip from Jim Litchfield regarding vdev queue sizing, though.
The current default for zfs_vdev_max_pending is possibly 10, which is okay (or may
be even too much
or test if the theoretical warnings are valid?
Thanks,
//Jim Klimov
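For completeness, the zfs_vdev_max_pending tunable mentioned above is usually
adjusted in one of two ways (the value 4 is only an example):
set zfs:zfs_vdev_max_pending = 4
in /etc/system (takes effect at the next boot), or on a live system:
# echo zfs_vdev_max_pending/W0t4 | mdb -kw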
on the first try. It found the rpool and current bootfs and imported it with no
problems. Then I just did init 6 to finish the failsafe mode, and after a
reboot the system came back up with no hiccups.
HTH,
//Jim
the zpool import -F command?
Good luck,
//Jim
2011-05-16 9:14, Richard Elling wrote:
On May 15, 2011, at 10:18 AM, Jim Klimov jimkli...@cos.ru wrote:
Hi, Very interesting suggestions as I'm contemplating a Supermicro-based server
for my work as well, but probably on a lower budget, as a backup store for an
aging Thumper (not as its
substantial) - now I'd get rid of this experiment much faster ;)
--
Климов Евгений / Jim Klimov
технический директор / CTO
ЗАО ЦОС и ВТ / JSC COSHT
+7-903-7705859 (cellular) mailto:jimkli...@cos.ru
many people suggest that a backup on another similar server box is superior to
using tape backups - although probably using more electricity in real-time).
Sorry if this goes in the wrong spot; I could not find
Seems to have come through correctly ;)
HTH,
//Jim Klimov
this is somewhat
complicated and hard to explain without a whiteboard. :-)
From recent reading on Jeff's blog and links leading from it,
I might guess this relates to different disk offsets with different
writing speeds? A yes or no would suffice, to spare the absent
whiteboard ,)
Thanks,
//Jim
-1/iopattern
The latter tries to estimate the amounts of SEQuential and
RNDom reads and writes in your workload.
HTH,
//Jim
2011-05-19 16:35, Sašo Kiselkov wrote:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50
2011-05-19 17:00, Jim Klimov wrote:
I am not sure you can monitor actual mechanical seeks short
of debugging and interrogating the HDD firmware - because
it is the last piece of logic responsible in the chain of caching,
queuing and issuing actual commands to the disk heads.
For example, a long logical
, and configuring
MPxIO failover properly helped the system detect them as actually being one
device and stop complaining as long as one path works.
On the other hand, you might have done some dd if=disk1 of=disk2 kind of cloning
which may have puzzled the system...
HTH,
//Jim
Most Recently Used Cache Size:   90%  3342 MB (p)
Most Frequently Used Cache Size:  9%   362 MB (c-p)
arc_meta_used  = 2617 MB
arc_meta_limit = 6144 MB
arc_meta_max   = 4787 MB
Thanks for any insights,
//Jim Klimov
IP
addresses (i.e. localhost and NIC IP) - but that would probably fail
at the same bottleneck moment - or to connect to the zvol/rdsk/...
directly, without iSCSI?
Thanks for ideas,
//Jim Klimov
it ;)
thought about it, can't get
rid of the idea ;) ...
/s at least.
HTH,
//Jim
entries
into the HDD pool like we do now?
(BTW, what do we do with a dedicated ZIL device - flush the
TXG early?)
//Jim
of HCL HDDs all have one connector...
Still, I guess my post poses more questions than answers, but maybe some other
list readers can reply...
Hint: Nexenta people seem to be good OEM friends with Supermicro, so they
might know ;)
HTH,
//Jim Klimov
know ;)
Yes :-)
-- richard
Thanks!
//Jim Klimov
to have so much actual
bandwidth.
Thanks,
//Jim
negligible and there
are more options quickly available, such as mounting the iSCSI
device on another server? Now that I hit the problem of reverting
to direct volume access, this makes sense ;)
Thanks in advance for ideas or clarifications,
//Jim Klimov
4295GB 4295GB 8389kB
But lofiadm doesn't let me address that partition #1 as a separate device :(
Thanks,
//Jim Klimov
of
3*4-disk-raidz1 vs 1*12-disk raidz3, so which
of the tradeoffs is better - more vdevs, or more
parity to survive the loss of ANY 3 disks vs. the right
3 disks?
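Purely for illustration, the two layouts being compared would be created like
this (the 12 device names are hypothetical):
# zpool create p1 raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                  raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
                  raidz1 c1t8d0 c1t9d0 c1t10d0 c1t11d0
# zpool create p2 raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                  c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
Both give 9 disks' worth of usable capacity; p1 has three vdevs (more IOPS, but
only one failure tolerated per vdev), while p2 is a single vdev that survives
any three failures at single-vdev performance.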
Thanks,
//Jim Klimov
-
From: Matt Keenan matt...@opensolaris.org
Date: Tuesday, May 31, 2011 21:02
Subject: Re: [zfs-discuss] Ensure Newly created pool is imported automatically
in new BE
To: j...@cos.ru
Cc: zfs-discuss@opensolaris.org
Jim,
Thanks for the response, I've nearly got it working, coming up
right.
My FreeRAM-Watchdog code and compiled i386 binary and a
primitive SMF service wrapper can be found here:
http://thumper.cos.ru/~jim/freeram-watchdog-20110531-smf.tgz
Other related forum threads:
* zpool import hangs indefinitely (retry post in parts; too long?)
http://opensolaris.org/jive
Actually if you need beadm to know about the data pool,
it might be beneficial to mix both approaches - yours with
bemount, and an init script to enforce the pool import on that
first boot...
HTH,
//Jim Klimov
dedicated tasks with data you're okay with losing.
You can also make the rpool a three-way mirror, which may increase
read speeds if you have enough concurrency. And when one drive
breaks, your rpool is still mirrored.
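For instance, attaching a third disk to an existing two-way mirror (device names
are placeholders):
# zpool attach rpool c0t0d0s0 c0t2d0s0
Here c0t0d0s0 is an existing member and c0t2d0s0 is the new disk; it resilvers
and the vdev becomes a three-way mirror. For a boot pool, remember to also
install the boot blocks on the new disk.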
HTH,
//Jim Klimov
out to be a known
bug which may have since been fixed...
Also, in a mirroring scenario, is there any good reason to keep a warm spare
instead of making a three-way mirror right away (besides energy saving)?
Rebuild times and non-redundant windows can be decreased considerably ;)
//Jim
inside the volume):
# zpool import -d /dev/zvol/dsk/pool dcpool
cannot import 'dcpool': no such pool available
# zpool import -d /dev/zvol/rdsk/pool/ dcpool
cannot import 'dcpool': no such pool available
//Jim
Thanks,
//Jim Klimov
. (SAS drives are dual-port, full-duplex devices.)
Another reason *may be* (maybe not, speculative) that these drives have
SATA protocol firmware instead of a SAS one - resulting in more generic
feature sets...
//Jim
(and/or use rsync to correct some misreceived
blocks if network was faulty).
link. Took many retries, and zfs send is not strong at retrying ;)
for my assistant by catching near-freeze conditions,
is here:
* http://thumper.cos.ru/~jim/freeram-watchdog-20110610-v0.11.tgz
I guess it is time for questions now :)
What methods can I use (besides 20-hour-long ZDB walks) to
gain quick insight into the cause of problems - why doesn't
the pool import
of a single full dump, the chance of a single corruption
making your (latest) backup useless would also be higher, right?
Thanks for clarifications,
//Jim Klimov
2011-06-10 13:51, Jim Klimov wrote:
and the system dies in
swapping hell (scanrates for available pages were seen to go
into millions, CPU context switches reach 200-300k/sec on a
single dualcore P4) after eating the last stable-free 1-2Gb
of RAM within a minute. After this the system responds
2011-06-10 18:00, Steve Gonczi wrote:
Hi Jim,
I wonder what OS version you are running?
There was a problem similar to what you are describing in earlier versions
in the 13x kernel series.
Should not be present in the 14x kernels.
It is OpenIndiana oi_148a, and unlike many other details
) and send these ZIP/RAR archives to the tape.
Obviously, a standard integrated solution within ZFS
would be better and more portable.
See FEC suggestion from another poster ;)
//Jim
sync times bumped to 30 sec and reduced to 1 sec.
So far I have not found a DTraceToolkit-0.99 utility which
would show me what that would be:
# /export/home/jim/DTraceToolkit-0.99/rwsnoop | egrep -v '/proc|/dev|unkn'
  UID   PID CMD           D  BYTES FILE
    0  1251 freeram-watc  W     78 /var
?
Or is there no coalescing, and this is why? ;)
Thanks,
//Jim Klimov
Does this reveal anything?
dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote:
While looking over iostats from various programs, I see that
my OS HDD is busy writing, about 2Mb/sec stream
but otherwise the system
should have remained responsive (tested
failmode=continue and failmode=wait on different
occasions).
So I can relate - these things happen, they do annoy,
and I hope they will be fixed sometime soon so that
ZFS matches its docs and promises ;)
//Jim Klimov
2011-06-11 19:16, Jim Mauro wrote:
Does this reveal anything?
dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
Alas, not much.
# time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count
]-fi_pathname] = count(); }'
On Jun 11, 2011, at 12:34 PM, Jim Klimov wrote:
2011-06-11 19:16, Jim Mauro wrote:
Does this reveal anything?
dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
Alas, not much.
# time dtrace -n 'syscall