I've never seen the file name when doing this with ZFS.
My belief is that ZFS will cheerfully bundle together data
from disparate files and spit it out to disk. The IO is issued
from taskq threads, so you can't tie it back to a specific process.
For ZFS, this just doesn't work.

Jim
----

Pramod Batni wrote:
vhiz wrote:
Hi Pramod,

I am using the ZFS filesystem on OpenSolaris. The thing is that the file my program is reading is 2.5 GB, which is larger than the RAM size. Will caching still take place? And when I run the script, I can see that the number of reads it displays goes up quite a bit. So the reads are being captured; it's just that the file name is not being displayed.

So you see <unknown> instead of the name of the file?

Notice that your script uses fi_pathname to print the pathname of the file undergoing IO.
From 'http://wikis.sun.com/display/DTrace/io+Provider':
--snip--
The fi_name field contains the name of the file but does not include any directory components. If no file information is associated with an I/O, the fi_name field will be set to the string <none>. In some rare cases, the pathname associated with a file might be unknown. In this case, the fi_name field will be set to the string <unknown>. The fi_pathname field contains the full pathname to the file. As with fi_name, this string may be set to <none> if no file information is present, or <unknown> if the pathname associated with the file is not known.
--snip--
I suspect {need to verify} that reading in the large file [larger than the RAM size] is blowing the v_path information out of the vnode.
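One way to check that is to aggregate on fi_pathname directly: if v_path really is being lost, the reads should pile up under the <unknown> bucket. A minimal sketch (run as root with dtrace -s; the 10-second tick interval is just an arbitrary choice):

```d
/* Count completed reads by pathname; a lost v_path shows up as <unknown>. */
io:::done
/args[0]->b_flags & B_READ/
{
        @reads[args[2]->fi_pathname] = count();
}

tick-10s
{
        printa("%-60s %@d\n", @reads);
        trunc(@reads);
}
```

If most of the count lands on <unknown> while the big file is being read, that would support the v_path theory.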

Pramod

Thanks.

_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org

