I'm troubleshooting an I/O performance problem with one of our applications 
that does a lot of writing: sequential writes of large files, generally in 
blocks just over 32K.  It's a Solaris 10 x86 system with a UFS filesystem.  
We're often seeing disk write throughput of only around 6-8MB/s, even when 
there is minimal read activity.  Running iosnoop shows that most of the 
physical writes are issued by the app itself and average around 32KB.  About 
15% of the data, however, is written by fsflush, and only 4KB or 8KB at a 
time.  The write throughput of the fsflush writes is about 10% that of the 
app's writes (using the "DTIME" values and aggregating the results to get 
totals).  CPU resources are not a bottleneck.
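For reference, a breakdown like this can also be produced with a DTrace one-liner on the io provider (this is a generic sketch of the kind of aggregation iosnoop does, not the exact script used; run as root):

```sh
# Quantize physical write sizes and total bytes, broken down by issuing process.
# io:::start fires for each physical I/O; b_bcount is the transfer size.
dtrace -n '
io:::start
/args[0]->b_flags & B_WRITE/
{
    @sizes[execname] = quantize(args[0]->b_bcount);
    @bytes[execname] = sum(args[0]->b_bcount);
}'
```

With this you can see directly how much of the write traffic is 4-8KB pages from fsflush versus the app's ~32KB writes.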

If I turn off dopageflush, the overall rate jumps to 18-20MB/s.  However, that 
would mean file data might not get flushed for a very long time, so it is not 
a suitable option for production environments.
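(For completeness, dopageflush was toggled at run time with mdb; this is a live kernel write, so it is only appropriate for testing:)

```sh
# Disable fsflush page flushing (root required; test systems only):
echo 'dopageflush/W 0' | mdb -kw

# Re-enable it afterwards:
echo 'dopageflush/W 1' | mdb -kw
```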

If I use dd to create a 1GB file (the amount of system memory), even with a 
block size that matches our app's, the physical writes are much larger (often 
1MB), the overall rate is around 60MB/s, and very few of the writes come from 
fsflush.
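(The dd test was something like the following; the path is illustrative:)

```sh
# 1GB written sequentially in 32KB application-level writes:
dd if=/dev/zero of=/ufs/testfile bs=32k count=32768
```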

Is there any way to cause more of the physical writes to be issued by the app 
rather than fsflush?  (And for that matter, what determines whether the app or 
fsflush does the flushing?)
-- 
This message posted from opensolaris.org
_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
