Marc Bevand wrote:
> William Fretts-Saxton <william.fretts.saxton <at> sun.com> writes:
>   
>> I disabled file prefetch and there was no effect.
>>
>> Here are some performance numbers.  Note that, when the application server
>> used a ZFS file system to save its data, the transaction took TWICE as long.
>> For some reason, though, iostat is showing 5x as much disk
>> writing (to the physical disks) on the ZFS partition.  Can anyone see a
>> problem here?
>>     
>
> Possible explanation: the Glassfish applications are using synchronous
> writes, causing the ZIL (ZFS Intent Log) to be intensively used, which
> leads to a lot of extra I/O.

The ZIL doesn't do a lot of extra I/O. It usually issues just one write per
synchronous request, and it batches multiple writes into the same log block
when possible. However, it does need to wait for those writes to be on stable
storage before returning to the application, since that is what the
application asked for. It does this by waiting for the write to complete and
then flushing the disk write cache.
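
One quick way to confirm that synchronous requests are driving the extra I/O
is to count ZIL commits while the slow transaction runs. A minimal DTrace
sketch (fbt probes attach to private kernel functions, so the zil_commit
probe name is an assumption and can vary between builds):

    # Count ZIL commits per process over 10 seconds; each commit is a
    # synchronous request being pushed to stable storage.
    dtrace -n 'fbt::zil_commit:entry { @[execname] = count(); } tick-10s { exit(0); }'

If Glassfish shows a high count here, the synchronous-write hypothesis holds.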
If the write cache is battery-backed for all zpool devices, then the global
zfs_nocacheflush tunable can be set to skip those cache flushes, which gives
dramatically better performance. (Without battery-backed caches, skipping the
flush risks losing committed data on power failure, so leave it alone
otherwise.)
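
For reference, the tunable goes in /etc/system, or can be poked on a live
system with mdb. This is a sketch in the style of the tuning guide linked
below; again, only use it when every device in the pool really has a
battery-backed (NVRAM) write cache:

    # /etc/system -- takes effect at the next boot
    set zfs:zfs_nocacheflush = 1

    # Or set it on a live system (reverts at reboot)
    echo zfs_nocacheflush/W0t1 | mdb -kw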
>  Try to disable it:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29
>
> Since disabling it is not recommended, if you find out it is the cause of your
> perf problems, you should instead try to use a SLOG (separate intent log; see
> the link above). Unfortunately your OS version (Solaris 10 8/07) doesn't support
> SLOGs; they were only added in OpenSolaris build snv_68:
>
> http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
>
> -marc
>
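
For anyone who finds this thread after upgrading: on a build with SLOG support
(snv_68 or later), attaching a separate, fast log device is a single command.
The pool and device names below are placeholders:

    # Move the intent log onto a dedicated fast device (e.g. an NVRAM
    # card or a fast disk slice)
    zpool add mypool log c1t2d0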
