On Fri, May 25, 2007 at 12:01:45PM -0400, Andy Lubel wrote:
> I'm using:
> 
>   set zfs:zil_disable = 1
> 
> (in /etc/system) on my SE6130 with ZFS, accessed over NFS, and write
> performance almost doubled.  Since you have a battery-backed cache
> (BBC), why not just set that?

I don't think having a BBC is enough to justify zil_disable=1.
Besides, I don't know of anyone from Sun recommending zil_disable=1.
Whether or not your storage array has a battery-backed cache doesn't
matter. What matters is what happens when the ZIL hasn't been flushed
and your file server crashes: the ZFS file system is still consistent,
but you'll lose synchronous writes that were acknowledged to clients
yet never committed to stable storage. Even having your file server on
a UPS won't help here.
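To make concrete what is at stake: synchronous writers (NFS servers,
databases) acknowledge a client only after fsync() returns, on the
promise that the data is then on stable storage. A minimal POSIX
sketch of that contract (nothing Solaris-specific; the file path here
is arbitrary):

```python
import os

# A synchronous write as an NFS server or database performs it: the
# caller may acknowledge the client only after fsync() returns, because
# fsync() promises the data has reached stable storage. With
# zil_disable=1, fsync() on ZFS returns without that guarantee, so a
# crash can lose writes the client was already told were committed.
path = "/tmp/zil_demo.dat"
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
os.write(fd, b"committed record\n")
os.fsync(fd)  # the durability barrier that zil_disable=1 silently removes
os.close(fd)

with open(path, "rb") as f:
    print(f.read())
```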

http://blogs.sun.com/erickustarz/entry/zil_disable discusses some of
the issues with setting zil_disable=1.
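For anyone who does decide to accept the risk, the mechanism (on the
Solaris 10 / Nevada builds of that era; treat the exact variable name
as build-dependent, since later releases replaced it with a per-dataset
sync property) is an /etc/system tunable or a live mdb write:

```shell
# Persistent: add this line to /etc/system and reboot. Note the syntax
# is "set <module>:<variable> = <value>" -- it is not a "zfs set" command:
#   set zfs:zil_disable = 1

# Temporary: flip the variable on a live kernel (as root):
echo 'zil_disable/W0t1' | mdb -kw

# Inspect the current value:
echo 'zil_disable/D' | mdb -k
```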

We know we get better performance with zil_disable=1 but we're not
taking any chances.

> -Andy
> 
> 
> 
> On 5/24/07 4:16 PM, "Albert Chin"
> <[EMAIL PROTECTED]> wrote:
> 
> > On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
> >> I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit,
> >> and in /etc/system I put:
> >> 
> >>   set zfs:zfs_nocacheflush = 1
> >> 
> >> And after rebooting, I get the message:
> >> 
> >>   sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs'
> >>   module
> >> 
> >> So is this variable not available in the Solaris kernel?
> > 
> > I think zfs:zfs_nocacheflush is only available in Nevada.
> > 
> >> I'm getting really poor write performance with ZFS on a RAID5 volume
> >> (5 disks) from a storagetek 6140 array. I've searched the web and
> >> these forums and it seems that this zfs_nocacheflush option is the
> >> solution, but I'm open to others as well.
> > 
> > What kind of poor performance, exactly? Is it caused by ZFS? You
> > can test this by creating a RAID-5 volume on the 6140, creating a
> > UFS file system on it, and comparing its performance against ZFS.
> > 
> > It would also be worthwhile doing something like the following to
> > determine the max throughput the H/W RAID is giving you:
> >   # time dd of=<raw disk> if=/dev/zero bs=1048576 count=1000
> > For a 2Gbps 6140 with 300GB/10K drives, we get ~46MB/s on a
> > single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k
> > stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.

-- 
albert chin ([EMAIL PROTECTED])
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
