Cedric,

On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
>> okay let's say that it is not. :)
>> Imagine that I setup a box:
>>   - with Solaris
>>   - with many HDs (directly attached).
>>   - use ZFS as the FS
>>   - export the Data with NFS
>>   - on a UPS.
>>
>> Then after reading the :
>> 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_Complex_Storage_Considerations
>>
>> I wonder if there is a way to tell the OS to ignore the fsync flush
>> commands, since the cached writes are likely to survive a power outage anyway.
>
> Cedric,
>
> You do not want to ignore syncs from ZFS if your hard disks are directly
> attached to the server.  As the document mentions, that is really for
> Complex Storage with NVRAM, where flushing is not necessary.

This post follows the thread `XServe Raid & Complex Storage Considerations':
http://www.opensolaris.org/jive/thread.jspa?threadID=29276&tstart=0

Ah... I wasn't aware the other thread was started by you :).  If your
storage device features NVRAM, you should indeed configure it as
discussed in that thread.  However, if your storage devices are
directly attached disks (or anything without an NVRAM-backed
controller), zfs_noflush=1 is potentially fatal (see the link below).
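
For the archives, a knob like this would normally go in /etc/system.
The sketch below is hedged: the tunable name changed across builds
(newer bits call it zfs_nocacheflush), so check your release before
copying it, and again, only use it when every device behind the pool
really is NVRAM-backed:

* /etc/system fragment (sketch only, takes effect at the next boot):
* tell ZFS to stop issuing cache-flush requests to its devices
set zfs:zfs_noflush = 1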

Where we have made the assumption (*1) that if the XServe Raid is connected
to a UPS, we can treat the RAM in the XServe Raid as if it were NVRAM.

I'm not sure about the interaction between the XServe and the UPS, but I'd
imagine that the UPS can probably power the XServe for a few minutes
after a power outage.  That should be enough time for the XServe to
drain the contents of its RAM to disk.

(*1)
   This assumption is even pointed out by Roch:
   http://blogs.sun.com/roch/#zfs_to_ufs_performance_comparison
   >> Intelligent Storage
   through: `the Shenanigans with ZFS flushing and intelligent arrays...'
   http://blogs.digitar.com/jjww/?itemid=44
   >> Tell your array to ignore ZFS' flush commands

So in this way, when we export it over NFS we get a boost in bandwidth.

Indeed.  This is especially true when you consider that expensive
storage arrays are likely to be shared by more than one host.  A flush
command likely flushes the entire cache rather than just the parts
relevant to the requesting host.

Okay, then is there any difference that I am missing between:
  - the Shenanigans with ZFS flushing and intelligent arrays...
  - and my situation?

I mean, I want a cheap and reliable NFS service.  Why should I buy
expensive `Complex Storage with NVRAM' instead of just buying a machine
with 8 IDE HDs?

Your 8 IDE HDs may not benefit much from zfs_noflush=1, since their
caches are small anyway, but the potential impact on reliability would
be fairly severe:
 http://www.opensolaris.org/jive/thread.jspa?messageID=91730

Nothing is stopping you, though, from getting decent performance out
of 8 IDE HDDs.  You just should not treat them as if they were an
NVRAM-backed array.


--
Just me,
Wire ...
Blog: <prstat.blogspot.com>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss