Great points, Jim. I have requested more information about how the gallery share
is being used, and any temporary data will be moved out of there.
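If it helps, the rough plan is to give the temp files their own dataset outside the hourly snapshot schedule (dataset names below are placeholders, and this assumes we keep using the auto-snapshot service):

```shell
# Create a separate dataset for temporary upload files
# ("pond/gallery-tmp" is a placeholder name).
zfs create pond/gallery-tmp

# Exclude it from the automatic snapshot schedule so stale
# temp files are not pinned by old snapshots.
zfs set com.sun:auto-snapshot=false pond/gallery-tmp
```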
About atime: it is set to "on" right now, and I've considered turning it off,
but I wasn't sure whether this will affect incremental zfs send/receive.
'zfs send -i snapshot0 snapshot1' doesn't rely on atime, right?
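My understanding is that incremental streams are driven by the blocks that changed between the two snapshots, not by atime values, so this should be safe. The change I'd make is something like the following (pool, dataset, and snapshot names are placeholders for ours):

```shell
# Check the current atime setting on the gallery dataset
# ("pond/gallery" is a placeholder name).
zfs get atime pond/gallery

# Turn off access-time updates to stop the metadata write storm
# on every read.
zfs set atime=off pond/gallery

# Incremental replication works the same afterwards: it sends
# only the blocks that differ between the two snapshots.
zfs send -i pond/gallery@snapshot0 pond/gallery@snapshot1 | \
    zfs receive -F backup/gallery
```

I'll try this on a scratch dataset first before touching production.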
On Thu, Jan 17, 2013 at 4:34 PM, Jim Klimov <jimkli...@cos.ru> wrote:
> On 2013-01-18 00:42, Bob Friesenhahn wrote:
>> You can install Brendan Gregg's DTraceToolkit and use it to find out who
>> and what is doing all the writing. 1.2GB in an hour is quite a lot of
>> writing. If this is going continuously, then it may be causing more
>> fragmentation in conjunction with your snapshots.
> As a moderately wild guess, since you're speaking of galleries,
> are these problematic filesystems often-read? By default ZFS
> updates the last access-time of files it reads, as do many other
> filesystems, and this causes avalanches of metadata updates -
> sync writes (likely) as well as fragmentation. This may also
> be a poorly traceable but considerable "used" space in frequent
> snapshots. You can verify (and unset) this behaviour with the
> ZFS FS dataset property "atime", i.e.:
>   # zfs get atime pond/export/home
>   NAME              PROPERTY  VALUE  SOURCE
>   pond/export/home  atime     off    inherited from pond
> On the other hand, verify where your software keeps the temporary
> files (e.g. during uploads, as may be the case with galleries). Again, if
> this is a frequently snapshotted dataset (though 1 hour is not
> really that frequent) then needless temp files can be held by
> those older snapshots. Moving such temporary works to a different
> dataset with a different snapshot schedule and/or to a different
> pool (to keep related fragmentation constrained) may prove useful.
> //Jim Klimov
> zfs-discuss mailing list