Hi Richard,

Richard Elling wrote:
> 
> Files are not compressed in ZFS.  Blocks are compressed.

Sorry, yes, I was not specific enough.

> 
> If the compression of the blocks cannot gain more than 12.5% space savings,
> then the block will not be compressed.  If your file contains
> compressable parts
> and uncompressable parts, then (depending on the size/blocks) it may be
> partially compressed.
>

I guess the block size is related to (or equal to) the recordsize set for
this file system, right?
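
If it helps to make the question concrete: on my side I would simply
compare a file's logical length with the space it actually occupies to
see what a given recordsize/compression combination saves (the path
below is just a placeholder for a file on a compressed dataset, and
"zfs get recordsize" on the dataset would show the block size I mean):

    import os

    path = "/tank/data/bigfile"          # placeholder path, not a real file

    st = os.stat(path)
    logical = st.st_size                 # apparent (uncompressed) file length
    on_disk = st.st_blocks * 512         # space actually allocated, after compression

    print("logical size :", logical)
    print("on-disk size :", on_disk)
    if on_disk:
        print("ratio        : %.2f" % (logical / float(on_disk)))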

What will happen then if I have a file with a header that fits into 1 or
2 blocks, followed by stretches of data of, say, 500 kB each (for
simplicity), which could be visualized as sitting in a rectangle with M
rows and N columns? Since the file system has no way of knowing the
internal structure of the file, it will "cut" the file into blocks and
store each block compressed or uncompressed, as you have written.
However, what happens if the typical usage pattern is to read only
columns of the "rectangle", i.e. read the header, seek to the start of
stretch #1, then seek to stretch #N+1, ...
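
Just to make that access pattern concrete, here is roughly what the
reading program does; all names and sizes are made up for illustration
and are not meant as statements about ZFS internals:

    HEADER  = 8192                # assumed header size, fits in 1-2 blocks
    STRETCH = 500 * 1024          # 500 kB per stretch, as in the example above
    M, N    = 100, 20             # the "rectangle": M rows times N columns

    def read_column(path, col):
        # Read one "column": stretches #col, #col+N, #col+2N, ...
        with open(path, "rb") as f:
            header = f.read(HEADER)                       # read the header first
            for row in range(M):
                f.seek(HEADER + (row * N + col) * STRETCH)
                chunk = f.read(STRETCH)
                # ... process chunk ...

The offsets above are of course logical file offsets, so the question is
really whether ZFS has to touch the blocks in between to resolve such a
seek.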

Can ZFS make educated guesses about where the seek targets might be, or
will it read the file block by block until it reaches the target
position? In the latter case it could be quite inefficient if the file is
huge and has a large variance in compressibility.

> 
> The file will be cached in RAM. When the file is closed and synced, the
> data
> will be written to the ZIL and ultimately to the data set.  I don't
> think there
> is a fundamental problem here... you should notice the NFS sync behaviour
> whether the backing store is ZFS or some other file system. Using a slog
> or nonvolatile write cache will help performance for such workloads.
>

Thanks, that's the answer I was hoping for :)

> They are good questions :-)

Good :)

Cheers

Carsten
