On Feb 27, 2010, at 11:00 AM, Thanos Makatos wrote:

> I have implemented a virtual block device in Linux that transparently 
> compresses and decompresses data. In my implementation, the unit of 
> compression is 4K. Multiple variable-size compressed blocks are stored in the 
> same physical block, which in principle requires a read-modify-write sequence.
> 
> In contrast, NTFS compresses multiples of blocks (typically 64K clusters) and 
> uses fewer 4K physical blocks (e.g. 32K worth) to store the compressed cluster. 
> This approach eliminates the read-modify-write sequence but is less efficient 
> for applications that exhibit small reads and writes, as additional (and 
> useless) decompressions/compressions are performed.
> 
> I've been told that the ZFS file-system block size ranges from 512 bytes to 128K. 
> Suppose that the ZFS file-system block is 4K or less and that the physical 
> block is 4K (not 512 bytes). Compressing 4K typically results in a 1K to 3K block. 
> How does ZFS store segments of data that are smaller than the physical block?
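[The variable-size output described above is easy to demonstrate. The sketch below is illustrative only, not the compressor the poster's block device actually uses; it compresses a 4 KiB block of text-like data with zlib and shows the result is well under 4096 bytes, which is why several compressed blocks can share one physical block and a read-modify-write sequence is needed.]

```python
import zlib

# A 4 KiB logical block of compressible, text-like data (hypothetical payload).
block = (b"some filesystem metadata and text payload " * 100)[:4096]

# Compress the block; the output length is variable and data-dependent.
compressed = zlib.compress(block, level=6)

print(f"logical block: {len(block)} bytes")
print(f"compressed:    {len(compressed)} bytes")

# Several such sub-4K compressed blocks can be packed into one 4K physical
# block, so updating one of them means reading the physical block, replacing
# the sub-block, and writing the whole physical block back.
```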

Today, ZFS does not use a physical block size of 4KB.

For an overview of the Solaris 4KB physical sector migration plan see
http://arc.opensolaris.org/caselog/PSARC/2008/769/final_spec.txt
 -- richard

> I need this information for a journal article I am preparing for ACM 
> Transactions on Storage (an extended version of two papers), as I want to 
> compare my system to NTFS and ZFS.
> -- 
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)



