On Mon, Apr 26, 2010 at 8:51 AM, tim Kries <tim.kr...@gmx.de> wrote:
> I am kinda confused over the change of dedup ratio from changing the record 
> size, since it should dedup 256-bit blocks.

Dedup works on blocks of either recordsize or volblocksize. The
checksum is computed per block written, and those checksums are used
to dedup the data.

With a recordsize of 128k, two blocks with a one-byte difference would
not dedup at all. With an 8k recordsize, that same 128k is split into
16 blocks, the one-byte change only dirties one of them, and the other
15 still dedup. Repeat that over the entire VHD.
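
Roughly, in Python (just a toy illustration of the arithmetic, not how
ZFS does it internally): chunk the data into fixed-size records,
checksum each one the way dedup checksums each block, and count how
many records of a nearly-identical copy already exist:

import hashlib

def block_hashes(data, recordsize):
    # Checksum each recordsize-sized chunk, the way dedup checksums each block.
    return [hashlib.sha256(data[i:i + recordsize]).digest()
            for i in range(0, len(data), recordsize)]

def dedup_fraction(a, b, recordsize):
    # Fraction of b's blocks whose checksums already exist among a's blocks.
    seen = set(block_hashes(a, recordsize))
    hashes_b = block_hashes(b, recordsize)
    return sum(h in seen for h in hashes_b) / len(hashes_b)

original = bytes(128 * 1024)            # one 128k region of a VHD
modified = bytearray(original)
modified[40000] ^= 0xFF                 # a single-byte difference

for rs in (128 * 1024, 8 * 1024):
    print("recordsize %dk: %.0f%% of blocks dedup"
          % (rs // 1024, 100 * dedup_fraction(original, bytes(modified), rs)))
# prints 0% for the 128k recordsize, 94% (15 of 16 blocks) for 8k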

Setting the recordsize to a multiple of the VHD's internal block size
and ensuring that the guest filesystem is block-aligned will probably
help improve dedup ratios. So for an NTFS guest with 4k blocks, use a
4k, 8k or 16k recordsize, and when you install into the VHD make sure
its partitions are aligned to the recordsize you're using.
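
One way to sanity-check the alignment (a sketch; the partition offsets
below are made-up examples of what fdisk or diskpart in the guest
might report): take the partition's starting sector and confirm the
byte offset lands on a recordsize boundary.

SECTOR = 512  # bytes per sector, as reported by the partition table

def is_aligned(start_sector, recordsize):
    # True if the partition's byte offset is a multiple of the recordsize.
    return (start_sector * SECTOR) % recordsize == 0

# Sector 63 is the old DOS-era default and is misaligned; 2048 (1MiB) is fine.
for start in (63, 2048):
    for rs in (4096, 8192, 16384):
        print("start sector %4d, recordsize %2dk: %s"
              % (start, rs // 1024,
                 "aligned" if is_aligned(start, rs) else "NOT aligned"))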

VHD supports fixed-size and dynamic-size images. If you're using a
fixed image, the space is pre-allocated. That doesn't mean you'll
waste the unused space on ZFS with compression, since all those zeros
will take up almost no space. Your VHD file should remain
block-aligned, however. I'm not sure that a dynamic-size image stays
block-aligned where there is empty space. Using compress=zle will
compress only the zeros, with almost no CPU penalty.
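
To see why zle is so cheap, here's a toy sketch of the idea in Python
(not ZFS's actual encoder): long runs of zero bytes collapse to a tiny
counter token and everything else is stored verbatim, so a
mostly-empty fixed-size VHD costs almost nothing.

def zle_sketch(data, min_run=64):
    # Toy zero-length encoding: runs of >= min_run zero bytes become a
    # (zero-count) token; everything else is kept as a literal.
    tokens, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == 0:
            j += 1
        if j - i >= min_run:
            tokens.append(("zeros", j - i))
            i = j
        else:
            k = max(j, i + 1)
            while k < len(data) and data[k] != 0:
                k += 1
            tokens.append(("literal", data[i:k]))
            i = k
    return tokens

# A mostly-empty region of a fixed-size VHD: 1 MiB of zeros around a little data.
region = bytes(512 * 1024) + b"real data" + bytes(512 * 1024)
print(zle_sketch(region))
# [('zeros', 524288), ('literal', b'real data'), ('zeros', 524288)]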

Using a COMSTAR iSCSI volume is probably an even better idea, since
you won't have the POSIX layer in the path, and you won't have the VHD
file header throwing off your block alignment.

-B

-- 
Brandon High : bh...@freaks.com