> From: Ian Collins [mailto:i...@ianshome.com]
> Sent: Saturday, July 23, 2011 4:02 AM
> 
> Can you provide more details of your tests?  

Here's everything:
http://dl.dropbox.com/u/543241/dedup%20tests/dedup%20tests.zip

In particular:
Under the "work server" directory.

The basic concept goes like this:
Find an amount of data that takes approximately 10 seconds to write.  I don't
know the exact size; I just kept increasing a block counter until the times
felt reasonable, so let's suppose it's 10G.

Time a write of that much data without dedup (all unique).
Remove the file.
Time a write of that much data with dedup (sha256, no verify) (all unique).
Remove the file.
Write 10x that much with dedup (all unique).
Don't remove the file.
Repeat.

So I'm getting comparisons of write speeds for 10G files, sampled at 100G
intervals.  A 6x performance degradation would mean roughly 7 sec to write
without dedup versus 40-45 sec with dedup.

I am doing fflush() and fsync() at the end of every file write, to ensure
results are not skewed by write buffering.
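For reference, a single timed pass along those lines could look like this.
Again a sketch under assumptions (128 KiB records, pseudo-random data so
every block is unique); the point is that the clock stops only after
fflush() and fsync().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE (128 * 1024)

/* Write 'nblocks' unique blocks to 'path' and return elapsed seconds. */
double timed_write(const char *path, long nblocks)
{
    static unsigned char buf[BLOCK_SIZE];
    FILE *fp = fopen(path, "wb");
    if (fp == NULL) {
        perror("fopen");
        exit(1);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (long i = 0; i < nblocks; i++) {
        /* Fill the block with pseudo-random data so every write is a
         * dedup miss (all unique). */
        for (size_t j = 0; j < BLOCK_SIZE; j += sizeof(long)) {
            long v = random();
            memcpy(buf + j, &v, sizeof(v));
        }
        if (fwrite(buf, BLOCK_SIZE, 1, fp) != 1) {
            perror("fwrite");
            exit(1);
        }
    }

    /* Flush user-space buffers and force the data to stable storage
     * before stopping the clock, so write buffering doesn't skew the
     * measurement. */
    fflush(fp);
    fsync(fileno(fp));
    clock_gettime(CLOCK_MONOTONIC, &end);
    fclose(fp);

    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}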
