Sorry Roy, but reading the post you pointed me to:

  "meaning about 1,2GB per 1TB stored on 128kB blocks"

I have 1.5 TB of data and 4 GB of RAM, and not all of it is deduped. Why do you say it's *way* too small? It should be *way* more than enough.
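
Just to spell out the arithmetic with the figure from that post (a rough back-of-the-envelope estimate, and it assumes the whole pool were deduped, which it isn't):

  1.5 TB stored * ~1.2 GB of DDT per TB (at 128 kB blocks) = ~1.8 GB of dedup table

and 1.8 GB is well under my 4 GB of RAM, so on paper it should fit with room to spare.
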
From the performance point of view it is not a problem; I use that machine to store backups, and with delta-snapshots it transfers a reasonable amount of data each night.
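
To show what I mean by delta-snapshots: the nightly job is essentially an incremental zfs send/receive, roughly along these lines (the dataset, snapshot, and host names are just placeholders):

  zfs snapshot tank/data@today
  zfs send -i tank/data@yesterday tank/data@today | \
      ssh backupserver zfs receive backup/data

so only the blocks changed since the previous snapshot move over the wire each night.
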
BTW, do you have any info on whether 2010.WHATEVER will support dedup in a stable way?

Also, you said this should be a very I/O-intensive task that would block my server, but before the hang, iostat and zpool iostat show very little I/O (about 500 kB/s) and no CPU usage.
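
For reference, I was watching it with something like the following (the interval and pool name are just what I happened to use):

  iostat -xn 5
  zpool iostat -v mypool 5
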
I have now started a destroy -r on a deduped dataset of about 500 GB, and the machine crashed. Do you think it may come back up in a few days? I have until Monday to test.
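
In case the exact command matters, it was of this form (the dataset name is only a placeholder):

  zfs destroy -r tank/backup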

Thanks