----- Original Message -----
> Sorry Roy, but reading the post you pointed me to,
> "meaning about 1.2GB per 1TB stored on 128kB blocks",
> I have 1.5TB and 4GB of RAM, and not all of it is deduped.
> Why do you say it's *way* too small? It should be *way* more than enough.
> From a performance point of view it is not a problem; I use that
> machine to store backups, and with delta snapshots it transfers a
> reasonable amount of data each night.
> BTW, do you have any info on whether 2010.WHATEVER will support
> dedup in a stable way?
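
For what it's worth, here is the back-of-the-envelope arithmetic behind that
1.2GB-per-1TB figure, as a rough Python sketch. The ~150 bytes per DDT entry is
an assumption on my part (it's the value that matches the quoted number); other
estimates I've seen run up to ~320 bytes per entry, which would roughly double
the result.

  # Rough in-core DDT size estimate for a fully deduped pool.
  # Assumes ~150 bytes per unique 128kB block - an assumed figure that
  # matches the quoted "1.2GB per 1TB"; ~320 bytes/entry is also cited,
  # which would roughly double these numbers.
  BYTES_PER_DDT_ENTRY = 150
  BLOCK_SIZE = 128 * 1024

  def ddt_ram_bytes(pool_bytes):
      unique_blocks = pool_bytes / BLOCK_SIZE
      return unique_blocks * BYTES_PER_DDT_ENTRY

  TB = 1024 ** 4
  GB = 1024 ** 3
  print(f"{ddt_ram_bytes(1.0 * TB) / GB:.1f} GB")  # ~1.2 GB for 1TB
  print(f"{ddt_ram_bytes(1.5 * TB) / GB:.1f} GB")  # ~1.8 GB for 1.5TB

Note that this covers only the DDT itself; it has to fit in ARC alongside
everything else the system caches.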

I have no idea if 201\d\.\d+ will have usable dedup at release time, but on my 
test system (Intel Core 2 Duo 2.3GHz, 8GB RAM, 8x2TB disks and a couple of 160GB 
X25-Ms), dedup doesn't behave very well when removing deduped data. It takes 
forever, and if an unexpected reboot happens while this is running, it will hang 
OpenSolaris on bootup while reading ZFS data - well, "hang" is the wrong word, 
but it'll finish whatever it started before completing the boot. Since this can 
take hours or even days, and the system won't be very useful while it's doing 
so, I've decided to halt testing on dedup until something comes out of Oracle.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.