Hi all

I've been doing a lot of testing with dedup and concluded it's not really ready 
for production. If something fails, it can render the pool unusable for hours 
or maybe days, perhaps due to single-threaded code in zfs. There is also very 
little data available in the docs (beyond what I've gathered from this list) 
on how much memory one should have for deduping an xTiB dataset.
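For the memory question, a rough estimate can be made from the size of the dedup table (DDT): one entry per unique block, with a commonly cited figure of roughly 320 bytes of RAM per in-core entry. Both that per-entry figure and the 128 KiB average block size below are assumptions, not official numbers, so treat this as a back-of-the-envelope sketch only:

```python
# Back-of-the-envelope DDT memory estimate. The ~320 bytes per
# in-core DDT entry is a commonly cited rule of thumb, not an
# official figure; actual usage depends on pool layout and ARC.
def ddt_ram_bytes(dataset_tib, avg_block_bytes=128 * 1024,
                  bytes_per_entry=320):
    """Estimate RAM needed to keep the whole dedup table in core."""
    n_blocks = dataset_tib * 2**40 // avg_block_bytes
    return n_blocks * bytes_per_entry

# e.g. 10 TiB of unique data at a 128 KiB average block size:
est = ddt_ram_bytes(10)
print(f"{est / 2**30:.1f} GiB")  # roughly 25.0 GiB
```

Smaller average block sizes (e.g. lots of small files) inflate this estimate quickly, since the block count, and hence the DDT, grows inversely with block size.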

Does anyone know what the status of dedup is now? In 134 it doesn't work very 
well, but is it better in ON140 etc.?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss