> I also strongly encourage people to just stay away from dedup -- it seems
> awesome in a lot of respects, but days' worth of zfs destroy, etc. are
> absolutely insane even if you don't make the blunders that I did.
> Apparently, if you read enough of the shared horror stories, you
> should delete your dedup'd data before doing the zfs destroy. I'm not
> going to test that for you, but if you're running pre-production and
> you've got a super compelling reason to dedup, be sure to try
> that out while you still have the option of just nuking your pool instead
> of waiting forever.

I can only agree. The dedup docs say roughly 2-3GB of RAM per TB of deduped 
data, but on a test system I have at work (7x2TB drives plus some X25-M SSDs 
shared between SLOG and L2ARC, with an 8GB SLOG and 2x72GB of L2ARC) we have 
seen exactly that: zfs/zpool destroy with large amounts of deduped data takes 
forever. Also, when testing dedup as a Bacula target, I saw terrible 
performance after just 1TB of used storage, even with a rather low dedup 
ratio. After some weeks of testing on this system, we decided to do just as 
you did: get a truckload of drives and let others fiddle with dedup until it 
is stable.

Note that nuking the pool (as in zpool destroy) will also take a very long 
time. Exporting it and simply creating a new pool on top of the same drives 
works out better.
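
A minimal sketch of that workaround; the pool name, the raidz2 layout and the 
c0tXd0 device names are only placeholders for whatever your setup looks like:

  # Walk away from the old, dedup-heavy pool without destroying it
  zpool export olddump

  # Create a fresh pool straight over the same disks; -f is needed to
  # override the "device is part of exported pool" complaint, since we
  # never intend to import the old pool again
  zpool create -f newdump raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0

Anything on the old pool is of course gone the moment the new labels are 
written, so this is only for pools you were going to destroy anyway.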

By the way, this testing was done on OpenIndiana 147/148, so with a slightly 
newer pool version. The zfs destroy problem is said to be fixed there, at 
least in part, but we still saw the issues mentioned above.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases adequate and relevant synonyms exist 
in Norwegian.
