> I'm not sure about *docs*, but my rough estimations:
>
> Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
> sure how realistic that is -- it depends totally on your data set.)
> Assume 300 bytes per DDT entry.
>
> So we have (1024^4 / 65536) * 300 = 5033164800, or about 5GB of RAM
> for one TB of used disk space.
>
> Dedup is *hungry* for RAM. 8GB is not enough for your configuration,
> most likely! First guess: double the RAM and then you might have
> better luck.
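For reference, here's that back-of-the-envelope estimate as a small
Python sketch. The 64K average block size and the ~300 bytes per DDT
entry are just the assumptions quoted above, not measured values; the
real numbers depend entirely on the data set:

    # Rough DDT RAM estimate (assumptions as quoted above: 64K average
    # block size, ~300 bytes of core per DDT entry -- both guesses).
    used_bytes = 1 * 1024**4      # 1 TB of actually used storage
    avg_block  = 64 * 1024        # assumed average block/slab size
    ddt_entry  = 300              # assumed RAM per DDT entry, in bytes

    entries = used_bytes // avg_block     # ~16.8 million entries
    ram     = entries * ddt_entry         # 5033164800 bytes

    print("DDT entries : %d" % entries)
    print("RAM needed  : ~%.1f GB" % (ram / 1024.0**3))

That comes out to roughly 4.7GB for the DDT alone, i.e. the "about 5GB"
figure above.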
I know... that's why I use L2ARC.

> The other takeaway here: dedup is the wrong technology for a typical
> small home server (e.g. systems that max out at 4 or even 8 GB).

This isn't a home server test.

> Look into compression and snapshot clones as better alternatives to
> reduce your disk space needs without incurring the huge RAM penalties
> associated with dedup.
>
> Dedup is *great* for a certain type of data set with configurations
> that are extremely RAM heavy. For everyone else, it's almost
> universally the wrong solution. Ultimately, disk is usually cheaper
> than RAM -- think hard before you enable dedup -- are you making the
> right trade-off?

Just what sort of configurations are you thinking of? I've been testing
dedup in rather large ones, and the sum of it is that ZFS doesn't scale
well as of now.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to
avoid excessive use of idioms of foreign origin. In most cases,
adequate and relevant synonyms exist in Norwegian.