I had a box in for 2 weeks of testing. It is very impressive. They give you two metrics: a deduplication ratio and a compression ratio. Data is deduped first, then compressed. In our tests, the total reduction (dedupe x compression) varied quite a bit based on the data. The total number also increases over time - your first backup to the DD essentially only dedupes duplicate files, but the second, third, and fourth backups start adding significant dedupe.
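To make the math concrete (this is just my own illustration, not anything the DD reports themselves): since dedupe happens first and compression second, the two ratios simply multiply. A quick Python sketch, using the SQL dump figures I mention below:

    def total_reduction(dedupe_ratio, compression_ratio):
        # Combined space reduction: data is deduped first, then compressed,
        # so the two factors multiply into one overall ratio.
        return dedupe_ratio * compression_ratio

    print(total_reduction(2.0, 4.0))  # -> 8.0, i.e. roughly 8x overall

So a 2x dedupe with a 4x compression works out to about an 8x total reduction.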
In 2 weeks, flat file backups were getting close to 20x. vRanger backups of VMDC were getting 82x. I had one share of flat file data that got close to 40x on the first pass, which is not typical. That share holds our "publications" data, which we know has a lot of duplication even at the file level, much less at the 4k block level. SQL dumps of our SolarWinds (network monitor) DB got a 2x dedupe and about a 4x compression - with only 2 full backups. So yes, I believe 20x is realistic in most environments.

On Fri, Oct 30, 2009 at 3:08 PM, Roger Wright <[email protected]> wrote:
> After attending a recent presentation for Data Domain's platform and
> technology, I'm intrigued. The stated up to 20X compression is impressive
> if realistic.
>
> Can any of you relate your experience using DD's products?
>
> Roger Wright
> ___
