Hi, I may answer that from a ZFS point of view.
TL;DR: Yes, ZFS on Linux works well. Dedup, not so much, and it requires a LOT of memory.

ZFS on Linux is pretty stable, and speed is comparable to FreeBSD or even Solaris. But ZFS requires a lot of memory: the rule of thumb is 1 GB of RAM per 1 TB of disk space, with a minimum of 8 GB. ECC memory is a must.

For dedup, bear in mind that the whole hash table (the dedup table, or DDT) must fit into memory. The moment it spills to disk, things get unbearably slow. When using dedup we're more in the range of 5-10 GB of memory per 1 TB of disk. Again: ECC memory.

Another problematic thing is that deleting files on a deduped ZFS takes a lot of time, because ZFS has to iterate through all of the metadata to check whether each block that should be deleted is still referenced somewhere.

The next thing for dedup is that you also need to change the way your backups are structured: no compression, no encryption, and no multiplexing (that is, putting several files into one file or backup stream). Best would be to back up every file individually to disk/staging. I am new to Bareos, so I don't know much about its disk backups - sorry.

My recommendation would be: try it, but NOT on production. The moment you enable dedup you cannot turn it off; the ZFS pool is altered forever. And only use it with LOTS of memory (see above) - with dedup I would not start below 64 GB. You also might need to increase the metadata cache size, since dedup requires a lot of space for the hash table, which lives in the metadata. If it works out for you, good. If not, use ZFS as it is and maybe enable LZ4 compression to save space.

Kind regards,

Ronny Egner
--
Ronny Egner
Oracle Certified Master 11g (OCM)
Mobile: +49 170 8139903
EMail: [email protected]

On 21.05.15 09:06, "Ashley" <[email protected]> wrote:

>Hi Guys,
>
>I am currently backing up around 15TB of data and the rate of change is
>about 400GB a week.
>
>At the moment I am backing up to a DELL-TL2000 24 tape auto loader with a
>single LTO-6 drive.
>The server that is running as the Director and SD has a 20TB partition
>on it that is currently just being used as a giant spool store for tape.
>
>Every single client has 2 jobs: the onsite tape job and the offsite tape
>job. This currently means that each month my backup server is spending a
>full 7 days just doing the 2 full backups.
>
>I would like to migrate to doing Disk to Disk to Tape. This would mean
>that I could just spend 3 days doing one backup, keep the onsite backups
>on disk, and do a copy job each month.
>
>I was wondering if anyone is using ZFS as an SD store point (this would
>be on Linux CentOS 7.1)?
>
>Are you able to store more than 1 FULL backup without occupying Full size
>* N, with ZFS deduplication?
>
>This is just an idea I have had over the past month or so. I am very new
>to ZFS but not new to Linux.
>
>Ash :)
>
>--
>You received this message because you are subscribed to the Google Groups
>"bareos-users" group.
>To unsubscribe from this group and stop receiving emails from it, send an
>email to [email protected].
>To post to this group, send email to [email protected].
>For more options, visit https://groups.google.com/d/optout.
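P.P.S.: For reference, a sketch of the relevant commands (a config fragment, not a script to paste blindly). The pool name `tank` and dataset `tank/bareos` are hypothetical placeholders:

```shell
# Hypothetical pool/dataset names: "tank" and "tank/bareos".

# Cheap win first: LZ4 compression can be toggled at any time and only
# affects newly written data.
zfs set compression=lz4 tank/bareos
zfs get compressratio tank/bareos

# Before enabling dedup, simulate it: "zdb -S" walks the pool and prints
# the dedup ratio you WOULD get, plus a DDT size histogram.
zdb -S tank

# Enabling dedup is effectively one-way: setting dedup=off later only
# stops NEW writes from being deduped; existing DDT entries remain until
# the deduped blocks are rewritten or freed.
zfs set dedup=on tank/bareos

# Inspect the dedup table and ratio once dedup is in use.
zpool status -D tank
```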
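P.S.: The memory figures above can be made concrete with a rough estimate. A widely cited rule of thumb puts one in-core DDT entry at about 320 bytes per unique block; that figure, and the 20 TB / 128 KiB recordsize numbers below (taken from the partition size in the question and the ZFS default), are illustrative assumptions, not measurements of any real pool:

```python
# Back-of-the-envelope ZFS dedup table (DDT) memory estimate.
# ASSUMPTION: ~320 bytes of RAM per unique block in the DDT -- a commonly
# cited rule of thumb, not an exact figure for any given pool.

DDT_BYTES_PER_BLOCK = 320  # rough in-core size of one DDT entry

def ddt_ram_estimate_gib(pool_bytes, recordsize_bytes=128 * 1024):
    """Estimate the RAM needed to keep the whole DDT in memory.

    Worst case: every block is unique, so the DDT holds one entry
    per block in the pool.
    """
    blocks = pool_bytes / recordsize_bytes
    return blocks * DDT_BYTES_PER_BLOCK / 2**30

# Example: the 20 TB spool partition from the original question,
# assuming the default 128 KiB recordsize.
print(round(ddt_ram_estimate_gib(20 * 10**12), 1))  # -> 45.5 (GiB)
```

Which is why I would not start below 64 GB for a pool that size: roughly 45 GiB for the DDT alone, before the ARC has cached a single byte of data. A smaller recordsize makes this dramatically worse.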
