On Wed, 23 May 2007, Ivan Adzhubey wrote:

> One more question: currently, the size of my spooling directory is 250GB and I
> need to add another server to backup list that holds 1.1TB of data. Will that
> create problems?

No.

> On a more general note: how do you guys deal with huge
> datasets?

Add more tapes.

> This server is projected to accumulate 12TB of data in the next 18 
> months and I was asked to have them all backed up on tape, even though 
> disks are in a RAID5 on a hardware controller (3ware).

RAID5 only protects against hardware failure (and only some forms of 
hardware failure). There are still things like "rm -rf /" and other kinds 
of luser error to cope with.

> I am wondering, has anybody had any experience with Bacula working on 
> really large amounts of data?

I'm backing up 30Tb here, and we just bought another 28Tb array which 
will fill up during the next 6-8 months.

The things to bear in mind are:

1: Do not try backing up all the filesets at once. (I run a 12-week cycle 
on full backups, with 2-3 full backups triggered each week, instead of 
trying to back up 28 1Tb sets in one go.)
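A staggered cycle like that can be expressed in the Director's Schedule 
resources: Bacula's schedule syntax accepts week-of-year keywords 
(w00-w53), so each group of filesets can have its fulls land in a 
different set of weeks. A rough sketch (resource and group names are 
made up, and the week numbers would need adjusting so only 2-3 fulls 
fall in any one week):

```
# bacula-dir.conf -- sketch of a 12-week staggered full cycle
Schedule {
  Name = "Fulls-GroupA"
  # Fulls every 12th week for this group of filesets
  Run = Level=Full w01, w13, w25, w37, w49 sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}

Schedule {
  Name = "Fulls-GroupB"
  # Same cycle, offset by one week
  Run = Level=Full w02, w14, w26, w38, w50 sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}
```

Assign each fileset's Job to one of the group schedules and the fulls 
rotate through the cycle on their own.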

2: Keep fileset sizes sensible. We use a rule of thumb of 1Tb, which 
takes about 20 hours on LTO2 (spooling plus tape run), and we can run 
two of those simultaneously on each tape drive in the same time period 
(one spools while the other despools, etc.)
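One way to sketch point 2 is to split a large mount into several ~1Tb 
FileSets and turn on data spooling in each Job, so one job can despool 
to tape while the next spools to disk. The names and paths below are 
illustrative, not my actual configuration:

```
# bacula-dir.conf -- one ~1Tb piece of a larger array, with spooling
FileSet {
  Name = "array-part1"
  Include {
    Options { signature = MD5 }
    File = /export/array/part1   # keep each piece near 1Tb
  }
}

Job {
  Name = "backup-array-part1"
  Type = Backup
  Client = fileserver-fd
  FileSet = "array-part1"
  Schedule = "WeeklyCycle"
  Storage = LTO2
  Pool = Default
  Spool Data = yes               # spool to disk, then stream to tape
}
```

The spool location and size caps (Spool Directory, Maximum Spool Size) 
live on the Device resource in bacula-sd.conf, not in the Job.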


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
