"If you want to write into more than one directory (i.e. to spread the
load to different disk drives), you will need to define two Device
resources, each containing an Archive Device with a different
directory."

That works, but it then forces awkward decisions and changes in your
Schedule{} entries in order to override your Pool's default media.

This is because for every Device{} entry in your SD config, you need a
new Storage{} entry in your DIR config.  Different volumes can be in the
same Pool, but generally each Job only defines one Storage{}.

So if you have a Device:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /usr/dump1
...
}

and:

Device {
  Name = FileStorage2
  Media Type = File
  Archive Device = /usr/dump2
...
}

And your JobDefs:

JobDefs {
  Pool = Daily
  Storage = "FileStorage"
  ...
}

And Pool:


Pool {
  Name = Daily
...
}


You now need two Storage entries:

Storage {
  Name = "FileStorage"
  ...
  Device = "FileStorage"
  Media Type = File
}

Storage {
  Name = "FileStorage2"
  ...
  Device = "FileStorage2"
  Media Type = File
}

Then you can create two volumes/disk-tapes, assign them to two different
Storage devices, and place them in the same Pool:

+---------+---------------+-----------+----------+
| mediaid | volumename    | volstatus | volbytes |
+---------+---------------+-----------+----------+
|       8 | CFusionDaily0 | Recycle   |        1 |
|       9 | CFusionDaily1 | Append    |        1 |
+---------+---------------+-----------+----------+
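
For illustration, the two volumes could be labeled against their respective
storages from bconsole roughly like this (volume, pool, and storage names
follow the example above):

  *label storage=FileStorage volume=CFusionDaily0 pool=Daily
  *label storage=FileStorage2 volume=CFusionDaily1 pool=Daily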


*THE PROBLEM* is that your JobDefs defines "Storage = FileStorage", so to
get to volume "CFusionDaily1" you must override the "Storage =" in your
Job{} using a Schedule{} Run line:


  Run = Level=Incremental Pool=Daily on mon at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage2 on mon at 14:45

Thus you can no longer take advantage of Schedule{} range expressions like
"mon-sat"; you must now have a separate Run line for each day, because each
day's tape lives behind a different "meta-storage-device".
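
For illustration, here is roughly what a week of that looks like even with
only two devices (the Schedule name is made up; everything else follows the
example above):

Schedule {
  Name = "DailyCycle"
  # one Run line per day, because the Storage= override changes per day
  Run = Level=Incremental Pool=Daily Storage=FileStorage on mon at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage2 on tue at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage on wed at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage2 on thu at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage on fri at 14:45
  Run = Level=Incremental Pool=Daily Storage=FileStorage2 on sat at 14:45
}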

I can only imagine what happens if a job requires more than two tapes
>:}

-----

For example, say you want to put your six days of Daily tapes into Storage
"FileStorage".  You create and label six volumes with:

Maximum Part Size = 32,212,254,720 (30 GB).  Six 30 GB volumes come to
roughly 180 GB, so all of your dailies fit nicely onto a 200 GB disk / file
system mounted at /var/spool/bacula.
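
As a rough sketch (assuming, as above, that Maximum Part Size is the
directive doing the capping; everything else mirrors the earlier FileStorage
device), that limit sits in the Device resource:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/spool/bacula
  # limit each part to 30 GB; six daily volumes then total roughly 180 GB
  Maximum Part Size = 32212254720
  ...
}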

Over time the system grows, and you need to resize your file-based
meta-tapes to 200 GB each, which means growing the file system underneath
them.

If it's RAID/LVM, you have to tear the volume down and rebuild it with new
members.  Furthermore, ext3 and UFS only support growfs(8)-style growth
onto partitions on the same media.

I don't know whether Linux LVM can grow volumes across multiple disks, but
the only file system I know of that can is Veritas VxFS.

RAID isn't optimal for this application anyway.

The best solution is a NAS/SAN that presents you with individual disk
units, not RAID'd/LVM'd at all.  Say you have a direct-attached-storage
chassis with twelve 300 GB disks, and you mount each disk at a mount point
matching a volume name, e.g. /bacula/Daily[###], thus "overlapping" a
pre-defined naming scheme that Bacula expects to use:

# df
Filesystem    1K-blocks     Used    Avail Capacity  Mounted on
/dev/ar0s1a   37846628    14502 37075194     0%    /bacula/Daily000
/dev/ar1s1a   37846628    14502 37075194     0%    /bacula/Daily001
/dev/ar2s1a   37846628    14502 37075194     0%    /bacula/Daily002
/dev/ar3s1a   37846628    14502 37075194     0%    /bacula/Daily003
/dev/ar4s1a   37846628    14502 37075194     0%    /bacula/Daily004
/dev/ar5s1a   37846628    14502 37075194     0%    /bacula/Daily005

In other words, a single Device{} could point at the parent directory and
pick a per-volume subdirectory via something like:

Archive Device = /bacula
Archive Device Subdir Suffix = %volume    # proposed new directive
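
A sketch of how such a Device{} might look (the "Archive Device Subdir
Suffix" name is the proposal here, not an existing Bacula directive; the
device name is made up, and the paths match the df listing above):

Device {
  Name = DailyConsolidated
  Media Type = File
  Archive Device = /bacula
  # proposed: append the volume name as a subdirectory, so that volume
  # "Daily000" gets written under the mount point /bacula/Daily000
  Archive Device Subdir Suffix = %volume
  ...
}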

Thus you could consolidate your daily backups into one Storage{} / Device{}
combination while actually spreading the load across multiple disks,
without constantly rebuilding the meta-file-system underneath.
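
On the DIR side everything then collapses back to a single pairing; a rough
sketch (the names are illustrative), with the Schedule free to use day
ranges again:

Storage {
  Name = "DailyConsolidated"
  ...
  Device = "DailyConsolidated"
  Media Type = File
}

Schedule {
  Name = "DailyCycle"
  # one storage serves every daily volume, so day ranges work again
  Run = Level=Incremental Pool=Daily Storage=DailyConsolidated mon-sat at 14:45
}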

You could also add and remove disks on the fly, needing only to mount them
at the paths matching the volume names Bacula expects to use.

Thoughts?

TIA,
~BAS
