[Bacula-users] Job waiting on mount request blocking other jobs

2010-08-30 Thread Greg Golin
Hello,

Recently we went through the following scenario:

An external USB drive to which JobA was supposed to write became unusable. JobA 
requested a mount. Jobs B through Z that were scheduled to run after JobA did 
not run. As a result we lost two days of backups.

How can we make sure that jobs whose destination drives are accessible do not 
get blocked by jobs whose destination drives are inaccessible?
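
Our working assumption, based on the manual, is that jobs only block each
other when they contend for the same Device resource, so one option we are
considering is to give the USB destination its own Storage/Device pair and
raise the concurrency limits, roughly like this (names and address are
placeholders, not our production config):

# Sketch only: separate the USB destination from the internal one so a
# mount request on the USB device cannot serialize unrelated jobs.
Storage {
  Name = UsbStorage
  Address = sd.example.local
  Password = "snip"
  Device = UsbDrive            # device that may sit waiting on a mount
  Media Type = UsbFile         # distinct media type keeps other jobs off it
  Maximum Concurrent Jobs = 10
}

Storage {
  Name = InternalStorage
  Address = sd.example.local
  Password = "snip"
  Device = InternalDrive       # unaffected by the USB drive's state
  Media Type = File
  Maximum Concurrent Jobs = 10
}

Is that the right direction, or is there a cleaner way?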

Thanks,
Greg Golin

--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] vchanger, two drives

2010-04-28 Thread Greg Golin
Hello,

I am trying to configure Bacula to use vchanger for offsite backups.
Basically I'd like to be able to swap USB drives every week and take them
offsite. The vchanger howto does not state whether I should run *label
barcodes* for each magazine. Could someone please clarify what operations
should be performed to configure two drives?

bacula-dir.conf relevant config:
Storage {
  Name = OffSiteChanger
  Address = mfile.merus.local-sd
  Password = snip
  Device = OffSiteChanger
  Media Type = File
  Autochanger = yes;
}

bacula-sd.conf relevant config:

Autochanger {
  Name = OffSiteChanger
  Device = OffSiteChanger-drive-0
  Device = OffSiteChanger-drive-1
  Changer Command = /usr/local/bin/vchanger %c %o %S %a %d
  Changer Device = /etc/bacula/removable.conf
}
#---  drive 0 of the OffSiteChanger autochanger
Device {
  Name = OffSiteChanger-drive-0
  DriveIndex = 0
  Autochanger = yes;
  DeviceType = File
  MediaType = File
  ArchiveDevice = /var/lib/bacula/OffSiteChanger/0/drive0
  RemovableMedia = no;
  RandomAccess = yes;
}
#---  drive 1 of the OffSiteChanger autochanger
Device {
  Name = OffSiteChanger-drive-1
  DriveIndex = 1
  Autochanger = yes;
  DeviceType = File
  MediaType = File
  ArchiveDevice = /var/lib/bacula/OffSiteChanger/1/drive1
  RemovableMedia = no;
  RandomAccess = yes;
}
# ...

vchanger config:

changer_name = OffSiteChanger
work_dir = /var/lib/bacula/OffSiteChanger
logfile = /var/lib/bacula/OffSiteChanger/OffSiteChanger.log
Virtual_Drives = 1
slots_per_magazine = 4
magazine_bays = 1
automount_dir = /opt/bacula/removable
magazine = UUID:1125820e-3163-41d0-b571-36818eea5a78
magazine = UUID:8a672f1d-9f79-4b68-8260-8285eeb6a1d8

With this configuration, when I unplug one drive, plug in the second drive,
and run *update slots*, Bacula says this:

*update slots
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
The defined Storage resources are:
 1: File
 2: Archive
 3: OffSiteChanger
Select Storage resource (1-3): 3
Enter autochanger drive[0]: 1
Connecting to Storage daemon OffSiteChanger at mfile.merus.local-sd:9103 ...
3306 Issuing autochanger slots command.
Device OffSiteChanger has 4 slots.
Connecting to Storage daemon OffSiteChanger at mfile.merus.local-sd:9103 ...
3306 Issuing autochanger list command.
Volume OffSiteChanger_0002_0001 not found in catalog. Slot=1 InChanger set
to zero.
Volume OffSiteChanger_0002_0002 not found in catalog. Slot=2 InChanger set
to zero.
Volume OffSiteChanger_0002_0003 not found in catalog. Slot=3 InChanger set
to zero.
Volume OffSiteChanger_0002_0004 not found in catalog. Slot=4 InChanger set
to zero.
*

I expected it to link the volumes to the new slots. What am I doing wrong?
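
For context, this is the sequence we assumed per magazine (the pool name is
a placeholder, and we are not sure the drive number matters here):

# Hypothetical bconsole sequence, run once for each magazine:
*update slots storage=OffSiteChanger drive=0
*label barcodes storage=OffSiteChanger pool=OffSitePool drive=0
# ...swap USB drives, then repeat for the second magazine:
*update slots storage=OffSiteChanger drive=0
*label barcodes storage=OffSiteChanger pool=OffSitePool drive=0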

Thanks,
Greg


[Bacula-users] Migration and data destruction

2010-03-30 Thread Greg Golin
Hello,

We would like to use Bacula for archiving data. A problem that we're trying
to solve is how to prevent Bacula from recycling a volume in case a
migration job fails. The scenario we're concerned about is as follows:

Auto-recycling is on
1. TestBackupJob runs
2. TestArchiveJob runs and fails (this is the migration job)
3. During subsequent TestBackupJob runs, Bacula recycles the volume when its
retention period expires, and we lose data

So far we've come up with the following scheme to prevent the aforementioned
from happening:

Auto-recycling is off.
1. TestBackupJob runs
2. TestArchiveJob runs and executes an external script that selects all
volumes whose jobs are all in Migration status, then purges and deletes
those volumes
3. During the next TestBackupJob run, Bacula creates a new volume and uses
it

Another approach that we've been discussing is to run an external command
that would set the retention period to infinite on a migration job failure.
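
As a sketch of that second approach (the directive placement and the %v
substitution are assumptions on our part, and the retention value is
illustrative), the migration job would gain something like:

# In the TestArchiveJob Job resource:
#   Run After Failed Job = "/etc/bacula/scripts/PinVolumes.sh %v"
# where PinVolumes.sh would do roughly:
#!/bin/bash
# Pin the named volume so autopruning cannot recycle it.
echo "update volume=$1 VolRetention=100years" | /usr/sbin/bconsole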

We are wondering if this is the best way to ensure we won't lose data in
case of a migration job failure.

Here is how our test system is set up:

** Bacula config **
Pool {
  Name = TestBackupPool
  Storage = File
  Pool Type = Backup
  Recycle = no
  AutoPrune = yes
  Volume Use Duration = 60 seconds
  LabelFormat = TestBackupVol
  Next Pool = TestArchivePool
  Migration Time = 60 seconds
}

Pool {
  Name = TestArchivePool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = Archive
  LabelFormat = ArchiveVol
}
Job {
  Name = TestBackupJob
  Type = Backup
  Level = Full
  Client = testclient-fd
  FileSet = TestBackupFileset
  Schedule = TestBackupSchedule
  Storage = File
  Pool = TestBackupPool
  Messages = NoEmail
  Maximum Concurrent Jobs = 10
}

Job {
  Name = TestArchiveJob
  Type = Migrate
  Level = Full
  Client = testclient-fd
  FileSet = TestBackupFileset
  Schedule = TestArchiveSchedule
  Storage = Archive
  Pool = TestBackupPool
  Messages = NoEmail
  Selection Type = PoolTime
  Maximum Concurrent Jobs = 10
  Priority = 8
  RunAfterJob = /bin/bash /etc/bacula/scripts/DeleteMigratedVol.sh
}

#TestBackupSchedule executes a job every 60 seconds.
#TestArchiveSchedule executes a job every 120 seconds.

** End bacula config **

Contents of DeleteMigratedVol.sh:
#!/bin/bash
mysqlbin='/usr/bin/mysql'
username='bacula'
database='bacula'
password='edited'
dboptions='--skip-column-names'
bconsolebin='/usr/sbin/bconsole'
voldisklocation='/opt/bacula/backup'

volumeNames=$($mysqlbin -u $username -p$password $database $dboptions <<EOQ
SELECT DISTINCT m.VolumeName
FROM Media m
JOIN JobMedia jm
ON jm.MediaId=m.MediaId
JOIN Job j
ON j.JobId=jm.JobId
WHERE j.TYPE = 'M'
AND m.VolumeName NOT IN (
SELECT m2.VolumeName
FROM Media m2
JOIN JobMedia jm2
ON jm2.MediaId=m2.MediaId
JOIN Job j2
ON j2.JobId=jm2.JobId
WHERE j2.Type != 'M'
);
EOQ)

for i in $volumeNames; do
  $bconsolebin <<EOF
purge volume=$i yes
delete volume=$i yes
quit
EOF
  /bin/rm -fv "$voldisklocation/$i"
done
**End /etc/bacula/scripts/DeleteMigratedVol.sh**

Thank you,
Greg
--