Hi folks,

I know similar topics have been addressed before in many posts, but none of
them provide me with a workable solution…

- We are backing up fairly large volumes of data: from 250 TB up to 1.5 PB, over a 40 Gbps link
- Bacula backs up to a PB-scale ZFS target pool
- Each Bacula volume created is 100 GB in size
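For reference, a minimal sketch of how a setup like the one above would typically look in bacula-dir.conf (all resource names and the client here are hypothetical placeholders, not our actual configuration):

```conf
# Pool backed by the PB-scale ZFS target; each volume capped at 100 GB
Pool {
  Name = ZFS-Backup-Pool
  Pool Type = Backup
  Maximum Volume Bytes = 100G    # matches the 100 GB volume size above
  LabelFormat = "Vol-"
}

# Job configured as Incremental; Bacula automatically upgrades it to a
# Full whenever no prior successful Full exists for this client/fileset,
# which is exactly the behaviour described below
Job {
  Name = "BigData-Backup"
  Type = Backup
  Level = Incremental
  Client = bigdata-fd
  FileSet = "BigData-FileSet"
  Pool = ZFS-Backup-Pool
  Storage = ZFS-SD
}
```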

We are having trouble getting the first full backup to finish successfully
in a single job (due to various IT issues that are beyond our control).

The result is that although the jobs are configured as incrementals, there
is never a successful full backup on record, so Bacula upgrades each new
job to a full and starts backing up everything from scratch. This happens
over and over, so we never get a complete backup of the content, and the
ZFS target pool fills up with the incomplete attempts.

Is there a way to prevent this, so that even when a job is flagged as
unsuccessfully terminated, the next job is forced to run only as an
incremental (picking up from what was already backed up) rather than being
upgraded to a new full?

Any advice would be sincerely appreciated!

Best regards,


Bacula-users mailing list
