Thanks, Marco. I am starting from scratch, so the code snippet will
definitely make life easier!

Patrick

On Wed, Oct 21, 2015 at 7:14 AM, Marco Weiß <marco.we...@kesslernetworks.de>
wrote:

> Hi Patrick,
>
> I did a short search in our documentation to see how we did this.
> If you can start from scratch, you can do it like this.
> It is required that you have split your backup jobs up beforehand.
>
> 1. Disable all jobs.
> 2. Run a job, or a bunch of jobs, on different days.
>
> To do that automatically we used the Python code below; I hope it will help
> you.
>
> ----------
> cmd = 'echo "run"|pfexec bconsole |awk \'/[0-9]:/ && /\./ {print $2}\' |grep -e "p.oew.de"'
> #cmd = 'echo "run"|pfexec bconsole |awk \'/[0-9]:/ && /\./ {print $2}\''
> #cmd = 'echo "run"|sudo bconsole |awk \'/[0-9]:/ && /\./ {print $2}\''
> bacula_max_full_interval = 30
> job_start_time = '18:15:10'
>
> from subprocess import Popen, PIPE, STDOUT
> import datetime
>
> # ask bconsole for the job names and split them into a list
> p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT,
>           close_fds=True)
> joblist = p.stdout.read().splitlines()
>
> for i in xrange(bacula_max_full_interval):
>     # http://docs.python.org/release/2.3/whatsnew/section-slices.html
>     # create 'bacula_max_full_interval' partitions of our joblist,
>     # starting with item 'i'
>     job_start_day = (datetime.date.today() +
>                      datetime.timedelta(i)).isoformat()
>     for j in joblist[i::bacula_max_full_interval]:
>         print 'disable job=' + j
>         #print 'run job=' + j + ' level=Full when="' + job_start_day + ' ' + job_start_time + '" yes'
>
> print
> for i in xrange(bacula_max_full_interval):
>     # same partitioning as above, but emit the staggered run commands
>     job_start_day = (datetime.date.today() +
>                      datetime.timedelta(i)).isoformat()
>     for j in joblist[i::bacula_max_full_interval]:
>         print 'run job=' + j + ' when="' + job_start_day + ' ' + job_start_time + '" yes'
> ----------
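The script above is Python 2; for readers on Python 3, the same extended-slicing trick can be sketched as a small helper (the function and job names here are illustrative, not from Marco's script):

```python
import datetime

def staggered_run_commands(joblist, interval, start_time, today=None):
    """Spread jobs round-robin over `interval` days via extended slicing."""
    today = today or datetime.date.today()
    cmds = []
    for i in range(interval):
        # partition i holds every interval-th job, starting at index i,
        # and is scheduled i days from today
        day = (today + datetime.timedelta(days=i)).isoformat()
        for job in joblist[i::interval]:
            cmds.append('run job={0} when="{1} {2}" yes'.format(job, day, start_time))
    return cmds

# each emitted line could then be piped into bconsole
for cmd in staggered_run_commands(['job-a', 'job-b', 'job-c'], 2, '18:15:10'):
    print(cmd)
```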
>
> If you can't start from scratch, you could split out part of the data into a
> new job and exclude it from the old job. After a couple of days you will have
> all the data from one job in, let's say, 100 jobs.
> These 100 jobs you can distribute over, say, 60 days, and with the max full
> interval you can manage when the next virtual full is done.
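As a rough sketch, splitting a job like that could look roughly like the following in the director configuration (all resource names and paths here are made up, and the exact options depend on your setup):

```
# Hypothetical split: a new FileSet for /data/projects,
# which the old FileSet now excludes.
FileSet {
  Name = "big-fs-rest"
  Include {
    Options {
      signature = MD5
    }
    File = /data
  }
  Exclude {
    File = /data/projects
  }
}

FileSet {
  Name = "big-fs-projects"
  Include {
    Options {
      signature = MD5
    }
    File = /data/projects
  }
}
```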
>
> Regards Marco
>
> On Wednesday, October 21, 2015 at 1:01:14 PM UTC+2, Patrick Glomski wrote:
> > Thank you, Marco! I will try splitting the backup of the large system
> into several smaller jobs and staggering it. If that is feasible with the
> number/size of files in the production storage, it sounds like an excellent
> solution!
> >
> > Patrick
>

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-devel+unsubscr...@googlegroups.com.
To post to this group, send email to bareos-devel@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.