Hello,

Fri, 21 Oct 2022 at 12:11 Marco Gaiarin <g...@lilliput.linux.it> wrote:

> Hello, Radosław Korzeniewski!
>   On that day, it was said...
>
> > I never used rsnapshot with tapes, so I'm just curious how you manage it
> to
> > span multiple tapes?
>
> Aaaaaaahhhh... sorry, a total misunderstanding here!
>
> it is easy to put a
> rsnapshot backup on tape,
>

I have never used rsnapshot with tapes, and I'm very curious: how do you
"put a rsnapshot backup on tape"?
How? What command do you use? Do you use any external tool for that, e.g.
tar?


>
> >>     I think I've missed a point here... I've defined the pool itself as
> >>     'Next Pool', and I think it is not permitted: I need a pool for
> >>     incrementals, and a pool for the full. Right?
>
> > The requirement for separate pools comes from the need to avoid deadlocks,
> > where a device which reads a volume blocks other devices from writing a new
> > virtual full to the same volume. Separate pools avoid this kind of deadlock.
> > But I have successfully managed virtual full on a single pool for any backup
> > level.
>
> Ok. I'm doing some tests, so I can ignore deadlocks for now. I've simply
> defined a schedule, e.g.:
>
>  Schedule {
>    Name = WeeklyLinuxProgressive
>
>    Run = VirtualFull fri at 20:00
>    Run = Incremental mon-sun at 23:00
>  }
>
>
For testing a Virtual Full, the schedule is not important at all. What is
important is to properly configure all the other parameters, i.e. the Pool
and the Job.
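
Just as an illustration, here is a minimal sketch of the Pool/Job side. All
resource names below (FilePool, FileStorage, linux-fd, LinuxSet) are
placeholders, not taken from your config; adjust everything to your setup:

  Pool {
    Name = FilePool
    Pool Type = Backup
    Storage = FileStorage
    Next Pool = FilePool        # consolidate into the same pool; a separate
                                # pool avoids the read/write deadlock above
    Recycle = yes
    AutoPrune = yes
  }

  Job {
    Name = LinuxProgressive
    Type = Backup
    Level = Incremental         # the Schedule overrides this (VirtualFull on fri)
    Client = linux-fd
    FileSet = LinuxSet
    Schedule = WeeklyLinuxProgressive
    Storage = FileStorage
    Pool = FilePool
    Messages = Standard
    Accurate = yes              # recommended, so add/mod/del are tracked correctly
  }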


>
> > Let's assume you have the following backup chain, every backup on a single
> > volume; then a Virtual Full is as simple as: F1 + I1 + I2 = F2.
> > Bacula will create a new volume F2 (if file-based and not already
> > available) and put all the backup data from F1, I1 and I2 on it, respecting
> > whether each file was added/modified/deleted in the incrementals.
>
> Currently I simply have 3 volumes, File Retention = 21 days


File Retention is a catalog-only parameter. Volume Retention is what makes
your data retention and volume recycling possible.
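
Roughly, and with the same hypothetical resource names as above, the two
directives live in different resources (the values are only examples):

  Client {
    Name = linux-fd
    Address = client.example.org
    Password = "secret"
    Catalog = MyCatalog
    File Retention = 21 days     # prunes File records from the catalog only
    Job Retention = 3 months     # prunes Job records from the catalog only
    AutoPrune = yes
  }

  Pool {
    Name = FilePool
    Pool Type = Backup
    Volume Retention = 2 months  # this is what controls when a volume can be recycled
    Recycle = yes
    AutoPrune = yes
  }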


>
>
> >> You mean 'gradually' poking at the fileset (e.g. today dir 'a', tomorrow
> >> dir 'b'), or can I define a job that transfers at most 1GB (for example)
> >> and simply do a (very!) long sequence of incremental jobs?
>
> > Both are possible solutions, depending on your situation. Unfortunately
> > all require manual operation and management.
>
> Sorry, but I've not found option '2' in the Bacula documentation; can you
> point me to the options to set an upper bound on job size?
>

What do you mean by an upper bound on job size? Bacula does not limit the
size of your job in any case. The job size is whatever Bacula needs to back
up, no more, no less. To back up less data you need to configure less data
to back up. As I wrote above, this requires manual operation and management.
As you show in your example, it could be as simple as: today dir 'a' with a
full, tomorrow dir 'b' with an incremental, then dir 'c' with another
incremental, etc.
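
As a purely hypothetical illustration (the path and FileSet name are made
up), one way to do the manual part is to keep a single FileSet and add one
directory to it before each run:

  FileSet {
    Name = "SeedSet"
    Ignore FileSet Changes = yes   # otherwise editing the File list forces a Full
    Include {
      Options { signature = MD5 }
      File = /srv/data/a           # first run (full): only dir 'a'
      # File = /srv/data/b         # uncomment before the next incremental
      # File = /srv/data/c         # ...and so on, one directory per run
    }
  }

With Accurate = yes on the Job, the files under a newly added directory are
picked up by the incremental even if their timestamps are old.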

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
