Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-29, Josh Fisher via Bacula-users


On 9/29/23 06:07, Marco Gaiarin wrote:

> Hello! Josh Fisher via Bacula-users wrote:
>
>>> I'm really getting mad. This makes sense given the behaviour (the first
>>> VirtualFull worked because it read the full and the incrementals from the
>>> same pool), but the docs still confuse me.
>>>   https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html

>> No. Not because they were in the same pool, but rather because the
>> volumes were all loadable and readable by the same device.

> OK.



>> readable by that particular device. Hence, all volumes must have the
>> same Media Type, because they must all be read by the same read device.

> OK.



>> In a nutshell, you must have multiple devices and you must ensure that
>> one device reads all of the existing volumes and another different
>> device writes the new virtual full volumes. This is why it is not
>> possible to do virtual fulls or migration with only a single tape drive.

> OK. I'll try to keep it simple.
>
> The storage daemon has two devices that differ only in name:

>   Device {
>     Name = FileStorage
>     Media Type = File
>     LabelMedia = yes;
>     Random Access = Yes;
>     AutomaticMount = yes;
>     RemovableMedia = no;
>     AlwaysOpen = no;
>     Maximum Concurrent Jobs = 10
>     Volume Poll Interval = 3600
>     Archive Device = /rpool-backup/bacula
>   }
>   Device {
>     Name = VirtualFileStorage
>     Media Type = File
>     LabelMedia = yes;
>     Random Access = Yes;
>     AutomaticMount = yes;
>     RemovableMedia = no;
>     AlwaysOpen = no;
>     Maximum Concurrent Jobs = 10
>     Volume Poll Interval = 3600
>     Archive Device = /rpool-backup/bacula
>   }

> On the director side, of course, I've defined two storages:

>   Storage {
>     Name = SVPVE3File
>     Address = svpve3.sv.lnf.it
>     SDPort = 9103
>     Password = "ClearlyNotThis."
>     Maximum Concurrent Jobs = 25
>     Maximum Concurrent Read Jobs = 5
>     Device = FileStorage
>     Media Type = File
>   }
>   Storage {
>     Name = SVPVE3VirtualFile
>     Address = svpve3.sv.lnf.it
>     SDPort = 9103
>     Password = "ClearlyNotThis."
>     Maximum Concurrent Jobs = 25
>     Maximum Concurrent Read Jobs = 5
>     Device = VirtualFileStorage
>     Media Type = File
>   }


> Then for this client I've defined a single pool:

>   Pool {
>     Name = FVG-SV-ObitoFilePoolIncremental
>     Pool Type = Backup
>     Storage = SVPVE3File
>     Maximum Volume Jobs = 6
>     Volume Use Duration = 1 week
>     Recycle = yes
>     AutoPrune = yes
>     Action On Purge = Truncate
>     Volume Retention = 20 days
>   }

> and a single job:

>   Job {
>     Name = FVG-SV-Obito
>     JobDefs = DefaultJob
>     Storage = SVPVE3File
>     Pool = FVG-SV-ObitoFilePoolIncremental
>     Messages = StandardClient
>     NextPool = FVG-SV-ObitoFilePoolIncremental
>     Accurate = Yes
>     Backups To Keep = 2
>     DeleteConsolidatedJobs = yes
>     Schedule = VirtualWeeklyObito
>     Reschedule On Error = yes
>     Reschedule Interval = 30 minutes
>     Reschedule Times = 8
>     Max Run Sched Time = 8 hours
>     Client = fvg-sv-obito-fd
>     FileSet = ObitoTestStd
>     Write Bootstrap = "/var/lib/bacula/FVG-SV-Obito.bsr"
>   }



The NextPool needs to be specified in the
FVG-SV-ObitoFilePoolIncremental pool resource, not in the job resource.
In the Copy/Migration/VirtualFull chapter of the manual, where the
applicable Pool resource directives for these job types are discussed,
it states under Important Migration Considerations that:


The Next Pool = ... directive must be defined in the *Pool* referenced
in the Migration Job to define the Pool into which the data will be
migrated.
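
A minimal sketch of that change, reusing the pool exactly as posted and
assuming you keep consolidating back into the same pool:

  Pool {
    Name = FVG-SV-ObitoFilePoolIncremental
    Pool Type = Backup
    Storage = SVPVE3File
    # the directive belongs here, in the Pool, not in the Job:
    Next Pool = FVG-SV-ObitoFilePoolIncremental
    Maximum Volume Jobs = 6
    Volume Use Duration = 1 week
    Recycle = yes
    AutoPrune = yes
    Action On Purge = Truncate
    Volume Retention = 20 days
  }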


Other than that, you are specifically telling the job to run on a single 
Storage resource. That will not work, unless the single Storage resource 
is an autochanger with multiple devices. You need to somehow ensure that 
Bacula can select a different device for writing the new virtual full 
job. If you are using version 13.x, then you can define the job's 
Storage directive as a list of Storage resources to select from. For 
example:


Job {
 Name = FVG-SV-Obito
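 # listing several Storage resources lets the Director pick a different one for the write side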
 Storage = SVPVE3File,SVPVE3VirtualFile
 ...
}

I believe that a virtual full job will only select a single read device, 
so the above may be all that is needed.


Otherwise, you can use a virtual disk autochanger.
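
If you go that route, here is a sketch of a virtual disk autochanger,
modeled on the FileChgr1 example shipped in the stock configuration;
the names FileChgr and SVPVE3FileChgr are placeholders I made up:

  # bacula-sd.conf: group the two file devices into one virtual changer
  Autochanger {
    Name = FileChgr
    Device = FileStorage, VirtualFileStorage
    Changer Command = ""
    Changer Device = /dev/null
  }

  # bacula-dir.conf: a single Storage resource that addresses the changer
  Storage {
    Name = SVPVE3FileChgr
    Address = svpve3.sv.lnf.it
    SDPort = 9103
    Password = "ClearlyNotThis."
    Device = FileChgr
    Media Type = File
    Autochanger = yes
    Maximum Concurrent Jobs = 25
  }

A job pointed at SVPVE3FileChgr can then have one drive of the changer
reserved for reading and another for writing, which should give the
read/write split a virtual full needs.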




> If I run manually:
>
>   run job=FVG-SV-Obito
>
> it works as expected, i.e. it runs an incremental job. If I try to run:
>
>   run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile
>
> the job runs, seemingly correctly:
>
>   *run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile
>   Using Catalog "BaculaLNF"
>   Run Backup job
>   JobName:  FVG-SV-Obito
>   Level:    VirtualFull
>   Client:   fvg-sv-obito-fd
>   FileSet:  ObitoTestStd
>   Pool:     FVG-SV-ObitoFilePoolIncremental (From Job resource)
>   NextPool: FVG-SV-ObitoFilePoolIncremental (From Job resource)
>   Storage:  SVPVE3VirtualFile (From Command input)
>   When:     2023-09-29 11:30:36
>   Priority: 10
>   OK to run? (yes/mod/no): yes
>   Job queued. JobId=11718
>
> In the log I initially see:
>
>   29-Sep 11:30 lnfbacula-dir JobId 11718: Start Virtual Backup JobId 11718, Job=FVG-SV-Obito.2023-09-29_11.30.48_26
>   29-Sep 11:30 lnfbacula-dir JobId 11718: Consolidating JobIds=11594,11633,11672,11673
>   29-Sep 11:30 lnfbacula-dir JobId 11718: Found 47215 files to consolidate into Virtual Full.
>   29-Sep 11:30 lnfbacula-dir JobId 11718: Using Device "FileStorage" to read.
>
> then, after some time:
>
>   29-Sep 11:44 svpve3-sd JobId 11718: JobId=11718, Job FVG-SV-Obito.2023-09-29_11.30.48_26 waiting to reserve a device.
>
> So it is still waiting to reserve a device... I suppose for writing...
>
> The pool/media situation is now:
>
>   *list media pool=FVG-SV-ObitoFilePoolIncremental
>   Using Catalog "BaculaLNF"
>   | mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype |
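
As an aside, when a job hangs like this the storage daemon's status
shows which device holds the read reservation and what the job is still
waiting for; a quick check from bconsole (either Storage name reaches
the same daemon here, since both point at the same Address and SDPort):

  *status storage=SVPVE3VirtualFile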

Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-29, Marco Gaiarin

Hello! Radosław Korzeniewski wrote:

> There is always a cost, from one area to another. You can mitigate storage
> cost in this case with deduplication. So, the cost shifts from storage and
> time to cpu, memory and license price. :)
> And everybody can choose what is best.

Oh, sure! I'm simply trying to figure out how to determine these 'costs' in
advance... ;-)

-- 
  all shut in so many cells, competing over who talks the loudest
  so as not to say that stars and death are frightening  (F. Guccini)



