That was it.  The 'DefaultJobAI' JobDefs pulled in the Always Incremental 
settings, which is what triggered the purge.
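
For anyone hitting the same thing, here is a sketch of what a dedicated 
JobDefs for the archive jobs could look like, one that avoids inheriting the 
Always Incremental directives (the name and pool are illustrative, not from 
my real config):

JobDefs {
  Name = "DefaultJobArchive"   # hypothetical name; note: no AI directives here
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = LTO4
  Priority = 20
  Run Script {
    # marks the result as an Archive job, per the Bareos AI documentation
    console = "update jobid=%i jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}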

Thanks for catching this!


Brock Palen
1 (989) 277-6075
[email protected]
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jan 13, 2020, at 12:05 PM, Matt Rásó-Barnett <[email protected]> wrote:
> 
> Hi Brock,
> I've been experimenting with virtualfull jobs for long-term archiving of AI 
> jobs recently, and I haven't seen the problem you describe. For me it works 
> as you hoped: no purging/consolidation is done; the job simply makes the 
> virtual full backup and completes.
> 
> My only suspicion from your config snippet is that you are using the JobDefs 
> 'DefaultJobAI', which may be pulling in settings from your usual AI job 
> configuration.
> 
> The virtualfull job doesn't really share much config with the AI jobs 
> themselves, so the settings it inherits from that JobDefs may be what is 
> causing the problem.
> 
> I use something like:
> 
> JobDefs {
>  Name = "default-ai-virtualfull-job"
>  Type = Backup
>  Level = VirtualFull
>  Messages = "Standard"
>  Pool = "ai-consolidated"
>  Client = "client"
>  FileSet = "generic-linux-all"
>  Schedule = "default-ai-virtualfull-schedule"
>  Priority = 20
>  runscript {
>    console = "update jobid=%i jobtype=A"
>    runswhen = after
>    failonerror = No
>    runsonclient = No
>  }
> }
> 
> Job {
>  Name = "client"
>  Client = "client.fqdn"
>  FileSet = "client-fileset"
>  JobDefs = "default-ai-virtualfull-job"
>  Accurate = yes
> }
> 
> Hope that helps,
> Matt
> 
> On Sat, Jan 11, 2020 at 03:53:18PM -0500, Brock Palen wrote:
>> I’m trying to get the VirtualFull long-term archive job documented here to 
>> work:
>> 
>> https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#virtual-full-jobs
>> 
>> I can make the job run (config below), but the job then purges the full I 
>> had, so I am left with no full. Looking at the Job {} config options, I 
>> don’t see a way to tell Bareos not to purge the jobs that were fed into 
>> the archive job.
>> 
>> This doesn’t appear to match the goals of that documentation: don’t you 
>> want to keep the old entry so you can do future consolidations?
>> 
>> Our intent is to offsite those archive jobs, so the original needs to stay.
>> 
>> Job {
>>   Name = "macfu-Users-archive"
>>   JobDefs = "DefaultJobAI"
>>   Level = VirtualFull
>>   FileSet = "OSX No Lib"
>>   Client = "macfu-fd"
>>   Full Backup Pool = LTO4
>>   Virtual Full Backup Pool = LTO4
>>   Next Pool = LTO4
>>   Run Script {
>>     console = "update jobid=%i jobtype=A"
>>     Runs When = After
>>     Runs On Client = No
>>     Runs On Failure = No
>>   }
>>   Enabled = no
>> }
>> 
>> 
>> 11-Jan 15:09 myth-dir JobId 21062: Start Virtual Backup JobId 21062, 
>> Job=macfu-Users-archive.2020-01-11_15.09.53_40
>> 11-Jan 15:09 myth-dir JobId 21062: Consolidating JobIds 20909
>> 11-Jan 15:10 myth-dir JobId 21062: Bootstrap records written to 
>> /var/lib/bareos/myth-dir.restore.103.bsr
>> 11-Jan 15:10 myth-dir JobId 21062: Connected Storage daemon at myth:9103, 
>> encryption: TLS_CHACHA20_POLY1305_SHA256
>> 11-Jan 15:10 myth-dir JobId 21062: Using Device "FileStorage" to read.
>> 11-Jan 15:10 myth-dir JobId 21062: Using Device "T-LTO4" to write.
>> 11-Jan 15:10 myth-sd JobId 21062: Ready to read from volume 
>> "AI-Consolidated-1832" on device "FileStorage" (/mnt/bacula).
>> 11-Jan 15:10 myth-sd JobId 21062: Wrote label to prelabeled Volume "LTO4-3" 
>> on device "T-LTO4" (/dev/nst0)
>> 11-Jan 15:10 myth-sd JobId 21062: Spooling data ...
>> 11-Jan 15:10 myth-sd JobId 21062: Forward spacing Volume 
>> "AI-Consolidated-1832" to file:block 4:4091641546.
>> 11-Jan 15:13 myth-sd JobId 21062: End of Volume at file 12 on device 
>> "FileStorage" (/mnt/bacula), Volume "AI-Consolidated-1832"
>> 11-Jan 15:13 myth-sd JobId 21062: Ready to read from volume 
>> "AI-Consolidated-1829" on device "FileStorage" (/mnt/bacula).
>> 11-Jan 15:13 myth-sd JobId 21062: Forward spacing Volume 
>> "AI-Consolidated-1829" to file:block 3:3930327096.
>> 11-Jan 15:17 myth-sd JobId 21062: End of Volume at file 11 on device 
>> "FileStorage" (/mnt/bacula), Volume "AI-Consolidated-1829"
>> 11-Jan 15:17 myth-sd JobId 21062: End of all volumes.
>> 11-Jan 15:17 myth-sd JobId 21062: Committing spooled data to Volume 
>> "LTO4-3". Despooling 66,622,040,466 bytes ...
>> 11-Jan 15:32 myth-sd JobId 21062: Despooling elapsed time = 00:14:33, 
>> Transfer rate = 76.31 M Bytes/second
>> 11-Jan 15:32 myth-sd JobId 21062: Elapsed time=00:22:17, Transfer rate=49.77 
>> M Bytes/second
>> 11-Jan 15:32 myth-sd JobId 21062: Releasing device "T-LTO4" (/dev/nst0).
>> 11-Jan 15:32 myth-sd JobId 21062: Sending spooled attrs to the Director. 
>> Despooling 81,541,147 bytes ...
>> 11-Jan 15:32 myth-sd JobId 21062: Releasing device "FileStorage" 
>> (/mnt/bacula).
>> 11-Jan 15:32 myth-dir JobId 21062: Insert of attributes batch table with 21 
>> entries start
>> 11-Jan 15:33 myth-dir JobId 21062: Insert of attributes batch table done
>> 11-Jan 15:33 myth-dir JobId 21062: Joblevel was set to joblevel of first 
>> consolidated job: Full
>> 11-Jan 15:33 myth-dir JobId 21062: Bareos myth-dir 18.2.5 (30Jan19):
>> Build OS:               Linux-4.4.92-6.18-default ubuntu Ubuntu 18.04 LTS
>> JobId:                  21062
>> Job:                    macfu-Users-archive.2020-01-11_15.09.53_40
>> Backup Level:           Virtual Full
>> Client:                 "macfu-fd" 18.2.7 (12Dec19) 
>> Darwin-17.7.0,darwin,17.7.0
>> FileSet:                "OSX No Lib" 2020-01-07 23:23:52
>> Pool:                   "LTO4" (From Job's NextPool resource)
>> Catalog:                "myth_catalog" (From Client resource)
>> Storage:                "T-LTO4" (From Storage from Job's NextPool resource)
>> Scheduled time:         11-Jan-2020 15:09:53
>> Start time:             07-Jan-2020 23:27:40
>> End time:               08-Jan-2020 00:22:37
>> Elapsed time:           54 mins 57 secs
>> Priority:               4
>> SD Files Written:       238,933
>> SD Bytes Written:       66,551,772,351 (66.55 GB)
>> Rate:                   20185.6 KB/s
>> Volume name(s):         LTO4-3
>> Volume Session Id:      4
>> Volume Session Time:    1578772883
>> Last Volume Bytes:      66,609,736,704 (66.60 GB)
>> SD Errors:              0
>> SD termination status:  OK
>> Accurate:               yes
>> Bareos binary info:     bareos.org build: Get official binaries and vendor 
>> support on bareos.com
>> Termination:            Backup OK
>> 
>> 11-Jan 15:33 myth-dir JobId 21062: purged JobIds 20909 as they were 
>> consolidated into Job 21062
>> 11-Jan 15:33 myth-dir JobId 21062: Begin pruning Jobs older than 12 months .
>> 11-Jan 15:33 myth-dir JobId 21062: No Jobs found to prune.
>> 11-Jan 15:33 myth-dir JobId 21062: Begin pruning Files.
>> 11-Jan 15:33 myth-dir JobId 21062: No Files found to prune.
>> 11-Jan 15:33 myth-dir JobId 21062: End auto prune.
>> 
>> 11-Jan 15:33 myth-dir JobId 21062: console command: run AfterJob "update 
>> jobid=21062 jobtype=A"
>> 
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "bareos-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/bareos-users/AD75CB4B-DE14-4B43-B81A-EEB8DE37BA5A%40mlds-networks.com.
> 
> -- 
> Matt Rásó-Barnett
> 

