What does the job log of one of these Differential (and then allegedly unnecessary Full) backups look like when it starts? Are there any hints, e.g. "No Full Backup Found!"?

Please always keep the list in CC on your postings, so other readers can learn from the issue.

On 21.04.23 at 15:16, Dr. Thorsten Brandau wrote:
>
> As seen below:
>
> Here is the full log (it is WAAAAY too much data for a differential backup):
>
> ->->->
>
> 7-Apr 23:05 -dir JobId 136: Start Backup JobId 136, Job=FileServer_Full.2023-04-07_23.05.00_04
> 07-Apr 23:05 -dir JobId 136: Using Device "LTO9-1" to write.
> 07-Apr 23:05 -sd JobId 136: Error: 07-Apr 23:05 -sd JobId 136: Volume "000020L9" previously written, moving to end of data.
> 07-Apr 23:06 -sd JobId 136: Ready to append to end of Volume "000020L9" at file=2789.
> 07-Apr 23:06 -sd JobId 136: Spooling data ...
> 08-Apr 02:37 -sd JobId 136: User specified Job spool size reached: JobSpoolSize=3,000,000,026,157 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 02:37 -sd JobId 136: Writing spooled data to Volume. Despooling 3,000,000,026,157 bytes ...
> 08-Apr 08:14 -sd JobId 136: Despooling elapsed time = 05:36:38, Transfer rate = 148.5 M Bytes/second
> 08-Apr 08:14 -sd JobId 136: Spooling data again ...
> 08-Apr 11:40 -sd JobId 136: User specified Job spool size reached: JobSpoolSize=3,000,000,008,520 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 11:40 -sd JobId 136: Writing spooled data to Volume. Despooling 3,000,000,008,520 bytes ...
> 08-Apr 17:59 -sd JobId 136: Despooling elapsed time = 06:18:52, Transfer rate = 131.9 M Bytes/second
> 08-Apr 17:59 -sd JobId 136: Spooling data again ...
> 08-Apr 22:57 -sd JobId 136: User specified Job spool size reached: JobSpoolSize=3,000,000,059,190 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 22:57 -sd JobId 136: Writing spooled data to Volume. Despooling 3,000,000,059,190 bytes ...
> 09-Apr 05:47 -sd JobId 136: Despooling elapsed time = 06:49:38, Transfer rate = 122.0 M Bytes/second
> 09-Apr 05:47 -sd JobId 136: Spooling data again ...
> 09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool file. Disk probably full. Attempting recovery. Wanted to write=64512 got=692
> 09-Apr 10:35 -sd JobId 136: Writing spooled data to Volume. Despooling 2,999,394,019,676 bytes ...
> 09-Apr 17:35 -sd JobId 136: Despooling elapsed time = 06:59:31, Transfer rate = 119.1 M Bytes/second
> 09-Apr 20:28 -sd JobId 136: Error: Error writing header to spool file. Disk probably full. Attempting recovery. Wanted to write=64512 got=3793
> 09-Apr 20:28 -sd JobId 136: Writing spooled data to Volume. Despooling 2,999,392,366,895 bytes ...
> 10-Apr 03:33 -sd JobId 136: Despooling elapsed time = 07:04:44, Transfer rate = 117.6 M Bytes/second
> 10-Apr 06:22 -sd JobId 136: Committing spooled data to Volume "000020L9". Despooling 1,510,003,913,646 bytes ...
> 10-Apr 09:38 -sd JobId 136: Despooling elapsed time = 03:16:01, Transfer rate = 128.3 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Elapsed time=58:31:43, Transfer rate=78.27 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Sending spooled attrs to the Director. Despooling 1,885,573,459 bytes ...
> 10-Apr 09:43 -dir JobId 136: Bacula -dir 11.0.6 (10Mar22):
>   Build OS:               x86_64-suse-linux-gnu openSUSE Tumbleweed
>   JobId:                  136
>   Job: FileServer_Full.2023-04-07_23.05.00_04
>   Backup Level:           Full
>   Client:                 "-fd" 11.0.6 (10Mar22) x86_64-suse-linux-gnu,openSUSE,Tumbleweed
>   FileSet:                "Full Set" 2023-03-18 23:05:00
>   Pool:                   "Tape" (From Job resource)
>   Catalog:                "MyCatalog" (From Client resource)
>   Storage:                "AutoChangerLTO" (From Job resource)
>   Scheduled time:         07-Apr-2023 23:05:00
>   Start time:             07-Apr-2023 23:05:03
>   End time:               10-Apr-2023 09:43:52
>   Elapsed time:           2 days 10 hours 38 mins 49 secs
>   Priority:               10
>   FD Files Written:       6,600,638
>   SD Files Written:       6,600,638
>   FD Bytes Written:       16,491,080,239,221 (16.49 TB)
>   SD Bytes Written:       16,492,258,152,160 (16.49 TB)
>   Rate:                   78109.0 KB/s
>   Software Compression:   None
>   Comm Line Compression:  64.5% 2.8:1
>   Snapshot/VSS:           no
>   Encryption:             no
>   Accurate:               no
>   Volume name(s):         000020L9
>   Volume Session Id:      2
>   Volume Session Time:    1680860753
>   Last Volume Bytes:      19,288,951,428,096 (19.28 TB)
>   Non-fatal FD errors:    0
>   SD Errors:              3
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            Backup OK -- with warnings
>
> 10-Apr 09:43 -dir JobId 136: Begin pruning Jobs older than 6 months .
> 10-Apr 09:43 -dir JobId 136: No Jobs found to prune.
> 10-Apr 09:43 -dir JobId 136: Begin pruning Files.
> 10-Apr 09:43 -dir JobId 136: No Files found to prune.
> 10-Apr 09:43 -dir JobId 136: End auto prune.
>
> <-<-<-
>
> There is nothing else in the log.
>
>

Whether the backup level requested by the user (or the scheduler) can be used is decided from the catalog database; no filesystem or files are involved. There are no messages in your log about a non-existent Full backup to differentiate or increment against. I would therefore assume something is going wrong with your scheduler, for example the keyword Level is missing in your definition. Excerpt from the manual:

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 2:05
  Run = Level=Incremental mon-sat at 2:05
}
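If the goal is a monthly cycle with differentials, a schedule along these lines (resource name and times are placeholders, not taken from your configuration) tells the Director explicitly which level to run, so it only falls back to Full when the catalog really has no prior Full:

Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full 1st sun at 2:05
  Run = Level=Differential 2nd-5th sun at 2:05
  Run = Level=Incremental mon-sat at 2:05
}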

Please show the output of "status schedule" from the Bacula console.
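You can also check directly in the catalog whether a usable Full exists for the client; from bconsole, something like the following (client name is a placeholder; the joblevel filter exists in recent Director versions, otherwise plain "list jobs client=..." and reading the Level column works too):

*list jobs client=fileserver-fd jobtype=B joblevel=F

If that list is empty, every Differential will legitimately be upgraded to a Full.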

PS: Despooling seems unusually slow to me.
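To quantify "slow": a back-of-the-envelope check against LTO-9's nominal native speed (~400 MB/s), using the SD Bytes Written and Elapsed time figures from the job report above:

```python
# Sanity check of the overall transfer rate reported in the job log.
sd_bytes = 16_492_258_152_160           # SD Bytes Written from the report
elapsed_s = 58 * 3600 + 31 * 60 + 43    # "Elapsed time=58:31:43" in seconds

overall_rate = sd_bytes / elapsed_s / 1e6  # MB/s; log reports 78.27 MB/s
lto9_native = 400.0                        # MB/s, nominal LTO-9 native speed

print(f"overall: {overall_rate:.1f} MB/s "
      f"({overall_rate / lto9_native:.0%} of LTO-9 native)")
```

Even the best despool bursts (~148 MB/s) stay well below the drive's native rate, which points at the spool disk or network rather than the tape drive.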
PPS: Please do not top-post.
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
