Interesting - I just looked at one of our Bacula servers that has some
jobs backing up data that rarely, if ever, changes. I looked at one
particular job that just did a new Virtual Full. Of all the jobs run
between this newest Virtual Full and the previous one it had done X days
ago, none of the incr/diff jobs had any files backed up. So this sounds
like your scenario, but it works fine here. In our logs, I see the
following message for the example job I mentioned:

Found 3 files to consolidate into Virtual Full.
That count includes the 3 files from the previous Full, which are the
only files in the job.

So the question is: why does it not include the file count from your
previous Full when making this calculation?
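One way to see what the catalog itself has recorded for those jobs is
something along these lines (a rough sketch only - it assumes a
PostgreSQL catalog named "bacula" that you can reach with psycopg2;
adjust dbname/user/host to match your site; the JobIds are the ones
from your Consolidating line):

# Rough sketch: show the file/byte counts the Director recorded for the
# jobs that the Virtual Full is trying to consolidate.
import psycopg2

JOBIDS = (5400, 4698, 4728, 4758, 4788, 4818)  # from "Consolidating JobIds="

conn = psycopg2.connect(dbname="bacula", user="bacula", host="localhost")
cur = conn.cursor()
cur.execute(
    "SELECT JobId, Level, JobStatus, JobFiles, JobBytes"
    " FROM Job WHERE JobId IN %s ORDER BY JobId",
    (JOBIDS,),
)
for jobid, level, status, files, nbytes in cur.fetchall():
    print(f"JobId {jobid}: level={level} status={status} "
          f"files={files} bytes={nbytes}")
cur.close()
conn.close()

If the previous Full shows a non-zero JobFiles there but the Virtual
Full still reports 0 files to consolidate, that points more at the
bootstrap-building code than at the catalog.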
The code in question is in vbackup.c - the create_bootstrap_file method,
which calls a method in bsr.c, which is where the "No files found to
read. No bootstrap file written" message is generated.
cheers,
--tom

I'm running into an issue with Virtual Full backups. I think I know
what the issue is, but I'm not 100% positive and want to bounce this
idea around...

I have a Full backup, then some number of Incrementals. Every week I
run a Virtual Full, rolling in all but the most recent 10 Incremental
backups.

My Virtual Fulls are failing with "No files found to read" / "Found 0
files to consolidate into Virtual Full":
29-Jul 02:36 bacula-dir JobId 5622: Start Virtual Backup JobId 5622, Job=Taco-Data-A.2025-07-25_21.15.00_33
29-Jul 02:36 bacula-dir JobId 5622: Consolidating JobIds=5400,4698,4728,4758,4788,4818
29-Jul 02:36 bacula-dir JobId 5622: No files found to read. No bootstrap file written.
29-Jul 02:36 bacula-dir JobId 5622: Found 0 files to consolidate into Virtual Full.
29-Jul 02:36 bacula-dir JobId 5622: Fatal error: Could not get or create the FileSet record.
What I think is happening is that the source data isn't changing (no
modified files, no new files, no deleted files), so when an Incremental
backup runs it stores nothing on the backup "tape":
05-Jul 20:19 bacula-dir JobId 4728: Start Backup JobId 4728, Job=Taco-Data-A.2025-06-30_19.10.00_42
05-Jul 20:19 bacula-dir JobId 4728: Connected to Storage "FileChanger" at si-scott.miserver.it.umich.edu:9103 with TLS
05-Jul 20:19 bacula-dir JobId 4728: Using Device "FileChanger-Dev10" to write.
05-Jul 20:19 bacula-dir JobId 4728: Connected to Client "taco" at taco.si.umich.edu:9102 with TLS
05-Jul 20:19 taco JobId 4728: Connected to Storage at si-scott.miserver.it.umich.edu:9103 with TLS
05-Jul 20:19 bacula-dir JobId 4728: Sending Accurate information to the FD.
05-Jul 20:23 bacula-sd JobId 4728: Elapsed time=00:01:42, Transfer rate=0 Bytes/second
05-Jul 20:23 bacula-sd JobId 4728: Sending spooled attrs to the Director. Despooling 0 bytes ...
05-Jul 20:23 bacula-dir JobId 4728: Bacula bacula-dir 15.0.3 (25Mar25):
  Build OS:               x86_64-pc-linux-gnu ubuntu 24.04
  JobId:                  4728
  Job:                    Taco-Data-A.2025-06-30_19.10.00_42
  Backup Level:           Incremental, since=2025-06-25 09:16:04
  Client:                 "taco" 15.0.3 (25Mar25) x86_64-pc-linux-gnu,ubuntu,22.04
  FileSet:                "Taco-Data-A" 2025-05-02 15:31:06
  Pool:                   "Taco-Incr" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "FileChanger" (From Pool resource)
  Scheduled time:         30-Jun-2025 19:10:00
  Start time:             05-Jul-2025 20:19:45
  End time:               05-Jul-2025 20:23:12
  Elapsed time:           3 mins 27 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               yes
  Volume name(s):
  Volume Session Id:      619
  Volume Session Time:    1749606283
  Last Volume Bytes:      2,295 (2.295 KB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK
So, when it comes time to consolidate, it grabs each incremental, finds
there's nothing to read, and the Virtual Full fails. I can find these
zero-file jobs in my Jobs table, but there are no joining entries in
Media or JobMedia. If I scan through the "tape" on the filesystem, I
find backup data only for the jobs which actually wrote files.
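A quick way to double-check that from the catalog side (a rough sketch
only - it assumes a PostgreSQL catalog named "bacula" reachable with
psycopg2; adjust the connection details for your site) is to list
backup jobs that have no JobMedia rows at all:

# Rough sketch: backup jobs (Type 'B') with no JobMedia rows, i.e. jobs
# that never wrote anything to a volume.
import psycopg2

conn = psycopg2.connect(dbname="bacula", user="bacula", host="localhost")
cur = conn.cursor()
cur.execute(
    "SELECT j.JobId, j.Level, j.JobFiles, j.JobBytes"
    " FROM Job j LEFT JOIN JobMedia jm ON jm.JobId = j.JobId"
    " WHERE jm.JobId IS NULL AND j.Type = 'B'"
    " ORDER BY j.JobId"
)
for jobid, level, files, nbytes in cur.fetchall():
    print(f"JobId {jobid}: level={level} files={files} bytes={nbytes} "
          "(no JobMedia rows)")
cur.close()
conn.close()

The zero-file incrementals should show up in that list, since nothing
was ever written to a volume for them.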
The end result is that I have a Full backup which keeps getting older,
Virtual Fulls which fail, and therefore no new Full backup. Once the
Full backup hits its prune date, its records get pruned from the File
table. After that, the database still knows there was a successful
Full, and incrementals still run, but now every file the incremental
comes across is "brand new" and my incremental backup is basically a
Full backup.
--
- Adaptability -- Analytical --- Ideation ---- Input ----- Belief -
-------------------------------------------------------------------
John M. Lockard | U of Michigan - School of Information
Unix Sys Admin | Suite 205 | 309 Maynard Street
jlock...@umich.edu | Ann Arbor, MI 48104-2211
www.umich.edu/~jlockard |
734-615-8776 | 734-763-9677 FAX
-------------------------------------------------------------------
- The University of Michigan will never ask you for your password -
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users