I suspect that job TBackupB is inheriting some options from the JobDefs resource DefaultJob that are right for TBackupA but wrong for TBackupB. That would explain why TBackupB runs successfully when run manually with storage and pool specified. For example:
1. Are you certain that you reloaded the director after making changes to TBackupB?

2. What happens if you add "Pool=FullB" and "Storage=StorageB" to the Job resource for TBackupB, then reload the director? I'm not certain the "Storage=" value is needed, because the pools already say which storage to use, and I think the pool's storage definition overrides all others, but it is safest to put it in.

I think Martin is correct that you should use a different media type for each storage location, something like FileA, FileB, and FileExtHD. Heed his warnings about changing your existing storage resources in place: Bacula will be unable to access previous backups made with those resources unless you edit the database the way Phil suggested.

An alternative would be to make new resources in bacula-dir.conf and bacula-sd.conf and point your jobs at those new resources. Eventually your old backups will pass their retention period, and you can safely prune and truncate the expired volumes. Note that you will not be able to prune and truncate the old volumes with Bacula unless bacula-dir.conf and bacula-sd.conf still contain definitions that can interact with the volumes you already created. Obviously you can just delete the volumes manually, if you feel comfortable doing that.

I would recommend making new resources in bacula-dir.conf and bacula-sd.conf with different media types, creating new pools and storage resources that use those media types, pointing your job definitions at the new storages and pools, and eventually using admin jobs to prune and truncate the old volumes.

Otherwise, you could make a migrate job to migrate all the old jobs and volumes to the new ones. This would involve setting up entirely parallel storage resources and copying the data from StorageA/StorageB/HDChanger to, say, Storage_A/Storage_B/HDChanger_1, where each of the new resources has a media type different from every other media type in use. This option is a lot like the one above, except you'd use a migrate job to move each backup from the old volumes to the new ones.

With all that said, I'd start by adding the pool and storage definitions from point 2 and seeing what happens. The fact that the job works when you manually specify those values at run time should be a pretty big clue. Some rough, untested sketches of what I mean follow.
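For point 2, here is roughly what I mean, untested and based only on the config you posted (keep your existing Selection Pattern; I've elided it):

  Job {
    Name=TBackupB
    JobDefs="DefaultJob"
    Type=Copy
    Pool=FullB              # source pool; its Next Pool=ExtHD picks the destination
    Storage=StorageB        # possibly redundant given the pool's Storage line, but safe
    Selection Type=SQLQuery
    Selection Pattern="..." # your existing SQL query, unchanged
  }

Then reload the director from bconsole:

  reload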
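If you go the new-resources route, the sketch below shows the general shape, with made-up names (Storage_B, BackupB2) and a made-up archive path. The important part is that each storage location gets its own Media Type, and that the Media Type string matches exactly between the director's Storage resource and the SD's Device resource:

  # bacula-dir.conf: new resource, old StorageB left in place
  Storage {
    Name=Storage_B
    Address=...
    Password="..."
    Device=BackupB2
    Media Type=FileB        # unique to this storage location
  }

  # bacula-sd.conf: the matching device
  Device {
    Name=BackupB2
    Media Type=FileB        # must match the Storage resource above
    Device Type=File
    Archive Device=/backups/b   # made-up path; use your NAS B mount
    Label Media=yes
    Random Access=yes
    Automatic Mount=yes
  }

New pools would then name Storage_B in their Storage directive, the same way FullB names StorageB today.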
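For the eventual cleanup, once the old volumes are past retention, something like the following in bconsole should prune and truncate them (volume and resource names are placeholders, and the exact truncate arguments vary by version, so check "help truncate"):

  prune volume=Full-0001 yes
  truncate volume=Full-0001 pool=FullB storage=StorageB

If your Bacula version supports the Console directive in a RunScript, you can wrap that in an admin job so it happens on a schedule, roughly:

  Job {
    Name=PruneOldVolumes    # made-up name
    Type=Admin
    JobDefs="DefaultJob"
    RunScript {
      RunsWhen=Before
      RunsOnClient=No
      Console="prune expired volume yes"   # if your version has this form
    }
  }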
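And if you prefer the migration route, a migrate job is defined much like your copy jobs. The sketch below (made-up name, untested) would move every volume in FullB to that pool's Next Pool, so while migrating you would point FullB's Next Pool at the new pool instead of ExtHD, with one such job per source pool:

  Job {
    Name=MigrateOldB        # made-up name
    JobDefs="DefaultJob"
    Type=Migrate
    Pool=FullB              # source pool; destination is its Next Pool
    Selection Type=Volume
    Selection Pattern=".*"  # regex matching every volume in the source pool
  }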
Regards,

Robert Gerber
402-237-8692
[email protected]

On Tue, Nov 4, 2025 at 2:20 AM Andrea Venturoli <[email protected]> wrote:
> Hello.
>
> I've got a system where I have two NAS storages and one HD based virtual
> autochanger.
>
> Clients do nightly backups on either NAS.
> Weekly I copy Fulls and Diffs to HDs (on the third storage).
>
> However, copy jobs from NAS A to HD work, while copy jobs from NAS B to
> HD don't.
> First they output the following warning:
>
> > Warning: Could not find a compatible Storage to read volumes in the
> > TBackupB Job definition (StorageB/File)
>
> then they ask me to mount the volume (which is on NAS B) on NAS A.
>
> I've tried and tried, but I cannot find the reason why the correct
> storage is not picked up.
> I'm pasting below an extract of my config: does anyone see what's wrong
> in it?
>
> bye & Thanks
> 	av.
>
> --------------------
> > Storage {
> >   Name=StorageA
> >   Address=...
> >   Password="..."
> >   Device=BackupA
> >   Media Type=File
> > }
> > Storage {
> >   Name=StorageB
> >   Address=...
> >   Password="..."
> >   Device=BackupB
> >   Media Type=File
> > }
> > Storage {
> >   Name=HDChanger
> >   Address=...
> >   Password="..."
> >   Device=HDChanger
> >   Media Type=File
> >   Autochanger=yes
> > }
> >
> > Pool {
> >   Name=FullA
> >   Pool Type=Backup
> >   Storage=StorageA
> >   Next Pool=ExtHD
> > }
> > Pool {
> >   Name=FullB
> >   Pool Type=Backup
> >   Storage=StorageB
> >   Next Pool=ExtHD
> > }
> > Pool {
> >   Name=ExtHD
> >   Pool Type=Backup
> >   Storage=HDChanger
> > }
> >
> > JobDefs {
> >   Name="DefaultJob"
> >   Type=Backup
> >   Level=Incremental
> >   Client=...
> >   FileSet="Catalog"
> >   Schedule="Weekly"
> >   Storage=StorageA
> > }
> >
> > Job {
> >   Name=DBackupA
> >   JobDefs="DefaultJob"
> >   Full Backup Pool=FullA
> > }
> > Job {
> >   Name=DBackupB
> >   JobDefs="DefaultJob"
> >   Full Backup Pool=FullB
> >   Storage=StorageB
> > }
> > Job {
> >   Name=TBackupA
> >   JobDefs="DefaultJob"
> >   Type=Copy
> >   Selection Type=SQLQuery
> >   Selection Pattern="SELECT * FROM (SELECT jobid,level,endtime FROM job WHERE name='DBackupA' AND level='F' AND jobstatus='T' AND poolid!=7 ORDER BY endtime DESC LIMIT 1) AS f UNION SELECT * FROM (SELECT jobid,level,endtime FROM job WHERE name='DBackupA' AND level='D' AND jobstatus='T' AND poolid!=7 AND endtime>(SELECT max(endtime) FROM job WHERE name='DBackupA' AND level='F' AND jobstatus='T' AND poolid!=7) ORDER BY endtime DESC LIMIT 1) AS d"
> > }
> > Job {
> >   Name=TBackupB
> >   JobDefs="DefaultJob"
> >   Type=Copy
> >   Selection Type=SQLQuery
> >   Selection Pattern="SELECT * FROM (SELECT jobid,level,endtime FROM job WHERE name='DBackupB' AND level='F' AND jobstatus='T' AND poolid!=7 ORDER BY endtime DESC LIMIT 1) AS f UNION SELECT * FROM (SELECT jobid,level,endtime FROM job WHERE name='DBackupB' AND level='D' AND jobstatus='T' AND poolid!=7 AND endtime>(SELECT max(endtime) FROM job WHERE name='DBackupB' AND level='F' AND jobstatus='T' AND poolid!=7) ORDER BY endtime DESC LIMIT 1) AS d"
> > }
>
> P.S.
> I also tried adding "Storage=StorageB" to TBackupB job definition, but
> it did not help.
> However it works if I run the job manually and explicitly specify
> (source) pool and storage.
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
