Copy jobs are tricky to wrap one's head around, as you noted. I know I had
trouble at first.

The first things I'd try are:

1. The FileCP pool doesn't have a Storage resource defined. In this case I
think you'd want one pointing to File2, the storage your backup job already
writes FileCP volumes to. Without it, the FileCP pool can't express which
storage resource is appropriate for it, so when that pool is referenced the
copy control job MigrateDCIM-to-Tape can't figure out the correct read
device and falls back to the job's own Storage (TS4300), which is exactly
what your run dialog shows. It would be nice if the copy control job worked
this out by looking through the records for JobId 4, but that doesn't
appear to be happening here.
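
A minimal sketch of what I mean, reusing the directives from your own
FileCP stanza (File2 being the storage your backup job writes to):

Pool {
   Name = FileCP
   Pool Type = Backup
   Storage = File2                     # read storage for volumes in this pool
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 365 days
   Maximum Volume Bytes = 50G
   Maximum Volumes = 100
   Label Format = "VolCP-"
   Next Pool = TapeCP
}

With that in place, the run dialog should show Read Storage: File2 rather
than TS4300.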

2. The tape drive appears not to have a suitable tape labeled and
appendable. It's also possible that the misconfiguration behind all this
tied up a suitable tape while attempting to use the same resource as both
source and destination (valid if you wanted to copy tape-to-tape, but not
in your case).
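
You can check from bconsole; something along these lines (exact output
varies by version):

*list volumes pool=TapeCP
*label barcodes storage=TS4300 pool=TapeCP

The first shows whether TapeCP has any appendable volumes at all; the
second, on a barcoded library like your TS4300, labels any unlabeled tapes
it finds into the TapeCP pool.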

3. You probably want the selection criterion "PoolUncopiedJobs". I suspect
the selection criterion you've provided will copy every job matching the
mask, regardless of whether it has previously been copied. Yes, it matched
your target job correctly this time, but it could lead to duplicate copies
later.
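
Concretely, in MigrateDCIM-to-Tape that would look like this (the pattern
directive goes away, since the pool itself scopes the selection):

#  Selection Type = Job
#  Selection Pattern = ".*FileCP"
   Selection Type = PoolUncopiedJobs

With PoolUncopiedJobs the Director picks only the jobs in the job's Pool
(FileCP here) that have no copy yet, so re-running the copy job won't
duplicate work.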

4. Make sure that retention matches for the original and copy job
definitions. I had an issue where, during testing, I set up a longer
retention A for my initial source jobs (local storage). I made full backups
on the local storage file writers. At some point I decided to lower the
retention period, but I DIDN'T forcibly purge the initial full backups that
still carried the longer retention. Then I set up copy jobs to upload the
local jobs to the cloud with a shorter retention B (at that time matching
all other pools in the system, but the new shorter retention was NOT
applied to the existing jobs). The copy jobs worked and I walked away. A
few months later the cloud volume retention expired and the copies of those
first full backups were pruned from cloud storage... and each replacement
copy was then promptly pruned on completion, because a copy inherits its
retention from the start/end date of the original job, which was already
past retention B. Meanwhile, the local full backups weren't set to expire
anytime soon, so copy after copy was uploaded to the cloud before I figured
out the mistake. Oof.
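
In your case TapeCP doesn't set Volume Retention at all, so it takes the
default (365 days, which happens to match FileCP). I'd still set it
explicitly so the two pools can't drift apart later:

Pool {
   Name = TapeCP
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 365 days         # keep in step with FileCP
   Storage = TS4300
}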

5. As far as I can tell, when a copy control job like MigrateDCIM-to-Tape
launches, a selection criterion like PoolUncopiedJobs is evaluated at the
time the job is launched, even if the job then waits for higher priority
jobs to finish. In other words, if you launch your file backup jobs and
then, one minute later, launch your copy jobs at a lower priority while the
file jobs are still running, the only valid targets for the copy jobs will
be last night's file storage jobs, not the yet-to-be-completed jobs from
tonight. The solution I found was an admin job that launches the copy jobs
by echoing a run command to bconsole. The admin job CopyLauncher sits in
the queue waiting for the file jobs to finish; once activated, it
immediately spawns the copy control jobs and exits. Those copy control jobs
see all of tonight's completed jobs that are eligible for a copy, and act
accordingly. A sketch is below.
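
Here's a minimal sketch of that admin job, borrowing your job names. Two
caveats: the bconsole path is a guess based on your /opt/bacula layout, and
depending on version the parser may insist on Client/FileSet/Pool
directives even though an Admin job never uses them:

Job {
   Name = "CopyLauncher"
   Type = Admin
   Client = client-fd                  # unused by Admin jobs, but some
   FileSet = "DCIM Fileset"            # versions want them present to parse
   Pool = FileCP
   Messages = Daemon
   Schedule = "WeeklyCycle"            # queue it alongside the backups
   Priority = 20                       # higher number = lower priority, so
                                       # it waits for the Priority 10 backups
   RunScript {
      RunsWhen = Before
      RunsOnClient = no
      Command = "sh -c 'echo run job=MigrateDCIM-to-Tape yes | /opt/bacula/bin/bconsole'"
   }
}

The trailing "yes" on the run command answers the confirmation prompt, so
the copy job gets queued unattended.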


Robert Gerber
402-237-8692
[email protected]

On Sun, Dec 7, 2025, 10:02 PM Kevin Buckley <[email protected]> wrote:

> Hi there,
>
> trying to get my head around the "Migrate/Copy" examples, specifically
> a Copy, and running the jobs manually whilst testing and familiarising,
> against 15.0.3.
>
> FWIW, the names of the Pools and Jobs have been chosen so as not to
> clash with other testing components still in the config, and so are
> dedicated to this particular testing effort, hence the explicit
> addition of the JobDefs attributes, into the Job stanzas, so as to
> avoid getting any "unwanted defaults"
>
> Pool {
>    Name = FileCP
>    Pool Type = Backup
>    Recycle = yes                       # Bacula can automatically recycle Volumes
>    AutoPrune = yes                     # Prune expired volumes
>    Volume Retention = 365 days         # one year
>    Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
>    Maximum Volumes = 100               # Limit number of Volumes in Pool
>    Label Format = "VolCP-"             # Auto label
>    Next Pool = TapeCP
> }
>
>
> # TapeCP Pool definition
> Pool {
>    Name = TapeCP
>    Pool Type = Backup
>    Recycle = yes
>    AutoPrune = yes
>    Storage = TS4300
> }
>
> Job {
>    Name = "BackupDCIM-to-FileCP"
> #-JobDefs = "DefaultJob"
>    Type = Backup
>    Level = Incremental
>    Client = client-fd
>    FileSet = "DCIM Fileset"
>    Schedule = "WeeklyCycle"
>    Storage = File2
> # Messages = Standard
>    Messages = Daemon
>    Pool = FileCP
>    SpoolAttributes = yes
>    Priority = 10
>    Write Bootstrap = "/opt/bacula/working/%c.bsr"
> #-Attributes not defined in JobDefs come after
>    Enabled = yes
> }
>
>
>
> Job {
>    Name = "MigrateDCIM-to-Tape"
> #-JobDefs = "DefaultJob"
>    Type = Copy
>    Level = Full
>    Client = client-fd
>    FileSet = "DCIM Fileset"
>    Schedule = "WeeklyCycle"
>    Messages = Daemon
>    Pool = FileCP
> #-Attributes not defined in JobDefs come after
>    Storage = TS4300
>    Selection Type = Job
>    Selection Pattern = ".*FileCP"
>    Enabled = yes
> }
>
>
> I run the first job
>
> Select Job resource (1-7): 4
> Run Backup job
> JobName:  BackupDCIM-to-FileCP
> Level:    Incremental
> Client:   client-fd
> FileSet:  DCIM Fileset
> Pool:     FileCP (From Job resource)
> Storage:  File2 (From Job resource)
> When:     2025-12-04 16:24:59
> Priority: 10
> OK to run? (Yes/mod/no): Yes
> Job queued. JobId=4
> You have messages.
>
>
> and that produces a file, VolCP-0009, on the filesystem
>
> When I come to run the "Copy" however,
>
> Select Job resource (1-7): 5
> Run Copy job
> JobName:       MigrateDCIM-to-Tape
> Bootstrap:     *None*
> Client:        client-fd
> FileSet:       DCIM Fileset
> Pool:          FileCP (From Job resource)
> NextPool:      TapeCP (From Job Pool's NextPool resource)
> Read Storage:  TS4300 (From Job resource)
> Write Storage: TS4300 (From Job Pool's NextPool resource)
> JobId:         *None*
> When:          2025-12-04 16:30:09
> Catalog:       MyCatalog
> Priority:      10
> OK to run? (Yes/mod/no):Yes
> Job queued. JobId=5
> You have messages.
>
>
> it appears to be trying to "Read" from a Tape Caddy drive, rather
> than from the File2 storage, and fails with
>
> testinst-dir JobId 5: The following 1 JobId was chosen to be copied: 4
> testinst-dir JobId 5: Copying using JobId=4
> Job=BackupDCIM-to-FileCP.2025-12-04_16.25.22_03
> testinst-dir JobId 5: Start Copying JobId 5,
> Job=MigrateDCIM-to-Tape.2025-12-04_16.31.01_05
> testinst-dir JobId 5: Connected to Storage "TS4300" at
> testinst.pawsey.org.au:9103 with TLS
> testinst-dir JobId 5: Connected to Storage "TS4300" at
> testinst.pawsey.org.au:9103 with TLS
> testinst-dir JobId 5: Using Device "TS4300-Drive1" to read.
> testinst-dir JobId 6: Using Device "TS4300-Drive2" to write.
> testinst-sd JobId 6: Connected to Storage at testinst.pawsey.org.au:9103
> with TLS
> testinst-sd JobId 6: Job BackupDCIM-to-FileCP.2025-12-04_16.31.01_06 is
> waiting. Cannot find any appendable volumes.
> Please use the "label" command to create a new Volume for:
>      Storage:      "TS4300-Drive2" (/dev/IBMtape0n)
>      Pool:         TapeCP
>      Media type:   ULT3580-HH9
>
> but I can't see why the Copy Job wants to read (Read Storage) from a
> Tape Caddy drive, given that it has identified the FileCP Pool correctly.
>
> If, however, I take the TS4300 Storage attribute out of the TapeCP
> Pool stanza (in the belief that it might then just inherit the right
> target, as per the Pool), then the Copy Job doesn't run, giving
> the message:
>
> Select Job resource (1-7): 5
> Job not run.
> You have messages.
> *messages
> 08-Dec 11:27 testinst-dir JobId 0: Fatal error: No Storage specification
> found in Next Pool "TapeCP".
>
>
>
> I am assuming I have managed to mess up the Pool and/or Storage
> definitions, but what may I have missed?
>
>
> For completeness, here are the Autochanger-related stanzas:
>
> bacula-dir.conf
>
> Autochanger {
>    Name = TS4300
>    Address = testinst.pawsey.org.au
>    SDPort = 9103
> # password for Storage daemon
>    Password = "somepasswdstring"
> # Device must be same as Device in Storage daemon
>    Device = TS4300-Drive1
>    Device = TS4300-Drive2
> # Media Type must be same as MediaType in Storage daemon
>    Media Type = ULT3580-HH9
> # enable for autochanger device
>    Autochanger = TS4300
>    Maximum Concurrent Jobs = 10
> }
>
> bacula-sd.conf
>
> Autochanger {
>    Name = TS4300
>    Device = TS4300-Drive1
>    Device = TS4300-Drive2
>    Changer Command = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
>    Changer Device = /dev/sg3
> }
>
> Device {
>    Name = TS4300-Drive1
>    Drive Index = 0
>    Media Type = ULT3580-HH9
>    Archive Device = /dev/IBMtape1n
>    AutomaticMount = yes;               # when device opened, read it
>    AlwaysOpen = yes;
>    RemovableMedia = yes;
>    RandomAccess = no;
>    AutoChanger = yes
>    Control Device = /dev/sg3
>    Alert Command = "/opt/bacula/scripts/tapealert %l"
> }
>
> Device {
>    Name = TS4300-Drive2
>    Drive Index = 1
>    Media Type = ULT3580-HH9
>    Archive Device = /dev/IBMtape0n
>    AutomaticMount = yes;               # when device opened, read it
>    AlwaysOpen = yes;
>    RemovableMedia = yes;
>    RandomAccess = no;
>    AutoChanger = yes
>    Control Device = /dev/sg3
>    Alert Command = "/opt/bacula/scripts/tapealert %l"
> }
>
>
> Started thinking, from reading other parts of the Bacula docs,
> that it might be possible to target just ONE of the Autochanger
> Drives, but it's not clear how to do that, or whether it's even
> the "way to go".
>
> As I say though, I'm sure I've missed something in reading the
> docs a few times now,
> Kevin Buckley
>
>
>
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
