You can restore directly from jobtype=A jobs in Bareos 24; just use the jobid (this can also be done in the WebUI):
*list jobs client=share-fd
+-------+----------------+----------+---------------------+----------+------+-------+-----------+-----------------+-----------+
| jobid | name           | client   | starttime           | duration | type | level | jobfiles  | jobbytes        | jobstatus |
+-------+----------------+----------+---------------------+----------+------+-------+-----------+-----------------+-----------+
| 20517 | vf_share_month | share-fd | 2025-08-03 01:10:05 | 00:00:49 | A    | F     | 1,450,544 | 674,714,226,950 | T         |
+-------+----------------+----------+---------------------+----------+------+-------+-----------+-----------------+-----------+

*restore jobid=20517
You have selected the following JobId: 20517

Building directory tree for JobId(s) 20517 ...
++++++++++++++++++++++++++++++++++++
1,450,544 files inserted into the tree.

On Monday, 18 August 2025 at 14:29:32 UTC+2 Brock Palen wrote:
> Are you looking for the job, or are you asking "if I did a full restore
> right now, could I get it"?
>
> If you want copies of your jobs you will need to set up copy jobs for both
> your pools; Bareos will keep track of both, and you can see this with
>
> list copies
>
> Note that copies interact poorly, IMHO, with always-incremental jobs if you want
> anything other than the full.
>
> https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#copy-jobs
>
> What I did to get an off-site DR copy that works with always-incremental is
> run a monthly archive job.
>
> https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#virtual-full-jobs
>
> I found this did what I wanted. Yes, it is still effectively only a single
> full and not the history, but it is the full "right now", merging in all the
> incrementals, which better matched what I wanted. Also, Consolidate jobs
> didn't cause bad behavior (might be better now) with my vaulted
> media. These literally pile up and expire based on my volume/pool settings.
>
> The only thing is that if you want to restore from them, or add them back into the
> normal backup consolidation, you do have to manually update the job:
>
> update jobid=#### jobtype=B
>
> Brock Palen
> [email protected]
> www.mlds-networks.com
> Websites, Linux, Hosting, Joomla, Consulting
>
> > On Aug 18, 2025, at 5:32 AM, Luke Kenny <[email protected]> wrote:
> >
> > I have what I believe is a fairly standard configuration, where a host is
> > being backed up by an Incremental job that specifies an Incremental pool,
> > but also separate pools for Full and Differential. The pool that is
> > actually used is determined by the schedule.
> >
> > I would like to duplicate this job to S3 storage when it is completed,
> > for off-site redundancy. I have been able to get that up and running with a
> > Copy Job, and that works, except that I am required to specify the Pool the
> > job used, so the Job is limited to just the most recent backup from that Pool.
> >
> > I'm not sure how to get around this. Am I required to also have 3
> > separate S3 copy jobs set up? How can I ensure the correct version of the
> > S3 Copy Job is executed following the primary backup?
> >
> > Is there a method of having the Copy Job pick up the last backup created
> > by the primary Job, and choose the pool accordingly, as the primary Job
> > does?
> >
> > Here's a rundown on the config...
> >
> > Job {
> >   Name = "Snap-BW"
> >   FileSet = "BW"
> >   Messages = "Snap-Messages"
> >   Type = Backup
> >   Level = Incremental
> >   Client = snap-bw-fd
> >   Schedule = "WeeklyCycle-AM"
> >   Storage = NAS-Storage
> >   Messages = Standard
> >   Pool = Incremental
> >   Priority = 8
> >   Write Bootstrap = "/var/lib/bareos/%c.bsr"
> >   Full Backup Pool = Full-Pool
> >   Differential Backup Pool = Differential-Pool
> >   Incremental Backup Pool = Incremental-Pool
> > }
> >
> > Pool {
> >   Name = Full-Pool
> >   Pool Type = Backup
> >   Recycle = yes
> >   AutoPrune = yes
> >   Volume Retention = 1 month
> >   Maximum Volume Bytes = 50G
> >   Maximum Volumes = 100
> >   Label Format = "Full-Pool-"
> >   Next Pool = "S3-Copy-Pool"
> > }
> >
> > Pool {
> >   Name = Differential-Pool
> >   Pool Type = Backup
> >   Recycle = yes
> >   AutoPrune = yes
> >   Volume Retention = 2 weeks
> >   Maximum Volume Bytes = 10G
> >   Maximum Volumes = 100
> >   Label Format = "Diff-Pool-"
> >   Next Pool = "S3-Copy-Pool"
> > }
> >
> > Pool {
> >   Name = Incremental-Pool
> >   Pool Type = Backup
> >   Recycle = yes
> >   AutoPrune = yes
> >   Volume Retention = 1 week
> >   Maximum Volume Bytes = 1G
> >   Maximum Volumes = 100
> >   Label Format = "Inc-Pool-"
> >   Next Pool = "S3-Copy-Pool"
> > }
> >
> > Job {
> >   Name = "s3-copy"
> >   Type = Copy
> >   Messages = Standard
> >   Selection Type = Job
> >   Selection Pattern = "Snap-BW"
> >   Pool = Full-Pool
> > }
> >
> > Any advice or suggestions would be greatly appreciated. Thanks for your help!
> >
> > --
> > You received this message because you are subscribed to the Google Groups "bareos-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> > To view this discussion visit https://groups.google.com/d/msgid/bareos-users/6e187c5e-20dd-4f98-ab11-27707a5bb62en%40googlegroups.com.

--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
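For anyone landing on this thread later: one way to avoid hard-coding a single source pool into the copy job, as the `s3-copy` job above does, is to define one Copy job per source pool and use Selection Type = PoolUncopiedJobs, which selects every job in that pool that has not yet been copied. This is a sketch only (untested here), reusing the pool names from the config above; it still means three copy jobs, but each picks up whatever level the schedule actually ran:

```
# Sketch: one Copy job per source pool. PoolUncopiedJobs selects all jobs
# in the job's Pool that have not been copied yet, so the most recent
# backup is copied regardless of which level the schedule produced.
# Each Pool resource's "Next Pool = S3-Copy-Pool" decides the destination.
Job {
  Name = "s3-copy-full"
  Type = Copy
  Messages = Standard
  Selection Type = PoolUncopiedJobs
  Pool = Full-Pool
}

Job {
  Name = "s3-copy-diff"
  Type = Copy
  Messages = Standard
  Selection Type = PoolUncopiedJobs
  Pool = Differential-Pool
}

Job {
  Name = "s3-copy-inc"
  Type = Copy
  Messages = Standard
  Selection Type = PoolUncopiedJobs
  Pool = Incremental-Pool
}
```

Scheduling these shortly after the backup window (or triggering them with a Run Script in the backup job) keeps the S3 copy close behind the primary backup.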
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/bareos-users/e59919fe-2604-4f59-acf2-0c8b32496cd6n%40googlegroups.com.
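For reference, the monthly archive job Brock describes (a virtual full written to a long-term pool, then flagged as jobtype=A) follows the pattern in the linked Always Incremental docs. The following is a rough, untested sketch; Client, FileSet, and the pool names are placeholders to be replaced with your own resources:

```
# Sketch of a long-term archive virtual full, per the Always Incremental docs.
# The VirtualFull merges the existing full + incrementals into the pool named
# by Next Pool, and the console Run Script then marks the result as an
# archive job (jobtype=A) so normal consolidation leaves it alone.
Job {
  Name = "VirtualLongtermFull"
  Type = Backup
  Level = VirtualFull
  Client = snap-bw-fd          # placeholder
  FileSet = "BW"               # placeholder
  Messages = Standard
  Accurate = yes
  Pool = AI-Consolidated       # placeholder: the always-incremental consolidated pool
  Next Pool = Longterm         # placeholder: the archive/vault pool
  Run Script {
    Console = "update jobid=%i jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}
```

As noted earlier in the thread, a job archived this way can be restored directly in Bareos 24, or switched back for consolidation with `update jobid=#### jobtype=B`.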
