Hi, I have found some behaviour that looks strange to me. I'm using Debian 9.6 and Bareos 17.2.4 from the Bareos repository (http://download.bareos.org/bareos/release/17.2/Debian_9.0/). I have these 3 jobs:
:::::::::::::: job/s_mariadb.conf ::::::::::::::
Job {
  Description = "Backup dos arquivos dor servidor mariadb"
  Name = s_mariadb
  Job Defs = jdSrvVFull
  Client = s_mariadb
}

:::::::::::::: job/s_ns.conf ::::::::::::::
Job {
  Description = "Backup dos arquivos do servidor ns"
  Name = s_ns
  Job Defs = jdSrvVFull
  Client = s_ns
}

:::::::::::::: job/copyFitasSrv.conf ::::::::::::::
Job {
  Name = copyFitasSrv
  Description = "Copia jobs full para a fita "
  Job Defs = jdAdmin
  Client = bp-bareos
  Type = Copy
  Selection Type = SqlQuery
  Schedule = aCopyFitasSrv
  Priority = 302
  Pool = pSrvIncr
  Next Pool = pFitasServ
  Selection Pattern = "
    select distinct j.jobid
    from job j, jobmedia jm, media m, pool p
    where ( p.name = 'pSrvVFull' or p.name = 'pSrvFull'
            or p.name = 'pSrvDiff' or p.name = 'pSrvIncr' )
      and p.poolid = m.poolid
      and m.mediaid = jm.mediaid
      and jm.jobid = j.jobid
      and j.type = 'B'
      and j.jobstatus = 'T'
      and j.poolid = p.poolid
      and j.jobfiles <> 0
      -- and j.readbytes <> 0
      and j.jobid not in (
        select distinct b.jobid
        from job b, job c, jobmedia jmb, jobmedia jmc,
             media mb, media mc, pool pb, pool pc
        where ( pb.name = 'pSrvVFull' or pb.name = 'pSrvFull'
                or pb.name = 'pSrvDiff' or pb.name = 'pSrvIncr' )
          and pb.poolid = mb.poolid
          and mb.mediaid = jmb.mediaid
          and jmb.jobid = b.jobid
          and pc.name = 'pFitasServ'
          and pc.poolid = mc.poolid
          and mc.mediaid = jmc.mediaid
          and jmc.jobid = c.jobid
          and b.jobid <> c.jobid
          and b.starttime = c.starttime
          and b.endtime = c.endtime
          and b.name = c.name
          and b.jobfiles = c.jobfiles
      )
    order by j.jobid;
  "
}

I run the jobs s_ns and s_mariadb and, after a while, the copy job copyFitasSrv (which copies jobs from pool pSrvVFull to pool pFitasServ; see the sqlquery sketch further below for checking the Selection Pattern by hand). This is the result:

*list jobs
Using Catalog "cat_CI"
+-------+--------------+-----------+---------------------+------+-------+----------+-------------+-----------+
| jobid | name         | client    | starttime           | type | level | jobfiles | jobbytes    | jobstatus |
+-------+--------------+-----------+---------------------+------+-------+----------+-------------+-----------+
| 1     | s_ns         | s_ns      | 2018-12-29 15:08:02 | B    | F     | 22,260   | 216,228,422 | T         |
| 5     | s_ns         | s_ns      | 2018-12-29 15:08:02 | C    | F     | 22,260   | 219,121,184 | T         |
| 2     | s_mariadb    | s_mariadb | 2018-12-29 15:10:10 | B    | F     | 19,766   | 232,436,690 | T         |
| 7     | s_mariadb    | s_mariadb | 2018-12-29 15:10:10 | C    | F     | 19,766   | 235,001,726 | T         |
| 3     | copyFitasSrv | bp-bareos | 2018-12-29 15:15:47 | c    | F     | 0        | 0           | T         |
| 4     | copyFitasSrv | bp-bareos | 2018-12-29 15:15:49 | c    | F     | 0        | 0           | T         |
| 6     | copyFitasSrv | bp-bareos | 2018-12-29 15:15:51 | c    | F     | 0        | 0           | T         |
+-------+--------------+-----------+---------------------+------+-------+----------+-------------+-----------+

*list volumes
Pool: pSrvVFull
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+--------------+---------------------+-----------+
| mediaid | volumename | volstatus | enabled | volbytes    | volfiles | volretention | recycle | slot | inchanger | mediatype    | lastwritten         | storage   |
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+--------------+---------------------+-----------+
| 1       | sfull_0001 | Append    | 1       | 455,837,104 | 0        | 2,592,000    | 1       | 0    | 0         | File_arq-srv | 2018-12-29 15:13:24 | sSrvVFull |
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+--------------+---------------------+-----------+
Pool: pFitasServ
+---------+-------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+----------------+---------------------+---------+
| mediaid | volumename  | volstatus | enabled | volbytes    | volfiles | volretention | recycle | slot | inchanger | mediatype      | lastwritten         | storage |
+---------+-------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+----------------+---------------------+---------+
| 2       | f-serv_0001 | Append    | 1       | 455,837,110 | 0        | 31,536,000   | 1       | 0    | 0         | File_arq-fitas | 2018-12-29 15:15:52 | sFitas  |
+---------+-------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+----------------+---------------------+---------+

But when I use bls to list the contents of the tapes, I get this:

# bls -j -V sfull_0001 arq-srv_01 | grep '^ Job='
 Job=s_ns.2018-12-29_15.08.00_04 Date=29-dez-2018 15:08:02 Level=F Type=B
 Job=s_mariadb.2018-12-29_15.08.04_05 Date=29-dez-2018 15:10:10 Level=F Type=B
# bls -j -V f-serv_0001 arq-fitas | grep '^ Job='
 Job=copyFitasSrv.2018-12-29_15.15.47_07 Date=29-dez-2018 15:15:49 Level=I Type=c
 Job=copyFitasSrv.2018-12-29_15.15.47_09 Date=29-dez-2018 15:15:51 Level=I Type=c

But on version 16.2.4 of Bareos (a fresh installation) from the Debian repository, I get this:

# bls -j -V sfull_0001 arq-srv_01 | grep '^ Job='
 Job=s_ns.2018-12-29_14.48.47_04 Date=29-dez-2018 14:48:50 Level=F Type=B
 Job=s_mariadb.2018-12-29_14.49.05_05 Date=29-dez-2018 14:50:59 Level=F Type=B
# bls -j -V f-serv_0001 arq-fitas | grep '^ Job='
 Job=s_ns.2018-12-29_14.48.47_04 Date=29-dez-2018 14:48:50 Level=F Type=B
 Job=s_mariadb.2018-12-29_14.49.05_05 Date=29-dez-2018 14:50:59 Level=F Type=B

On Bareos 17.2.4, the tape f-serv_0001 does not carry the names of the jobs that were copied, only the name of the copy job copyFitasSrv. Is this the expected behaviour now? From my point of view, if I ever need to move this tape to another Bareos installation and import it with bscan (or if I lose the Bareos machine and the database), it is impossible to identify the clients' jobs. I tested Bareos 18.2.4rc2 from the Bareos repository (http://download.bareos.org/bareos/release/18.2/Debian_9.0/) and it shows the same behaviour as 17.2.4. Does Bareos 17.2 have an undocumented option (the manual says nothing about this) that reverts to the 16.2 behaviour? What am I doing wrong?

Thanks.

Clistenes Angelus.
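For completeness, the kind of bscan run I have in mind for such a disaster-recovery import would look roughly like this (a minimal sketch only; the storage configuration and catalog database options depend on the local setup):

# bscan -v -s -m -V f-serv_0001 arq-fitas

Here -s stores the scanned job and file records in the catalog, -m updates the media record, and -V plus the device name select the volume f-serv_0001 on the arq-fitas device; depending on the setup, -c with the storage daemon configuration and the database options (-B, -n, -u, -P) may also be needed. On 17.2.4 the records found on this volume would apparently only name copyFitasSrv, which is exactly my concern.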
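Also, for reference, a quick way to see which job IDs the Selection Pattern of copyFitasSrv would pick is to run the same SELECT by hand from bconsole (a minimal sketch; the query text is just the Selection Pattern from job/copyFitasSrv.conf):

*sqlquery
(paste the SELECT statement from the Selection Pattern above, terminated by a semicolon)

This only shows which jobids would be selected for copying; it does not change what bls later shows on the copy volume.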
- Director Daemon configuration:

:::::::::::::: jobdefs/jdSrvVFull.conf ::::::::::::::
JobDefs {
  Name = jdSrvVFull
  Description = "Modelo de Job Incr/Diff/VirtualFull para servidores"
  Type = Backup
  Level = Incremental
  FileSet = fDebian
  Schedule = aSrvVFull
  Messages = Standard
  Priority = 213
  Max Start Delay = 8 hour
  Max Wait Time = 7 hour
  Max Run Sched Time = 12 hour
  Max Diff Interval = 1 week
  Max Virtual Full Interval = 1 month
  # daily pool
  Pool = pSrvIncr
  Incremental Backup Pool = pSrvIncr
  Differential Backup Pool = pSrvDiff
  Full Backup Pool = pSrvVFull
  Write Bootstrap = /bareos/sys/bsr/%n.bsr
  RunScript {
    Runs When = Before
    Runs On Client = Yes
    Command = "/backup/scripts/bckp-antes %l"
  }
  RunScript {
    Runs When = After
    Runs On Client = Yes
    Command = "/backup/scripts/bckp-depois %l %e"
  }
  Spool Attributes = yes
  Accurate = yes
}

:::::::::::::: pool/pFitasServ.conf ::::::::::::::
Pool {
  Name = pFitasServ
  Description = "Pool para servidores em Fitas"
  Pool Type = Backup
  Storage = sFitas
  Action On Purge = Truncate
  Recycle = yes
  Recycle Oldest Volume = yes
  Auto Prune = yes
  Label Format = "f-serv_${cF_Serv+:p/4/0/r}"
  Volume Retention = 1 year
  Volume Use Duration = 1 month
  Next Pool = pLoadServ
}

:::::::::::::: storage/sFitas.conf ::::::::::::::
Storage {
  Name = sFitas
  Description = "Storage das fitas de longa duracao"
  Address = ---
  Password = "---"
  Device = arq-fitas
  Media Type = File_arq-fitas
  Maximum Concurrent Jobs = 1
  Heartbeat Interval = 4 min
  Collect Statistics = yes
}

:::::::::::::: pool/pSrvDiff.conf ::::::::::::::
Pool {
  Name = pSrvDiff
  Description = "Pool para servidor backup Differential"
  Pool Type = Backup
  Storage = sSrvDiff
  Action On Purge = Truncate
  Recycle = yes
  Recycle Oldest Volume = yes
  Auto Prune = yes
  Label Format = "sdiff_${cSrvDiff+:p/4/0/r}"
  Volume Retention = 1 month
  Volume Use Duration = 1 week
  Next Pool = pSrvVFull
}

:::::::::::::: storage/sSrvDiff.conf ::::::::::::::
Storage {
  Name = sSrvDiff
  Description = "Storage das fitas do pool pSrvDiff"
  Address = ---
  Password = "---"
  Device = arq-srv_01
  Media Type = File_arq-srv
  Maximum Concurrent Jobs = 10
  Heartbeat Interval = 4 min
  Collect Statistics = yes
}

:::::::::::::: pool/pSrvIncr.conf ::::::::::::::
Pool {
  Name = pSrvIncr
  Description = "Pool para servidor backup Incremental"
  Pool Type = Backup
  Storage = sSrvIncr
  Action On Purge = Truncate
  Recycle = yes
  Recycle Oldest Volume = yes
  Auto Prune = yes
  Label Format = "sincr_${cSrvIncr+:p/4/0/r}"
  Volume Retention = 1 month
  Volume Use Duration = 1 week
  Next Pool = pSrvVFull
}

:::::::::::::: storage/sSrvIncr.conf ::::::::::::::
Storage {
  Name = sSrvIncr
  Description = "Storage das fitas do pool pSrvIncr"
  Address = ---
  Password = "---"
  Device = arq-srv_01
  Media Type = File_arq-srv
  Maximum Concurrent Jobs = 10
  Heartbeat Interval = 4 min
  Collect Statistics = yes
}

:::::::::::::: pool/pSrvVFull.conf ::::::::::::::
Pool {
  Name = pSrvVFull
  Description = "Pool para servidor backup Full"
  Pool Type = Backup
  Storage = sSrvVFull
  Action On Purge = Truncate
  Recycle = yes
  Recycle Oldest Volume = yes
  Auto Prune = yes
  Label Format = "sfull_${cSrvFull+:p/4/0/r}"
  Volume Retention = 1 month
  Volume Use Duration = 1 month
}

:::::::::::::: storage/sSrvVFull.conf ::::::::::::::
Storage {
  Name = sSrvVFull
  Description = "Storage das fitas do pool pSrvVFull"
  Address = ---
  Password = "---"
  Device = arq-srv_02
  Media Type = File_arq-srv
  Maximum Concurrent Jobs = 1
  Heartbeat Interval = 4 min
  Collect Statistics = yes
}

:::::::::::::: schedule/aCopyFitasSrv.conf ::::::::::::::
Schedule {
  Name = aCopyFitasSrv
Description = "Horario dos jobs de Copy Srv para as fitas" Run = daily at 6:44 } :::::::::::::: schedule/aSrvVFull.conf :::::::::::::: Schedule { Name = aSrvVFull Description = "Horario dos jobs Srv Incr/Diff/VirtualFull" # logo apos o cron-daily (logrotate) Run = level=Incremental priority=211 mon-sat at 6:31 Run = level=Incremental priority=211 1st sun at 6:31 Run = level=Differential priority=212 2nd-5th sun at 6:33 Run = level=VirtualFull priority=214 1st sun at 6:37 } :::::::::::::: fileset/fDebian.conf :::::::::::::: FileSet { Name = fDebian Ignore FileSet Changes = yes Include { Options { signature = SHA1 noatime = yes checkfilechanges = yes aclsupport = yes xattrsupport = yes compression = GZIP9 accurate = pnugms sparse = yes } File = "\\|backup/scripts/include %l" } Include { Options { signature = SHA1 noatime = yes compression = GZIP9 readfifo = yes sparse = yes } File = "\\|/backup/scripts/include-fifo %l" } Exclude { File = "\\|/backup/scripts/exclude %l" } } - Storage Daemon configuration: :::::::::::::: /etc/bareos/b-stg.d/device/arq-fitas.conf :::::::::::::: Device { Name = arq-fitas Description = "fitas em disco backup longa duracao" Archive Device = /bareos/sys/fitas/arq-fitas/ Device Type = File Media Type = File_arq-fitas RemovableMedia = no Random Access = Yes Label Media = Yes } :::::::::::::: /etc/bareos/b-stg.d/device/arq-srv_01.conf :::::::::::::: Device { Name = arq-srv_01 Description = "fitas em disco para servidores" Archive Device = /bareos/sys/fitas/arq-srv/ Device Type = File Media Type = File_arq-srv RemovableMedia = no Random Access = Yes Label Media = Yes Always Open = no Automatic Mount = yes } :::::::::::::: /etc/bareos/b-stg.d/device/arq-srv_02.conf :::::::::::::: Device { Name = arq-srv_02 Description = "fitas em disco para servidores" Archive Device = /bareos/sys/fitas/arq-srv/ Device Type = File Media Type = File_arq-srv RemovableMedia = no Random Access = Yes Label Media = Yes Always Open = no Automatic Mount = yes } -- You received this message because you are subscribed to the Google Groups "bareos-users" group. To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+unsubscr...@googlegroups.com. To post to this group, send email to bareos-users@googlegroups.com. For more options, visit https://groups.google.com/d/optout.