I'm trying to get a Copy job going. What I'm trying to do is copy recent
full backups (which are stored on disk) to tape so they can be
taken off-site.
The Copy job creates jobs, but the jobs it creates all seem to be
stuck with a JobStatus of 'C'. An example:
*llist jobid=359
Any thought to allow pool types to not need to be capitalized? Took me
several hours to track down problems when a new pool used
pool type = backup
That took me into the logs, then the database logs, and finally the
PostgreSQL create tables scripts to find the check statement.
Nearly every thi
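For reference, the value that passes the check constraint in the PostgreSQL
create-tables scripts is the capitalized form; a minimal pool sketch (the
pool name is illustrative):

```
Pool {
  Name = "offsite"
  # The catalog's check constraint only accepts capitalized values
  # such as "Backup" -- lowercase "backup" is rejected.
  Pool Type = Backup
}
```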
On 2016-05-30 7:49 AM, Marco van Wieringen wrote:
> On 05/30/16 10:24 AM, Marco van Wieringen wrote:
>> On 05/30/16 02:38 AM, Douglas K. Rand wrote:
>>> Any thought to allow pool types to not need to be capitalized? Took me
>>> several hours to track down pr
I'm running into Postgres' default 100 maximum connection limit with a
director that has maximum concurrent jobs set to 10. I could increase
the Postgres maximum, but the number of connections for the number of
jobs seems off.
It seems that when a large batch of jobs are scheduled to start at the
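If you do decide to raise the PostgreSQL limit while investigating, it is a
one-line change (200 is an arbitrary example value):

```
# postgresql.conf
max_connections = 200
```

and `SELECT count(*) FROM pg_stat_activity;` in psql shows how many
connections are actually open at any moment.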
I'm not seeing great performance when copying jobs (via the "Type =
Copy" in the Job definition) from disk to my LTO-6 tape drive. I've
verified with dd (dd bs=63k if= of=/dev/nsa0) that I can
copy a Bareos backup file off of disk to the tape drive at a sustained
160 MB/s.
This is on FreeBSD 10.2
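The dd throughput test above looks roughly like this, with /dev/zero and
/dev/null standing in for the real backup file and tape device (/dev/nsa0
on FreeBSD), since the actual paths were stripped from the message:

```shell
# Push a fixed amount of data through dd and print the throughput line.
# Substitute if=<backup file> of=/dev/nsa0 to test the real tape path.
dd if=/dev/zero of=/dev/null bs=63k count=100000 2>&1 | tail -n 1
```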
On 06/13/16 14:00, Franck Ratier wrote:
> On Monday, 13 June 2016 10:10:24 UTC+1, sylvai...@gmail.com wrote:
>> I configured copies and it works fine.
>>
>> Now, I want to perform some restoration tests but I have an issue: I
>> run "restore client= copies".
>>
>> I show correctly jobID and copyjo
I've encountered some FreeBSD specific things that could use
representation in the documentation:
* chio works much better than mtx for changers
* /dev/xpt0 must be writable by the SD for encryption
* SCSICRYPTO needs to be enabled in both the client and the server ports
If I could get some direc
I'm testing some disaster recovery scenarios and I'm having problems
with the bls and bextract commands and encrypted LTO tapes.
Running bls results in the tape looking empty except for the header:
% sudo bls -V ND /dev/nsa0
bls: butil.c:271-0 Using device: "/dev/nsa0" for reading.
26-Aug 14:
On 08/30/16 12:05, Marco van Wieringen wrote:
> On 08/26/16 10:13 PM, Douglas K. Rand wrote:
>> I'm testing some disaster recovery scenarios and I'm having problems
>> with the bls and bextract commands and encrypted LTO tapes.
>>
>> Running bls results in
I've been playing around with using Virtual Full backups for off-site
backups, and generally I really like it. But of course I have a problem.
First, a bit of background: We are doing disk to disk backups for daily
backups. I have the full backups spread across the week to even the load
and keep t
On 09/07/16 05:38, Philipp Storz wrote:
> Hello Douglas,
>
> On 02.09.2016 17:27, Douglas K. Rand wrote:
>> I've been playing around with using Virtual Full backups for off-site
>> backups, and generally I really like it. But of course I have a problem.
>>
>&
I like Bareos, I really do. Even more than Amanda. But ...
I really need to spread out my full backups and not have them all happen
on the same day. I had thought about, and actually built, a schedule
through a Puppet template that randomly spread full backups across the
first 28 days of the month
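One way to spread fulls without a random template is a small set of
per-group schedules; a sketch with invented names:

```
# Group A takes its full on the 1st Saturday, group B on the 2nd, etc.
Schedule {
  Name = "month-week1"
  Run = Full 1st sat at 21:00
  Run = Incremental sun-fri at 21:00
}
Schedule {
  Name = "month-week2"
  Run = Full 2nd sat at 21:00
  Run = Incremental sun-fri at 21:00
}
```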
On 10/7/16 4:11 AM, Jörg Steffens wrote:
On 05.10.2016 at 22:07 Douglas K. Rand wrote:
I like Bareos, I really do. Even more than Amanda. But ...
I really need to spread out my full backups and not have them all happen
on the same day. I had thought about, and actually built, a schedule
On 10/12/16 06:31, Jörg Steffens wrote:
> On 12.10.2016 at 08:45 Daniel wrote:
>> yes, /var/lib/bareos/storage/arbeitsplatz/oliver is a folder. In the
>> night to yesterday, the backup throws a fatal error. Yesterday I
>> started the backup manually and it works, the same was tonight - it
>> wor
The Bareos documentation is quite clear about when a differential or
full backup may be run:
The same Job name.
The same Client name.
The same FileSet (any change to the definition of the FileSet such as
adding or deleting a file in the Include or Exclude sections
constitutes a different File
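The documented escape hatch for this is a FileSet-level directive; a
minimal sketch (FileSet contents are illustrative):

```
FileSet {
  Name = "example-fs"
  # Suppress the automatic upgrade to Full when this FileSet changes.
  Ignore FileSet Changes = yes
  Include {
    Options { signature = MD5 }
    File = /etc
  }
}
```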
On 10/26/16 6:03 AM, Jörg Steffens wrote:
On 26.10.2016 at 09:41 Philipp Storz wrote:
Please read the fine documentation:
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirFileSetIgnore
FileSet Changes
Besides this, we had also some internal discussion about
On 10/26/16 12:08, Bruno Friedmann wrote:
> On Wednesday, 26 October 2016 09:41:25 CEST Douglas K. Rand wrote:
>> On 10/26/16 6:03 AM, Jörg Steffens wrote:
>>> Besides this, we had also some internal discussion about this.
>>> While upgrade to Full is a useful feature,
I'm wanting to experiment with 16.2's neat always incremental feature.
And my question is: Do I have to also update the clients to 16.2? Or
can I utilize always incremental by updating the director and SD to 16.2
and use 15.2 FD's?
From reading the docs it *seems* that all of the features are in
On 10/27/16 09:10, Jörg Steffens wrote:
> On 27.10.2016 at 15:08 Douglas K. Rand wrote:
>> I'm wanting to experiment with 16.2's neat always incremental feature.
>> And my question is: Do I have to also update the clients to 16.2? Or
>> can I utilize always incre
I'm a little startled at how many connections Bareos makes to Postgres.
We currently have 88 backup jobs scheduled to start each day in the
evening. The max current jobs that we run is 10. For this Bareos makes
about 100 separate connections to Postgres, which seems quite high.
I've attached
eg. bconsole or from webui) connections also cause a DB
connection each.
Regards,
Stephan
On 11/03/2016 03:58 PM, Douglas K. Rand wrote:
I'm a little startled at how many connections Bareos makes to Postgres.
We currently have 88 backup jobs scheduled to start each day in the
evening.
We are using Puppet to configure Bareos for things like adding clients.
Puppet manages the Bareos config files and re-starts Bareos when changes
happen.
The trouble is that we have Puppet running in the background and if
someone adds a client during a backup run Puppet re-starts Bareos in the
midd
> On 11/03/2016 03:58 PM, Douglas K. Rand wrote:
I'm a little startled at how many connections Bareos makes to
Postgres. We currently have 88 backup jobs scheduled to start
each day in the evening. The max current jobs that we run is 10.
For this Bareos makes about 100 separate con
On 11/7/16 9:56 PM, Tripp Donnelly wrote:
I want to use One FS = No to allow bareos to back up all mount points. This is
in the sample config, but it does not appear to have any effect. I have tried
onefs=no, One FS = No, and a couple other combinations. I poked around the code
on Github but I'
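For what it's worth, the directive belongs inside the Options block of the
FileSet, which is a common reason it appears to have no effect; a minimal
sketch:

```
FileSet {
  Name = "all-mounts"
  Include {
    Options {
      signature = MD5
      # Cross mount points instead of stopping at filesystem boundaries.
      One FS = no
    }
    File = /
  }
}
```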
On 11/03/16 19:47, Douglas K. Rand wrote:
>> On 11/03/2016 03:58 PM, Douglas K. Rand wrote:
>>> I'm a little startled at how many connections Bareos makes to
>>> Postgres. We have currently have 88 backup jobs scheduled to start
>>> each day in the evening.
I'm confused about the consolidate job with always incremental. It seems
from the documentation that there should only be one consolidate job a day:
Job {
Name = "Consolidate"
Type = "Consolidate"
Accurate = "yes"
JobDefs = "DefaultJob"
}
Perhaps I'm reading too much into the fact
On 10/13/16 03:50, Jörg Steffens wrote:
> On 12.10.2016 at 17:53 Douglas K. Rand wrote:
> [...]
>>> it seems you are not the only one having this problem:
>>> https://bugs.bareos.org/view.php?id=691
>>
>> Just a "me too": I experienced the same pr
For disaster recovery I'm pushing our catalog (along with Bareos configs
& keys) out to S3. I'm doing this as a Run After Job script and pushing
the catalog to S3 via s3cmd.
My problem is that the catalog is large and takes about 10 hours to push
to S3. What I want to do is have the push to S3
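The Run After Job hookup looks roughly like this (the dump path, s3cmd
location, and bucket name are placeholders):

```
Job {
  Name = "BackupCatalog"
  ...
  # Push the fresh catalog dump off-site once the job finishes.
  Run After Job = "/usr/local/bin/s3cmd put /var/lib/bareos/bareos.sql s3://example-bucket/catalog/"
}
```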
On 12/05/16 06:50, Robert N wrote:
> Hi,
Hello.
> I also have performance issues. Does the spooling make any difference
> by copy jobs? I mean the data is anyway already on the disk, so
> copying over to a spool ( disk ) and then to tape improves
> performance?
I thought exactly the same thing,
On 11/15/16 10:07, Douglas K. Rand wrote:
> On 10/13/16 03:50, Jörg Steffens wrote:
>> On 12.10.2016 at 17:53 Douglas K. Rand wrote:
>> [...]
>>>> it seems you are not the only one having this problem:
>>>> https://bugs.bareos.org/view.php?id=691
>>
On 12/04/16 15:41, Douglas K. Rand wrote:
> For disaster recovery I'm pushing our catalog (along with Bareos configs
> & keys) out to S3. I'm doing this as a Run After Job script and pushing
> the catalog to S3 via s3cmd.
>
> My problem is that the catalog is larg
On 01/04/17 11:10, Dan Broscoi wrote:
> Hi,
Hey!
> Is Bareos SOX compliant ? If not, any special measures can be taken
> to comply ?
I'm not a SOX expert, but I'm betting that the answer is nowhere near
as simple as "yes" or "no".
I'd think it would depend on how Bareos is configured. If you c
On 01/09/17 10:34, Joakim Jalap wrote:
> Hello Bareos Users!
Hey!
> I have some backups going to disk on my server. I would like to copy
> some jobs to tape for offsite storage. What I'm thinking of is
> having 4 tapes and doing all the copy jobs on the 1st sun of each
> month.
I've found that
On 01/10/17 08:39, Joakim Jalap wrote:
> "Douglas K. Rand" writes:
>
>> My pool looks like:
>>
>> pool {
>> # No "Label Format" disables automatic volume labeling
>> name = offsite
>> pool type = Backup # Sigh, case s
On 1/31/17 3:56 AM, Михаил Калинин wrote:
Hello all!
Hello
I have Dovecot with IMAP access. Can I back up my client's mailboxes
and mails with Bareos? Can I restore the mails to the mailbox via
console?
You only have IMAP access to the email? Not direct access to the server?
If so, I'd sugg
On 5/28/17 5:34 AM, 'Tilman Glotzner' via bareos-users wrote:
Some of the computers I want to back up are not necessarily switched
on when I run their backup jobs. So I run a bash script via "Run
before" that pings them, and if that fails, wakes them up via
etherwake. Depending on whether they were
The things I like about Bareos are numerous. But perhaps the
sharpest pinch point that we have with Bareos is the safety feature that
any change to a FileSet causes a full backup to be triggered.
From the docs:
Any change to the list of the included files will cause Bareos to
> autom
On 06/01/17 10:06, Jörg Steffens wrote:
On 01.06.2017 at 16:48 Douglas K. Rand wrote:
The things I like about Bareos are numerous. But perhaps the
sharpest pinch point that we have with Bareos is the safety feature that
any change to a FileSet causes a full backup to be triggered
I noticed that the docs for Always Incremental have
volume use duration = 23 hours
in the pool for both the AI-Incremental and AI-Consolidated pools. Which
caused me to wonder if Virtual Full and/or Consolidated jobs will read
from pool volumes in the Append state? Or will they only read fr
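The documented 23-hour value sits in the pool definitions; a sketch (the
apparent intent being that yesterday's volume has left the Append state by
the time the daily consolidation runs, though whether Append volumes are
readable is exactly the open question here):

```
Pool {
  Name = "AI-Incremental"
  Pool Type = Backup
  # A volume is closed to appends 23 hours after first use, so each
  # daily run starts a fresh volume.
  Volume Use Duration = 23 hours
}
```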
I'm having a problem where Bareos consumes all 150 of my Postgres
connections and then backup jobs start to fail with the error:
06-Aug 21:00 bareos JobId 0: Fatal error: sql_pooling.c:83 Could not
open database "bareos": ERR=postgresql.c:246 Unable to connect to
PostgreSQL server. Database=ba
On 8/26/17 10:22 PM, John wrote:
I am attempting to install a Bareos server on my FreeNAS in a FreeBSD jail.
I am stuck at § 2.4.2 of the Bareos Manual. It says I need to run the following
commands:
su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/sc
We've been switching from a normal full, differential, incremental
schedule to using the always incremental system in Bareos v16.2, and it
is simply awesome.
What I'm especially impressed with is the ease of switching from the
classic full/diff/incr scheme to always incremental. Once you have
On 8/31/17 1:32 AM, Julian Poß wrote:
However, do you know how bareos handles incremental backups? For
example I want to keep 14 restore points on disk, and one complete
week of all backups per tape set.
This (imo) will require two jobs per client, where one of them writes
to disk and the other
On 8/31/17 8:49 AM, Jon SCHEWE wrote:
On 8/31/17 8:13 AM, Douglas K. Rand wrote:
The trick that I found for having Bareos then ignore the off-site
backups for other operations is this tiny script that gets run after
each virtual full backup via:
run after job =
"/usr/meridian/share/li
On 09/01/17 04:38, Julian Poß wrote:
May i ask you how you configured this? I would also prefer creating
virtual full backups, instead of encumbering the clients again.
OK, I figured that was next. :)
We use Puppet to manage our Bareos configs, so you'll notice repeated
things (like the iden
On 9/4/17 7:44 AM, dngu...@goldkiwimedia.com wrote:
Hello,
Hello
I have configured the backup system with the
"Always-Incremental-Feature". I use the "Virtual-Full-Feature"to
archive the data.
It works perfectly!
However, I would like to use the "Virtual-Full-Feature" to archive
normal "Ful
On 9/4/17 4:02 AM, Julian Poß wrote:
Thank you a lot! :)
You bet.
I think i understand your setup. But just to make sure, let me try to
wrap it up in my own words.
Your jobs are basically all running with "incremental" job defs,
using the "always-incr" pool for incrementals and "consolidated
I seem to have a deadlock with an AI consolidation job wanting to both
read and write from the same consolidation volume.
From the Bareos director:
Running Jobs:
Console connected at 07-Sep-17 10:34
JobId Level Name Status
===
ime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles
FROM Job WHERE JobId=30176
(28 rows)
On 09/07/17 06:07, Stephan Duehr wrote:
Hi Douglas,
I would expect one DB connection per running job
any of the
parameters, look for "experimental database pooling functionality" in
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#CatalogResource
Regards,
Stephan
On 09/07/2017 07:16 PM, Douglas K. Rand wrote:
Hey Stephan. I have Maximum Concurrent Jobs set to 10 fo
On 09/08/17 08:50, John Maag wrote:
I have been watching some youtube videos of bacula setup and I see mention
to setting up the listening ports and IP addresses. When I go look at the
bareos director config file in /etc/bareos/bareos-dir.d/director I see a
much different format and no mention of
On 09/11/17 09:45, John Maag wrote:
What is the configuration necessary to get bareos to create a single file
for a single backup? I want a backup regardless of full, incremental to go
to its own file
Somehow I can't help myself: Why do you care if Bareos stores each backup in a
single file?
On 09/14/17 09:11, Jon SCHEWE wrote:
I've got the always incremental backup strategy working. It backs up to
the pool AI-Incremental and then the consolidate job goes to
AI-Consolidate.
Now I would like to add virtual full backup for offsite. Below is what I
have for configuration. I'm unsure ab
t day I added 70 more. And then the backups start failing,
consuming all 250 Postgres connections.
On 09/08/17 07:28, Douglas K. Rand wrote:
Of course. Thanks for the help.
catalog {
name = catalog
db driver = "pgsql"
db name = "bareos"
db user
I screwed up my migration to Always Incremental backups, and I'm hoping that
there is a path back for me. I had been doing a classic incremental,
differential, and full set of backups to disk. When switching to AI backups
Bareos noticed that there was a full backup available, and then just cont
On 09/15/17 10:51, Jörg Steffens wrote:
On 15.09.2017 at 17:03 Douglas K. Rand wrote:
I screwed up my migration to Always Incremental backups, and I'm hoping
that there is a path back for me. I had been doing a classic
incremental, differential, and full set of backups to disk. When
swit
On 09/15/17 11:17, Jörg Steffens wrote:
On 15.09.2017 at 17:54 Douglas K. Rand wrote:
On 09/15/17 10:51, Jörg Steffens wrote:
On 15.09.2017 at 17:03 Douglas K. Rand wrote:
I screwed up my migration to Always Incremental backups, and I'm hoping
that there is a path back for me. I had
On 09/22/17 08:57, Rick Sutphin wrote:
I recently installed bareos from the bareos repos on Ubuntu 16.04 (lxc
container).
The bareos-dir daemon will start from the command line, but will not start via
the startup scripts i.e.:
# bareos-dir
successfully starts the daemon (ps aux | grep bareos
On 09/25/17 08:25, Jon SCHEWE wrote:
On 9/14/17 9:38 AM, Douglas K. Rand wrote:
On 09/14/17 09:11, Jon SCHEWE wrote:
I've got the always incremental backup strategy working. It backs up to
the pool AI-Incremental and then the consolidate job goes to
AI-Consolidate.
Now I would like t
On 09/25/17 09:38, Jon SCHEWE wrote:
On 9/25/17 8:58 AM, Douglas K. Rand wrote:
On 09/25/17 08:25, Jon SCHEWE wrote:
On 9/14/17 9:38 AM, Douglas K. Rand wrote:
On 09/14/17 09:11, Jon SCHEWE wrote:
I've got the always incremental backup strategy working. It backs
up to
the pool AI-Increm
So, I had my screw up with Always Incremental settings
https://groups.google.com/forum/#!topic/bareos-users/rGtOEqmQAbw
And using bscan seems to help, but it is REALLY slow.
What is the procedure for having a client re-start an always incremental with
a brand new full? Is there a way of having
On 09/26/17 06:22, Jurgen Goedbloed wrote:
It seems to be related to file encryption at the client.
When I disable encryption by removing the PKI lines from bareos-fd.conf, the
backups will succeed.
I have removed the following lines:
PKI Signatures = Yes
PKI Encryption = Yes
PKI Ke
On 9/30/17 12:08 AM, Stefan Klatt wrote:
What about compression of the exported catalog in front of the
backup? Compression with zip or rar is a little better than online
compression I think.
I added this by hand quite easily. For the catalog job I just added a
2nd Run Before Job right after th
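The "by hand" version is just a compression step between the dump and the
backup; a minimal sketch (the dump path is an assumption -- adjust to
wherever your make_catalog_backup script writes):

```shell
#!/bin/sh
# Compress the catalog dump before the catalog backup job reads it.
# The default path below is hypothetical.
DUMP="${1:-/var/lib/bareos/bareos.sql}"
gzip -9 -c "$DUMP" > "$DUMP.gz"
```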
On 10/19/17 10:43, Norbert Deleutre wrote:
Can someone explain this difference, please?
Right in the manual there is an entire section on terminology that defines the
backup types, and lots of other useful Bareos terms. Section 1.7
http://doc.bareos.org/master/html/bareos-manual-main-refere
On 11/4/17 1:47 AM, Bob Farmer wrote:
Hi all, I'm testing out Bareos as a potential replacement to our
current backup software and have run into performance issues.
Wondering if any solutions are available.
Details of a fairly basic test I'm doing:
I have a Linux backup server (Director & SD)
I'm doing daily always-incremental jobs along with daily consolidation
jobs. And then on the weekends we do virtual full jobs to tape for
off-site storage.
Each of our off-site jobs generate an email with:
04-Nov 10:13 bareos JobId 0: Automatically selected Catalog: catalog
04-Nov 10:13 bareo
On 11/4/17 12:51 PM, Jörg Steffens wrote:
On 04.11.2017 at 16:54 Douglas K. Rand wrote:
I'm doing daily always-incremental jobs along with daily consolidation
jobs. And then on the weekends we do virtual full jobs to tape for
off-site storage.
Each of our off-site jobs generate an email
On 11/08/17 07:49, Norbert Deleutre wrote:
I want to move the bareos database to another directory.
Has someone already done this? And what are the steps?
What database are you using with Bareos? Postgres, MySQL, or SQLite?
How you move the database depends on which you are using.
--
You
On 11/15/17 02:59, Seitan wrote:
Hello,
I'm having issues with "Always incremental" job.
When bareos tries to consolidate backups into new full backup, I get error:
No Next Pool specification found in Pool "Consolidated".
I do not need longterm pool, - I'm okay with just Consolidated pool.
What
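If no long-term pool is wanted, one workaround is to point the pool's Next
Pool back at itself so the consolidation VirtualFull has a destination; a
sketch (whether self-reference fits your retention needs is worth
checking):

```
Pool {
  Name = "Consolidated"
  Pool Type = Backup
  # Consolidation/VirtualFull jobs write to the Next Pool; with no
  # separate long-term pool, point it back at this same pool.
  Next Pool = Consolidated
}
```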
On 11/16/17 03:15, Seitan wrote:
Here is my jobs configuration:
CLNT.conf:
Job {
Name = "CLNT_root"
Client = "CLNT"
JobDefs = "root_cons"
FileSet = "root_CLNT"
Always Incremental = yes
Accurate = yes
AlwaysIncrementalJobRetention = 6 days
AlwaysIncrementalMaxFullAge = 38 days
[Providing some context to the messages by quoting at least part of the
previous thread is helpful for people to understand.]
On 2017-11-16 at 09:52 CST Douglas Rand wrote:
Seitan, do you see how your consolidate job is using the Consolidated
pool? In your CLNT_root job you over-ride the pool to
On 12/2/17 7:42 AM, George Kontostanos wrote:
Greetings all,
Hey George.
I have an OpenBSD client that I need to backup in our Bareos environment.
I understand that there is no port/package for OpenBSD. Also the current bacula
client available is bacula-client-9.0.4
Do you know if that wou
On 12/05/17 10:09, diana sh wrote:
Is there an option to set a client on a pc without a static ip?
Furthermore, it is connecting through a router?
It's in the docs, sections 28.1 and 28.2.
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#x1-38200028.1
Is there a tutorial may
I have a virtual full job that runs fine by the schedule but I can't run
by hand. When I try to kick off the job via bconsole, it simply doesn't run:
*run job=agena-offsite
Using Catalog "catalog"
Job not run.
I get an un-helpful error in the log:
JobId 0: Fatal error: No Next Pool specificatio
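Since a Schedule's Run line can carry an override that a manual run never
sees, putting Next Pool on the Job (or Pool) resource makes manual runs
work too; a sketch with the pool name invented:

```
Job {
  Name = "agena-offsite"
  Type = Backup
  Level = VirtualFull
  ...
  # Defined on the Job so it applies to manual runs as well, not only
  # to runs started from the schedule.
  Next Pool = Offsite
}
```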
On 12/17/17 14:31, Bruno Friedmann wrote:
On Sunday, 17 December 2017 18:19:37 CET Douglas K. Rand wrote:
I have a virtual full job that runs fine by the schedule but I can't run
by hand. When I try to kick off the job via bconsole, it simply doesn't run:
*run job=agena-off
On 01/12/18 07:38, Christoph Haas wrote:
On 12.01.2018 at 11:54 'Dennis Benndorf' via bareos-users wrote:
Is there a possibility to spool to disk in case of tape robot errors? In
Amanda all backups were made and saved in a spool directory and when the
robot was back again dumps could get flus
On 01/15/18 03:51, garbag...@gmail.com wrote:
When I delete a bareos client, I use the command "bareos-dbcheck -b -D
mysql -f". The option "-b" makes it possible to use the command in batch
mode thus in non-interactive mode.
However, this command with the option "-b" asks me if I want to create a
t
On 1/30/18 12:59 AM, Norbert Deleutre wrote:
ON the same server, I want to plan multiple backup of filesystem.
How can I do that:
- One job associate for each fileset ?
That is how we do it. Multiple jobs with the same client but different
filesets.
But it does depend on what you want to
On 04/12/18 01:47, Jörg Steffens wrote:
On 11.04.2018 at 21:59 Dante F. B. Colò wrote:
Hello Folks !!
I'm trying to set up some backups using Wasabi S3 as a storage with the
droplet backend but had no success; when bareos tries to send data to
the storage it simply hangs and doesn't write anything
If you are writing to tape, then spooling is a large help, if you can afford
the space to spool entire jobs. We spool the data to disk before writing
it to LTO-6 tape.
Also, note that there are two different types of spooling: Data and
attributes. Data spooling is, like you typed, spooli
On our tape backups, which are virtual full backups, we use
spooling to increase throughput. We run 5 parallel virtual-full
backups that are spooling to disk, but only one de-spools at a time
to tape. This tends to keep the tape drive streaming more often
than not, and the parallel virtual-full ba
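The relevant knobs look roughly like this (sizes and paths invented;
despooling to a single drive is serialized by the SD itself):

```
Job {
  ...
  Spool Data = yes              # spool the whole job to disk first
}
Device {
  Name = "LTO6"
  ...
  Spool Directory = /var/spool/bareos
  Maximum Spool Size = 500 G
  # Several jobs may spool in parallel, but only one despools to the
  # drive at a time, which keeps the tape streaming.
}
```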
On 09/21/18 06:20, d.carra...@i2tic.com wrote:
> Is there any compression method with Multithreading support?.
>
> I want to backup an SQL file of +300GB, and the problem is that GZIP and
> LZ4HC at least are single threaded.
You can hack it up by having a ClientRunBeforeJob script that uses pbz
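Outside of Bareos' built-in (single-threaded) codecs, a
ClientRunBeforeJob script can pre-compress with a parallel tool; for
example xz with -T0 uses all available cores (pigz and pbzip2 are similar
parallel drop-ins for gzip/bzip2; the dump filename is a placeholder):

```shell
# Compress a large SQL dump using all available CPU cores.
xz -T0 -c dump.sql > dump.sql.xz
```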
On 1/3/19 11:46 AM, wrote:
> I run offsite full backups every 2 weeks. I want to mark them as archive
> jobs so that they aren't used as the basis for incrementals and used for
> restores. I can do this with "update jobid=%i jobtype=A" after the job.
> The question now is, will these jobs automati
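Automating the quoted `update jobid=%i jobtype=A` as a Run After Job looks
roughly like this (a sketch; %i expands to the finished job's id, and the
job name is invented):

```
Job {
  Name = "offsite-full"
  ...
  # Mark the finished job as Archive so it is not used as the basis
  # for later incrementals.
  Run After Job = "sh -c 'echo \"update jobid=%i jobtype=A\" | bconsole'"
}
```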
On 2/4/19 8:46 PM, 'Michael Stum' via bareos-users wrote:
> I have implemented a backup that does Full, Differential, and Incremental
> backups to tape, and all that is working.
>
> However, I would like to always keep a Full Backup of each client around on
> disk. And not just the last Full backu
On 2/6/19 11:29 AM, 'Michael Stum' via bareos-users wrote:
> I've done some more digging with this, and it seems that doing a backup
> to disk is saner for my scenario.
We do really love always incremental jobs. They are exceptionally light
on the clients. The consolidation jobs can be large, b
On 2/12/19 7:01 AM, Claas Goltz wrote:
> Hi, is it technically possible to spool and despool at the same
> time?
Kind of. You can have multiple backup jobs writing to the spool area at
once. This is controlled by the 'maximum concurrent jobs' in the job (I
do it in the jobdef) section. But you r
I discovered that client-side excludes seem to have broken for me. And in
looking into it the documentation seems to have changed. On doc.bareos.org it
says:
If you precede the less-than sign (<) with a backslash as in \<, the file-list
will be read on the Client machine instead of on the D
On some systems, especially ones with many millions of files, the
stat(2) work of Bareos examining files to decide if they have been
modified is by far the most expensive part. And that work is always the
same for each non-full backup.
If you have ZFS, maybe you also have access to DTrace? You could t
On 3/22/19 4:37 AM, 'Stefan Krüger' via bareos-users wrote:
> @Douglas Rand: I'm running zfs on linux, so I don't have dtrace, I can
> try strace -c
strace will likely work for you, but you'll have to reduce the data a
bit yourself. There is a Linux DTrace kernel module, but I've never
used it,
On 4/10/19 3:33 AM, Frank Brendel wrote:
> Hi,
Hey!
> during the consolidation I get messages like:
>
> Bareos: Intervention needed for
> Please mount read Volume
>
> It seems that the job wants to mount a volume that is in use by another job.
>
> Warning: stored/vol_mgr.cc:548 Ne