Phil,

I have a grab-bag of thoughts.

In my mind, there are 3 basic possibilities:
1. Bacula is configured in a way that constrains performance (a bandwidth
limit in the Bacula config, compression chewing up all the CPU cycles,
multiple concurrent jobs competing for the available resources, etc.)
2. Something about the systems in question is constraining performance
(bad network link, failing hard drive, etc.)
3. Bacula is malfunctioning (no examples come to mind, but it isn't
impossible)

Bacula configuration:
From your bacula-dir.conf, please show us the relevant Job, JobDefs,
Storage, Pool, Schedule, and FileSet resources. From bacula-sd.conf, please
show us the Device and Autochanger resources that the affected jobs write
to.

Also, please let us know if your jobs are running concurrently. Are all
your jobs outputting to disk volumes, or directly to tape? (I know you
mentioned tape performance, but I guess it's possible some jobs are writing
to disk volumes).

I suppose you could create a test job and a more limited test fileset for
your desktop. If writing to tape, you might want to make a pool for this
and dedicate a tape to these tests, so this data isn't mixed in with your
current data. Then run a series of full backups against different test
data and compare the rates (rough bconsole sketch below):

1. Generate a bunch of large, incompressible files (1GB each, maybe 20
files?). Run a full backup. What backup speed do you see?
2. Replace the files with the same number of files at the same sizes, but
highly compressible (all zeros). Run the backup again. Does the rate change?
3. Switch to a lot of random 4k files, with the same total size as the
large files from before. Run a full backup. Does the rate change?
4. Repeat with highly compressible 4k files (all zeros), same size and
quantity as before. Does the rate change?
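
For what it's worth, here is roughly how I'd kick off each run from
bconsole and read back the throughput afterwards (the job and pool names
here are made up, substitute whatever you call yours):

  # start a full backup of the hypothetical test job
  echo "run job=DesktopTest level=Full pool=TestPool yes" | bconsole

  # once it finishes, the job summary (via 'messages' or the job report
  # email) includes the elapsed time, bytes written, and a Rate: line
  echo "messages" | bconsole
  echo "list jobs" | bconsole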

System performance:

Here are the immediate possible bottleneck sources I can think of: Desktop
CPU, Desktop storage, Desktop <-> NAS network link, NAS storage, and/or
bacula database.

Below I will mention large, incompressible files multiple times. I
recommend generating them with something like 'dd if=/dev/urandom
of=/path/to/file1 bs=1G count=1', run multiple times (changing the output
filename each time) to build up a meaningful number of large incompressible
files. You could also use dd to generate a bunch of smaller 4k files (worse
disk performance, maybe more realistic depending on what your files are
like).
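
As a rough sketch of what I mean (the path and file counts are
placeholders, adjust to taste):

  mkdir -p /path/to/testdata
  # 20 x 1GB of incompressible data
  for i in $(seq 1 20); do
    dd if=/dev/urandom of=/path/to/testdata/random_$i bs=1M count=1024
  done
  # for the compressible variant, swap /dev/urandom for /dev/zero
  # small-file variant: lots of random 4k files (scale the count to
  # whatever total size you want to test)
  for i in $(seq 1 5000); do
    dd if=/dev/urandom of=/path/to/testdata/small_$i bs=4k count=1 status=none
  done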

Desktop CPU: check top while a backup is running from the desktop to the
NAS. Do you see a single bacula-fd process maxing out a CPU core? To my
knowledge, Bacula leaves file hashing up to the FD, and I believe that
hashing is single-core limited. File compression might be limiting FD
performance as well. If you are backing up directly to tape (i.e. not
backing up to disk and then copying the jobs from disk volumes to tape),
the conventional wisdom is that you are better off letting the tape drive
compress the data, rather than compressing it on the FD. Fewer CPU cycles
that way.
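
Something along these lines would show whether the FD is pegging a core
while the job runs (assuming the process is named bacula-fd, as it is on
the Debian-family packages):

  # watch only the FD's CPU usage
  top -p "$(pgrep -d, bacula-fd)"

  # or, if sysstat is installed, sample it every 5 seconds
  pidstat -u -p "$(pgrep -d, bacula-fd)" 5
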
Desktop storage: Try making a tar of some of your files, dumping the output
to /dev/null. For bonus points, try dumping the tar to the NAS, but I'd
still want to see the performance numbers when the output is /dev/null.
Prefer large, non-compressible files. You can use hyperfine to time and
benchmark this.
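
One wrinkle: GNU tar special-cases an archive written straight to
/dev/null and may skip actually reading the file data, so I'd pipe through
cat instead. Roughly (the path is a placeholder):

  # local read speed: tar the files and throw the stream away
  time tar cf - /path/to/testdata | cat > /dev/null

  # or with hyperfine, to average several runs
  hyperfine 'tar cf - /path/to/testdata | cat > /dev/null'

Bear in mind that second and later runs may be served from the page cache,
so the cold-cache number is the interesting one.
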
Desktop <-> NAS network link: As a basic first step, try iperf3 tests in
both directions between the desktop and the nas. What numbers are we seeing?
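
Something like this, with the server side on the NAS (hostname is a
placeholder):

  # on the NAS
  iperf3 -s

  # on the desktop: desktop -> NAS, then NAS -> desktop (reverse mode)
  iperf3 -c nas.example.lan
  iperf3 -c nas.example.lan -R
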
NAS storage: We're already seeing 100+MB/s unspool rates from SSD to tape,
but what write speeds do we see to the NAS (spinning?) disks? I would want
to test both internally on the NAS (dd from /dev/urandom to a file,
compared with dd from /dev/zero to a file), and from the NAS disks to a
fast destination, probably by dd'ing or tarring existing files and dumping
the output to /dev/null. Bonus points if the test files in the
disk -> fast-storage test are large (1GB) and contain random,
incompressible data.
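
As a sketch, assuming the data disks are mounted somewhere like /nas/pool
(conv=fdatasync is there so the number isn't just the write cache):

  # write test: compressible vs incompressible source data
  dd if=/dev/zero    of=/nas/pool/ddtest_zero   bs=1M count=4096 conv=fdatasync
  dd if=/dev/urandom of=/nas/pool/ddtest_random bs=1M count=4096 conv=fdatasync

  # read test off the spinning disks; drop the page cache first (as root)
  # so the read actually comes from disk
  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/nas/pool/ddtest_random of=/dev/null bs=1M

If /dev/urandom itself turns out to be slow on the NAS CPU, treat the
/dev/zero figure as the upper bound and take the random-data number with a
grain of salt.
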
Bacula database: I don't know how to check performance on this, but I think
it's certainly a possibility that database performance could be slowing
your backup down. I'd check elsewhere first.


Regards,
Robert Gerber
402-237-8692
[email protected]


On Fri, Nov 21, 2025 at 9:14 AM Phil Pemberton via Bacula-users <
[email protected]> wrote:

> Hi all,
>
> I've been using Bacula to back up my NAS for some time, and it's been
> working well. I see effective backup rates of about 30MB/s all told
> (100MB+ unspooling from SSD to tape, maxing out the LTO6 drive).
>
> Unfortunately when I added my desktop PC to the backup cycle, I found
> that the effective rate dropped like a rock, to single-digit megabytes
> per seconds.
>
> The machines are both fairly fast -- the desktop machine (backup source
> running the FD) is a Ryzen 5 5600X, and the server (with the SAS SSD and
> tape drive) is an Intel Core i5-9400.
> The network is gigabit end-to-end, and the transfer rates I'm seeing are
> very poor. Other applications taking the same path are much faster.
>
> Both systems are running Debian derivatives -- the desktop runs Mint
> with Bacula 13.0.4, and the server runs Ubuntu 24.04 with the same
> version of Bacula.
>
> Is there anything I can do to improve performance backing up over the
> network, before I resort to a nightly Rsync from the workstation to the
> server and backing up from there?
>
> Thanks.
> --
> Phil.
> [email protected]
> https://www.philpem.me.uk/
>
>
>
> _______________________________________________
> Bacula-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>