Hi Udo,
Your Maximum File Size is too small. If you assume 300 MB/sec, the tape
would stop every 5000/300 ≈ 17 seconds to write an EOF mark. That cannot
be fast. Please try at least the amount of data written in 3 minutes of
sustained throughput (300 MB/sec × 180 s, i.e. Maximum File Size = 54GB).
I'll try this once the current copy job is complete. When I benchmarked
with the btape speed command using file_size=5 (5 GB chunks), it reached
the ~200 MB/sec mark.
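For reference, the change I'm planning goes in the tape Device resource of
bacula-sd.conf, roughly like this (the resource name and device path below
are placeholders, not our actual values):

  Device {
    Name = "LTO8-Drive"            # placeholder name for the Quantum LTO-8 HH
    Archive Device = /dev/nst0     # placeholder; the non-rewinding device node
    Media Type = LTO-8
    Maximum File Size = 54G        # ~3 minutes of data at 300 MB/sec, as suggested
  }

I'll also redo the btape benchmark with a matching chunk size for an
apples-to-apples comparison, along these lines (the config path is just a
guess at our layout; see the btape manual for the exact speed options):

  btape -c /opt/bacula/etc/bacula-sd.conf /dev/nst0
  *speed file_size=54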
What size are your file volumes?
Pool Daily -> Incrementals + daily bacula DB dump
+------------+-----------+---------+-----------------+----------+
| volumename | volstatus | enabled | volbytes        | volfiles |
+------------+-----------+---------+-----------------+----------+
| Catalogs   | Used      |       1 | 407,715,205,671 |       94 |
| Daily-13   | Used      |       1 | 953,720,636,816 |      222 |
| Daily-10   | Append    |       1 |  79,824,581,674 |       18 |
| Daily-11   | Used      |       1 | 778,609,282,174 |      181 |
| Daily-12   | Used      |       1 | 573,366,535,214 |      133 |
| Daily-0085 | Used      |       1 | 408,215,876,761 |       95 |
| Daily-0260 | Used      |       1 | 460,077,395,998 |      107 |
+------------+-----------+---------+-----------------+----------+
Pool Weekly -> Full backups
+-------------+-----------+---------+-------------------+----------+
| volumename  | volstatus | enabled | volbytes          | volfiles |
+-------------+-----------+---------+-------------------+----------+
| Weekly-0035 | Used      |       1 | 3,224,557,692,497 |      750 |
| Weekly-0036 | Used      |       1 | 2,323,662,023,874 |      541 |
| Weekly-0037 | Append    |       1 | 5,563,811,862,205 |    1,295 |
| Weekly-0043 | Full      |       1 |               202 |        0 |
+-------------+-----------+---------+-------------------+----------+
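(In case it helps to reproduce the listings above, a similar view can be
pulled from bconsole with an SQL query along these lines; the table and
column names are from the standard catalog schema, adjust for your setup:)

  sqlquery
  SELECT VolumeName, VolStatus, Enabled, VolBytes, VolFiles
    FROM Media M JOIN Pool P ON M.PoolId = P.PoolId
    WHERE P.Name = 'Daily';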
Thank you,
- Gilles
On 4/8/25 15:13, Udo Kaune wrote:
On 08.04.25 at 14:18, Gilles Van Vlasselaer wrote:
Hi all, I'm having issues with write speeds to our LTO drive. Copy
jobs aren't going over 100 MB/sec and average around 80 MB/sec.
Using btape speed, I benchmarked performance on different tapes
(HW compression on/off, HW encryption on/off) and reached
180-220 MB/sec, depending on the test. Still, that is nowhere near the
theoretical maximum of the LTO-8 specification. But even reaching the
benchmark speeds would halve our copy job time.
Our Storage Daemon stores all full and incremental backups on file
volumes. These volumes reside on a ZFS pool. I've also benchmarked the
ZFS pool, and it easily sustains sequential reads of over 300 MB/sec.
Once a month, we issue a Copy Job to the LTO-8 tape.
Bacula DIR and SD run on the same machine, with a Quantum LTO-8 HH
drive, using Quantum MR-L8MQN-01 tapes.
Am I missing some important directives in the resources? Any help
is appreciated.
- Gilles
Hi Gilles,
Your Maximum File Size is too small. If you assume 300 MB/sec, the tape
would stop every 5000/300 ≈ 17 seconds to write an EOF mark. That cannot
be fast. Please try at least the amount of data written in 3 minutes of
sustained throughput (300 MB/sec × 180 s, i.e. Maximum File Size = 54GB).
Be aware that when doing a restore, you may have to wait for at least
one complete file (the data between file marks) to be read (3 minutes
again, in our example).
What size are your file volumes?
Best regards, Udo
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users