The iperf result doesn't look "at its best" because the Bareos Director and 
the Storage Daemon are installed on a NAS system with a 4-core Intel Atom 
2.1 GHz CPU.
On the backup server, overall CPU usage is at 30-50%.
As mentioned above, I launched a job without TLS and LZ4; the signature is 
configured as XXH128.
I can see that the NAS reports 250 MB/s on average and 300 MB/s at peak 
during the job runtime.
But I also see, for lack of a better term, speed crashes from 250 MB/s down 
to 80 MB/s. The network rate is not constantly high.
At the same time, I see this on the Bareos client screen:
 Files=1,212,590 Bytes=177,269,712,013 Bytes/sec=165,672,628 Errors=0
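For comparison with the rates the NAS reports, that Bytes/sec figure can be converted with a quick shell sanity check (the value is just copied from the status line above):

```shell
# Convert the client-reported Bytes/sec into decimal MB/s and binary MiB/s.
bps=165672628
echo "$((bps / 1000000)) MB/s"    # decimal megabytes per second -> 165 MB/s
echo "$((bps / 1048576)) MiB/s"   # binary mebibytes per second  -> 157 MiB/s
```

So the client sees roughly 165 MB/s sustained, noticeably below the 250 MB/s the NAS shows at its peaks.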

The Bareos FD is running on a server with 30 cores and 192 GB RAM; I can see 
about 75% usage of one core. Overall, the server (to be backed up) is at 
34% CPU.


So, what do you mean by "make the network perform properly"? As mentioned, I 
have some hardware limitations on the backup server side, but I think even 
those limitations shouldn't lead to numbers like these...
Should I fiddle around with sysctl settings or the maximum network buffers?
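To separate disk-read speed from network speed before touching any kernel settings, the tar-to-/dev/null test suggested below can be run on the client, and the current socket buffer limits inspected (a sketch; the path is a placeholder for your actual backup source, and the sysctl names assume a Linux client):

```shell
# Measure how fast the client can discover and read the fileset,
# independent of Bareos and the network (placeholder path).
time tar cf /dev/null /path/to/backup/source

# Inspect the current maximum socket buffer sizes on Linux.
# Note: these are kernel parameters tuned via sysctl, not systemctl.
sysctl net.core.rmem_max net.core.wmem_max
```

If the tar run is not much faster than the backup job, the bottleneck is file discovery/reading on the client rather than the network.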



Andreas Rogge wrote on Thursday, 19 September 2024 at 12:46:36 UTC+2:

> Am 18.09.24 um 19:31 schrieb Markus Dubois:
> > I'm trying to back up a directly attached client (no switch, no router 
> > in between) via a 10G network
> > [ ID] Interval           Transfer     Bitrate
> > [  5]   0.00-10.00  sec  4.02 GBytes  3.45 Gbits/sec          receiver
>
> Honestly, that looks pretty awful.
>
> I just measured two virtual machines on different hosts in different 
> networks. So the path (that probably also has some switches in between) 
> looks like this:
> VM -> Host -> Router -> Router -> Host -> VM
>
> And the result I got was
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.04  sec  10.7 GBytes  9.16 Gbits/sec          receiver
>
> So there seems to be something wrong with your network.
>
> > the backup task runs with lz4 compression over TLS
> Do you have checksums enabled in the Fileset (i.e. "Signature = md5")?
> Depending on the checksum used, this can severely impact performance.
> > 
> > but in average i get this:
> > Full Backup Job started: 18-Sep-24 15:16 Files=6,075,920 
> > Bytes=1,715,453,454,098 Bytes/sec=113,171,490 Errors=2
>
> So when I calculate the average file size on that, I get
> 1,715,453,454,098 Bytes / 6,075,920 Files = 282,336 Bytes/File or around 
> 282 KB per file on average.
> Bareos (like virtually every other file-based backup system) does not 
> perform at line-speed for very small files. This has a lot of reasons, 
> but basically boils down to reading and writing a lot of metadata 
> compared to the actual data being backed up.
>
> > this seems not the "best" transfer rate.
> agreed.
>
> > I'm trying to find a way to optimize this. Any hints?
> 1. Make your network perform properly
> 2. Use tar or cpio redirected to /dev/null to see how fast your client 
> could actually discover and read the files - Bareos will never be able 
> to outperform that value
> 3. Check the CPU-load of bareos-fd (and probably also bareos-sd) during 
> the backup, if the processes sit at 100% most of the time, your 
> performance is CPU-bound in which case you simply need to reduce that to 
> improve performance (i.e. disable compression, disable checksums or 
> maybe remove regex expressions from your fileset).
>
> Hope that helps!
>
> Best Regards,
> Andreas
> -- 
> Andreas Rogge [email protected]
> Bareos GmbH & Co. KG Phone: +49 221-630693-86
> http://www.bareos.com
>
> Registered office: Köln | Local court Köln: HRA 29646
> General partner: Bareos Verwaltungs-GmbH
> Managing directors: Stephan Dühr, Jörg Steffens, Philipp Storz
>
