I'm writing Windows PV drivers for Xen, and Bacula is the only app I have found that can reliably trigger a certain bug in the network driver. While debugging that, I noticed that Bacula may not be sending data in the most efficient way possible...
Basically, when my drivers receive a packet, it consists of a whole chain of buffers. The packets I get from Bacula contain quite a few small chunks; e.g. one sample packet consisted of 15 four-byte and 15 twelve-byte buffers in a single 10 KB TCP packet. It's fairly easy to see the performance cost of small buffer sizes using the iperf tool. I'm not sure if Linux handles things the same way, so this might not apply there...

I think there could be some performance benefit in keeping a per-socket buffer and only sending data onto the wire when we have enough, at least for the fd->sd connection. Has this been considered before?

The system I'm seeing this on is running 2.2.8, so it's pretty old... maybe buffering has already been implemented in later versions? (A quick look at bnet.c and bsock.c doesn't reveal anything obvious though...)

Thanks

James
_______________________________________________
Bacula-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-devel
