On a new BackupPC host, backups are very slow. I believe I have ruled out
the network connection: from a Windows machine Speedtest.net shows a
little under gigabit speed, and from the BackupPC host itself
speedtest-cli shows over 800 Mbit/s.

But writing to disk is slow, about 3 MB/s. The disk system is hardware
RAID 1 with two 4 TB SATA disks, and the file system is XFS. In fact it
is not just BackupPC: if I download large files, the speed also drops to
about 3 MB/s.
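
To rule out BackupPC itself, a raw sequential write test straight to the
array should show whether the disk path alone is slow. A minimal sketch,
assuming the array is mounted on /srv (the test file name is arbitrary):

  # sequential write with direct I/O, bypassing the page cache
  dd if=/dev/zero of=/srv/ddtest bs=1M count=4096 oflag=direct status=progress
  rm /srv/ddtest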

01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS
2108 [Liberator] (rev 05)
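
The controller being a MegaRAID SAS 2108, the logical drive's write cache
policy is worth checking: without a battery/flash-backed cache these
controllers typically run in WriteThrough mode, which makes small random
writes very slow. Assuming the MegaCli or storcli utility is installed
(the binary name varies, e.g. MegaCli64), something like:

  # show logical drive properties, including Current Cache Policy
  megacli -LDInfo -Lall -aAll
  # or with the newer storcli tool:
  storcli /c0/vall show all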

root@fuji:/srv# xfs_info /dev/sdb2
meta-data=/dev/sdb2              isize=512    agcount=4, agsize=242689472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=970757888, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=474002, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

I ran the Phoronix Test Suite; the results are at:
http://taleman.fi/Fujitsu_RX300/fuji-diskbench-2018-11-29/

Those measurements show 4 KiB block size random I/O speeds down to
1 MB/s. With larger block sizes, sequential write speeds are very good,
better than I need with a gigabit connection.
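
To reproduce the 4 KiB random write case outside the Phoronix suite, a
short fio run along these lines could be used (file name, size and
runtime are just examples):

  # 4 KiB random writes with direct I/O
  fio --name=randwrite-4k --filename=/srv/fio-test --size=1G \
      --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=16 --runtime=60 --time_based --group_reporting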

Is my setup somehow sub-optimal? Why would BackupPC write 4 KiB blocks
and use random I/O? I noticed the slowness with the first big backup;
the disk was almost empty, so BackupPC could simply stream data to disk,
which to my mind should run at disk media speed, and that should be
over 100 MB/s even on these slowish SATA disks.

I did not think disk system speed would be an issue, so I did not think
about block size, a separate log device and so on when creating the file
system. I chose XFS so I do not need to worry about running out of
inodes.

The setup is all defaults: BackupPC on Debian GNU/Linux 9.6.

Why is writing to disk so slow?

What can I do to make it faster? The current situation is annoying: I
got a gigabit connection so backups would run fast, and ended up with a
system where a 0.03 Gbit/s connection would be just as good.

Possible solutions?
===================

The host also has two SSDs, where the OS is installed. They have about
150 GB of unpartitioned space, so I could put the XFS log (journal)
there. But it looks like an external log device must be set up at mkfs
time, so I would need to xfsdump, make a better file system, and
xfsrestore.
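
A rough sketch of that migration, assuming /dev/sda4 is a new partition
on one of the SSDs used as the external log and the pool lives on /srv
(device names and the dump path are placeholders; the dump file cannot
live on /srv itself):

  # level 0 dump of the existing file system
  xfsdump -l 0 -f /path/to/srv.dump /srv

  # recreate the file system with an external log on the SSD
  umount /srv
  mkfs.xfs -f -l logdev=/dev/sda4,size=2000m /dev/sdb2

  # mount with the external log and restore the data
  mount -o logdev=/dev/sda4 /dev/sdb2 /srv
  xfsrestore -f /path/to/srv.dump /srv

The logdev= option would also have to be added to /etc/fstab afterwards.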

Bigger block size? I have not yet looked into the sizes of the files
that get backed up. The backup client hosts are web servers, e-mail
servers and name servers. The performance tests indicate small block
sizes are slow in random I/O.
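
To get a rough picture of the file size distribution, something like
this could be run on one of the client hosts (the path and the size
buckets are just examples):

  # count files per size bucket under /var
  find /var -type f -printf '%s\n' 2>/dev/null | \
    awk 'BEGIN { a=b=c=d=0 }
         { if ($1 < 4096) a++; else if ($1 < 65536) b++;
           else if ($1 < 1048576) c++; else d++ }
         END { print "<4K:", a, "4K-64K:", b, "64K-1M:", c, ">1M:", d }'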

--
Tapio Lehtonen
OSK Satatuuli

