I'm posting this to the list so people searching for FreeBSD optimizations will find it in the archives.

I finally got around to looking at why my FreeBSD server was only backing up at about 2.5 MB/sec when using tar with clients that have lots of small files.

Using my desktop (a Mac Pro) as the test subject, backups were running at about 2.5 MB/sec, or more accurately 25 files per second. The server (FreeBSD 6.2 with a 1.5 TB UFS2 RAID 10 on a 3ware card) was disk bound.

Running the ssh/tar combo from the command line, directed to /dev/null, gave close to 25 MB/sec, confirming that the bottleneck wasn't the client or the network. I'd already done the usual optimizations (soft updates, noatime). After a lot of digging I discovered vfs.ufs.dirhash_maxmem.
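For anyone who wants to reproduce that baseline test, it looks something like the following (the client hostname and path are placeholders; substitute your own):

```shell
# Stream a tar of the client's data over ssh and throw it away locally.
# This measures client + network throughput with the server's disks
# taken out of the picture.  "client" and /home are hypothetical.
ssh client 'tar cf - /home' > /dev/null
```

Wrapping it in time(1) and dividing the data size by the elapsed time gives the throughput figure.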

The UFS filesystem hashes directories to speed up lookups when a directory contains lots of files (as happens with the pool), but the maximum memory allocated to the hash is only 2 MB by default. That's way too small, and the hash buffers were thrashing on almost every pool file open.

(For those who care, sysctl -a | egrep dirhash will show the min, max, and current hash usage; if current is equal to max, you've probably got it set too small.)
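Concretely, the check looks something like this (the sysctl names come from FreeBSD's UFS dirhash code; the values shown are illustrative, not from my box):

```shell
# Show all dirhash-related sysctls
sysctl -a | egrep dirhash
# vfs.ufs.dirhash_minsize: 2560     <- smallest directory that gets hashed
# vfs.ufs.dirhash_maxmem:  2097152  <- the default cap: only 2 MB
# vfs.ufs.dirhash_mem:     2097152  <- current usage pinned at the cap
```

When dirhash_mem sits at dirhash_maxmem, hashes for some directories are being evicted and rebuilt on nearly every access.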

On my box, setting vfs.ufs.dirhash_maxmem to 128 MB using sysctl did the trick: the system is using 72 MB for the whole pool tree (2.5 million files), and backups are now running at about 10 MB/sec and 100 files a second! (The server, an old 2.6 GHz P4 box, is now compute bound.)
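To apply the same change (the value is in bytes, so 128 MB is 134217728) and have it survive a reboot, the standard FreeBSD approach is:

```shell
# Raise the dirhash cap to 128 MB at runtime (value is in bytes)
sysctl vfs.ufs.dirhash_maxmem=134217728

# Persist the setting across reboots
echo 'vfs.ufs.dirhash_maxmem=134217728' >> /etc/sysctl.conf
```

After raising the cap, watch vfs.ufs.dirhash_mem for a while; pick a maximum comfortably above where usage settles.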

John


_______________________________________________
BackupPC-users mailing list
[email protected]
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
