So I used our "business logic" archive to test the performance of keeping distcc's locks on a local disk versus on an NFS mount.
I used -j 15 to build the archive, which consists of roughly 300 object files. I pointed the DISTCC_DIR environment variable at either an NFS-mounted directory or the local disk. The tests were run late at night, when no other developers were compiling. Times are in minutes and seconds.

Running with local locks, I got a first time of 5:48; this run was to load up the cache consistently. Then, running with NFS locks, I got times of 5:36 and 5:39. Returning to local locks, I got 6:49 and 5:43. I'm not really seeing a benefit either way.

I should also note that I/O performance to the NFS mount is better than I/O performance to the local disk, thanks to gigabit Ethernet and a hefty memory cache on the Network Appliance. The local disk can handle sustained writes of about 40 MB/s; the NetApp can handle 100 MB/s sustained for at least 6 GB. I wouldn't expect I/O throughput to affect locking so much as latency, though, and I don't really know how the latency figures compare.

This is enough to convince me that NFS locking isn't hurting us at PayPal, anyway. What exactly are the issues that arise elsewhere?

Michael

__
distcc mailing list
http://distcc.samba.org/
To unsubscribe or change options:
https://lists.samba.org/mailman/listinfo/distcc
