On 5/17/2010 11:21 AM, Hartmut Reuter wrote:
>
> Finally it works.
>
> The following changes were necessary:
>
> in afsconfig.h (or elsewhere in a header included in rx)
> #define RX_TRIMDATABUFS 1
Calls to rx_TrimDataBufs() were intentionally wrapped in #ifdef RX_TRIMDATABUFS in July 2009 because Derrick and I discovered, while profiling Rx, that large amounts of CPU time were being spent filling packets with data buffers only to tear them apart again as soon as the data was read from the wire. This process also required holding locks, which reduced concurrency.

The reason the lack of rx_TrimDataBufs() was hurting the kernel build of Rx is that the kernel build has no mechanism to allocate packets while reading data from the network if the free packet queue is empty. Therefore, there is logic in place that forces packets to be torn apart (reclaimed) whenever the free packet count reaches a low-water mark. The other side effect is that data being received is dropped on the floor; this is necessary to prevent a panic.

The Rx library does maintain a flag, rxi_NeedMorePackets, to indicate when additional packets need to be allocated, and provides a function, rx_CheckPackets(), which, when called, allocates additional packets if the flag is set. However, rx_CheckPackets() was never called after the initial rx_InitHost() call. As a result, additional packets were not being allocated when required to support the incoming or outgoing data flows.

This oversight has been corrected with commit 54bf41004b901ca090d63e239768588fa90bc806.

I would now expect UNIX cache managers to see lower CPU utilization and higher throughput with 1.5 (master) than with 1.4.12.

Jeffrey Altman
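For readers unfamiliar with the pattern being described, here is a minimal sketch of a "set a flag on the hot path, allocate later from a safe context" scheme. This is an illustration only, not the OpenAFS source: the names rxi_NeedMorePackets and rx_CheckPackets come from the mail above, but the structures, thresholds, and signatures here are assumptions.

```c
#include <assert.h>

/* Hypothetical tuning values, not taken from OpenAFS. */
#define RX_LOW_WATER  8   /* free-queue low-water mark */
#define RX_ALLOC_STEP 16  /* packets allocated per check */

static int rx_nFreePackets = 4;    /* packets on the free queue */
static int rxi_NeedMorePackets = 0; /* flag: allocation needed */

/* Receive path: consuming a free packet may drop the queue below
 * the low-water mark.  Rather than allocating here (under receive
 * locks, or in a kernel context where allocation is unsafe), just
 * record that more packets are needed. */
static void rxi_ConsumeFreePacket(void)
{
    if (--rx_nFreePackets < RX_LOW_WATER)
        rxi_NeedMorePackets = 1;
}

/* Called periodically from a safe context: allocate a batch of
 * packets only when the flag is set, then clear the flag.  If this
 * is never called after initialization, the free queue starves --
 * the bug the commit above fixes. */
static void rx_CheckPackets(void)
{
    if (rxi_NeedMorePackets) {
        rx_nFreePackets += RX_ALLOC_STEP; /* stand-in for real allocation */
        rxi_NeedMorePackets = 0;
    }
}
```

The point of the split is that the receive path stays cheap and lock-friendly, while the (potentially blocking) allocation happens only when rx_CheckPackets() is actually driven by the event loop.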