Quoting r. Shirley Ma <[EMAIL PROTECTED]>:
> Subject: Re: [openib-general] Re: Re: [PATCH] IPoIB splitting CQ, increase
> both send/recv poll NUM_WC & interval
>
> Michael,
>
> "Michael S. Tsirkin" <[EMAIL PROTECTED]> wrote on 04/29/2006 03:23:51 PM:
> > Quoting r. Shirley Ma <[EMAIL PROTECTED]>:
> > > Subject: Re: [openib-general] Re: Re: [PATCH] IPoIB splitting CQ,
> > > increase both send/recv poll NUM_WC & interval
> > >
> > > Michael,
> > >
> > > smp kernel on UP result is very bad. It dropped 40% throughput.
> > > up kernel on UP: throughput dropped, with cpu idle dropping
> > > from 75% to 52%.
> >
> > Hmm. So far it seems the approach only works well on 2 CPUs.
>
> Did a clean 2.6.16 uniprocessor kernel build on both sides,
> + patch1 (splitting CQ & handler)
> + patch2 (tune CQ polling interval)
> + patch3 (use work queue in CQ handler)
> + patch4 (remove tx_ring) (rx_ring removal hasn't been done yet)
>
> Without tuning, I got a 1-3% throughput increase with an average 10%
> cpu utilization reduction on the netserver side. Without the patches,
> the netperf side runs at 100% cpu utilization.
>
> The best result I got so far with tuning: a 25% throughput increase
> + 2-5% cpu utilization saving on the netperf side.
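(Aside, for anyone following the thread: below is a rough sketch of what
the split-CQ-plus-workqueue scheme of patches 1 and 3 amounts to. This is
my own illustration against the 2.6.16-era verbs and workqueue APIs, not
the actual patch; the xxx_ names and XXX_NUM_WC are made up.)

/*
 * Rough sketch only: separate send and receive CQs, with receive
 * completions drained from a work queue instead of directly in the
 * interrupt-driven event handler.
 */
#include <linux/err.h>
#include <linux/workqueue.h>
#include <rdma/ib_verbs.h>

#define XXX_NUM_WC 32

struct xxx_priv {
        struct ib_device   *ca;
        struct ib_cq       *send_cq;
        struct ib_cq       *recv_cq;
        struct work_struct  poll_task;
        struct ib_wc        wc[XXX_NUM_WC];
};

/* Runs in process context off the work queue. */
static void xxx_recv_poll_task(void *priv_ptr)
{
        struct xxx_priv *priv = priv_ptr;
        int n, i;

repoll:
        do {
                n = ib_poll_cq(priv->recv_cq, XXX_NUM_WC, priv->wc);
                for (i = 0; i < n; ++i) {
                        /* handle priv->wc[i]: repost the receive
                         * buffer, hand the skb up the stack, etc. */
                }
        } while (n == XXX_NUM_WC);

        /*
         * CQ looks empty: rearm, then poll once more to catch a
         * completion that slipped in between the last poll and the
         * rearm (such a completion generates no event).
         */
        ib_req_notify_cq(priv->recv_cq, IB_CQ_NEXT_COMP);
        if (ib_poll_cq(priv->recv_cq, 1, priv->wc) == 1) {
                /* handle priv->wc[0] as above, then keep draining */
                goto repoll;
        }
}

/* Interrupt context: do no work here, just kick the work queue. */
static void xxx_recv_comp_handler(struct ib_cq *cq, void *ctx)
{
        struct xxx_priv *priv = ctx;

        schedule_work(&priv->poll_task);
}

static int xxx_create_cqs(struct xxx_priv *priv, int size)
{
        /* 2.6.16-era ib_create_cq(): no comp_vector argument yet. */
        priv->recv_cq = ib_create_cq(priv->ca, xxx_recv_comp_handler,
                                     NULL, priv, size);
        if (IS_ERR(priv->recv_cq))
                return PTR_ERR(priv->recv_cq);

        /* The send CQ is separate and can be polled from the send
         * path without ever taking a completion interrupt. */
        priv->send_cq = ib_create_cq(priv->ca, NULL, NULL, priv, size);
        if (IS_ERR(priv->send_cq)) {
                ib_destroy_cq(priv->recv_cq);
                return PTR_ERR(priv->send_cq);
        }

        /* 2.6.16-era INIT_WORK() still takes a data pointer. */
        INIT_WORK(&priv->poll_task, xxx_recv_poll_task, priv);

        return ib_req_notify_cq(priv->recv_cq, IB_CQ_NEXT_COMP);
}

Patch 2's interval tuning would then presumably be a matter of replacing
schedule_work() with schedule_delayed_work(), so completions are drained
in batches rather than per event.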
Is the difference from the previous result due to the tx_ring removal?

> > > I didn't see a latency difference. I used the TCP_RR test.
> >
> > This is somewhat surprising, isn't it? One would expect the extra
> > context switch to have some effect on latency, would one not?
>
> I got around a 4% latency decrease on UP, with less cpu utilization.

You mean, latency actually got better? If so, that is surprising.

--
MST
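P.S. For anyone who wants to reproduce the latency numbers: TCP_RR is
netperf's request/response test, and a typical invocation (illustrative
options, not necessarily the exact ones used above) is

        netperf -H <server-ip> -t TCP_RR -l 60 -- -r 1,1

where -l sets the test length in seconds and -r sets the request and
response sizes to 1 byte each, so the reported transaction rate is
dominated by round-trip latency rather than bandwidth.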
