We did a ce UDP performance analysis of our UV bits using MAXQ. While the 
TX throughput numbers are similar to Nevada's, CPU utilization is noticeably 
higher than on Nevada (the more UDP sessions we create, the larger the gap 
in CPU utilization).

I looked at the er_kernel data and didn't find anything obvious. Compared 
to Nevada, many more CPU cycles are spent on the ip_wsrv() path, which also 
seems to cause the hot conn_drain_list mutex contention between 
conn_drain_tail() and ip_wput_ire().

But I didn't see anything obvious that would cause ip_wsrv() to be called 
more often.
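One way to narrow this down (just a sketch, I haven't run this on these bits) 
would be to aggregate the kernel stacks that lead into ip_wsrv() with DTrace 
under load, on both UV and Nevada, and diff the top callers:

# Count kernel stacks entering ip_wsrv(); run under the same MAXQ load
# on both UV and Nevada bits, then compare the hottest stacks.
dtrace -n 'fbt::ip_wsrv:entry { @[stack()] = count(); }'

If the extra calls come from the backenable path, that should show up 
directly in the stack aggregation.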

mpstat also shows that UV has more context switches and thread migrations.

Since the code path under stress is not really the fast path of dld (it 
involves message queuing in dld, and the STREAMS backenable path 
ce->softmac->ip is a layer deeper than Nevada's ce->ip), it is not 
surprising that there are more context switches, etc. But I don't know 
whether there are other reasons that I missed.

I'd appreciate it if someone else on the team could take a look. If you are 
interested, you can find the er_kernel data in

/net/brothers.prc/export/home/er_kernel_result/new_result/

The Nevada data set is ktest.1.er.udp.5000.tx.nv.1400.er, and the UV one is 
ktest.1.er.udp.5000.tx.cv.enable.1400.er. 
These can be viewed with /ws/onnv-tools-prc/SUNWspro/SS11/bin/analyzer.

Thanks
- Cathy
