On Fri, Aug 31, 2007 at 10:34:48AM -0700, prasad jlv wrote:
> Dan:
>
> Sorry I did not give more details on the problem that we are trying to
> solve....

No problem.  That's why I asked!  :)

> We are in the process of implementing Simple Traversal of UDP through
> NATs -- STUN (http://en.wikipedia.org/wiki/STUN) functionality in our
> app.  To keep the port open on the NAT device (it may close the port
> in 20 secs), we plan to send a zero-byte UDP packet or a packet with
> no magic cookie to the STUN server (Solaris 10 box).  We have modified
> the STUN server code to only process packets that have the STUN magic
> cookie, but I was wondering if it can be dropped at a lower layer and
> thus not incur the overhead.  The keepalives do generate a tremendous
> amount of data, and we would like to throw them away (no response
> needed).

You should take a look at the "detangle" RFE I'm finishing up.  This is
an integrated-into-IP solution for IPsec NAT-Traversal, which uses
technology similar to STUN's.  The webrev is here:

	http://cr.opensolaris.org/~danmcd/detangle/

and while I intended this to open up NAT-Traversal to any IPsec Key
Management implementation (not just IKE, and not just our in.iked(1M)),
one could produce a variant of detangle's UDP_NAT_T_ENDPOINT option
that would be useful in STUN settings.
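To give you a feel for it, here's a minimal sketch of what enabling the
option looks like from an application, assuming UDP_NAT_T_ENDPOINT
stays a simple boolean at the IPPROTO_UDP level as in the webrev (a
STUN variant would presumably be switched on the same way):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/udp.h>	/* UDP_NAT_T_ENDPOINT, per the webrev */
    #include <stdio.h>
    #include <unistd.h>

    int
    nat_t_socket(void)
    {
            int s, on = 1;

            s = socket(AF_INET, SOCK_DGRAM, 0);
            if (s == -1) {
                    perror("socket");
                    return (-1);
            }

            /* ... bind(3SOCKET) to the NAT-T UDP port (4500) elided ... */

            /*
             * Mark the socket as a NAT-T endpoint, so IP strips the
             * non-ESP marker and consumes keepalives in the kernel
             * rather than handing every packet up to the daemon.
             */
            if (setsockopt(s, IPPROTO_UDP, UDP_NAT_T_ENDPOINT, &on,
                sizeof (on)) == -1) {
                    perror("setsockopt(UDP_NAT_T_ENDPOINT)");
                    (void) close(s);
                    return (-1);
            }
            return (s);
    }

The point is that after the setsockopt(), the kernel eats the traffic
you don't want to see; the app never pays the per-packet wakeup for it.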
> Performance is a major issue because we are sending about 200K UDP
> msgs/sec (1 msg = 56K) and the interrupts are killing the performance
> (1 whole CPU on a V890).  We are using the cassini (ce) driver on the
> V890, and I believe we have applied all known UDP performance tweaks
> (TIBCO tweaks) to it, but the performance is still a concern.

Where NAT-Traversal drops its packets, both today and with detangle,
MAY be a bit too high up the stack for you.  Keep in mind that the
interrupts will come with every packet regardless; the question is
where the packets get DISPATCHED, and perhaps avoiding the context
switch up to your app's recv/recvfrom/recvmsg() will provide enough of
a savings opportunity.

Please take a look at detangle, and see if a modification of it would
help your STUN performance.
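Until something like that exists, the userland check you've already
added needn't cost much per packet.  A minimal sketch, assuming
RFC 3489bis-style framing (the 32-bit magic cookie 0x2112A442 at byte
offset 4 of the 20-byte STUN header) and Solaris types:

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <string.h>

    #define	STUN_MAGIC_COOKIE	0x2112a442U	/* RFC 3489bis */
    #define	STUN_HDR_LEN		20

    /*
     * Return B_TRUE only for plausible STUN messages.  Zero-byte
     * keepalives and cookie-less packets are rejected up front, so
     * the caller can drop them without generating any response.
     */
    static boolean_t
    is_stun_packet(const uchar_t *buf, size_t len)
    {
            uint32_t cookie;

            if (len < STUN_HDR_LEN)
                    return (B_FALSE);

            (void) memcpy(&cookie, buf + 4, sizeof (cookie));
            return (ntohl(cookie) == STUN_MAGIC_COOKIE ?
                B_TRUE : B_FALSE);
    }

That's two loads and a compare per keepalive; the context switch to get
there is the expensive part, which is why pushing the drop into IP is
attractive.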
Dan

_______________________________________________
networking-discuss mailing list
[email protected]