I would actually EXPECT tun to be more expensive, though not because of Click: the kernel path for tun packets is more expensive. (You don't mention whether the CPU time charged to Click is kernel or user.) Still, I am a little surprised by this degree of difference. I would try to narrow down its source by swapping in a different source element -- InfiniteSource + UDPIPEncap, or FromIPSummaryDump, or something else -- and seeing whether the sender's load drops. More specific info about what type of time is being wasted (user vs. system vs. interrupt) would be useful too.
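For concreteness, here's a sketch of the kind of harness I mean. The addresses, ports, payload, and file name below are invented and would need to match your setup; splice the existing VPAN encapsulation/routing path in where Discard sits:

  src :: InfiniteSource(\<00 01 02 03 04 05 06 07>, -1)
      -> UDPIPEncap(10.0.0.1, 5000, 10.0.0.2, 5000)
      -> Discard;   // replace Discard with the rest of the
                    // existing VPAN sender pipeline

  // or, to replay recorded traffic instead of synthesizing it:
  // src :: FromIPSummaryDump(trace.dump, STOP true) -> ...;

If the sender's CPU usage drops substantially once tun0 is out of the picture, that points at the tun path rather than at your encapsulation elements.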
E

Cliff Frey wrote:
> I'd be curious what the system and interrupt times were on the various
> machines. Perhaps there is more Linux kernel overhead in the different
> configurations. These numbers likely show up in top, or you can read raw
> values from /proc/stat and see how they are changing.
>
> Cliff
>
> On Sat, Mar 6, 2010 at 8:06 AM, Peter Dedecker <[email protected]> wrote:
>
>> Hi all,
>>
>> I've done some performance tests with the VPAN software (which you might
>> remember from Jeroen's nice talk at the Click Symposium ;-)) on really
>> high-performance hardware. I've disabled all the encryption to strip it
>> down to almost plain Click with some routing and simple encapsulation
>> operations.
>>
>> I have 3 types of nodes:
>> - a sender takes plain IP packets from its tun0 interface, encapsulates
>>   them and sends them out on its eth0 interface.
>> - a forwarding node takes packets from its eth0, decapsulates, looks up
>>   the next hop (as the sender also does), encapsulates the packet again,
>>   and sends it out on eth1.
>> - the receiver is the final destination of the packets; it accepts them
>>   on its eth0 interface, decapsulates them and delivers them to its tun0
>>   interface.
>>
>> All quite simple, and nearly the same operations on every node. With
>> increasing traffic, the CPU usage of the Click process increases too,
>> which seems normal. The main observation, however, across all traffic
>> ranges, is that the sender node consistently has the highest CPU load.
>> Forwarding nodes have a CPU load of only about 2/3 of the sender's,
>> while the destination clearly has the easiest job, with a CPU load of
>> only about 1/3 of a forwarder's, or even less than 20% of the sender's.
>> Some figures: the Click process has a CPU usage (measured with top) of
>> 24% at a sending node, versus 14% and 3% at forwarders and receivers.
>> All nodes have Dual-Core AMD Opteron(tm) 2212 processors running at
>> 2010.151 MHz with a 1024 KB cache, two CPUs per node. Of course, as
>> Click is single-threaded, only one core is used.
>>
>> Is accepting packets from a tun0 interface so expensive compared with
>> accepting packets from the eth0 interface? I don't think an expensive
>> headroom push for packet encapsulation can be the problem: in this
>> configuration only a 14-byte Ethernet header is added, which is less
>> than the default headroom of 28 bytes. Does anyone have an explanation
>> for this? Thanks a lot!
>>
>> Kind regards,
>>
>> --
>> ir. Peter Dedecker
>> Department of Information Technology
>> Broadband Communication Networks (IBCN)
>> Ghent University - IBBT
>> Gaston Crommenlaan 8 (Bus 201), B-9050 Gent, Belgium
>> M: +32 486152320
>> T: +32 9 33 14946 ; T Secr: +32 9 33 14900
>> F: +32 9 33 14899
>> E: [email protected]
>> W: www.ibcn.intec.UGent.be
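P.S. If you want to see where the cycles go inside Click itself, the cycle-counting elements can bracket the suspect stretch of the sender path. A sketch, assuming your Click build includes SetCycleCount/CycleCountAccum and that VPAN reads tun0 via FromDevice (adjust to however it actually does):

  FromDevice(tun0)
      -> SetCycleCount           // stamp each packet with the cycle counter
      -> EtherEncap(0x0800, 00:11:22:33:44:55, 66:77:88:99:aa:bb)
                                 // stand-in for your encap/routing elements
      -> acc :: CycleCountAccum  // accumulate cycles since the stamp
      -> Queue
      -> ToDevice(eth0);

  ControlSocket(TCP, 7777);      // then read acc's "count" and "cycles"
                                 // handlers to get cycles per packet for
                                 // just that stretch of the path

Comparing that number against the same bracket in a forwarder's config would tell you whether the extra time is inside the elements or in the tun plumbing around them.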
