On Oct 21, 2013, at 4:58 AM, Anton Matsiuk <anton.mats...@gmail.com> wrote:

> commenting out log messages decreases the latency by 0.03 ms (on average,
> and obviously it depends on the number of log messages).

Thanks for reporting this. Was this on PyPy? That's where I'd expect the
biggest win to be -- the inefficiency of the logging module on PyPy is a
known issue.
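Incidentally, you may not need to comment the calls out: raising the log
level and guarding expensive messages avoids most of the cost. A minimal
sketch ("my_component" and expensive_detail() are just placeholders):

import logging

# Raising the level means disabled log calls return almost immediately
# ("my_component" stands in for a real logger name)
log = logging.getLogger("my_component")
log.setLevel(logging.WARNING)

def expensive_detail ():
  # Hypothetical stand-in for a costly repr / packet dump
  return "..."

# Guard messages whose arguments are expensive to construct
if log.isEnabledFor(logging.DEBUG):
  log.debug("detail: %s", expensive_detail())

If I remember right, POX's log.level component does much the same from the
command line (e.g., ./pox.py log.level --WARNING).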
> I need a task which will perform operations on packets in an infinite loop
> (with lowest priority) and which has to be interruptible by Packet_In and
> Flow_Mod async events. OpenFlow_01_Task should have the highest priority
> (I am not sure about the prioritization mechanism in the recoco scheduler,
> but in any case the controller should react to async incoming packets as
> soon as possible). Can I implement it with a recoco Task(), or
> alternatively with usual Python threads with the --unthreaded-sh option
> enabled?

Either of these should still work. Modulo bugs, --unthreaded-sh will
hopefully not make any difference to anyone, really. It'll probably become
the default.

With a Task, note that you must explicitly give up control periodically
(since Tasks are cooperative). If that's not feasible, you should use a
Thread. Or, if appropriate, use an entirely separate process (which
eliminates the Python GIL constraint).
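If you go the Task route, the shape is roughly the following -- a minimal
sketch assuming the recoco API as of betta/carp, where
process_a_few_packets() is a hypothetical bounded unit of your work:

from pox.lib.recoco import Task, Sleep

def process_a_few_packets ():
  # Hypothetical stand-in for one bounded chunk of packet processing
  pass

class PacketWorker (Task):
  def __init__ (self):
    Task.__init__(self)
    self.start()  # Register with the recoco scheduler

  def run (self):
    while True:
      process_a_few_packets()
      # Explicitly give up control so OpenFlow_01_Task and other event
      # handling don't get starved
      yield Sleep(0)

The key line is the yield: if an iteration can't be kept short, that's the
case where a Thread (or a separate process) is the better fit.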
-- Murphy

> On 16 October 2013 21:21, Murphy McCauley <murphy.mccau...@gmail.com> wrote:
> Just to summarize some off-list conversation:
>
> --unthreaded-sh makes a pretty big difference, reducing the measured time
> from 15ms to 0.2ms on CPython. PyPy does slightly worse but is still far
> better than before.
>
> The PyPy issue with carp was actually resolved a couple of days ago.
>
> -- Murphy
>
> On Oct 15, 2013, at 11:58 AM, Murphy McCauley <murphy.mccau...@gmail.com>
> wrote:
>
>> On Oct 15, 2013, at 5:15 AM, Anton Matsiuk <anton.mats...@gmail.com> wrote:
>>
>>> Dear Murphy,
>>> I am experiencing problems with large delays in processing Packet_In
>>> messages on input in POX.
>>> For testing the performance I use two different setups:
>>> · Mininet 2.0 with a single Open vSwitch running in the kernel (Ubuntu
>>> 13.04) and 2 hosts connected to it. The testbed machine is a Core i7
>>> with 8GB RAM.
>>> · A standalone hardware switch (NEC PF) with 2 hosts connected to it,
>>> and POX running on Debian.
>>> I tested it with the forwarding.l2_learning and l2_pairs modules and a
>>> simplified L2 learning module (derived from the tutorial) on the betta
>>> and carp releases. On betta I tested both the CPython and PyPy
>>> interpreters (with carp I get errors while trying to run it on PyPy).
>>
>> I'd really like to fix these errors. Are they easy to replicate? If your
>> carp is up to date, can you send me a report/stack trace/whatever?
>>
>>> In all tests I measure the delay between the timestamp when Packet_In
>>> appears on the IP interface (a dedicated loopback in the Mininet case
>>> and a separate Ethernet port in the hardware-switch case) and the
>>> timestamp when it fires the Packet_In event in the l2_learning
>>> controller. In all setups and cases this delay is about 15ms on average
>>> (but with a large deviation, from 2ms to 50ms).
>>> The processing of Packet_In and construction of a Packet_Out (or
>>> Flow_Mod) in response (all just for L2 rules) takes 0.3ms, and sending
>>> the Packet_Out (or Flow_Mod) out of the controller (until it appears on
>>> the IP interface) also takes about 0.3ms.
>>> Such a large Packet_In delay on entering POX causes the RTT of a ping
>>> between the two test hosts to increase to 50-100ms when the hard
>>> timeout of the flow rules expires (instead of 0.15ms with the rules
>>> installed in the switch).
>>> There are no other intermediate devices between the switch and POX; in
>>> both setups they have direct IP connectivity.
>>>
>>> I measure the delay as the difference of timestamps in Wireshark and in
>>> different parts of the controller code.
>>> That's why I am asking: is such a delay while listening for Packet_In
>>> normal for POX? Or are there any ways to reduce it? I would expect the
>>> overall response of POX for installing a Flow_Mod or just sending a
>>> Packet_Out to be around 1ms in the case of simple L2 rule installation.
>>
>> Yes, it's normal. Optimizing reactive use cases hasn't been a priority.
>> But we've actually wanted to address the cause that underlies the delays
>> you're seeing for other reasons anyway. I've put an experimental patch
>> in the dart branch (surprise; there's a dart branch). Run with ./pox.py
>> --unthreaded-sh to enable it. I think you'll probably see an improvement.
>>
>> -- Murphy

> --
> Best regards,
> Anton Matsiuk
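For reference, the controller-side half of the measurement described in the
quoted thread can be taken roughly like this -- a minimal sketch against the
betta/carp-era POX API, where the flood action is only a stand-in for real
l2_learning logic:

import time

from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PacketIn (event):
  t0 = time.time()
  # Stand-in for the learning-switch logic being timed: just flood the
  # packet back out
  msg = of.ofp_packet_out(data = event.ofp)
  msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
  event.connection.send(msg)
  log.debug("Handled PacketIn in %0.3f ms", (time.time() - t0) * 1000)

def launch ():
  core.openflow.addListenerByName("PacketIn", _handle_PacketIn)

This only captures handler time; the gap between the wire timestamp and the
event firing is what --unthreaded-sh is aimed at.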