On Mar 19, 2013, at 9:53 AM, Tmusic wrote:

> I haven't tried to run it with PyPy. The main reason is that performance is
> more than an order of magnitude below what I need, which led me to
> investigating the bottlenecks first.
Might be worth trying anyway just to collect a data point (i.e., to see how different your results are).

> My guess is that the profiler only looks at the main thread (since I'm seeing
> almost only init and sleep).
> Can you give some more information about the different threads?

Which components are you running?

> I'm seeing 5 threads (poxdesk is running):
> - Main thread (boot / init)

Yeah. This one is sort of "reserved". In Python, signals are only delivered to the main thread, so you can think of the main thread as the "signal thread". Usually it just sleeps and catches signals meant to kill the process and handles them. It can also be used for various things that demand to be on the main thread (like tkinter, which probably demands this because of signals, but I am guessing).

> - Webserver?

Possibly, if you're running the web module.

> - Two threads running recoco (What is the difference?)

One of these is the actual cooperative thread that schedules and runs Tasks. The other is a select-based IO loop. When there's IO waiting, it schedules tasks on the cooperative thread; aside from that, it sleeps. This design is meant to support additional IO loops (besides just the select one) with nothing special about any of them. In practice, this has rarely been used. The debugger branch actually merges the select IO thread and the scheduler. This is a compromise of the design, but it's very practical and will eventually get some level of support in the mainline.

> - Socket thread?

Maybe a deferred sender thread to keep sends from blocking. This is sort of a hack, but it's very rare that it actually runs.

> Maybe I should explain more clearly what is happening:
> When the controller comes under more stress, the link discovery module
> throws a timeout. Immediately afterwards it fires a link discovered event.
> I thought this was caused by the LLDP packets being delayed too much in the
> scheduler. Can this be the case?

Do you have a whole lot of packet_ins?
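To make the two-thread recoco design above concrete, here is a minimal, self-contained sketch in plain Python -- this is not actual recoco/POX code, just the shape of the idea: a select-based IO thread that, when data is waiting, hands a task to a separate cooperative scheduler thread via a queue, and otherwise sleeps.

```python
import queue
import select
import socket
import threading
import time

task_queue = queue.Queue()   # work handed from the IO loop to the scheduler
results = []                 # what the scheduled tasks produce

def scheduler():
    # Cooperative thread: runs queued tasks one at a time, in order.
    while True:
        task = task_queue.get()
        if task is None:     # sentinel: shut down
            break
        task()

def io_loop(sock, stop):
    # select-based IO loop: sleeps in select() until data is waiting, then
    # schedules a task on the cooperative thread instead of handling it here.
    while not stop.is_set():
        readable, _, _ = select.select([sock], [], [], 0.1)
        for s in readable:
            data = s.recv(4096)
            task_queue.put(lambda d=data: results.append(d))

a, b = socket.socketpair()
stop = threading.Event()
threads = [threading.Thread(target=scheduler),
           threading.Thread(target=io_loop, args=(b, stop))]
for t in threads:
    t.start()

a.sendall(b"hello")          # IO arrives; the IO loop turns it into a task
time.sleep(0.3)              # give both threads time to run

stop.set()                   # stop the IO loop
task_queue.put(None)         # stop the scheduler
for t in threads:
    t.join()
```

Nothing in the scheduler knows about select, which is the point: another IO loop (epoll, whatever) could feed the same queue without the scheduler changing.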
> However, the time needed to install new flows (based on packet_in events)
> does not seem to change, which makes me think it is a discovery-specific
> issue. Changing the delay times in the discovery module only has a minor
> impact. Any ideas?

Try sending barriers and seeing how long they take to come back.

-- Murphy
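[Editor's note: in POX the barrier test would mean sending an `ofp_barrier_request` on the connection and timing how long until the corresponding `BarrierIn` event fires -- check those names against your POX version. The underlying idea generalizes: a barrier measures how much work is queued ahead of it. A standalone sketch of that measurement against a toy scheduler queue:]

```python
import queue
import threading
import time

task_queue = queue.Queue()

def scheduler():
    # Runs queued tasks one at a time, in FIFO order.
    while True:
        task = task_queue.get()
        if task is None:
            break
        task()

sched = threading.Thread(target=scheduler)
sched.start()

# Simulate a backlog of 50 pending tasks, ~1 ms each.
for _ in range(50):
    task_queue.put(lambda: time.sleep(0.001))

# "Barrier": enqueue a marker and measure how long until it is reached --
# roughly how long everything queued ahead of it takes to run.
done = threading.Event()
start = time.monotonic()
task_queue.put(done.set)
done.wait()
barrier_rtt = time.monotonic() - start

task_queue.put(None)
sched.join()
# barrier_rtt is at least the ~50 ms of backlog queued ahead of the marker
```

If the measured round-trip grows with load the way the LLDP timeouts do, that points at scheduler backlog rather than anything discovery-specific.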