Re: [pox-dev] Profiling Pox

2013-04-21 Thread Tmusic
The core modification is removed! Thank you for the suggestion!


2013/4/18 Murphy McCauley murphy.mccau...@gmail.com


 On Apr 17, 2013, at 2:20 PM, Tmusic wrote:

 Sorry for the extremely late reply!
 Thank you very much for the example code!!


 No problem; glad it was helpful.

 I've also finally cleaned up my profiling code and pushed it to github:
 https://github.com/Timmmy/pox in the betta branch (sorry that it took so
 long). Besides the profiler wrapper, it also contains an autoquit module
 that quits pox after a specified amount of time (I use it for automatic
 experiments).


 Cool, thanks for making this public!

 I commented on the modification to core.  If you can remove that tweak, it
 could be maintained as its own project in its own repository and could be
 cloned into POX's ext directory.  Just a thought. :)

 -- Murphy



Re: [pox-dev] Profiling Pox

2013-04-17 Thread Tmusic
Sorry for the extremely late reply!
Thank you very much for the example code!!

Concerning the queuing: ok, then we're on the same wavelength :)

Over the last couple of weeks, I've been working with your example and
implemented a priority queue. So far it works excellently!

I was able to improve performance even further by using multiprocessing,
performing intensive calculations and some database IO in a separate
process.
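The offloading described above can be sketched in plain Python with the standard multiprocessing module (this is an illustrative sketch, not the actual code from the repository; the function names are made up):

```python
# Sketch: hand CPU-heavy work to a small pool of worker processes so the
# main (cooperative) thread is never stalled by it.
from multiprocessing import Pool

def expensive_calculation(item):
    # Placeholder for intensive computation or database IO done out-of-process.
    return sum(i * i for i in range(item))

def process_batch(items, workers=2):
    """Fan a batch of work items out to a process pool and collect results."""
    with Pool(processes=workers) as pool:
        return pool.map(expensive_calculation, items)
```

Because the work runs in separate processes, it sidesteps the GIL entirely, at the cost of pickling the work items across process boundaries.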

I've also finally cleaned up my profiling code and pushed it to github:
https://github.com/Timmmy/pox in the betta branch (sorry that it took so
long). Besides the profiler wrapper, it also contains an autoquit module
that quits pox after a specified amount of time (I use it for automatic
experiments).
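As a rough idea of what an autoquit-style helper does (illustrative only; the real module lives in the repository above and hooks into POX's shutdown rather than using a bare timer):

```python
# Sketch: run a shutdown callback once, after a fixed delay, on a daemon
# timer thread, so long-running experiments terminate automatically.
import threading

def schedule_quit(seconds, quit_fn):
    """Invoke quit_fn once after `seconds`; returns the Timer so it can be cancelled."""
    timer = threading.Timer(seconds, quit_fn)
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer
```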

Kind regards and sorry again for the late reply,

Tim



2013/3/26 Murphy McCauley murphy.mccau...@gmail.com

 I've pushed recoco.consumer to the carp branch, which has a couple of
 generic consumer classes.  Using those, you can write a simple component like:

 from pox.core import core
 from pox.lib.recoco.consumer import BaseConsumer
 import pox.openflow.libopenflow_01 as of

 class MyConsumer (BaseConsumer):
   def _do_work (self, work):
     # Work is (dpid, packet_in)
     dpid, packet_in = work
     po = of.ofp_packet_out(data = packet_in)
     po.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))

     connection = core.openflow.getConnection(dpid)
     if connection is None:
       self.log.warn("Connection lost before sending packet")
       return
     connection.send(po)

 def handle_PacketIn (event):
   consumer.add_work((event.dpid, event.ofp))

 def launch ():
   global consumer
   consumer = MyConsumer()
   core.openflow.addListenerByName("PacketIn", handle_PacketIn)

 Here, the normal OpenFlow event Task is acting like a producer, pushing
 work items (which in this case are a DPID and a packet_in) to a simple
 consumer which just floods the packet back out.  So it's a dumb hub, but
 written producer-consumer style.  BaseConsumer's initializer has some
 parameters for maximum batch size and Task priority.

 More comments below.

 On Mar 23, 2013, at 3:59 PM, Tmusic wrote:
  I'm still a bit confused about the work producing and consuming as it's
 implemented now. So there is the main task loop in OpenFlow_01_Task which
 loops over the connections and calls the read function on each of them.
 This read function calls the appropriate handler, which in turn fires
 the appropriate event on the pox core (which are then further handled). So
 everything would be processed connection by connection...
 
  But if I understand you correctly, the handlers called by the read
 function put the jobs in a queue, which is then emptied by a separate task
 loop (which I can't find at the moment). Can you give a hint where (in the
 code) the task loop runs that empties the queue and where the filling of
 the queue exactly happens?

 Ah, we have miscommunicated.  There's no queue in general.  The OpenFlow
 events are entirely (or almost entirely?) raised by a single Task with a
 select loop (OpenFlow_01_Task or whatever).  It raises them directly.

 The miscommunication, I believe, stems from me saying, "The OpenFlow event
 handlers are producers that fill a work queue and then you have a consumer
 in the form of a recoco Task that tries to drain the queue."  I wasn't
 describing how it works now.  I was describing the solution to your problem.

 The example above *does* implement this.  The idea is that rather than do
 expensive processing directly in the handlers, you're better off handling
 them quickly by just shoving them onto a work queue which can try to
 handle them later (and perhaps with more flexible priorities).  The new
 BaseConsumer/FlexConsumer classes are meant to simplify this pattern.
  (They're based on the producer/consumer example I posted a few days ago,
 but now it's generic.)
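The pattern can be illustrated outside POX with a plain queue and a worker thread (names here are made up for illustration; BaseConsumer drains its queue from a recoco Task rather than an OS thread):

```python
# Sketch of the producer/consumer split: the "event handler" only enqueues
# and returns immediately; a single worker drains the queue and does the
# expensive processing later.
import queue
import threading

work_queue = queue.Queue()
results = []

def handler(item):
    # Producer side: do no heavy work here, just enqueue and return.
    work_queue.put(item)

def consumer():
    # Consumer side: drain the queue; None is a sentinel meaning "stop".
    while True:
        item = work_queue.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for expensive processing
        work_queue.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()
```

The producer never blocks on the expensive work, which is the property that keeps the event loop responsive.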

 -- Murphy


Re: [pox-dev] Profiling Pox

2013-04-17 Thread Murphy McCauley

On Apr 17, 2013, at 2:20 PM, Tmusic wrote:

 Sorry for the extremely late reply!
 Thank you very much for the example code!!

No problem; glad it was helpful.

 I've also finally cleaned up my profiling code and pushed it to github: 
 https://github.com/Timmmy/pox in the betta branch (sorry that it took so 
 long). Besides the profiler wrapper, it also contains an autoquit module that 
 quits pox after a specified amount of time (I use it for automatic 
 experiments).

Cool, thanks for making this public!

I commented on the modification to core.  If you can remove that tweak, it 
could be maintained as its own project in its own repository and could be 
cloned into POX's ext directory.  Just a thought. :)

-- Murphy

Re: [pox-dev] Profiling Pox

2013-03-19 Thread Murphy McCauley
On Mar 19, 2013, at 10:35 AM, Saul St. John wrote:

 Regarding pt 1: does running POX under PyPy allow for the program to take 
 simultaneous advantage of all the cores in a multiprocessor machine? IOW, 
 does PyPy not suffer from the single executing thread limitation that 
 CPython's GIL imposes?

Unfortunately, no.  PyPy still has a global lock.  Removing the global lock in 
a language designed with one in mind is pretty hard.  BUT, you can pass the 
lock between threads better than CPython 2 does.  CPython 3 actually does 
better.

POX is actually built with the GIL limitation in mind.  Really leveraging 
multiple cores would require a more complex programming model.  POX keeps the 
simpler model because the limitation is inescapable anyway.

-- Murphy

Re: [pox-dev] Profiling Pox

2013-03-19 Thread Tmusic
Thank you for the extensive information :)

I'm trying to run with pypy but it can't import some of my modules (all in
the /pox directory). For example myrouter.mylinkmonitor does not work, but
my myrouter.mypackethandler does.

"python pox.py modules" works like a charm
"./pox.py modules" cannot find all modules
"path/pypy pox.py modules" has the same problem as the latter

It's probably something silly, but I can't find it...

Concerning the number of packet_in events. It's about 10 per second (+
about 10 stats_requests and replies, which are logged by a separate module).


2013/3/19 Murphy McCauley murphy.mccau...@gmail.com

 On Mar 19, 2013, at 9:53 AM, Tmusic wrote:

  I haven't tried to run it with PyPy. The main reason is that the
 performance is more than an order of magnitude below what I need, which led
 me to investigate the bottlenecks first.

 Might be worth trying anyway just to collect a datapoint (i.e., how
 different are your results).

  My guess is that the profiler only looks at the main thread (since I'm
 seeing almost only init and sleep).
  Can you give some more information about the different threads?

 Which components are you running?

  I'm seeing 5 threads (poxdesk is running):
  - Main thread (boot / init)

 Yeah.  This one is sort of reserved.  In Python, signals are only
 delivered to the main thread.  So you can think of the main thread as the
 signal thread.  Usually it just sleeps and catches signals meant to kill
 the process and handles them.  It can also be used for various things that
 demand to be on the main thread (like tkinter, which probably demands this
 because of signals, but I am guessing).

  - Webserver?

 Possibly, if you're running the web module.

  - Two threads running recoco (What is the difference?)

 One of these is the cooperative thread that actually schedules and runs
 Tasks.
 The other is a select-based IO loop.  When there's IO waiting, it
 schedules tasks on the cooperative thread.  It sleeps except for that.

 This design is to support additional IO loops (besides just the select
 one) with nothing special about any of them.  In practice, this has rarely
 been used.  The debugger branch actually merges the select IO thread and
 the scheduler.  This is a compromise of the design, but it's very practical
 and will eventually get some level of support in the mainline.

  - Socket thread?

 Maybe a deferred sender thread to keep sends from blocking.  This is sort
 of a hack, but it's very rare that it actually runs.
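The idea of a deferred sender can be sketched like this (an illustrative stand-in, not POX's actual implementation):

```python
# Sketch: sends that might block are handed to a background thread, so the
# caller returns immediately and the cooperative thread is never stalled.
import queue
import threading

class DeferredSender(object):
    def __init__(self, send_fn):
        self._send_fn = send_fn
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, data):
        self._queue.put(data)  # returns immediately; never blocks the caller

    def _run(self):
        while True:
            data = self._queue.get()
            if data is None:   # sentinel: shut down cleanly
                break
            self._send_fn(data)  # the potentially blocking call happens here

    def close(self):
        self._queue.put(None)
        self._thread.join()
```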

  Maybe I should explain more clearly what is happening:
  When the controller comes under more stress, the link discovery
 module throws a timeout. Immediately afterwards it fires a link-discovered
 event. I thought this was because the LLDP packets are delayed too much in
 the scheduler. Can this be the case?

 Do you have a whole lot of packet_ins?

  However, the time needed to install new flows (based on packet_in events)
 does not seem to change, which makes me think it is a discovery-specific
 issue. Changing the delay times in the discovery module only has a minor
 impact. Any ideas?

 Try sending barriers and seeing how long they take to come back.
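One way to time barrier round-trips is to record the send time per xid and compute the latency when the reply arrives. The bookkeeping might look like this (the wiring to of.ofp_barrier_request and the BarrierIn event is omitted and is an assumption; only the timing logic is shown):

```python
# Sketch: track outstanding barrier requests by xid and measure how long
# each one takes to come back.
import time

class BarrierTimer(object):
    def __init__(self):
        self._sent = {}

    def on_send(self, xid):
        """Call when a barrier request with this xid is sent."""
        self._sent[xid] = time.monotonic()

    def on_reply(self, xid):
        """Return the round-trip time in seconds, or None for an unknown xid."""
        start = self._sent.pop(xid, None)
        if start is None:
            return None
        return time.monotonic() - start
```

A steadily growing round-trip time under load would point at the switch (or the coop thread) falling behind.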

 -- Murphy


Re: [pox-dev] Profiling Pox

2013-03-19 Thread Murphy McCauley
On Mar 19, 2013, at 12:02 PM, Tmusic wrote:
 I'm trying to run with pypy but it can't import some of my modules (all in 
 the /pox directory). For example myrouter.mylinkmonitor does not work, but my 
 myrouter.mypackethandler does.
 
 "python pox.py modules" works like a charm
 "./pox.py modules" cannot find all modules
 "path/pypy pox.py modules" has the same problem as the latter
 
 It's probably something silly, but I can't find it...

This is probably because you're importing modules that aren't installed in 
PyPy.  If they're C modules, you may be out of luck.  If they're Python 
modules, you just need to adjust your path or install them in PyPy.  For more 
info, try pypy/bin/pypy debug-pox.py ... or something along those lines and 
see if it's more informative.

 Concerning the number of packet_in events. It's about 10 per second (+ about 
 10 stats_requests and replies, which are logged by a separate module).

Are your event handlers doing much work (and stalling the coop thread)?

-- Murphy