Re: [pox-dev] Specifying multiple POX controllers for mininet.

2013-04-17 Thread Murphy McCauley
No, I think the latter --controller just overrides the former.  I don't 
actually think there's a decent way to do this in Mininet.  In earlier 
versions, I think there was missing infrastructure.  In Mininet 2, I think 
there's still some stuff missing and you could almost make it work except for a 
bug in how it formats the ovs-vsctl commandline.

So the way to do this is really just to invoke ovs-vsctl manually.  
Let Mininet set up OVS and your topology.  Then, assuming you have switches s1 
and s2, do something like:
ovs-vsctl set-controller s1 tcp:192.168.129.57:6633 tcp:192.168.129.56:6633
ovs-vsctl set-controller s2 tcp:192.168.129.57:6633 tcp:192.168.129.56:6633
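
With more switches, a small shell loop saves typing.  This is just a sketch 
(the switch names and controller addresses are the ones from the example 
above); the `echo` makes it a dry run so you can inspect the commands before 
applying them:

```shell
# Generate the set-controller command for each switch (dry run).
# Drop the `echo` to actually apply them once OVS is running.
CTL1=tcp:192.168.129.57:6633
CTL2=tcp:192.168.129.56:6633
for sw in s1 s2; do
  echo ovs-vsctl set-controller "$sw" "$CTL1" "$CTL2"
done
```

Afterwards you can confirm what OVS recorded with `ovs-vsctl get-controller s1`.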

Hope that helps.

(BTW, you might want to ask this on the mininet list.)

-- Murphy

On Apr 16, 2013, at 3:36 PM, Karthik Sharma wrote:

 I am using a mininet topology with a remote controller. However I want to 
 specify another redundant controller as a fallback option. Can I do as follows
 
 mn --custom topo-1sw-2host.py --topo mytopo --controller=remote 
 --ip=192.168.129.56 --ip=192.168.129.57
 
 Mininet seems to accept the above command.  I want the controller
 192.168.129.57 to take over when 192.168.129.56 goes offline.
 
 Does this work?
 
 Regards,
 
 Karthik.
 



Re: [pox-dev] Profiling Pox

2013-04-17 Thread Tmusic
Sorry for the extremely late reply!
Thank you very much for the example code!!

Concerning the queuing: ok, then we're on the same wavelength :)

Over the last couple of weeks, I've been working with your example and
implemented a priority queue. So far it works excellently!

I was able to improve performance even further by using multiprocessing,
performing intensive calculations and some database IO in a separate
process.
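
As a rough sketch of what that pattern looks like (illustrative Python only, 
not the actual code from the repository): CPU-heavy work is handed to a worker 
pool so the controller's cooperative event loop is never blocked.

```python
# Hypothetical sketch of offloading heavy work to worker processes.
# The function names are illustrative, not from the real repo.
import multiprocessing

def expensive_calculation(item):
    # Stand-in for an intensive computation or database IO step.
    return sum(i * i for i in range(item))

def process_batch(items):
    # A small pool keeps the heavy work off the main (event-loop) process.
    with multiprocessing.Pool(processes=2) as pool:
        return pool.map(expensive_calculation, items)

if __name__ == "__main__":
    print(process_batch([10, 100]))  # → [285, 328350]
```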

I've also finally cleaned up my profiling code and pushed it to github:
https://github.com/Timmmy/pox in the betta branch (sorry that it took so
long). Besides the profiler wrapper, it also contains an autoquit module
that quits pox after a specified amount of time (I use it for automatic
experiments).
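
The autoquit idea in miniature (illustrative only, not the actual module): 
schedule a shutdown callback after a fixed delay, without blocking the caller.

```python
# Minimal sketch of an "auto-quit" timer (not the real autoquit module):
# run quit_fn after `seconds` seconds on a background timer thread.
import threading

def schedule_quit(seconds, quit_fn):
    timer = threading.Timer(seconds, quit_fn)
    timer.daemon = True   # don't keep the process alive just for the timer
    timer.start()
    return timer
```

In POX itself the callback would presumably invoke POX's own shutdown hook 
rather than a bare function.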

Kind regards and sorry again for the late reply,

Tim



2013/3/26 Murphy McCauley murphy.mccau...@gmail.com

 I've pushed recoco.consumer to the carp branch which has a couple of generic
 consumer classes.  Using those, you can write a simple component like:

 from pox.core import core
 from pox.lib.recoco.consumer import BaseConsumer
 import pox.openflow.libopenflow_01 as of

 class MyConsumer (BaseConsumer):
   def _do_work (self, work):
     # Work is (dpid, packet_in)
     dpid, packet_in = work
     po = of.ofp_packet_out(data = packet_in)
     po.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))

     connection = core.openflow.getConnection(dpid)
     if connection is None:
       self.log.warn("Connection lost before sending packet")
       return
     connection.send(po)

 def handle_PacketIn (event):
   consumer.add_work((event.dpid, event.ofp))

 def launch ():
   global consumer
   consumer = MyConsumer()
   core.openflow.addListenerByName("PacketIn", handle_PacketIn)

 Here, the normal OpenFlow event Task is acting like a producer, pushing
 work items (which in this case are a DPID and a packet_in) to a simple
 consumer which just floods the packet back out.  So it's a dumb hub, but
 written producer-consumer style.  BaseConsumer's initializer has some
 parameters for maximum batch size and Task priority.

 More comments below.

 On Mar 23, 2013, at 3:59 PM, Tmusic wrote:
  I'm still a bit confused about the work producing and consuming as it's
 implemented now. So there is the main task loop in OpenFlow_01_Task which
 loops over the connections and calls the read function on each of them.
 This read function calls the appropriate handler, which in turn fires
 the appropriate event on the pox core (which are then further handled). So
 everything would be processed connection by connection...
 
  But if I understand you correctly, the handlers called by the read
 function put the jobs in a queue, which is then emptied by a separate task
 loop (which I can't find at the moment). Can you give a hint where (in the
 code) the task loop runs that empties the queue and where the filling of
 the queue exactly happens?

 Ah, we have miscommunicated.  There's no queue in general.  The OpenFlow
 events are entirely (or almost entirely?) raised by a single Task with a
 select loop (OpenFlow_01_Task or whatever).  It raises them directly.

 The miscommunication I believe stems from me saying, The OpenFlow event
 handlers are producers that fill a work queue and then you have a consumer
 in the form of a recoco Task that tries to drain the queue.  I wasn't
 describing how it works now.  I was describing the solution to your problem.

 The example above *does* implement this.  The idea being that rather than
 doing expensive processing directly in the handlers, you're better off
 handling them quickly by just shoving the work onto a queue which can try to
 handle it later (and perhaps with more flexible priorities).  The new
 BaseConsumer/FlexConsumer classes are meant to simplify this pattern.
 (They're based on the producer/consumer example I posted a few days ago,
 but now it's generic.)
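
 In plain Python terms (generic threading/queue code, not POX's recoco), the
 pattern reads roughly like this:

```python
# Generic producer/consumer illustration: handlers enqueue work cheaply;
# a separate consumer thread drains the queue and does the expensive part.
import queue
import threading

work_queue = queue.Queue()
results = []

def handler(event):
    # Cheap: just enqueue and return, keeping the event loop responsive.
    work_queue.put(event)

def consumer():
    while True:
        item = work_queue.get()
        if item is None:          # sentinel: shut down
            break
        results.append(item * 2)  # stand-in for expensive processing
        work_queue.task_done()

t = threading.Thread(target=consumer)
t.start()
for e in (1, 2, 3):
    handler(e)
work_queue.put(None)
t.join()
print(results)  # → [2, 4, 6]
```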

 -- Murphy


Re: [pox-dev] Profiling Pox

2013-04-17 Thread Murphy McCauley

On Apr 17, 2013, at 2:20 PM, Tmusic wrote:

 Sorry for the extremely late reply!
 Thank you very much for the example code!!

No problem; glad it was helpful.

 I've also finally cleaned up my profiling code and pushed it to github: 
 https://github.com/Timmmy/pox in the betta branch (sorry that it took so 
 long). Besides the profiler wrapper, it also contains an autoquit module that 
 quits pox after a specified amount of time (I use it for automatic 
 experiments).

Cool, thanks for making this public!

I commented on the modification to core.  If you can remove that tweak, it 
could be maintained as its own project in its own repository and could be 
cloned into POX's ext directory.  Just a thought. :)

-- Murphy