On Aug 19, 2014, at 8:22 PM, 张伟 <zhang...@126.com> wrote:

> Hi all, 
> 
> I want to know, for an application such as l2_learning: if we run this 
> component, is l2_learning single-threaded or multi-threaded? 

Single threaded.
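
If you want to confirm that for yourself, here's a minimal sketch (the handler 
name and component are made up, not part of l2_learning) that logs which thread 
handles each packet-in; with stock POX you should only ever see one:

  # Hypothetical check -- logs the thread each PacketIn is handled on.
  import threading
  from pox.core import core

  log = core.getLogger()

  def _handle_PacketIn (event):
    log.info("PacketIn handled on thread %s",
             threading.current_thread().name)

  def launch ():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)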

> My simple test:
> def _handle_PacketIn (event):
>         packet = event.parsed
>         msg = of.ofp_flow_mod()
>         msg.match = of.ofp_match.from_packet(packet, event.port)
>         msg.buffer_id = event.ofp.buffer_id
>         log.info("buffer_id %i", msg.buffer_id)
>         msg.idle_timeout = 10
>         msg.hard_timeout = 30
>         msg.actions.append(of.ofp_action_output(port = 0))
>         msg.data = event.ofp
>         event.connection.send(msg)
> 
> When I print out the buffer_id, sometimes the IDs are not sequential. I got 
> results like this:
> INFO:packet:(udp parse) warning UDP packet data shorter than UDP len: 96 < 962
> INFO:misc.test_flow_miss:buffer_id 0
> INFO:openflow.of_01:[00-00-00-00-00-01 2] connected
> INFO:packet:(udp parse) warning UDP packet data shorter than UDP len: 96 < 962
> INFO:misc.test_flow_miss:buffer_id 2 
> 
> This confuses me. At first I thought the app ran in a single thread and 
> processed the requests in FIFO order. Can anybody help explain this result?

The buffers come from the switches.  How the switches decide on buffer IDs is 
not specified.  Are you sure your switch is sending buffer IDs that increment 
by one?  Have you snooped the control connection to confirm this?

Another possibility is that a higher priority packet-in handler is getting the 
event first and eating it.  If all you're running is l2_learning (and not, say, 
discovery) this shouldn't be the case, though.
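
For reference, a higher-priority listener looks roughly like this (a sketch; 
the priority value and the "eat it" condition are made up).  If it halts the 
event, l2_learning never sees that packet-in:

  # Sketch: a handler that runs before l2_learning and consumes some events.
  from pox.core import core
  from pox.lib.revent import EventHalt

  def _early_handler (event):
    if event.port == 1:   # made-up condition
      return EventHalt    # stop the event; lower-priority handlers never run

  def launch ():
    # Higher priority means this handler gets the event first.
    core.openflow.addListenerByName("PacketIn", _early_handler, priority=100)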

> Another question:
> In the POX wiki, when we want to install a flow entry matching TCP port 8080, 
> why do we not need to do an htons() translation in the Python code? The 
> switch side receives the port in network byte order. 

Because it would be awful if you had to do it yourself and POX takes care of it 
for you.
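
In other words, you just write the port number as a normal Python integer and 
POX puts it in network byte order when it packs the message.  A rough sketch 
(the field values are only for illustration):

  import pox.openflow.libopenflow_01 as of

  msg = of.ofp_flow_mod()
  msg.match.dl_type = 0x800        # IPv4
  msg.match.nw_proto = 6           # TCP
  msg.match.tp_dst = 8080          # plain host-order int; no htons() needed
  msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
  # event.connection.send(msg)     # send on whichever connection you have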

-- Murphy
