Hi,

Thanks, it just works. I will take a look at that function later. Is there
any particular reason spec_frags is False by default?

On Sun, Oct 26, 2014 at 2:54 PM, Murphy McCauley <murphy.mccau...@gmail.com>
wrote:

> Try modifying l2_multi so that the call to match.from_packet() passes the
> keyword parameter spec_frags=True.
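>
> Something like this, as a sketch (the exact call site in l2_multi may
> differ, and the in_port argument here is my assumption):
>
>     # With spec_frags=True, the generated match is made specific to
>     # fragments, so the installed flow entry covers them instead of
>     # each fragment coming back to the controller as a packet-in.
>     match = of.ofp_match.from_packet(packet, event.port, spec_frags=True)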
>
> -- Murphy
>
> On Oct 26, 2014, at 12:06 PM, tim huang <pds...@gmail.com> wrote:
>
> Hi,
>
> After testing, I found that a packet that needs to be fragmented causes
> _handle_PacketIn in l2_multi to be called continually, and the packet in
> the 'event' parameter is always one of the fragments. I tried to track it
> down through the event and handler system, but couldn't find the reason.
> Is there any possible cause or clue for this?
>
> I just pinged once from the host:
>
> --- 10.0.0.3 ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
> mininet> h1 ping h3 -c 1 -s 2000
> PING 10.0.0.3 (10.0.0.3) 2000(2028) bytes of data.
>
> --- 10.0.0.3 ping statistics ---
> 1 packets transmitted, 0 received, 100% packet loss, time 0ms
>
>
> -------------------------------------------------
> The code I added in _handle_PacketIn in l2_multi:
>
>     # Count how many times _handle_PacketIn fires and print the packet,
>     # to show the same fragments keep hitting the controller.
>     global handler_counter
>     handler_counter += 1
>     print 'l2_multi_packetIn ' + str(handler_counter)
>     print packet
> -------------------------------------------------
> The output from the controller console:
>
>
> l2_multi_packetIn 8118
> [86:7d:e2:27:00:5c>da:22:ba:a5:76:a5 IP]
> l2_multi_packetIn 8119
> [86:7d:e2:27:00:5c>da:22:ba:a5:76:a5 IP]
> l2_multi_packetIn 8120
> [86:7d:e2:27:00:5c>da:22:ba:a5:76:a5 IP]
> l2_multi_packetIn 8121
> [86:7d:e2:27:00:5c>da:22:ba:a5:76:a5 IP]
> l2_multi_packetIn 8122
> [86:7d:e2:27:00:5c>da:22:ba:a5:76:a5 IP]
> l2_multi_packetIn 8123
>
> The counter keeps increasing, even after I stopped pinging for around 3
> minutes.
>
>
>
> On Fri, Oct 24, 2014 at 6:04 AM, Murphy McCauley <
> murphy.mccau...@gmail.com> wrote:
>
>> Dump the control traffic using openflow.debug and then take a look at it,
>> e.g., with the OpenFlow dissector for Wireshark.  That may shed some light.
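>>
>> For example, something like this on the POX command line (standard
>> component syntax; the resulting dump can then be opened in Wireshark):
>>
>>   ./pox.py openflow.debug forwarding.l2_multi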
>>
>> You also might try enabling fragment reassembly by sending an
>> OFPT_SET_CONFIG.
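>>
>> A minimal sketch of that, assuming POX's OpenFlow 1.0 names in
>> libopenflow_01 (untested, and note that not every switch supports
>> OFPC_FRAG_REASM):
>>
>>   from pox.core import core
>>   import pox.openflow.libopenflow_01 as of
>>
>>   def _handle_ConnectionUp (event):
>>     # Ask the switch to reassemble IP fragments before matching them.
>>     event.connection.send(of.ofp_set_config(flags = of.OFPC_FRAG_REASM))
>>
>>   def launch ():
>>     core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)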
>>
>> -- Murphy
>>
>> On Oct 23, 2014, at 9:41 PM, tim huang <pds...@gmail.com> wrote:
>>
>> Sorry, please ignore my last email. I made some mistakes in my statements.
>>
>> Hi,
>>
>> The situation is more interesting than I thought. At the beginning I
>> thought it was caused by fragmentation, since I couldn't ping with any
>> fragmentation at all. However, after trying several times, I found I
>> actually can ping successfully with large packets if it's the first flow
>> between the 2 switches. After the flows time out on the switch, I can't
>> ping any more through the controller.
>>
>> Here is the output I get with the same topology I described in my 2nd
>> email.
>>
>> The 1st ping is successful. After the flows time out on all the switches,
>> I can't ping any more with large packets, but I can still ping with small
>> packets.
>>
>> The outputs below are attempts with different packet sizes.
>>
>> And there are still some situations where I can't ping with large packets
>> even at the beginning when I use Mininet. I suspect some packets, like ARP
>> or something else, had already been forwarded between these 2 switches
>> before I pinged. On my ESXi platform, where I deploy the Open vSwitches
>> and Linux machines, the first ping is usually successful.
>>
>> -------------------------------------------------
>> mininet> h1 ping h3 -s 5000
>> PING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>> 5008 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=94.5 ms
>> 5008 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.161 ms
>> 5008 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.092 ms
>> 5008 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.155 ms
>> 5008 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.130 ms
>> ^C
>> --- 10.0.0.3 ping statistics ---
>> 5 packets transmitted, 5 received, 0% packet loss, time 4001ms
>> rtt min/avg/max/mdev = 0.092/19.012/94.526/37.757 ms
>> mininet> h1 ping h3 -s 5000
>> ^CPING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>>
>> --- 10.0.0.3 ping statistics ---
>> 212 packets transmitted, 0 received, 100% packet loss, time 211623ms
>>
>> mininet> h1 ping h3
>> PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
>> 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.330 ms
>> 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=1166 ms
>> 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.093 ms
>> 64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.095 ms
>> 64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.081 ms
>> 64 bytes from 10.0.0.3: icmp_req=6 ttl=64 time=0.083 ms
>> ^C
>> --- 10.0.0.3 ping statistics ---
>> 6 packets transmitted, 6 received, 0% packet loss, time 5002ms
>> rtt min/avg/max/mdev = 0.081/194.523/1166.459/434.663 ms, pipe 2
>> mininet> h1 ping h3
>> PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
>> 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=76.2 ms
>> 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.083 ms
>> 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.095 ms
>> 64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.061 ms
>> 64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.079 ms
>> ^C
>> --- 10.0.0.3 ping statistics ---
>> 5 packets transmitted, 5 received, 0% packet loss, time 4001ms
>> rtt min/avg/max/mdev = 0.061/15.313/76.250/30.468 ms
>>
>> mininet> h1 ping h3 -s 5000
>> ^CPING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>>
>> --- 10.0.0.3 ping statistics ---
>> 42 packets transmitted, 0 received, 100% packet loss, time 41045ms
>> --------------------------------------------------
>>
>> I really want to fix this problem in POX as a contribution, because POX
>> has helped me a lot, and I like its design more than that of similar
>> controllers like Ryu. Can anybody give me a clue about this problem?
>>
>> On Fri, Oct 24, 2014 at 12:25 AM, tim huang <pds...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I find the situation is more interesting than I thought. At the
>>> beginning I thought it was just caused by fragmentation, since I couldn't
>>> ping with any fragmentation. However, after trying several times I found
>>> something new: I can ping successfully with large packets if it's the
>>> first flow between the 2 switches, but after the flows time out on the
>>> switch, I can't ping any more.
>>>
>>> Here is the output I get with the same topology I described above.
>>>
>>> The 1st ping is successful. After the flows time out on all the
>>> switches, I can't ping any more with large packets, but I can still ping
>>> with small packets.
>>> Here are the attempts with different sizes.
>>>
>>> And there are still some situations where I can't ping with large
>>> packets at the beginning. I suspect some packets had already been
>>> forwarded between these 2 switches before I pinged.
>>>
>>> I want to fix this problem in POX and contribute some code to the
>>> project, because it has really helped me a lot, and I like POX's design
>>> more than that of similar controllers like Ryu. Can anybody give me a
>>> clue about this problem?
>>> -------------------------------------------------
>>> mininet> h1 ping h3 -s 5000
>>> PING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>>> 5008 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=94.5 ms
>>> 5008 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.161 ms
>>> 5008 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.092 ms
>>> 5008 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.155 ms
>>> 5008 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.130 ms
>>> ^C
>>> --- 10.0.0.3 ping statistics ---
>>> 5 packets transmitted, 5 received, 0% packet loss, time 4001ms
>>> rtt min/avg/max/mdev = 0.092/19.012/94.526/37.757 ms
>>> mininet> h1 ping h3 -s 5000
>>> ^CPING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>>>
>>> --- 10.0.0.3 ping statistics ---
>>> 212 packets transmitted, 0 received, 100% packet loss, time 211623ms
>>>
>>> mininet> h1 ping h3
>>> PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
>>> 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.330 ms
>>> 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=1166 ms
>>> 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.093 ms
>>> 64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.095 ms
>>> 64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.081 ms
>>> 64 bytes from 10.0.0.3: icmp_req=6 ttl=64 time=0.083 ms
>>> ^C
>>> --- 10.0.0.3 ping statistics ---
>>> 6 packets transmitted, 6 received, 0% packet loss, time 5002ms
>>> rtt min/avg/max/mdev = 0.081/194.523/1166.459/434.663 ms, pipe 2
>>> mininet> h1 ping h3
>>> PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
>>> 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=76.2 ms
>>> 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.083 ms
>>> 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.095 ms
>>> 64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.061 ms
>>> 64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.079 ms
>>> ^C
>>> --- 10.0.0.3 ping statistics ---
>>> 5 packets transmitted, 5 received, 0% packet loss, time 4001ms
>>> rtt min/avg/max/mdev = 0.061/15.313/76.250/30.468 ms
>>>
>>> mininet> h1 ping h3 -s 5000
>>> ^CPING 10.0.0.3 (10.0.0.3) 5000(5028) bytes of data.
>>>
>>> --- 10.0.0.3 ping statistics ---
>>> 42 packets transmitted, 0 received, 100% packet loss, time 41045ms
>>> --------------------------------------------------
>>>
>>>
>>> On Tue, Oct 21, 2014 at 7:08 AM, Lucas Brasilino <lr...@cin.ufpe.br>
>>> wrote:
>>>
>>>> > Yup, dart.  It's actually more than due to get rolled over to eel,
>>>> but it's
>>>> > waiting on me to have some time to dedicate to POX, which hasn't
>>>> happened
>>>> > for a while. :)
>>>>
>>>> eel? I was about to suggest the name 'eager' :-D
>>>>
>>>>
>>>> --
>>>> Att
>>>> Lucas Brasilino
>>>> MSc Student @ Federal University of Pernambuco (UFPE) / Brazil
>>>> twitter: @lucas_brasilino
>>>>
>>>
>>>
>>>
>>> --
>>> Thanks
>>> Tim
>>>
>>
>>
>>
>> --
>> Thanks
>> Tim
>>
>>
>>
>
>
> --
> Thanks
> Tim
>
>
>


-- 
Thanks
Tim
