Ah, I'd completely missed that the ARPs were wrapped in ISL :-( I'd seen 
that for some of the other packet types...

I'll post an update later about progress at this end.

Glen


Natasha Gude wrote:
> Hey Glen,
>
> So the reason I didn't see the ARPs was that I was using tcpdump 
> and they came up as SNAP packets.  Opening the dump in Wireshark, it 
> turns out all of the ARPs are being encapsulated by Cisco's ISL 
> protocol.  So I think what's happening is that mvm-17 and mvm-33 are 
> sending out ARPs, which the 6k is encapsulating in ISL, which are then 
> getting sent up to the controller.  These packets are getting flooded 
> out to all of the OpenFlow ports, including ones not connected to the 
> 6k, which I'm guessing would have stripped that outer header.  Are 
> mvm-44 and mvm-24 on eth2 and eth3?  Are they perhaps receiving these 
> packets and not knowing how to respond?
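
What Wireshark is doing here can be sketched roughly as follows: an ISL 
frame carries a 26-byte header whose destination field begins with Cisco's 
01:00:0c:00:00 prefix, and the original Ethernet frame (the ARP in this 
case) sits right after that header.  This is only an illustrative sketch 
based on the published ISL frame layout -- the function names are made up 
and it is not code from the actual setup:

```python
# Sketch: spot ISL-encapsulated ARPs in raw Ethernet frames.
# ISL header layout per Cisco's published frame format; names illustrative.
ISL_DA_PREFIX = bytes.fromhex("01000c0000")  # first 5 bytes of the ISL DA
ISL_HEADER_LEN = 26                          # ISL header precedes the inner frame

def inner_frame(frame: bytes):
    """Return the encapsulated Ethernet frame if `frame` looks like ISL,
    else None."""
    if frame[:5] == ISL_DA_PREFIX and len(frame) > ISL_HEADER_LEN:
        return frame[ISL_HEADER_LEN:]
    return None

def is_arp(eth_frame: bytes) -> bool:
    # EtherType 0x0806 sits at bytes 12-13 of an untagged Ethernet frame
    return len(eth_frame) >= 14 and eth_frame[12:14] == b"\x08\x06"
```

A quick loop over the frames in each dump with these two checks would show 
which ports are seeing ISL-wrapped ARPs.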
>
> So it seems like this might be a 6k configuration problem rather than 
> an OpenFlow/NOX one, but let me know if you have more information.
>
> Also, it's probably worth noting that mvm-17 and mvm-33 seem to have 
> the same MAC address (probably because they're VMs?).  When the ISL 
> stuff gets figured out, I'm guessing this will cause some issues.
>
> Natasha
>
> Glen Gibb wrote:
>> Thanks for getting back to me Natasha,
>>
>>
>> Natasha Gude wrote:
>>> Hey Glen,
>>>
>>> First off, thanks for all of the debugging output - it really helped 
>>> in figuring out your setup.
>>>
>> No probs. I figure the more useful information I give you, the more 
>> likely you are to be able to help with the problem. Let me correct 
>> one minor mistake in my previous e-mail: it appears that mvm-17 is 
>> on eth1 and mvm-33 is on eth4 (both via the Cat6k) -- I had said they 
>> were both on eth1.
>>
>>
>>> I took a look at the dump files and nox log, and I didn't see more 
>>> than a few ARP packets, but I did see a lot of cisco traffic, 
>>> presumably from the cat6k.  I'm not sure if there's necessarily a 
>>> problem here, but I can tell you what it seems like the situation is 
>>> and you can let me know if NOX should be behaving differently.
>> The ARP traffic I was referring to can be seen from packet 212 onwards 
>> in b.eth1.dump and from packet 199 in b.eth2.dump (and at similar 
>> locations in the other two). Once that first ARP request comes in, 
>> they recur at intervals on the order of < 100ms :-(
>>
>>
>>> Almost all of the flows seen by NOX are from the MAC 
>>> 00:01:63:d4:67:ca (which I'm assuming is the 6k).  Because the 6k is 
>>> connected to two of the OpenFlow switch's ports, NOX has a record of 
>>> the MAC at these two locations (eth1 and eth4, which are port 
>>> numbers 0 and 3 respectively in the NOX log file).  Right now, we 
>>> have a notion of a "primary" location when a sender is connected at 
>>> two different points in the network.  At any given time, a sender's 
>>> primary location is the location it most recently sent a packet 
>>> from.  When that location switches, the old one is "poisoned" to 
>>> force ongoing traffic to be routed to the new location.  Thus the 
>>> poison dbg message you were seeing results from the 6k sending a 
>>> packet with that source MAC address through a different port on the 
>>> OpenFlow switch than the one it last sent through.  What's interesting 
>>> is that to begin with, a different MAC address is used by the 6k 
>>> when sending traffic to OpenFlow's eth4 interface 
>>> (00:01:63:d4:67:cb), but when the destination address changes 
>>> to 01:00:0c:00:00:07, the source address is always 00:01:63:d4:67:ca 
>>> regardless of the OpenFlow port it is received on.
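
The primary-location/poison behavior Natasha describes can be sketched in 
a few lines.  The class and field names below are made up for illustration 
and are not NOX's actual internals:

```python
# Sketch of tracking a sender's "primary" location: the port it most
# recently sent from.  When the sender shows up on a different port, the
# stale location is recorded as poisoned so old routes can be torn down.
class LocationTable:
    def __init__(self):
        self.primary = {}   # mac -> port most recently seen
        self.poisoned = []  # (mac, old_port) poison events, in order

    def packet_in(self, mac, port):
        old = self.primary.get(mac)
        if old is not None and old != port:
            # Sender moved between ports: poison the old location.
            self.poisoned.append((mac, old))
        self.primary[mac] = port
```

With the 6k's MAC seen first on port 0 (eth1) and then on port 3 (eth4), 
this would log exactly one poison event for port 0 -- matching the dbg 
message in the log.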
>> Actually all traffic from the Cisco on eth4 is from 
>> 00:01:63:d4:67:cb. We only start seeing things listed as being from 
>> the :ca address on eth4 when NOX is running.
>>
>>
>>> The second point worth noting is that all of the 6k's traffic is 
>>> sent to multicast addresses, and NOX currently treats multicast and 
>>> broadcast traffic the same, flooding a packet out every port except 
>>> for the one it came in on.  If there's a more appropriate way of 
>>> dealing with this traffic, please let me know!
>> As far as I know it should be okay to simply flood this traffic.
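
The flooding rule described above amounts to something like the following 
sketch (names illustrative; the group bit of the first destination octet 
is what distinguishes multicast/broadcast from unicast):

```python
def is_broadcast_or_multicast(dst_mac: bytes) -> bool:
    # Group bit: least-significant bit of the first octet of the
    # destination MAC is set for multicast and broadcast frames.
    return bool(dst_mac[0] & 0x01)

def flood_ports(all_ports, in_port):
    """Ports a broadcast/multicast packet is flooded out of: every
    port except the one it arrived on."""
    return [p for p in all_ports if p != in_port]
```

The 6k's 01:00:0c:00:00:07 destination has the group bit set, so NOX 
treating it the same as broadcast and flooding it is consistent with this 
rule.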
>>
>>>
>>> So that's what seems to be the situation.  Again, let me know if you 
>>> think any of the above described behavior is incorrect, or if 
>>> there's still a problem that I just couldn't deduce from the 
>>> log/dump files I looked at.
>>
>> Glen
>
>


_______________________________________________
nox-dev mailing list
[email protected]
http://noxrepo.org/mailman/listinfo/nox-dev_noxrepo.org
