KK,
Here is what I have seen with my particular switch configuration.
For the Asterix demo, a tunnel machine was set up with one
interface connected to the Princeton campus network and another
interface connected to an HP 5406zl OpenFlow switch. The OpenFlow
switch port was manually configured to be untagged on VLAN 100 in order
to segregate Asterix traffic from all other traffic on the switch.
There were also 2 client machines connected to other untagged VLAN 100
ports on the switch.
One of the types of packets to come out of the tunnel and ingress
to the switch was LLDP traffic from some device or controller at
Stanford (at least I *believe* it was from Stanford, as that was where
the tunnel terminated). When this frame entered the switch, I believe
that it was "tagged" for internal processing purposes as being on VLAN
100. I think that this behavior is correct in that the switch needs to
be able to forward the packet appropriately in the absence of an
OpenFlow controller. The packet_in message frame data for this LLDP
packet is therefore an 802.1Q-tagged frame. Even though the frame was
untagged when it entered the switch port, it does not seem unreasonable
to me that the switch would add the tag for internal processing,
resulting in a tagged frame going to the controller.
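As a concrete illustration of what the controller would see in that case, here is a minimal sketch that decodes the 802.1Q tag of a raw frame to recover the VLAN the switch stamped on it (e.g. VLAN 100 in this setup). The helper name is hypothetical; NOX's own packet classes do this parsing for real.

```python
def vlan_id(frame: bytes):
    """Return the VLAN ID of an 802.1Q-tagged Ethernet frame, or None if untagged.

    Bytes 12-13 hold the ethertype; 0x8100 means an 802.1Q tag follows,
    and the VLAN ID is the low 12 bits of the 2-byte TCI at bytes 14-15.
    """
    if len(frame) < 16 or frame[12:14] != b"\x81\x00":
        return None
    tci = int.from_bytes(frame[14:16], "big")
    return tci & 0x0FFF
```

So a frame that entered an untagged VLAN 100 port but was tagged internally would show `vlan_id(...) == 100` in the packet_in data.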
I would expect that different switch manufacturers might treat
frame data destined for packet_in messages differently. For example,
they might strip any internal tagging for packets entering on untagged
ports, while retaining 802.1Q tagging for frames that enter on tagged
ports. Or, they might tag all frames for ease and consistency of
internal processing, and always send tagged frames to the OpenFlow
controller, regardless of whether they entered on tagged or untagged ports.
My feeling is that it is always best to follow the old TCP
robustness principle: be conservative in what you do and be liberal in
what you accept from others. So, unless there is something in the
OpenFlow specification that states that switches must send frames within
a packet_in message *exactly* as they entered a switch port, any process
that examines the frame data should be able to deal with 802.1Q-tagged
frames.
In the case of the nox discovery module, I assume that would mean
checking to see if the frame data it received is 802.1Q tagged, and if
so, *then* checking to see if the "payload" portion of the frame is
LLDP. If the frame data is not tagged, the module would then behave
exactly as it does now. Does this make sense?
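That check could be sketched roughly as follows, working on raw frame bytes (a hypothetical standalone helper; the actual discovery module uses NOX's packet classes rather than raw bytes):

```python
VLAN_TYPE = 0x8100  # 802.1Q tag ethertype
LLDP_TYPE = 0x88cc  # 802.1AB LLDP ethertype

def is_lldp(frame: bytes) -> bool:
    """Return True if the Ethernet frame carries LLDP, 802.1Q-tagged or not."""
    if len(frame) < 14:
        return False
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == VLAN_TYPE:
        # Skip the 4-byte 802.1Q tag and re-read the inner ethertype.
        if len(frame) < 18:
            return False
        ethertype = int.from_bytes(frame[16:18], "big")
    return ethertype == LLDP_TYPE
```

Untagged LLDP frames take the same path as today; tagged ones get unwrapped first instead of tripping the assertion.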
/Chris
On 11/12/2010 10:18 AM, kk yap wrote:
It is not clear to me what this patch is. From what I understand, it
is needed because some switches do not completely strip VLAN tags. Is
that what this is? I can't really comment on anything I do not
understand, beyond saying that I am unclear on what this is.
As for patches, I am particularly touchy on non-OpenFlow compliant and
switch-specific ones. But I do not speak for everyone that has commit
rights to NOX. Beyond that, we are generally happy to take patches, especially
documented ones.
Regards
KK
PS: it might be good to address this to the NOX committers on nox-dev
and not openflow-discuss.
On 9 November 2010 13:46, Rob Sherwood <rob.sherw...@stanford.edu> wrote:
nox-dev committers:
is there any reason why this patch shouldn't be pushed into the
repository? IIRC, this is not the first time Srini has proposed this
fix.
- Rob
On Tue, Nov 9, 2010 at 1:41 PM, Srini Seetharaman <seeth...@stanford.edu> wrote:
Please try the attached patch. This pays attention to whether the LLDP
packet has a VLAN tag in the pkt_in and handles it correctly.
Hopefully, after git-apply of this patch, you shouldn't see those
errors.
On Tue, Nov 9, 2010 at 1:28 PM, Srini Seetharaman <seeth...@stanford.edu> wrote:
Hi Chris
I assume all packets sent/received by the switch are VLAN tagged in
your case? Could you please mail us a copy of the control traffic (so
that we can look at the pkt_in for the LLDP msg). Ideally, the "match"
function in discovery.py should've already set a condition that only
packets with DL_TYPE of LLDP_TYPE will be sent to
lldp_input_handler(). So, it is unclear why this happened.
Thanks
Srini.
On Tue, Nov 9, 2010 at 12:16 PM, Christopher J. Tengi
<te...@cs.princeton.edu> wrote:
Greetings All,
In my efforts to get past the limitations of the current release of snac,
I have decided to jump in with both feet to nox destiny territory and try to
get both it and nox-gui.py running. My goal is to have nox make 3 HP
switches using VLAN aggregation mode with tagged uplink ports act as
learning switches. Currently, the switches are in a star topology, with the
central switch running non-OpenFlow firmware, so there are no
OpenFlow-to-OpenFlow switch links. I cloned nox from noxrepo.org and
checked out the destiny branch. I built it with configure arguments of
"--prefix=/var/local --with-python=yes --with-gnu-ld" and both the "make"
and "make check" succeeded.
Based on various things I've read, and with a self-proclaimed limited
understanding of how some of this stuff glues together, I started nox with
these commands:
cd /var/local/src/nox/build/src
./nox_core -i ptcp:6633 pyswitch discovery lavi monitoring switchstats
topology
'lsof' commands on both the flowvisor and nox machines, as well as fvctl,
tell me that I have a connection between them for each of the 3 DPIDs in
play. And while the laptop I am connecting to one of the switches appears
to be working, for the most part, I get loads and loads of the following
sent to the xterm window where I am running nox:
00925|pyrt|ERR:unable to invoke a Python event handler:
Traceback (most recent call last):
  File "./nox/lib/util.py", line 116, in f
    event.total_len, buffer_id, packet)
  File "./nox/netapps/discovery/discovery.py", line 163, in <lambda>
    discovery.lldp_input_handler(self,dp,inport,reason,len,bid,packet),
  File "./nox/netapps/discovery/discovery.py", line 250, in lldp_input_handler
    assert (packet.type == ethernet.LLDP_TYPE)
AssertionError
I suspect that any client-side problems I am currently having are due to
the lack of capabilities of the example pyswitch code, and I plan to
investigate that further. However, with all of the errors streaming by
concerning discovery, the logging is a bit too loud to see anything else
that might be of use. I do see a number of other messages fly by amongst
all of the python errors, and I figure that if I can get rid of the type of
error listed above, I might actually be able to look into the other errors.
So, given that I'm not a python programmer, can anybody give me a clue as
to what might be going on here? Should I run tcpdump and grab a .pcap file
or 2? Once I get past all of this, I also hope to get started with
nox-gui.py. However, I suspect that it will never show me any topology
information until nox_core is happy with discovery.
Thanks,
/Chris
_______________________________________________
openflow-discuss mailing list
openflow-disc...@lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/openflow-discuss
_______________________________________________
nox-dev mailing list
nox-dev@noxrepo.org
http://noxrepo.org/mailman/listinfo/nox-dev_noxrepo.org