Absolutely agreed. We should verify that full-path routing still
works as expected and then change the default configuration.
Hi Martin,
I am probably not fully appreciating the problem at hand either. My
personal experience is that installing flow entries in the reverse
direction (destination to source) and checking whether a port is
internal before updating a host's location goes a long way against
such race conditions. To some extent, hop-by-hop routing simply slows
the entire process down enough to avoid the race conditions.
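Roughly, the two safeguards I mean, as a Python-style sketch
(install_flow, is_internal_port and the other helpers are hypothetical
names, not the actual NOX API):

    # Install the path from the destination back towards the source, so
    # the ingress switch only releases the packet once every downstream
    # hop already has a flow entry. Helper names are assumptions.
    def setup_full_path(route, match, first_packet):
        # route: ordered list of (switch, out_port) hops, source first
        for switch, out_port in reversed(route):
            install_flow(switch, match, out_port)
        ingress_switch, ingress_out = route[0]
        send_packet_out(ingress_switch, first_packet, ingress_out)

    # Never learn a host location from an internal (switch-to-switch)
    # port; packet_ins there are traffic in flight, not a host move.
    def maybe_update_host_location(host, switch, port):
        if is_internal_port(switch, port):
            return
        update_host_location(host, switch, port)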
I do not have a good feel for how often this is the case in other
deployments, but hop-by-hop routing does not seem like a good default
for NOX unless the majority of users need it that way, which I would
contend they do not.
Regards
KK
On 24 February 2010 14:12, Martin Casado <[email protected]> wrote:
Again, I may not be remembering correctly, but things are complex if you
have multiple OF switches connected to a single non-OF switch. You can get
timeouts which create weirdness such as hosts appearing attached to internal ports
(causing a software flood since the location is unknown). Hop-by-hop
simplifies the forwarding logic so you're reasonably assured that packet
processing will be done on the fast path.
Hi Martin,
I do not understand. This should not make a difference, since routing
still has to calculate a route, for which some OpenFlow switches might
be connected directly. Doing it hop-by-hop does not make a
difference. What am I missing here?
Regards
KK
On 24 February 2010 13:58, Martin Casado <[email protected]> wrote:
I believe it is simpler integration with a legacy network in which
not all switches are running OF.
Hi,
What is the motivation for hop-by-hop routing? It does seem novel in
some aspects.
Regards
KK
On 24 February 2010 13:52, Martin Casado <[email protected]> wrote:
From Natasha:
"I'm wondering if maybe the server is showing up on hpsw3, and so the
packet
is first getting routed there, and then re-routed again when it reaches
hpsw1. This is probably a result of all the authenticator code
commented
out
that was making it depend on routing. It's in a couple places
(wherever
the
word "routing" is used)."
This is likely the culprit. Currently NOX 0.6 is doing hop-by-hop
routing rather than setting up the full path. Can you search for
routing_mode in authenticator.hh, authenticator_modify.cc and
authenticator_util.cc, uncomment the code, and see if this fixes the
problem?
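To illustrate the difference (a Python-style sketch with hypothetical
helper names; this is not the actual NOX code):

    # Hop-by-hop: every switch on the route punts the first packet to
    # the controller and gets its own flow_mod (one round trip per hop).
    def packet_in_hop_by_hop(switch, packet):
        out_port = next_hop(switch, packet)        # assumed route lookup
        send_flow_mod(switch, match_of(packet), out_port)
        send_packet_out(switch, packet, out_port)

    # Full path: the first packet_in triggers flow_mods on every switch
    # along the route, so downstream switches never punt to the
    # controller at all.
    def packet_in_full_path(switch, packet):
        route = compute_route(switch, packet)      # assumed route lookup
        for hop_switch, out_port in route:
            send_flow_mod(hop_switch, match_of(packet), out_port)
        first_switch, first_out = route[0]
        send_packet_out(first_switch, packet, first_out)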
I notice that the routing module behaves differently with NOX 0.6,
causing each switch en route to generate an independent packet_in,
while NOX 0.4 generates only one packet_in. This behavior incurs a
higher flow setup time.
I have a topology of client <-> hpsw3 <-> hpsw1 <-> server. I
performed a wget operation from client to server. Following is the
control traffic sent/received by the controller (timestamps are from
my tcpdump capture):
1266984715.446715 PACKET_IN hpsw3
1266984715.446895 FLOW_MOD hpsw3
1266984715.446936 PACKET_OUT hpsw3
1266984715.452756 PACKET_IN hpsw1
1266984715.452913 FLOW_MOD hpsw1
1266984715.452937 PACKET_OUT hpsw1
Ideally, I would have expected the controller to push out the second
FLOW_MOD almost immediately (and not 6 ms after the first PACKET_OUT).
When I use NOX0.4, the action sequence is:
1266987591.116579 PACKET_IN hpsw3
1266987591.116725 FLOW_MOD hpsw3
1266987591.116755 FLOW_MOD hpsw1
1266987591.116787 PACKET_OUT hpsw3
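A quick sanity check on the end-to-end setup time, computed from the
timestamps above (Python):

    # First PACKET_IN to last controller message in each capture
    nox06 = 1266984715.452937 - 1266984715.446715
    nox04 = 1266987591.116787 - 1266987591.116579
    print("NOX 0.6 flow setup: %.2f ms" % (nox06 * 1e3))  # ~6.22 ms
    print("NOX 0.4 flow setup: %.2f ms" % (nox04 * 1e3))  # ~0.21 ms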
Any idea whether a code change caused this?
Thanks
Srini.
_______________________________________________
nox-dev mailing list
[email protected]
http://noxrepo.org/mailman/listinfo/nox-dev_noxrepo.org