Re: [pox-dev] l2_multi explanation
We assume that we're going to want to know paths from many sources to many destinations, that establishing paths is far more common than topology changes, and that we want to minimize path setup time. Under these assumptions, a reasonable approach is to precompute all paths so that path establishment is a simple lookup rather than actually running a path-finding algorithm. That sounds more like all-pairs shortest paths than the single-source shortest paths that Dijkstra's computes.

A version of Dijkstra's that handles all pairs well (e.g., Johnson's algorithm with a Fibonacci heap) is *considerably* more complex than Floyd-Warshall. But really, neither of these is probably the best choice; you might be better off using one of, e.g., Demetrescu's algorithms and updating it dynamically (also more complex than Floyd-Warshall). Of course, if performance were a major concern, you probably wouldn't be implementing it in Python to begin with, and performance certainly isn't important here, since l2_multi is just an example. The code is meant to be relatively straightforward, and Floyd-Warshall is a nice, simple algorithm. And you can replace l2_multi's path-finding by replacing a single function.

-- Murphy

On Dec 19, 2013, at 8:12 PM, Dhanu dhan...@gmail.com wrote:

Mr Murphy, is there any explanation why l2_multi uses the Floyd-Warshall algorithm instead of Dijkstra's or any other shortest-path algorithm? Thanks.
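The precompute-then-lookup trade-off described above can be sketched in a few lines of plain Python. This is not l2_multi's actual code; the switch names, edge costs, and function names are invented for illustration. Floyd-Warshall fills in an all-pairs distance table and a "next hop" table in one pass, after which reading off any path is just repeated dictionary lookups:

```python
# Sketch: all-pairs shortest paths via Floyd-Warshall, plus a next-hop
# table so any path can later be read off with simple lookups.
# Switch names and costs below are hypothetical, not from l2_multi.

def floyd_warshall(nodes, edges):
    """edges: dict mapping (u, v) -> cost for each direct link."""
    INF = float('inf')
    dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    nxt = {}
    for (u, v), cost in edges.items():
        dist[u, v] = cost
        nxt[u, v] = v
    for k in nodes:                      # standard triple loop, k outermost
        for i in nodes:
            for j in nodes:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
                    nxt[i, j] = nxt[i, k]   # first hop toward j goes via k
    return dist, nxt

def path(nxt, src, dst):
    """Path establishment after precomputation: just follow next hops."""
    if src == dst:
        return [src]
    if (src, dst) not in nxt:
        return None                      # partitioned: no path exists
    hops = [src]
    while src != dst:
        src = nxt[src, dst]
        hops.append(src)
    return hops

edges = {('s1', 's2'): 1, ('s2', 's1'): 1,
         ('s2', 's3'): 1, ('s3', 's2'): 1,
         ('s1', 's3'): 5, ('s3', 's1'): 5}
dist, nxt = floyd_warshall(['s1', 's2', 's3'], edges)
print(path(nxt, 's1', 's3'))  # ['s1', 's2', 's3'] -- cheaper than the direct 5-cost link
```

The O(n^3) precomputation runs once per topology change; every path request afterward is linear in the path length, which matches the assumption that path setup is far more frequent than topology changes.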
Re: [pox-dev] l2_multi explanation
Hello, very helpful explanation. I would like to ask you about the barrier command and its reply: what is the benefit of using it in l2_multi? Also, what is the benefit of clearing the flow-table contents on receiving a LinkEvent? Best regards, Amer
Re: [pox-dev] l2_multi explanation
The barrier sort of serves as a confirmation message. When you send a switch a flow-mod, you ordinarily have no way of knowing when or whether it has taken effect, since there's no positive acknowledgement. We don't even know if it's valid. The switch might send an error, but we have no idea *when* it will send an error. However, according to the specification, when a switch sends a barrier reply, everything before the barrier should have been processed -- the table entry should actually be active, and if there was an error, the error messages should have been sent *before* we get the barrier reply.

The reason for clearing the flow tables when a link has changed is that the link event signifies that the topology has changed. This may mean that a path which was previously valid (and which we are using!) is no longer valid. It may also mean that a path which was previously the shortest is no longer the shortest. If l2_multi kept track of all the paths it had installed and which were active, etc., it could figure this out and fix just the particular paths that need adjusting. But it doesn't, since that would considerably complicate what is supposed to be a relatively straightforward example. So it just clears everything and lets everything get rebuilt from scratch.

-- Murphy
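The bookkeeping behind "send flow-mods plus a barrier to every switch, then release the waiting packet once every barrier reply is in" can be sketched without the POX API. This is a simplified stand-in, not l2_multi's actual code: the class name echoes l2_multi's WaitingPath, but the xid counter, method names, and switch names here are invented, and the real flow-mod/barrier sends are only indicated in comments:

```python
# Simplified sketch of barrier bookkeeping (hypothetical, not the POX API).
# Each switch on the path gets its flow-mods followed by a barrier request
# with a fresh xid; the path counts as ready only when every xid has been
# answered by a barrier reply.

import itertools

_xids = itertools.count(1)  # fresh transaction ids for barrier requests

class WaitingPath(object):
    def __init__(self, switches):
        # Map each outstanding barrier xid to the switch it was sent to.
        self.outstanding = {}
        for sw in switches:
            xid = next(_xids)
            self.outstanding[xid] = sw
            # Real code would do roughly:
            #   sw.connection.send(flow_mod)
            #   sw.connection.send(barrier_request_with(xid))
        self.ready = False

    def notify_barrier_reply(self, xid):
        """Called once per barrier reply; ready when none are left."""
        self.outstanding.pop(xid, None)
        if not self.outstanding:
            self.ready = True  # real code re-sends the buffered packet here

wp = WaitingPath(['s1', 's2', 's3'])
for xid in list(wp.outstanding):   # simulate replies arriving
    wp.notify_barrier_reply(xid)
print(wp.ready)  # True
```

The point of keying on xids is exactly the confirmation semantics described above: a barrier reply for a given xid means every message sent before that barrier on that connection has been processed, so once all xids are accounted for, the whole path should be active.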
[pox-dev] l2_multi explanation
Hello, I would like to ask you if there is a document that helps me understand the l2_multi code. Best regards, Amer
Re: [pox-dev] l2_multi explanation
There isn't. In short, it's like this:

The discovery component raises LinkEvents when links change. l2_multi keeps an adjacency map. The value of adjacency[sw1][sw2] is the port which connects sw1 to sw2. This is updated by the l2_multi class, which watches LinkEvents. LinkEvents also cause all switches to have their tables cleared.

The Switch class watches PacketIn events. When one occurs, it learns the source MAC (stored in mac_map). In the usual case, this should only be from a port which *doesn't* connect two switches (and therefore won't be in the adjacency map). If we know the destination, we install a path from the current switch all the way to its final destination (if we don't, as usual, we flood).

To install a path, we find the shortest path using the Floyd-Warshall algorithm and then fill in the port numbers from the adjacency map. We send the flow-mods to all the switches on the path, followed by a barrier. When we have gotten the barrier reply from each switch, the entire path should be ready, so we send the waiting packet (that is, the packet which caused us to want to install the flow in the first place).

Most of the rest of the code handles exceptional cases like multicasts, partitions, etc. Don't look to l2_multi as an example of a good way to write network-wide forwarding code, though. It's just a 500-line example which is meant to be relatively easy to understand.

-- Murphy
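The two tables described above, and the "fill in the port numbers" step, can be sketched in plain Python. This is an illustrative model, not l2_multi's actual code: the helper names, switch identifiers, port numbers, and MAC addresses are all invented.

```python
# Hypothetical sketch of l2_multi's two tables:
#   adjacency[sw1][sw2] -> port on sw1 that leads to sw2 (from LinkEvents)
#   mac_map[mac]        -> (switch, port) where that host attaches
# All names and numbers below are made up for illustration.

adjacency = {}
mac_map = {}

def link_up(sw1, port1, sw2, port2):
    """What a LinkEvent update does to the adjacency map."""
    adjacency.setdefault(sw1, {})[sw2] = port1
    adjacency.setdefault(sw2, {})[sw1] = port2

def learn(mac, sw, port):
    """Learn a source MAC, but only from non-switch-to-switch ports."""
    if port not in adjacency.get(sw, {}).values():
        mac_map[mac] = (sw, port)

def ports_for_path(switch_path, final_port):
    """Turn a switch-level shortest path into (switch, out_port) hops."""
    hops = []
    for a, b in zip(switch_path, switch_path[1:]):
        hops.append((a, adjacency[a][b]))      # inter-switch hop
    hops.append((switch_path[-1], final_port))  # last hop: host port
    return hops

link_up('s1', 2, 's2', 1)
link_up('s2', 2, 's3', 1)
learn('00:00:00:00:00:aa', 's1', 1)  # host port: learned
learn('00:00:00:00:00:bb', 's2', 2)  # inter-switch port: ignored
print(ports_for_path(['s1', 's2', 's3'], 3))
# [('s1', 2), ('s2', 2), ('s3', 3)]
```

The output of ports_for_path is exactly what the flow-mods need: for each switch on the path, the output port toward the next hop, ending with the port where the destination host was learned in mac_map.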