On Sun, 25 Jan 2015, [email protected] wrote:
Disagree. See below.
On Saturday, January 24, 2015 11:35pm, "David Lang" <[email protected]> said:
On Sat, 24 Jan 2015, [email protected] wrote:
> A side comment, meant to discourage continuing to bridge rather than route.
>
> There's no reason that the AP's cannot have different IP addresses, but a
> common ESSID. Roaming between them would be like roaming among mesh subnets.
> Assuming you are securing your APs' air interfaces using encryption over the
> air, you are already re-authenticating as you move from AP to AP. So using
> routing rather than bridging is a good idea for all the reasons that routing
> rather than bridging is better for mesh.
The problem with doing this is that all existing TCP connections will break when
you move from one AP to another. While some apps will quickly notice this and
establish new connections, many apps will not, and this will cause noticeable
disruption to the user.
Bridging allows the connections to remain intact. The wifi stack re-negotiates
the encryption, but the encapsulated IP packets don't change.
There is no reason why one cannot set up an enterprise network to support
roaming while maintaining the property that IP addresses don't change as you
roam from AP to AP. Here's a simple concept, which amounts to moving what
would be in the Ethernet bridging tables up to the IP layer.
All addresses in the enterprise are assigned from a common prefix (XXX/16 in
IPv4, perhaps). Routing in each access point is used to decide whether to
send the packet on its LAN, or to reflect it to another LAN. A node's
preferred location would be updated by the endpoint itself, sending its
current location to its current access point (via ARP or some other protocol).
The access point that hears of a new node that it can reach tells all the
other access points that the node is attached to it. An access point that
receives a packet then delivers it by looking up the destination IP address in
its local table and forwarding the packet to the access point that currently
has that address.
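
To make the concept concrete, here is a rough sketch in Python of the kind of
per-host location table each access point would keep. The class and method
names are my own invention for illustration, not code from any real access
point, and the inter-AP announcement channel is simply assumed to exist:

import ipaddress

class AccessPoint:
    def __init__(self, ap_id, peers=None):
        self.ap_id = ap_id
        self.peers = peers if peers is not None else []  # other APs in the enterprise
        self.local_hosts = set()    # hosts heard directly on this AP's LAN
        self.remote_hosts = {}      # host IP -> the AP that currently has it

    def hear_host(self, host_ip):
        # Called when this AP hears a host on its own LAN (ARP, association, ...).
        host_ip = ipaddress.ip_address(host_ip)
        self.local_hosts.add(host_ip)
        self.remote_hosts.pop(host_ip, None)
        # Tell every other AP that the host is now attached here.
        for peer in self.peers:
            peer.learn_remote(host_ip, self)

    def learn_remote(self, host_ip, ap):
        # Another AP announces that host_ip is currently attached to it.
        self.local_hosts.discard(host_ip)   # the host may have just roamed away from us
        self.remote_hosts[host_ip] = ap

    def forward(self, dst_ip):
        # Decide what to do with a packet destined for dst_ip.
        dst_ip = ipaddress.ip_address(dst_ip)
        if dst_ip in self.local_hosts:
            return "deliver on local LAN"
        ap = self.remote_hosts.get(dst_ip)
        if ap is not None:
            return "relay to " + ap.ap_id
        return "unknown host: query the other APs or drop"

ap1, ap2 = AccessPoint("ap1"), AccessPoint("ap2")
ap1.peers, ap2.peers = [ap2], [ap1]
ap1.hear_host("10.20.0.42")        # host associates with ap1
print(ap2.forward("10.20.0.42"))   # relay to ap1
ap2.hear_host("10.20.0.42")        # host roams to ap2; its IP address never changes
print(ap1.forward("10.20.0.42"))   # relay to ap2

The point of the sketch is simply that the table maps a full host address to
its current access point, which is exactly the information a bridging table
holds, just lifted up to the IP layer.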
This is far better than "bridging" at the Ethernet level from a functionality
point of view - it is using routing, not bridging. Bridging at the Ethernet
level uses Ethernet's STP feature, which doesn't work very well in collections
of wireless LANs: it is slow to recalculate when something moves, because it
was designed for the unplugging and re-plugging of actual cables when a host is
moved from one physical location to another.
IMO, Ethernet sometimes aspires to solve problems that are already well-solved
in the Internet protocols (for example, the 802.11s mess, which tries to do a
mesh entirely in the Ethernet layer and fails pretty miserably).
Of course that's only my opinion, but I think it applies to overuse of
bridging at the Ethernet layer when there are better approaches at the next
layer up.
Unless you are going to have your routing tables handle every address in your
network separately (and fix all the software that depends on broadcasts), you
are going to have trouble trying to do this at the IP layer.
The 'modern Enterprise' datacenter has lots of large machines that get sliced
into multiple virtual machines. For redundancy purposes, you want the virtual
machines used for a particular job spread across as many of these physical
machines as possible, spread around your datacenter.
Switches in this environment are becoming layer 2 routers. They are connected
together with multiple links providing redundant paths around the network. This
isn't being done with Spanning Tree, because Spanning Tree only allows one path
to be active at a time, which is inefficient and creates bottlenecks. Instead,
the switches now keep all of these links live at the same time and use
least-cost paths to route the layer 2 traffic across the switches.
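
To illustrate the least-cost idea (as opposed to Spanning Tree blocking the
redundant links), here is a small Python sketch. The switch names, topology and
link costs are invented for the example; real fabrics do this with protocols
such as TRILL or SPB, not with a script:

import heapq

def least_cost_next_hop(links, src, dst):
    # Dijkstra over the switch graph; returns (total cost, first hop out of src).
    heap = [(0, src, None)]           # (cost so far, switch, first hop taken from src)
    best = {}
    while heap:
        cost, node, first_hop = heapq.heappop(heap)
        if node in best:
            continue
        best[node] = (cost, first_hop)
        if node == dst:
            return best[dst]
        for neighbor, link_cost in links.get(node, []):
            if neighbor not in best:
                heapq.heappush(heap, (cost + link_cost, neighbor,
                                      neighbor if first_hop is None else first_hop))
    return None

# Two redundant paths from sw1 to sw4; Spanning Tree would block one of them,
# while least-cost forwarding keeps both live (and can balance across them).
links = {
    "sw1": [("sw2", 1), ("sw3", 1)],
    "sw2": [("sw1", 1), ("sw4", 1)],
    "sw3": [("sw1", 1), ("sw4", 1)],
    "sw4": [("sw2", 1), ("sw3", 1)],
}
print(least_cost_next_hop(links, "sw1", "sw4"))   # e.g. (2, 'sw2')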
It's fair to argue that this is abuse of layer 2, but when you weigh the
difficulty of changing the software operating at higher layers against the fact
that making these changes at layer 2 is completely transparent to those higher
layers, using this layer 2 capability is pragmatically the far better choice.
The Computer Scientist will cringe at the 'hacks' that this introduces, but far
more progress is made when new capabilities can be added in a way that's
transparent to other layers of the stack than when they require major changes
to how things work.
The software layer is the worst place to try to force fundamental changes. You
would be horrified to learn how old some of the software running major jobs at
large companies is. Even if the software is in continuous development, the
age of the core software frequently shows.
David Lang
_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel