Re: [PATCH 2/2] drivers: add vpls support

2017-08-12 Thread Roopa Prabhu
On Sat, Aug 12, 2017 at 6:40 AM, Amine Kherbouche wrote:
>
>
> On 11/08/2017 17:14, Roopa Prabhu wrote:
>>
>> On Fri, Aug 11, 2017 at 5:55 AM, David Lamparter wrote:
>>>
>>> On Thu, Aug 10, 2017 at 10:28:37PM +0200, Amine Kherbouche wrote:

>>>> This commit introduces support for a VPLS virtual device, which allows
>>>> performing L2VPN multipoint-to-multipoint communication over an MPLS PSN.
>>>>
>>>> The VPLS device encapsulates a received Ethernet frame in an MPLS packet
>>>> and sends it to the output device. In the other direction, when a
>>>> correctly configured MPLS packet is received, the matched MPLS route
>>>> calls the VPLS handler function, which pulls off the MPLS header and
>>>> sends the frame back to the entry point via netif_rx().
>>>>
>>>> Two functions, mpls_entry_encode() and mpls_output_possible(), are
>>>> exported from mpls/internal.h so they can be used inside the vpls driver.
>>>>
>>>> Signed-off-by: Amine Kherbouche
>>
>> [snip]
>>
>>> [...]

>>>> +union vpls_nh {
>>>> +	struct in6_addr		addr6;
>>>> +	struct in_addr		addr;
>>>> +};
>>>> +
>>>> +struct vpls_dst {
>>>> +	struct net_device	*dev;
>>>> +	union vpls_nh		addr;
>>>> +	u32			label_in, label_out;
>>>> +	u32			id;
>>>> +	u16			vlan_id;
>>>
>>> I looked at VLAN support and decided against it because the bridge layer
>>> can handle this perfectly fine by using the bridge's vlan support to tag
>>> a port's pvid.
>>
>> Yes, agreed, there is no need for vlan here. The bridge can be
>> configured with the required vlan mapping on the vpls port.
>
> What if the output device cannot handle vlan encapsulation? In the
> example configuration in my cover letter, I sent the vpls packets over
> a plain physical net device (neither a bridge nor a vlan port).

There are already multiple ways vlan encap is added today, e.g. a vlan
device, under a bridge, using tc, etc. I don't think every driver
should carry vlan encap info. See vxlan as an example: it does not.
You can use a bridge or tc for the vlan-to-vni mapping. You will need a
bridge anyway for the forwarding db, stp, etc. in such deployments.

We can add vlan in the future if it becomes necessary. I don't see a need today.
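
For concreteness, the encap direction described in the quoted commit
message, built on the exported mpls_entry_encode(), might look roughly
like the sketch below. The name vpls_xmit and the priv->dst fields
follow the quoted excerpts; the body is illustrative rather than the
patch's actual code, and nexthop/neighbour resolution toward dst.addr
via via_table is omitted:

#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <linux/mpls.h>		/* MPLS_HLEN, mpls_hdr() */
/* mpls_entry_encode() comes from net/mpls/internal.h once exported. */

static netdev_tx_t vpls_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct vpls_priv *priv = netdev_priv(dev);
	struct mpls_shim_hdr *hdr;

	if (skb_cow_head(skb, MPLS_HLEN))
		goto drop;

	/* Push one MPLS shim header in front of the Ethernet frame. */
	skb_push(skb, MPLS_HLEN);
	skb_reset_network_header(skb);

	/* Encode label_out/ttl; traffic class 0, bottom of stack. */
	hdr = mpls_hdr(skb);
	*hdr = mpls_entry_encode(priv->dst.label_out, priv->dst.ttl, 0, true);

	skb->dev = priv->dst.dev;
	skb->protocol = htons(ETH_P_MPLS_UC);
	dev_queue_xmit(skb);
	return NETDEV_TX_OK;

drop:
	kfree_skb(skb);
	return NETDEV_TX_OK;
}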


Re: [PATCH 2/2] drivers: add vpls support

2017-08-12 Thread Amine Kherbouche



On 11/08/2017 17:14, Roopa Prabhu wrote:

> On Fri, Aug 11, 2017 at 5:55 AM, David Lamparter wrote:

>> On Thu, Aug 10, 2017 at 10:28:37PM +0200, Amine Kherbouche wrote:

>>> This commit introduces support for a VPLS virtual device, which allows
>>> performing L2VPN multipoint-to-multipoint communication over an MPLS PSN.
>>>
>>> The VPLS device encapsulates a received Ethernet frame in an MPLS packet
>>> and sends it to the output device. In the other direction, when a
>>> correctly configured MPLS packet is received, the matched MPLS route
>>> calls the VPLS handler function, which pulls off the MPLS header and
>>> sends the frame back to the entry point via netif_rx().
>>>
>>> Two functions, mpls_entry_encode() and mpls_output_possible(), are
>>> exported from mpls/internal.h so they can be used inside the vpls driver.
>>>
>>> Signed-off-by: Amine Kherbouche

> [snip]


>> [...]

>>> +union vpls_nh {
>>> +	struct in6_addr		addr6;
>>> +	struct in_addr		addr;
>>> +};
>>> +
>>> +struct vpls_dst {
>>> +	struct net_device	*dev;
>>> +	union vpls_nh		addr;
>>> +	u32			label_in, label_out;
>>> +	u32			id;
>>> +	u16			vlan_id;

>> I looked at VLAN support and decided against it because the bridge layer
>> can handle this perfectly fine by using the bridge's vlan support to tag
>> a port's pvid.

> Yes, agreed, there is no need for vlan here. The bridge can be
> configured with the required vlan mapping on the vpls port.

What if the output device cannot handle vlan encapsulation? In the
example configuration in my cover letter, I sent the vpls packets over
a plain physical net device (neither a bridge nor a vlan port).




>>> +	u8			via_table;
>>> +	u8			flags;
>>> +	u8			ttl;
>>> +};

>> [...]

>>> +struct vpls_priv {
>>> +	struct net		*encap_net;
>>> +	struct vpls_dst		dst;
>>> +};
>>> +
>>> +static struct nla_policy vpls_policy[IFLA_VPLS_MAX + 1] = {
>>> +	[IFLA_VPLS_ID]		= { .type = NLA_U32 },
>>> +	[IFLA_VPLS_IN_LABEL]	= { .type = NLA_U32 },
>>> +	[IFLA_VPLS_OUT_LABEL]	= { .type = NLA_U32 },
>>> +	[IFLA_VPLS_OIF]		= { .type = NLA_U32 },
>>> +	[IFLA_VPLS_TTL]		= { .type = NLA_U8  },
>>> +	[IFLA_VPLS_VLANID]	= { .type = NLA_U8  },
>>> +	[IFLA_VPLS_NH]		= { .type = NLA_U32 },
>>> +	[IFLA_VPLS_NH6]		= { .len = sizeof(struct in6_addr) },
>>> +};

>> The original patchset was point-to-multipoint in a single netdev, and
>> had some starts on optimized multicast support (which, admittedly, is a
>> bit of a fringe thing, but still.)


> I had been thinking about this as a single netdevice as well... which
> can work with the bridge driver using the per-vlan dst_metadata infra
> (similar to vxlan: a single device with a per-vlan-to-vni mapping).
>
> Multiple netdevices, one per vlan/vpls-id, will work as well... but
> starting with a single netdev will be better (it helps with scaling).

That's why I added the vpls id: to support, in the future, many vpls
tunnels on a single device. The vpls id lets us keep track of them,
working like the vni in vxlan. (vpls lwtunnels are on the TODO list.)
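
As a concrete illustration of how that attribute set would be consumed,
a newlink handler could parse it roughly as follows. This is a
hypothetical sketch against the vpls_policy quoted above, not the
patch's actual code; the error handling in particular is made up:

static int vpls_newlink(struct net *net, struct net_device *dev,
			struct nlattr *tb[], struct nlattr *data[],
			struct netlink_ext_ack *extack)
{
	struct vpls_priv *priv = netdev_priv(dev);

	/* Both labels are mandatory for a pseudowire. */
	if (!data || !data[IFLA_VPLS_IN_LABEL] || !data[IFLA_VPLS_OUT_LABEL])
		return -EINVAL;

	priv->dst.label_in  = nla_get_u32(data[IFLA_VPLS_IN_LABEL]);
	priv->dst.label_out = nla_get_u32(data[IFLA_VPLS_OUT_LABEL]);

	if (data[IFLA_VPLS_ID])
		priv->dst.id = nla_get_u32(data[IFLA_VPLS_ID]);
	if (data[IFLA_VPLS_TTL])
		priv->dst.ttl = nla_get_u8(data[IFLA_VPLS_TTL]);

	if (data[IFLA_VPLS_OIF]) {
		u32 oif = nla_get_u32(data[IFLA_VPLS_OIF]);

		/* RTNL is held here, so the lockless lookup is fine. */
		priv->dst.dev = __dev_get_by_index(net, oif);
		if (!priv->dst.dev)
			return -ENODEV;
	}

	return register_netdevice(dev);
}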


Re: [PATCH 2/2] drivers: add vpls support

2017-08-11 Thread Roopa Prabhu
On Fri, Aug 11, 2017 at 5:55 AM, David Lamparter wrote:
> On Thu, Aug 10, 2017 at 10:28:37PM +0200, Amine Kherbouche wrote:
>> This commit introduces support for a VPLS virtual device, which allows
>> performing L2VPN multipoint-to-multipoint communication over an MPLS PSN.
>>
>> The VPLS device encapsulates a received Ethernet frame in an MPLS packet
>> and sends it to the output device. In the other direction, when a
>> correctly configured MPLS packet is received, the matched MPLS route
>> calls the VPLS handler function, which pulls off the MPLS header and
>> sends the frame back to the entry point via netif_rx().
>>
>> Two functions, mpls_entry_encode() and mpls_output_possible(), are
>> exported from mpls/internal.h so they can be used inside the vpls driver.
>>
>> Signed-off-by: Amine Kherbouche
>

[snip]

> [...]
>> +union vpls_nh {
>> +	struct in6_addr		addr6;
>> +	struct in_addr		addr;
>> +};
>> +
>> +struct vpls_dst {
>> +	struct net_device	*dev;
>> +	union vpls_nh		addr;
>> +	u32			label_in, label_out;
>> +	u32			id;
>> +	u16			vlan_id;
>
> I looked at VLAN support and decided against it because the bridge layer
> can handle this perfectly fine by using the bridge's vlan support to tag
> a port's pvid.

Yes, agreed, there is no need for vlan here. The bridge can be
configured with the required vlan mapping on the vpls port.



>
>> +	u8			via_table;
>> +	u8			flags;
>> +	u8			ttl;
>> +};
>
> [...]
>> +struct vpls_priv {
>> +	struct net		*encap_net;
>> +	struct vpls_dst		dst;
>> +};
>> +
>> +static struct nla_policy vpls_policy[IFLA_VPLS_MAX + 1] = {
>> +	[IFLA_VPLS_ID]		= { .type = NLA_U32 },
>> +	[IFLA_VPLS_IN_LABEL]	= { .type = NLA_U32 },
>> +	[IFLA_VPLS_OUT_LABEL]	= { .type = NLA_U32 },
>> +	[IFLA_VPLS_OIF]		= { .type = NLA_U32 },
>> +	[IFLA_VPLS_TTL]		= { .type = NLA_U8  },
>> +	[IFLA_VPLS_VLANID]	= { .type = NLA_U8  },
>> +	[IFLA_VPLS_NH]		= { .type = NLA_U32 },
>> +	[IFLA_VPLS_NH6]		= { .len = sizeof(struct in6_addr) },
>> +};
>
> The original patchset was point-to-multipoint in a single netdev, and
> had some starts on optimized multicast support (which, admittedly, is a
> bit of a fringe thing, but still.)
>

I had been thinking about this as a single netdevice as well... which
can work with the bridge driver using the per-vlan dst_metadata infra
(similar to vxlan: a single device with a per-vlan-to-vni mapping).

Multiple netdevices, one per vlan/vpls-id, will work as well... but
starting with a single netdev will be better (it helps with scaling).
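
To sketch what the single-netdev, metadata-based model might look like
on transmit, borrowing vxlan's collect-metadata pattern:
skb_tunnel_info() and struct ip_tunnel_info are existing infrastructure,
but reusing tun_id to carry the outgoing label, and the vpls_xmit_one()
helper, are assumptions made purely for illustration:

static netdev_tx_t vpls_xmit_md(struct sk_buff *skb, struct net_device *dev)
{
	/* Per-packet tunnel metadata, attached per vlan by the bridge. */
	struct ip_tunnel_info *info = skb_tunnel_info(skb);
	u32 label_out;

	if (!info) {
		kfree_skb(skb);
		return NETDEV_TX_OK;
	}

	/* Assumed convention: the bridge's per-vlan tunnel id carries
	 * the outgoing pseudowire label. */
	label_out = (u32)be64_to_cpu(info->key.tun_id);

	/* vpls_xmit_one() (hypothetical) pushes the MPLS header for
	 * label_out and transmits, as in the per-device case. */
	return vpls_xmit_one(skb, dev, label_out);
}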


Re: [PATCH 2/2] drivers: add vpls support

2017-08-11 Thread David Lamparter
On Thu, Aug 10, 2017 at 10:28:37PM +0200, Amine Kherbouche wrote:
> This commit introduces support for a VPLS virtual device, which allows
> performing L2VPN multipoint-to-multipoint communication over an MPLS PSN.
>
> The VPLS device encapsulates a received Ethernet frame in an MPLS packet
> and sends it to the output device. In the other direction, when a
> correctly configured MPLS packet is received, the matched MPLS route
> calls the VPLS handler function, which pulls off the MPLS header and
> sends the frame back to the entry point via netif_rx().
>
> Two functions, mpls_entry_encode() and mpls_output_possible(), are
> exported from mpls/internal.h so they can be used inside the vpls driver.
>
> Signed-off-by: Amine Kherbouche

This code is derivative of code that I authored; while you have
significantly changed it, I'd appreciate it if you kept a hint of that.

Unfortunately, I also have some concerns with this patch...

> +#define VPLS_MAX_ID  256 /* Max VPLS WireID (arbitrary) */

There is no point in keeping a VPLS wire ID.  Again, this was in the
README that accompanied my patchset:

- I made a design mistake with the wire ID.  It's simply not needed.  A
  pseudowire can be identified by its incoming label.  There is also some
  really ugly code keeping an array of wires...

I don't even see where you're using the wire ID anymore at this point;
it might be a dead leftover from my code.
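
To make that concrete, storage keyed directly by the incoming label
needs no wire-ID array at all; a lookup shaped like this would do (all
names here are hypothetical, not from either patchset):

#include <linux/hashtable.h>

struct vpls_wire {
	struct hlist_node	node;
	u32			label_in;	/* key: incoming MPLS label */
	struct net_device	*dev;		/* pseudowire netdev */
};

static DEFINE_HASHTABLE(vpls_wires, 8);

static struct vpls_wire *vpls_wire_lookup(u32 label_in)
{
	struct vpls_wire *w;

	hash_for_each_possible(vpls_wires, w, node, label_in)
		if (w->label_in == label_in)
			return w;
	return NULL;
}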

[...]
> +union vpls_nh {
> +	struct in6_addr		addr6;
> +	struct in_addr		addr;
> +};
> +
> +struct vpls_dst {
> +	struct net_device	*dev;
> +	union vpls_nh		addr;
> +	u32			label_in, label_out;
> +	u32			id;
> +	u16			vlan_id;

I looked at VLAN support and decided against it because the bridge layer
can handle this perfectly fine by using the bridge's vlan support to tag
a port's pvid.

> +	u8			via_table;
> +	u8			flags;
> +	u8			ttl;
> +};

[...]
> +struct vpls_priv {
> +	struct net		*encap_net;
> +	struct vpls_dst		dst;
> +};
> +
> +static struct nla_policy vpls_policy[IFLA_VPLS_MAX + 1] = {
> +	[IFLA_VPLS_ID]		= { .type = NLA_U32 },
> +	[IFLA_VPLS_IN_LABEL]	= { .type = NLA_U32 },
> +	[IFLA_VPLS_OUT_LABEL]	= { .type = NLA_U32 },
> +	[IFLA_VPLS_OIF]		= { .type = NLA_U32 },
> +	[IFLA_VPLS_TTL]		= { .type = NLA_U8  },
> +	[IFLA_VPLS_VLANID]	= { .type = NLA_U8  },
> +	[IFLA_VPLS_NH]		= { .type = NLA_U32 },
> +	[IFLA_VPLS_NH6]		= { .len = sizeof(struct in6_addr) },
> +};

The original patchset was point-to-multipoint in a single netdev, and
had some starts on optimized multicast support (which, admittedly, is a
bit of a fringe thing, but still.)

This patch implements a single pseudowire (so the name is kind of
misleading; it's a VLL / VPWS, multiple of which you'd use to build a
full VPLS).  However, you are now missing split-horizon ethernet
bridging support.  How is that done here?


-David


P.S.: for anyone curious, the original patchset is at
https://github.com/eqvinox/vpls-linux-kernel
I didn't go ahead with posting it because I felt there were several
things where I'd want to change the design, hence this README:
https://github.com/eqvinox/vpls-linux-kernel/commit/81c809d6f9c40c0332098e13fcad65144aa51795
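
For completeness, the decap direction described in the commit message,
shaped as a minimal sketch: a hypothetical vpls_rcv() invoked for the
matched MPLS route, which strips the shim header and re-injects the
inner Ethernet frame. The patch's actual handler may differ:

static int vpls_rcv(struct sk_buff *skb, struct net_device *vpls_dev)
{
	/* Need the shim header plus an inner Ethernet header. */
	if (!pskb_may_pull(skb, MPLS_HLEN + ETH_HLEN))
		goto drop;

	/* Strip the MPLS shim header. */
	skb_pull(skb, MPLS_HLEN);

	/* eth_type_trans() consumes the inner Ethernet header and makes
	 * the frame look as if it had been received on vpls_dev. */
	skb->protocol = eth_type_trans(skb, vpls_dev);

	return netif_rx(skb);

drop:
	kfree_skb(skb);
	return NET_RX_DROP;
}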