Re: pair(4) (was: connect routing domains on layer 2)
On 10/24/15 06:46, Reyk Floeter wrote:
> vether doesn't help as it is not transmitting any traffic.
>
> in other words, "vether is a bridge endpoint" "pair is a bridge link"

This may be a dead topic, but doesn't bridge_output() transmit for
vether(4)?  Or am I missing the point entirely?

pair(4) does look very useful as a "cable".  I just wonder why bridge(4)
doesn't act more like a physical switch, which would accept the single
endpoint of a vether(4).

Geoff Steckel
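For reference, the vether-as-bridge-endpoint setup being discussed looks roughly like this; a minimal sketch, where the interface names and the address are illustrative, not taken from the thread:

```shell
# create a virtual Ethernet endpoint and give it an address
ifconfig vether0 create
ifconfig vether0 inet 10.0.0.1/24 up

# make it a member of a bridge together with a physical uplink;
# traffic bridged via em0 can now reach the vether0 endpoint
ifconfig bridge0 add vether0 add em0 up
```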
Re: pair(4) (was: connect routing domains on layer 2)
> On 10/24/15 06:46, Reyk Floeter wrote:
> > vether doesn't help as it is not transmitting any traffic.
> >
> > in other words, "vether is a bridge endpoint" "pair is a bridge link"
>
> This may be a dead topic, but doesn't bridge_output() transmit for
> vether(4)?
> Or am I missing the point entirely?
>
> pair(4) does look very useful as a "cable".  I just wonder why bridge(4)
> doesn't act more like a physical switch, which would accept the single
> endpoint of a vether(4)

That is answered in the manual page.
Re: pair(4) (was: connect routing domains on layer 2)
Reyk Floeter wrote:
> Hi,
>
> as requested by Theo and discussed with many, the following diff moves
> it into a new driver.  This also made it possible to improve the logic
> of link states related to the connection (as discussed with Claudio).
>
> The new driver is called pair(4).
>
> # ifconfig pair1 rdomain 1 10.1.1.1/24 up
> # ifconfig pair2 rdomain 2 10.1.1.2/24 up
> # ifconfig pair1 patch pair2
> # route -T 1 exec ping 10.1.1.2
> # ifconfig pair1 -patch
>
> manpages and documentation can be improved, but I'd like to continue
> in the tree if there are no other serious concerns.

We are doing this because we don't want a bridge of vethers?  OK, but
what if I want to connect three rdomains together?  I can put any number
of vethers into a bridge, but pair would seem limited to exactly two
interfaces.
Re: pair(4) (was: connect routing domains on layer 2)
On Sat, Oct 24, 2015 at 06:12:44AM -0400, Ted Unangst wrote:
> Reyk Floeter wrote:
> > Hi,
> >
> > as requested by Theo and discussed with many, the following diff moves
> > it into a new driver.  This also made it possible to improve the logic
> > of link states related to the connection (as discussed with Claudio).
> >
> > The new driver is called pair(4).
> >
> > # ifconfig pair1 rdomain 1 10.1.1.1/24 up
> > # ifconfig pair2 rdomain 2 10.1.1.2/24 up
> > # ifconfig pair1 patch pair2
> > # route -T 1 exec ping 10.1.1.2
> > # ifconfig pair1 -patch
> >
> > manpages and documentation can be improved, but I'd like to continue
> > in the tree if there are no other serious concerns.
>
> We are doing this because we don't want a bridge of vethers?  OK, but
> what if I want to connect three rdomains together?  I can put any number
> of vethers into a bridge, but pair would seem limited to exactly two
> interfaces.

vether doesn't help as it is not transmitting any traffic.

In other words: "vether is a bridge endpoint", "pair is a bridge link".

As with vether, you can add pairs to a bridge.  For example, add all
"rdomain 0" pairs to a central bridge (eg. for the common uplink), and
connect them with pairs in different rdomains.  This way, bridge0
becomes your "core switch".

Let's assume pair1-4 are all in rdomain 0:

# ifconfig bridge0 add pair1 add pair2 add pair3 add em0 up

And pair10, pair20, pair30 are in rdomains 1, 2, 3:

# ifconfig pair1 patch pair10
# ifconfig pair2 patch pair20
# ifconfig pair3 patch pair30

Now you can use rdomain 0 as a routing uplink from each rdomain as well.
Assuming pair1 is 10.10.0.1 and pair10 in rdomain 1 is 10.10.0.10:

# route -T 1 add default 10.10.0.1

And you can put the other pairs in bridges as well, to create
"distribution switches".  And pf will deal with it just fine.

Btw., besides the patch, the main intentional difference between
vether(4) and pair(4) is that pair(4)'s link state is down until it is
connected with another pair(4); vether(4) is always up.  So you cannot
use a stand-alone pair(4) like vether(4).  I'm going to document this
in the manpage, but I'm holding off until it is in the tree.

Reyk
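The core-switch layout described above, collected into one sketch. The rdomain numbers, addresses, and the em0 uplink follow the example in this mail; the interface creation steps and the addressing of the unnumbered pairs are assumptions added for completeness:

```shell
# rdomain-0 side: three pairs plus the em0 uplink form the "core switch"
ifconfig pair1 inet 10.10.0.1/24 up
ifconfig pair2 create up
ifconfig pair3 create up
ifconfig bridge0 add pair1 add pair2 add pair3 add em0 up

# per-rdomain side: one pair interface in each routing domain
ifconfig pair10 rdomain 1 inet 10.10.0.10/24 up
ifconfig pair20 rdomain 2 up
ifconfig pair30 rdomain 3 up

# patch each rdomain's pair to its rdomain-0 counterpart;
# each patch acts as a virtual cable into the core bridge
ifconfig pair1 patch pair10
ifconfig pair2 patch pair20
ifconfig pair3 patch pair30

# use rdomain 0 as the routing uplink out of rdomain 1
route -T 1 add default 10.10.0.1
```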
pair(4) (was: connect routing domains on layer 2)
Hi,

as requested by Theo and discussed with many, the following diff moves
it into a new driver.  This also made it possible to improve the logic
of link states related to the connection (as discussed with Claudio).

The new driver is called pair(4).

# ifconfig pair1 rdomain 1 10.1.1.1/24 up
# ifconfig pair2 rdomain 2 10.1.1.2/24 up
# ifconfig pair1 patch pair2
# route -T 1 exec ping 10.1.1.2
# ifconfig pair1 -patch

manpages and documentation can be improved, but I'd like to continue
in the tree if there are no other serious concerns.

OK?

Reyk

Index: sbin/ifconfig/ifconfig.8
===================================================================
RCS file: /cvs/src/sbin/ifconfig/ifconfig.8,v
retrieving revision 1.257
diff -u -p -u -p -r1.257 ifconfig.8
--- sbin/ifconfig/ifconfig.8	6 Oct 2015 17:23:21 -0000	1.257
+++ sbin/ifconfig/ifconfig.8	23 Oct 2015 15:44:28 -0000
@@ -1270,6 +1270,33 @@ The
 is an IPv4 address that will be
 used to find the nexthop in the MPLS network.
 .El
+.\" PAIR
+.Sh PAIR
+.nr nS 1
+.Bk -words
+.Nm ifconfig
+.Ar pair-interface
+.Op Oo Fl Oc Ns Cm patch Ar interface
+.Ek
+.nr nS 0
+.Pp
+The following options are available for a
+.Xr pair 4
+interface:
+.Bl -tag -width Ds
+.It Cm patch Ar interface
+Connect the interface with a second
+.Xr pair 4
+interface.
+Any outgoing packets from the first
+.Ar pair-interface
+will be received by the second
+.Ar interface
+and vice versa.
+This link allows interconnecting two routing domains locally.
+.It Fl patch
+If configured, disconnect the interface pair.
+.El
 .\" PFLOW
 .Sh PFLOW
 .nr nS 1
Index: sbin/ifconfig/ifconfig.c
===================================================================
RCS file: /cvs/src/sbin/ifconfig/ifconfig.c,v
retrieving revision 1.302
diff -u -p -u -p -r1.302 ifconfig.c
--- sbin/ifconfig/ifconfig.c	3 Oct 2015 10:44:23 -0000	1.302
+++ sbin/ifconfig/ifconfig.c	23 Oct 2015 15:44:29 -0000
@@ -275,6 +275,8 @@ void	setifipdst(const char *, int);
 void	setifdesc(const char *, int);
 void	unsetifdesc(const char *, int);
 void	printifhwfeatures(const char *, int);
+void	setpair(const char *, int);
+void	unsetpair(const char *, int);
 #else
 void	setignore(const char *, int);
 #endif
@@ -490,6 +492,8 @@ const struct	cmd {
 	{ "-descr",	1,		0,		unsetifdesc },
 	{ "wol",	IFXF_WOL,	0,		setifxflags },
 	{ "-wol",	-IFXF_WOL,	0,		setifxflags },
+	{ "patch",	NEXTARG,	0,		setpair },
+	{ "-patch",	1,		0,		unsetpair },
 #else /* SMALL */
 	{ "powersave",	NEXTARG0,	0,		setignore },
 	{ "priority",	NEXTARG,	0,		setignore },
@@ -2917,6 +2921,7 @@ status(int link, struct sockaddr_dl *sdl
 	struct ifreq ifrdesc;
 	struct ifkalivereq ikardesc;
 	char ifdescr[IFDESCRSIZE];
+	char ifname[IF_NAMESIZE];
 #endif
 	uint64_t *media_list;
 	int i;
@@ -2955,6 +2960,9 @@ status(int link, struct sockaddr_dl *sdl
 	    (ikardesc.ikar_timeo != 0 || ikardesc.ikar_cnt != 0))
 		printf("\tkeepalive: timeout %d count %d\n",
 		    ikardesc.ikar_timeo, ikardesc.ikar_cnt);
+	if (ioctl(s, SIOCGIFPAIR, &ifrdesc) == 0 && ifrdesc.ifr_index != 0 &&
+	    if_indextoname(ifrdesc.ifr_index, ifname) != NULL)
+		printf("\tpatch: %s\n", ifname);
 #endif
 	vlan_status();
 #ifndef SMALL
@@ -5199,6 +5207,29 @@ setinstance(const char *id, int param)
 	ifr.ifr_rdomainid = rdomainid;
 	if (ioctl(s, SIOCSIFRDOMAIN, (caddr_t)&ifr) < 0)
 		warn("SIOCSIFRDOMAIN");
+}
+#endif
+
+#ifndef SMALL
+void
+setpair(const char *val, int d)
+{
+	strlcpy(ifr.ifr_name, name, sizeof(ifr.ifr_name));
+	if ((ifr.ifr_index = if_nametoindex(val)) == 0) {
+		errno = ENOENT;
+		err(1, "patch %s", val);
+	}
+	if (ioctl(s, SIOCSIFPAIR, (caddr_t)&ifr) < 0)
+		warn("SIOCSIFPAIR");
+}
+
+void
+unsetpair(const char *val, int d)
+{
+	ifr.ifr_index = 0;
+	strlcpy(ifr.ifr_name, name, sizeof(ifr.ifr_name));
+	if (ioctl(s, SIOCSIFPAIR, (caddr_t)&ifr) < 0)
+		warn("SIOCSIFPAIR");
 }
 #endif
Index: sys/conf/GENERIC
===================================================================
RCS file: /cvs/src/sys/conf/GENERIC,v
retrieving revision 1.220
diff -u -p -u -p -r1.220 GENERIC
--- sys/conf/GENERIC	10 Aug 2015 20:35:36 -0000	1.220
+++ sys/conf/GENERIC	23 Oct 2015 15:44:31 -0000
@@ -96,6 +96,7 @@ pseudo-device	gre		# GRE encapsulation i
 pseudo-device	loop		# network loopback
 pseudo-device	mpe		# MPLS PE interface
 pseudo-device	mpw		# MPLS pseudowire support
+pseudo-device	pair		# Virtual Ethernet interface pair
 pseudo-device	ppp		# PPP
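Assuming the diff above is applied, a minimal session exercising the new ifconfig verbs might look like this (the command sequence follows the example at the top of the mail; the comments map each step to the code in the diff):

```shell
# create two pair interfaces in separate routing domains
ifconfig pair1 rdomain 1 10.1.1.1/24 up
ifconfig pair2 rdomain 2 10.1.1.2/24 up

# patch them together: setpair() resolves pair2's interface index
# with if_nametoindex() and hands it to the kernel via SIOCSIFPAIR
ifconfig pair1 patch pair2

# status() now queries SIOCGIFPAIR and, via if_indextoname(),
# reports the peer as a "patch: pair2" line in the output
ifconfig pair1

# verify connectivity across the patch, then tear it down;
# unsetpair() sends SIOCSIFPAIR with ifr_index set to 0
route -T 1 exec ping -c 1 10.1.1.2
ifconfig pair1 -patch
```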