Re: Bridged vether interfaces can't talk to each other (multiple routing tables)

2017-04-27 Thread Anders Andersson
In case someone finds this thread in the future, I would like to add
that I have now received a possible solution to the problem
out-of-band. The solution is to use pair(4):

The following setup works for me, although it is a bit too convoluted:

# cat /etc/hostname.pair0
up

# cat /etc/hostname.pair1
rdomain 1 up patch pair0

# cat /etc/hostname.bridge0
add em2 add vether0 add pair0
!dhclient vether0
!dhclient pair1


In this setup, vether0 and pair1 each get a separate IP address on the
same network but in different routing domains, and they can still talk
to each other over the patched pair+bridge.
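
For reference, this is roughly what those hostname.if files boil down
to as one-off commands (a sketch typed from memory rather than pasted
from the box; em2 and vether0 are assumed to already be configured and
up as in my earlier mails, and I added "create"/"up" by hand):

# ifconfig pair0 create up
# ifconfig pair1 create rdomain 1 patch pair0 up
# ifconfig bridge0 create add em2 add vether0 add pair0 up
# dhclient vether0
# dhclient pair1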

This leads to the following mess, only possible to decipher with a
monospace font:

.--------------------------------.
| bridge0                        |
|  .---------.                   |   10.0.0.1            .-,(  ),-.
|  |   em2   |                   |  dhcp server       .-(        )-.
|  | (no ip) |---------------------> gateway ------->(   internet   )
|  '---------'                   |                    '-(        ).-'
|                                |                       '-.( ).-'
|  .-----------.  .-----------.  |
|  |  vether0  |  |   pair0   |  |
|  | rdomain 0 |  | rdomain 0 |  |
|  |   dhcp    |  |  (no ip)  |  |
|  | 10.0.0.2  |  '-----^-----'  |     .-----------.
|  '-----------'        |        |     |   pair1   |
'-----------------------|--------'     | rdomain 1 |
                        '----patch-----|   dhcp    |
                                       | 10.0.0.3  |
                                       '-----------'

Still not sure if this is a good idea, but it solves my literal
problem, so I consider that one solved.
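
For anyone repeating this, the sanity check is simply to ping across
the two routing domains in both directions, something like (output
omitted here):

# ping -c 1 10.0.0.3
# route -T1 exec ping -c 1 10.0.0.2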







On Sat, Apr 22, 2017 at 3:49 AM, Anders Andersson  wrote:
> === BACKGROUND ===
>
> I'm trying to set up an OpenBSD 6.1 server with two externally visible
> IP addresses on one physical network port, each IP mapped to a unique
> MAC address[1]. I have it mostly working, but my interfaces can't talk
> to each other.
>
> All traffic should use the primary IP, and most services should listen
> on that. The secondary IP should only be used on-demand for one or two
> services.
>
> Thinking that separate routing tables can solve this, I have configured
> my network like this[2][3]:
>
> # cat hostname.em2
> up
>
> # cat hostname.vether0
> lladdr 00:00:00:00:00:02
>
> # cat hostname.vether1
> lladdr 00:00:00:00:00:03 rdomain 1
>
> # cat hostname.bridge0
> add em2 add vether0 add vether1 up
> !dhclient vether0
> !dhclient vether1
>
> # cat sysctl.conf
> net.inet.ip.forwarding=1
>
> Leading to something like this[4]:
> (full post in monospace: http://paste.debian.net/928811 )
>
> .--------------------------------.
> | bridge0                        |
> |  .---------.                   |   10.0.0.1            .-,(  ),-.
> |  |   em2   |                   |  dhcp server       .-(        )-.
> |  | (no ip) |---------------------> gateway ------->(   internet   )
> |  '---------'                   |                    '-(        ).-'
> |                                |                       '-.( ).-'
> |  .-----------.  .-----------.  |
> |  |  vether0  |  |  vether1  |  |
> |  | rdomain 0 |  | rdomain 1 |  |
> |  |   dhcp    |  |   dhcp    |  |
> |  | 10.0.0.2  |  | 10.0.0.3  |  |
> |  '-----------'  '-----------'  |
> '--------------------------------'
>
> Everything else is left at its default; this is a clean 6.1 install.
>
> This configuration works great: vether0 and vether1 both get an IP
> address from my DHCP server; all traffic goes out on vether0 by default,
> but I can select vether1 manually:
>
> # traceroute -nvq1 10.0.0.1
> traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
>  1  10.0.0.1 48 bytes to 10.0.0.2  0.994 ms
>
> # route -T1 exec traceroute -nvq1 10.0.0.1
> traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
>  1  10.0.0.1 48 bytes to 10.0.0.3  0.984 ms
>
> I can also reach each IP from outside the box. The packets come in on
> em2, go through the bridge, and end up at vether0 or vether1 respectively.
>
>
>
>
> === PROBLEM ===
>
> Now to my problem: I have no connection between vether0<->vether1.
>
> # traceroute -nvq1 10.0.0.3
> traceroute to 10.0.0.3 (10.0.0.3), 64 hops max, 40 byte...
>  1  *
>  2  *
> ^C
>
> If I listen with tcpdump on the bridge, I see lots of unanswered arp
> who-has:
>
> # tcpdump -nti bridge0
> tcpdump: listening on bridge0, link-type EN10MB
> arp who-has 10.0.0.3 tell 10.0.0.2
> arp who-has 10.0.0.3 tell 10.0.0.2
> ^C
>
> These packets even go out on em2 to my LAN, but no one ever answers. The
> same thing happens in reverse.
>
> I have experimented with these bridge settings:
> 'blocknonip' - adding or removing on members makes no difference
> 'discover' - should be the default, adding makes no difference
> 'learn' - should be the default, adding makes no difference
>
>
>
> === EXPECTATIONS ===
>
> I expected that something would answer those arp who-has requests,
> either vether1 directly or bridge0, which should know which interfaces
> it has. Is there something I must configure to make this work, or is my
> plan flawed from the start?
>
>
>
>
>
> === INFORMATION ===
>
> Various 

Re: Bridged vether interfaces can't talk to each other (multiple routing tables)

2017-04-25 Thread Anders Andersson
On 22 April 2017 at 04:22, Edgar Pettijohn  wrote:
> On 04/21/17 20:49, Anders Andersson wrote:
>>
>> Now to my problem: I have no connection between vether0<->vether1.
>>
>>  # traceroute -nvq1 10.0.0.3
>>  traceroute to 10.0.0.3 (10.0.0.3), 64 hops max, 40 byte...
>>   1  *
>>   2  *
>>  ^C
>>
>> If I listen with tcpdump on the bridge, I see lots of unanswered arp
>> who-has:
>>
>>  # tcpdump -nti bridge0
>>  tcpdump: listening on bridge0, link-type EN10MB
>>  arp who-has 10.0.0.3 tell 10.0.0.2
>>  arp who-has 10.0.0.3 tell 10.0.0.2
>>  ^C
>
>
> Never done this, but maybe you need an arp proxy.  Not sure which $iface to
> put it on, but something like:
> # arp -s 10.0.0.2 00:00:00:00:00:02 pub
>
> may or may not help, depending on whether what I read in the manual
> actually does what I think it will.

Thank you for the reply! I tried this, and it *does* help with the ARP
problem. However, it only moves the problem to the next stage.

# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
^C

# tcpdump -nti vether1
tcpdump: listening on vether1, link-type EN10MB
10.0.0.2 > 10.0.0.3: icmp: echo request
10.0.0.2 > 10.0.0.3: icmp: echo request
10.0.0.2 > 10.0.0.3: icmp: echo request
^C

Now the pings are transmitted and, according to tcpdump, they are
received on the virtual interface; there is just no reply. Pinging the
same interface from outside the box works great: the packets travel in
through the physical interface, through the bridge, and end up at the
virtual interface, which replies. Running httpd on the interface in
routing domain 1 also works from the outside.
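
For anyone following along, this is roughly the kind of capture I am
comparing on both sides while pinging (commands only, no output shown):

# tcpdump -nti vether1 icmp     <- the echo requests show up here
# tcpdump -nti vether0 icmp     <- no echo replies ever come back here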

I probably have to trim this down to an even smaller example in order
to get any help; I realize the initial mail was a bit much to digest. I
don't really *have* to connect between the interfaces, but I expect
that I will run into a lot of problems with this setup in the future
unless I understand all the issues involved.

// Anders



Re: Bridged vether interfaces can't talk to each other (multiple routing tables)

2017-04-21 Thread Edgar Pettijohn



On 04/21/17 20:49, Anders Andersson wrote:

=== BACKGROUND ===

I'm trying to set up an OpenBSD 6.1 server with two externally visible
IP addresses on one physical network port, each IP mapped to a unique
MAC address[1]. I have it mostly working, but my interfaces can't talk
to each other.

All traffic should use the primary IP, and most services should listen
on that. The secondary IP should only be used on-demand for one or two
services.

Thinking that separate routing tables can solve this, I have configured
my network like this[2][3]:

 # cat hostname.em2
 up

 # cat hostname.vether0
 lladdr 00:00:00:00:00:02

 # cat hostname.vether1
 lladdr 00:00:00:00:00:03 rdomain 1

 # cat hostname.bridge0
 add em2 add vether0 add vether1 up
 !dhclient vether0
 !dhclient vether1

 # cat sysctl.conf
 net.inet.ip.forwarding=1

Leading to something like this[4]:
(full post in monospace: http://paste.debian.net/928811 )

.--------------------------------.
| bridge0                        |
|  .---------.                   |   10.0.0.1            .-,(  ),-.
|  |   em2   |                   |  dhcp server       .-(        )-.
|  | (no ip) |---------------------> gateway ------->(   internet   )
|  '---------'                   |                    '-(        ).-'
|                                |                       '-.( ).-'
|  .-----------.  .-----------.  |
|  |  vether0  |  |  vether1  |  |
|  | rdomain 0 |  | rdomain 1 |  |
|  |   dhcp    |  |   dhcp    |  |
|  | 10.0.0.2  |  | 10.0.0.3  |  |
|  '-----------'  '-----------'  |
'--------------------------------'


Love the ascii art!



Everything else is left at its default; this is a clean 6.1 install.

This configuration works great: vether0 and vether1 both get an IP
address from my DHCP server; all traffic goes out on vether0 by default,
but I can select vether1 manually:

 # traceroute -nvq1 10.0.0.1
 traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
  1  10.0.0.1 48 bytes to 10.0.0.2  0.994 ms

 # route -T1 exec traceroute -nvq1 10.0.0.1
 traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
  1  10.0.0.1 48 bytes to 10.0.0.3  0.984 ms

I can also reach each IP from outside the box. The packets come in on
em2, go through the bridge, and end up at vether0 or vether1 respectively.




=== PROBLEM ===

Now to my problem: I have no connection between vether0<->vether1.

 # traceroute -nvq1 10.0.0.3
 traceroute to 10.0.0.3 (10.0.0.3), 64 hops max, 40 byte...
  1  *
  2  *
 ^C

If I listen with tcpdump on the bridge, I see lots of unanswered arp
who-has:

 # tcpdump -nti bridge0
 tcpdump: listening on bridge0, link-type EN10MB
 arp who-has 10.0.0.3 tell 10.0.0.2
 arp who-has 10.0.0.3 tell 10.0.0.2
 ^C


Never done this, but maybe you need an arp proxy.  Not sure which $iface 
to put it on, but something like:

# arp -s 10.0.0.2 00:00:00:00:00:02 pub

may or may not help, depending on whether what I read in the manual
actually does what I think it will.
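
Purely guessing, but if that helps at all, presumably you would want a
matching entry for the other address as well, something like:

# arp -s 10.0.0.3 00:00:00:00:00:03 pub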




These packets even go out on em2 to my LAN, but no one ever answers. The
same thing happens in reverse.

I have experimented with these bridge settings:
 'blocknonip' - adding or removing on members makes no difference
 'discover' - should be the default, adding makes no difference
 'learn' - should be the default, adding makes no difference



=== EXPECTATIONS ===

I expected that something would answer those arp who-has requests,
either vether1 directly or bridge0, which should know which interfaces
it has. Is there something I must configure to make this work, or is my
plan flawed from the start?





=== INFORMATION ===

Various information that could help answer my question (trimmed
whitespace and boilerplate):

# route -n show -inet
Destination        Gateway            Flags  Refs      Use   Mtu  Prio Iface
default            10.0.0.1           UGS       0        0     -     8 vether0
224/4              127.0.0.1          URS       0        0 32768     8 lo0
10.0.0/24          10.0.0.2           UCn       1        0     -     4 vether0
10.0.0.1           link#6             UHLch     1        1     -     3 vether0
10.0.0.2           00:00:00:00:00:02  UHLl      0        0     -     1 vether0
10.0.0.255         10.0.0.2           UHb       0        0     -     1 vether0
127/8              127.0.0.1          UGRS      0        0 32768     8 lo0
127.0.0.1          127.0.0.1          UHhl      1        2 32768     1 lo0

# route -T1 -n show -inet
Destination        Gateway            Flags  Refs      Use   Mtu  Prio Iface
default            10.0.0.1           UGS       0       32     -     8 vether1
10.0.0/24          10.0.0.3           UCn       1        4     -     4 vether1
10.0.0.1           00:00:00:00:00:01  UHLch     1        3     -     3 vether1
10.0.0.3           00:00:00:00:00:03  UHLl      0        0     -     1 vether1
10.0.0.255         10.0.0.3           UHb       0        0     -     1 vether1

# for if in bridge0 em2 vether{0,1}; do ifconfig $if; done
bridge0: flags=41<UP,RUNNING>
description: Bridge for external virtual NICs
index 9 llprio 3
groups: bridge
priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp
designated: id 

Bridged vether interfaces can't talk to each other (multiple routing tables)

2017-04-21 Thread Anders Andersson
=== BACKGROUND ===

I'm trying to set up an OpenBSD 6.1 server with two externally visible
IP addresses on one physical network port, each IP mapped to a unique
MAC address[1]. I have it mostly working, but my interfaces can't talk
to each other.

All traffic should use the primary IP, and most services should listen
on that. The secondary IP should only be used on-demand for one or two
services.
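
For the record, the idea is to pick the secondary table per service. A
one-off command can be pointed at routing table 1 with route(8):

# route -T1 exec traceroute -nvq1 10.0.0.1

and a daemon (httpd, say, as an example) could, if I read the rc.d
framework right, be started in that table with something like (untested):

# rcctl set httpd rtable 1
# rcctl restart httpd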

Thinking that separate routing tables can solve this, I have configured
my network like this[2][3]:

# cat hostname.em2
up

# cat hostname.vether0
lladdr 00:00:00:00:00:02

# cat hostname.vether1
lladdr 00:00:00:00:00:03 rdomain 1

# cat hostname.bridge0
add em2 add vether0 add vether1 up
!dhclient vether0
!dhclient vether1

# cat sysctl.conf
net.inet.ip.forwarding=1
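
(The same configuration as one-off commands, in case someone wants to
try it without rebooting or touching hostname.if files; a rough sketch
where "create" and "up" are added by hand:)

# ifconfig vether0 create lladdr 00:00:00:00:00:02 up
# ifconfig vether1 create lladdr 00:00:00:00:00:03 rdomain 1 up
# ifconfig bridge0 create add em2 add vether0 add vether1 up
# dhclient vether0
# dhclient vether1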

Leading to something like this[4]:
(full post in monospace: http://paste.debian.net/928811 )

.--------------------------------.
| bridge0                        |
|  .---------.                   |   10.0.0.1            .-,(  ),-.
|  |   em2   |                   |  dhcp server       .-(        )-.
|  | (no ip) |---------------------> gateway ------->(   internet   )
|  '---------'                   |                    '-(        ).-'
|                                |                       '-.( ).-'
|  .-----------.  .-----------.  |
|  |  vether0  |  |  vether1  |  |
|  | rdomain 0 |  | rdomain 1 |  |
|  |   dhcp    |  |   dhcp    |  |
|  | 10.0.0.2  |  | 10.0.0.3  |  |
|  '-----------'  '-----------'  |
'--------------------------------'

Everything else is left at its default; this is a clean 6.1 install.

This configuration works great: vether0 and vether1 both get an IP
address from my DHCP server; all traffic goes out on vether0 by default,
but I can select vether1 manually:

# traceroute -nvq1 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
 1  10.0.0.1 48 bytes to 10.0.0.2  0.994 ms

# route -T1 exec traceroute -nvq1 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 64 hops max, 40 byte packets
 1  10.0.0.1 48 bytes to 10.0.0.3  0.984 ms

I can also reach each IP from outside the box. The packets come in on
em2, go through the bridge, and end up at vether0 or vether1 respectively.




=== PROBLEM ===

Now to my problem: I have no connection between vether0<->vether1.

# traceroute -nvq1 10.0.0.3
traceroute to 10.0.0.3 (10.0.0.3), 64 hops max, 40 byte...
 1  *
 2  *
^C

If I listen with tcpdump on the bridge, I see lots of unanswered arp
who-has:

# tcpdump -nti bridge0
tcpdump: listening on bridge0, link-type EN10MB
arp who-has 10.0.0.3 tell 10.0.0.2
arp who-has 10.0.0.3 tell 10.0.0.2
^C

These packets even go out on em2 to my LAN, but no one ever answers. The
same thing happens in reverse.

I have experimented with these bridge settings:
'blocknonip' - adding or removing on members makes no difference
'discover' - should be the default, adding makes no difference
'learn' - should be the default, adding makes no difference
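
(For completeness, the options above were toggled per bridge member
roughly like this, going by ifconfig(8); just a sketch:)

# ifconfig bridge0 blocknonip vether0
# ifconfig bridge0 -blocknonip vether0
# ifconfig bridge0 discover vether1
# ifconfig bridge0 learn vether1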



=== EXPECTATIONS ===

I expected that something would answer those arp who-has requests,
either vether1 directly or bridge0, which should know which interfaces
it has. Is there something I must configure to make this work, or is my
plan flawed from the start?





=== INFORMATION ===

Various information that could help answer my question (trimmed
whitespace and boilerplate):

# route -n show -inet
Destination        Gateway            Flags  Refs      Use   Mtu  Prio Iface
default            10.0.0.1           UGS       0        0     -     8 vether0
224/4              127.0.0.1          URS       0        0 32768     8 lo0
10.0.0/24          10.0.0.2           UCn       1        0     -     4 vether0
10.0.0.1           link#6             UHLch     1        1     -     3 vether0
10.0.0.2           00:00:00:00:00:02  UHLl      0        0     -     1 vether0
10.0.0.255         10.0.0.2           UHb       0        0     -     1 vether0
127/8              127.0.0.1          UGRS      0        0 32768     8 lo0
127.0.0.1          127.0.0.1          UHhl      1        2 32768     1 lo0

# route -T1 -n show -inet
Destination        Gateway            Flags  Refs      Use   Mtu  Prio Iface
default            10.0.0.1           UGS       0       32     -     8 vether1
10.0.0/24          10.0.0.3           UCn       1        4     -     4 vether1
10.0.0.1           00:00:00:00:00:01  UHLch     1        3     -     3 vether1
10.0.0.3           00:00:00:00:00:03  UHLl      0        0     -     1 vether1
10.0.0.255         10.0.0.3           UHb       0        0     -     1 vether1

# for if in bridge0 em2 vether{0,1}; do ifconfig $if; done
bridge0: flags=41<UP,RUNNING>
   description: Bridge for external virtual NICs
   index 9 llprio 3
   groups: bridge
   priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto rstp
   designated: id 00:00:00:00:00:00 priority 0
   em2 flags=3<LEARNING,DISCOVER>
   port 3 ifpriority 0 ifcost 0
   vether0 flags=3<LEARNING,DISCOVER>
   port 6 ifpriority 0 ifcost 0
   vether1 flags=3<LEARNING,DISCOVER>
   port 7 ifpriority 0 ifcost 0
   Addresses (max cache: 100, timeout: 240):
   00:00:00:00:00:01 em2 1 flags=0<>
em2: