On 2022-05-14, William Ahern <[email protected]> wrote:
> On Fri, May 13, 2022 at 11:10:41PM +0200, [email protected] wrote:
>> Hi,
>> 
>> I've set up an OpenBSD server on the Cloud, set up a Wireguard tunnel, and
>> configured default route through that server. I've noticed that I can't
>> access some websites: my browser was not able to complete TLS handshakes
>> with some servers. I've traced the issue to the fact that the MTU on my
>> server's network interface is 1500 while the default MTU on a wg0
>> interface is 1420. So when a large enough packet has a DF flag set it
>> would not make it through the smaller wg0 interface. I've fixed the
>> problem by adding a "scrub" option to server's pf.conf like this:
>> 
>>   match out on egress from (wg0:network) nat-to (egress:0) scrub (no-df random-id)
>> 
>> But I'm surprised that I did not see anyone mentioning this problem. I
>> also did not see that "scrub" option included in any examples of Wireguard
>> setup that I was able to find.

>> I'm not a networking expert, so I wonder if using a "scrub" option like
>> that is a good idea.
>
> Seems like ICMP responses are being dropped. In such cases the proper
> solution is to fix whatever is filtering out the ICMP responses.

It is most likely some firewall close to the website. You need to deal
with the fact that this happens, because it's quite common on the
internet.

> However, according to
> https://github.com/QubesOS/qubes-issues/issues/5264#issuecomment-683177300
> Wireguard deliberately drops ICMP responses to its UDP transport packets. If
> this is the case in your situation, the better solution might be to drop the
> MTU on the Wireguard interfaces so oversized packets are rejected before
> they're encapsulated. A common fail-safe MTU for VPN interfaces is 1300 or
> 1280.
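
(For reference, on OpenBSD that would be something like the following;
the interface name wg0 is just the one from the original mail:

  # one-off:
  ifconfig wg0 mtu 1280

  # persistent: put the same keyword in /etc/hostname.wg0
  mtu 1280

Note this only helps for packets the endpoint itself originates or for
TCP via MSS clamping; forwarded DF packets from hosts behind the tunnel
still need PMTUD or scrub to work.)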

The problem will be with the decapsulated packets on the path between
the wg endpoint and the webserver. (It's very unlikely to be elsewhere,
as PF automatically permits the relevant ICMP messages as part of the
state for a TCP connection.)


> Another alternative might be to switch to IPSec+IKEv2. If there's no NAT
> between your tunnel endpoints, it won't need to use UDP encapsulation, so
> packet overhead would be smaller. But even with NAT traversal, OpenBSD's
> iked might handle things better (e.g. permitting fragmentation of its UDP
> packets, or mirroring ICMP responses), though I don't know specifically if
> this would be the case.

It's a common problem with every protocol that lowers the MTU for
packets generated by systems which don't know about the limit (it's
well known for pppoe, but it affects plenty of other protocols).

Happens with IPsec too.

I recommend "max-mss" instead of no-df; you don't really want fragments
if you can help it. The number to cap at is 40 below the lowest actual
MTU across the tunnel (20 bytes of IP header plus 20 of TCP header), so
1380 should do for WireGuard with its 1420 MTU; IPsec varies depending
on the options used.
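
Adapting the rule from your original mail (same interfaces, just
swapping the scrub options), that would be something like:

  match out on egress from (wg0:network) nat-to (egress:0) scrub (max-mss 1380)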

(You _could_ also hit the problem with UDP, but the main place where
people actually hit this was with large EDNS0 buffers, and modern
server software tends to stick to packet sizes that work across all
non-ridiculous tunnels.)


-- 
Please keep replies on the mailing list.
