Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels

2018-09-11 Thread Andreas Krüger
Maybe rdomains?
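A rough, untested sketch of what I mean (interface names, rdomain numbers
and the 50% split are just examples): pin each OpenVPN client to a fixed
tunX, put that tunX in its own routing domain via its hostname file, add
the default route in that rdomain yourself, and let PF spread outbound
LAN traffic across the rdomains while keeping state:

```
# /etc/hostname.tun1 -- one routing domain per VPN tunnel
rdomain 1
up

# openvpn client config: pin the device and install routes yourself
dev tun1
route-noexec
# in an --up script, something like: route -T 1 add default <vpn_gw>

# pf.conf: split outbound LAN traffic 50/50 across two rdomains
pass in quick on $lan_if probability 50% rtable 1 keep state
pass in quick on $lan_if rtable 2 keep state
```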

> On 11 Sep 2018, at 15:59, Andrew Lemin wrote:
> 
> Hi list,
> 
> I use an OpenVPN based internet access service (like NordVPN, AirVPN etc).
> 
> The issue with these public VPN services is that the VPN servers are always 
> congested. The most I’ll get is maybe 10 Mbit/s through one server.
> 
> The local connection is a few hundred Mbps.
> 
> So I had the idea of running multiple openvpn tunnels to different servers, 
> and load balancing outbound traffic across the tunnels.
> 
> Sounds simple enough..
> 
> However, every VPN tunnel uses the same subnet and next-hop gateway. This of 
> course won’t work with normal routing.
> 
> So my question:
> How can I use rdomains or rtables with openvpn clients, so that each VPN is 
> started in its own logical VRF?
> 
> And is it then a case of just using PF to push the outbound packets into the 
> various rdomains/rtables randomly (of course maintaining state)? LAN 
> interface would be in the default rdomain/rtable..
> 
> My confusion is that an interface needs to be bound to the logical VRF, but 
> the tunX interfaces are created dynamically by openvpn.
> 
> So I am not sure how to configure this within hostname.tunX etc, or if I’m 
> even approaching this correctly?
> 
> Thanks, Andy.
> 



sshd hangs when ldap server is offline

2017-11-29 Thread Andreas Krüger
Hi,

We have been playing around with the login_ldap package. After configuring 
login.conf and ypldap.conf and adding portmap_flags=YES, ypldap_flags="" and 
ypbind_flags="" to rc.conf.local, we have seen an issue: if the LDAP server is 
offline, sshd is not able to start or restart. When we reboot the server, it 
first hangs in yp_first with RPC errors; if we ^C that on the console, boot 
continues and tries to start sshd, but that just hangs forever.

We have been following the guide at 
http://blogs.helion-prime.com/2009/05/07/authorization-with-ldap-on-openbsd.html,
though we skipped the last step (automating execution).
If the machine can reach the LDAP server, everything works fine.
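For reference, the exact lines we added to rc.conf.local:

```
portmap_flags=YES
ypldap_flags=""
ypbind_flags=""
```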


Andreas


Syntax Highlight for Atom

2017-10-19 Thread Andreas Krüger
Hi all,

In case anybody is interested, I have just made a syntax highlighter
for PF available for the Atom editor. The package is called language-pf
(https://atom.io/packages/language-pf).

Feel free to contribute to it.

Andreas



Re: TCP Window Scaling

2017-09-15 Thread Andreas Krüger
I see that, but it still does not answer why the option to set them through 
sysctl was removed. Why would you suddenly not be allowed to set the max size 
with sysctl? What is the reason behind that choice, taken in the 4.9 release?

> On 15 Sep 2017, at 13:34, Stuart Henderson wrote:
> 
>> On 2017-09-14, Chris Cappuccio  wrote:
>> -w1M works for me
>> -
>> Andreas Krüger [a...@patientsky.com] wrote:
>>> I do manage to read the manual, but let me clarify this. I am not
>>> allowed to set a buffer larger than 256KB with iperf:
>>> 
>>> $ uname -a
>>> OpenBSD odn1-fw-odn1-01 6.0 GENERIC.MP#0 amd64
> 
> 6.0 is limited to 256K, 6.1 and newer allow up to 2MB, and by default
> it will auto tune.
> 
> As well as iperf -w, here's how to hardcode it on a few other programs:
> 
> httpd/relayd "socket buffer"
> tcpbench -S
> rsync --sockopts=SO_SNDBUF=xxx,SO_RCVBUF=yyy
> 
> You might be interested in watching "netstat -Bn -p tcp" while you're
> playing with this.
> 
> 
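All the per-program knobs quoted above boil down to setting
SO_SNDBUF/SO_RCVBUF on the socket before it connects or listens; a minimal
sketch in Python (the 1 MB value is just an example, and the kernel may
round or clamp whatever you ask for):

```python
import socket

ONE_MB = 1 << 20  # example size; aim for roughly your bandwidth-delay product

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Buffer sizes must be set before connect()/listen() so the TCP window
# scale factor can be negotiated during the handshake.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, ONE_MB)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, ONE_MB)

# The kernel may round or clamp the request; check what was granted.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```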



Re: TCP Window Scaling

2017-09-14 Thread Andreas Krüger
I do manage to read the manual, but let me clarify this. I am not
allowed to set a buffer larger than 256KB with iperf:

$ uname -a
OpenBSD odn1-fw-odn1-01 6.0 GENERIC.MP#0 amd64

$ iperf -s -w 256KB

Server listening on TCP port 5001
TCP window size:  256 KByte


$ iperf -s -w 4MB

Server listening on TCP port 5001
TCP window size: 16.0 KByte (WARNING: requested 4.00 MByte)

$

ANDREAS KRÜGER
CTO Hosting and Infrastructure

+45 51808863
a...@patientsky.com



PatientSky AS
Hovfaret 17 B, NO-0275 Oslo, Norway
patientsky.com




2017-09-14 19:46 GMT+02:00 Chris Cappuccio <ch...@nmedia.net>:
> ipsec tunnels don't use TCP
>
> iperf has the -w option
>
> Andreas Krüger [a...@patientsky.com] wrote:
>> How would I set it for IPsec tunnels or iperf etc., then?
>> 2017-09-14 13:10 GMT+02:00 Janne Johansson <icepic...@gmail.com>:
>> >
>> > 2017-09-14 13:08 GMT+02:00 Janne Johansson <icepic...@gmail.com>:
>> >>
>> >> Since 6.1 I think the max is 2M, and not 256k. Many programs will also
>> >> allow you to bump limits using setsockopt.
>> >>
>> >>
>> >
>> > httpd.conf:
>> > server "secret.site" {
>> > tcp {
>> > socket buffer 2097152
>> > }
>> >
>> > rsyncd.conf:
>> >  ...
>> > socket options = SO_SNDBUF=2097152
>> >
>> >
>> > --
>> > May the most significant bit of your life be positive.



Re: TCP Window Scaling

2017-09-14 Thread Andreas Krüger
How would I set it for IPsec tunnels or iperf etc., then?




2017-09-14 13:10 GMT+02:00 Janne Johansson <icepic...@gmail.com>:
>
> 2017-09-14 13:08 GMT+02:00 Janne Johansson <icepic...@gmail.com>:
>>
>> Since 6.1 I think the max is 2M, and not 256k. Many programs will also
>> allow you to bump limits using setsockopt.
>>
>>
>
> httpd.conf:
> server "secret.site" {
> tcp {
> socket buffer 2097152
> }
>
> rsyncd.conf:
>  ...
> socket options = SO_SNDBUF=2097152
>
>
> --
> May the most significant bit of your life be positive.



TCP Window Scaling

2017-09-14 Thread Andreas Krüger
Hi All,

I am wondering why, since version 4.9, there is no option to set the
max TCP window scaling sizes for send and receive.
I saw in the changelog that it was converted to auto-scaling, but the
max values are now hardcoded and removed from sysctl, for some
reason?

The problem is, I have two OpenBSD machines connected over a 1-gigabit
WAN link with 17 ms of delay between them, which means I need a TCP
window larger than 256 KB to use the full gigabit.
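The arithmetic behind that claim is the bandwidth-delay product, sketched
here with Python as a calculator (treating the quoted 17 ms as the
round-trip time):

```python
# Bandwidth-delay product: the amount of data that must be in flight
# to keep a link fully utilised.
bandwidth_bps = 1_000_000_000   # 1 Gbit/s
rtt_s = 0.017                   # 17 ms round-trip time

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes / 1024:.0f} KB")  # roughly 2 MB, well above 256 KB
```

So a 256 KB window caps this link at a fraction of its capacity, while the
2 MB maximum mentioned for 6.1 and newer is just about big enough.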

How would I change these values? On FreeBSD you still have the option for

net.inet.tcp.recvspace=262144
net.inet.tcp.sendspace=262144

Etc.

Regards,
Andreas