Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-10 Thread David Sommerseth
On 10/11/17 11:50, Jan Just Keijser wrote:
> Hi Adam,
> 
> On 10/11/17 11:18, Adam Wysocki via Openvpn-users wrote:
>> [...]
> 
>> David:
>>> Port sharing is a feature for the server side to "hide" OpenVPN behind
>>> an existing SSL/TLS based service (typically https).  So packets which
>>> carry an OpenVPN signature will be processed by the OpenVPN process.
>>> Anything else will be sent to the provided IP address and port and the
>>> OpenVPN process will just act as a proxy.
>>>
>>> This happens on _all_ packets - OpenVPN packets and anything else, not
>>> just some or just during the initial handshake.
>> Strange. Are you sure about it? What would be a reason for this with TCP?
>> With UDP I perfectly see why (despite port-sharing being a TCP-only
>> feature), but with TCP? Once a connection is established and it's known
>> that it was an OpenVPN client that has connected?
>>
>> This seems consistent with this code from socket.c (stream_buf_added()):
>>
>> #if PORT_SHARE
>>   if (sb->port_share_state == PS_ENABLED)
>>     {
>>       if (!is_openvpn_protocol (&sb->buf))
>>         {
>>           msg (D_STREAM_ERRORS, "Non-OpenVPN client protocol detected");
>>           sb->port_share_state = PS_FOREIGN;
>>           sb->error = true;
>>           return false;
>>         }
>>       else
>>         sb->port_share_state = PS_DISABLED;
>>     }
>> #endif
>>
>> To summarize:
>>
>> - if port sharing state is ENABLED
>>    - if the protocol is not openvpn, we set state to FOREIGN
>>    - if the protocol is openvpn, we set state to DISABLED
>>
>> So it seems it works only on the first data packet, and I guess the
>> states are:
>>
>> - ENABLED - we don't know yet if we're port-sharing; the decision is
>> still to be made
>> - FOREIGN - we know the first packet wasn't an OpenVPN one, so from
>> now on we're forwarding
>> - DISABLED - we know the first packet was ours, so from now on we
>> don't forward
>>
>>
> Now you've made me curious, so I've just checked it by adding a single line
>   msg( M_INFO, "Is it OpenVPN?" );
> 
> to ps.c in the function 'is_openvpn_protocol' and indeed, that function
> is called only once, when the client first connects.
> So, port-sharing does not make your problem any worse than it already is ;)
> 
Ahh, thanks!  I stand corrected.  I had missed the detail that the
struct stream_buf actually carries a state which is kept valid and
unchanged within the running session.

I agree with JJK's conclusion in this regard.


--
kind regards,

David Sommerseth





Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-10 Thread Jan Just Keijser

Hi Adam,

On 10/11/17 11:18, Adam Wysocki via Openvpn-users wrote:

[...]



David:

Port sharing is a feature for the server side to "hide" OpenVPN behind
an existing SSL/TLS based service (typically https).  So packets which
carry an OpenVPN signature will be processed by the OpenVPN process.
Anything else will be sent to the provided IP address and port and the
OpenVPN process will just act as a proxy.

This happens on _all_ packets - OpenVPN packets and anything else, not
just some or just during the initial handshake.

Strange. Are you sure about it? What would be a reason for this with TCP?
With UDP I perfectly see why (despite port-sharing being a TCP-only
feature), but with TCP? Once a connection is established and it's known
that it was an OpenVPN client that has connected?

This seems consistent with this code from socket.c (stream_buf_added()):

#if PORT_SHARE
  if (sb->port_share_state == PS_ENABLED)
    {
      if (!is_openvpn_protocol (&sb->buf))
        {
          msg (D_STREAM_ERRORS, "Non-OpenVPN client protocol detected");
          sb->port_share_state = PS_FOREIGN;
          sb->error = true;
          return false;
        }
      else
        sb->port_share_state = PS_DISABLED;
    }
#endif

To summarize:

- if port sharing state is ENABLED
   - if the protocol is not openvpn, we set state to FOREIGN
   - if the protocol is openvpn, we set state to DISABLED

So it seems it works only on the first data packet, and I guess the states
are:

- ENABLED - we don't know yet if we're port-sharing; the decision is still to be made
- FOREIGN - we know the first packet wasn't an OpenVPN one, so from now on we're
forwarding
- DISABLED - we know the first packet was ours, so from now on we don't forward



Now you've made me curious, so I've just checked it by adding a single line
  msg( M_INFO, "Is it OpenVPN?" );

to ps.c in the function 'is_openvpn_protocol' and indeed, that function 
is called only once, when the client first connects.
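For reference, the probe would sit roughly like this. This is a sketch only:
the msg() line is from this thread, the function shape is abbreviated from
the 2.x tree, and the helper in the return is a placeholder invented here,
not a real OpenVPN function.

/* ps.c (sketch) -- the one-line probe added at the top */
bool
is_openvpn_protocol(const struct buffer *buf)
{
    msg(M_INFO, "Is it OpenVPN?");   /* logged once per TCP connection */

    /* ... the existing check on the buffered stream bytes follows
     * (2-byte packet-length prefix, then an OpenVPN opcode) ... */
    return check_openvpn_signature(buf);   /* placeholder */
}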

So, port-sharing does not make your problem any worse than it already is ;)

cheers,

JJK




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-10 Thread Adam Wysocki via Openvpn-users
Jan:

> The only thing you can do, is to run something like Traffic Control (tc) 
> on the link to prioritize low latency traffic compared to bulk downloads

Yes, I thought about it... the problem is that I usually transfer files 
with scp and work over ssh, so it won't be as easy as simply prioritizing 
one port over another... but prioritizing based on packet size seems 
doable.
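
For what it's worth, a minimal tc sketch of that idea (the device name and
the size threshold are assumptions, not from the thread):

#v+
# three-band prio qdisc on the VPN interface
tc qdisc add dev tap0 root handle 1: prio
# steer small IP packets (total length < 256) into the highest band;
# u32 matches the 16-bit total-length field at offset 2 of the IP header
tc filter add dev tap0 parent 1: protocol ip u32 \
    match u16 0x0000 0xff00 at 2 flowid 1:1
#v-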

Greg:

> The universal sign of a congested link is latencies going up by, easily, 
> an order of magnitude, and little or no packet loss. [i.e. 30ms to 
> 300ms, or even more] Once a connection reaches very high levels of 
> utilization, latencies increase dramatically.
> 
> So, the fact that OpenVPN does similar things seems unremarkable to me. 
> [But perhaps I missed something more in the thread that does make it 
> more remarkable...]

The problem is that my link is never "100% full", as ping times over the 
Internet to the same host (and from the server to other, TCP and UDP 
connected clients) during this "100% full load" are normal or only 
slightly larger. I suspect (but haven't tested it yet) that if I used 
another instance of OpenVPN to handle low-latency traffic, it would work 
fine despite other instances being fully loaded.

Now I remember that it definitely worked this way when I had two machines 
on the same network, connected over TCP. When I fully loaded the 
connection from one client to the server, it was impossible to work 
remotely and ping was huge, but the second connection (another machine, 
same network) was mostly unaffected and still had both low latency and 
plenty of bandwidth available.

Gert, Jan:

> I just re-ran the test and in one direction I've got the latency spikes 
> under control (ping time < 70 ms).  This is achieved by adding
> "--sndbuf 32000" on the client and "--rcvbuf 36000 --tcp-queue-limit 8" 
> on the server.

Great, it helped! Under load the ping time now only rises from ~30ms to 
~350ms, which is acceptable. Thank you! I'll experiment with these options 
to further increase throughput.

David:

> Port sharing is a feature for the server side to "hide" OpenVPN behind
> an existing SSL/TLS based service (typically https).  So packets which
> carry an OpenVPN signature will be processed by the OpenVPN process.
> Anything else will be sent to the provided IP address and port and the
> OpenVPN process will just act as a proxy.
>
> This happens on _all_ packets - OpenVPN packets and anything else, not
> just some or just during the initial handshake.

Strange. Are you sure about it? What would be a reason for this with TCP? 
With UDP I perfectly see why (despite port-sharing being a TCP-only 
feature), but with TCP? Once a connection is established and it's known 
that it was an OpenVPN client that has connected?

This seems consistent with this code from socket.c (stream_buf_added()):

#if PORT_SHARE
  if (sb->port_share_state == PS_ENABLED)
    {
      if (!is_openvpn_protocol (&sb->buf))
        {
          msg (D_STREAM_ERRORS, "Non-OpenVPN client protocol detected");
          sb->port_share_state = PS_FOREIGN;
          sb->error = true;
          return false;
        }
      else
        sb->port_share_state = PS_DISABLED;
    }
#endif

To summarize:

- if port sharing state is ENABLED
  - if the protocol is not openvpn, we set state to FOREIGN
  - if the protocol is openvpn, we set state to DISABLED

So it seems it works only on the first data packet, and I guess the states 
are:

- ENABLED - we don't know yet if we're port-sharing; the decision is still to be made
- FOREIGN - we know the first packet wasn't an OpenVPN one, so from now on we're 
forwarding
- DISABLED - we know the first packet was ours, so from now on we don't forward
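
As a sketch of that state machine (the names come from the socket.c excerpt
above; the comments are just the interpretation given here, and no claim is
made about the actual numeric values):

/* sketch only */
enum port_share_state {
    PS_ENABLED,   /* no payload seen yet: decision still pending    */
    PS_DISABLED,  /* first payload was OpenVPN: never proxy again   */
    PS_FOREIGN    /* first payload was foreign: proxy from now on   */
};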



Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-10 Thread Gert Doering
Hi,

On Fri, Nov 10, 2017 at 04:34:13PM +0800, Antonio Quartulli wrote:
> > It isn't.  It's just stuffing packets into the tcp stream as they come
> > out the tun/tap fd.
> > 
> > What we *could* do is make the socket buffer much smaller and have a
> > larger internal queue in openvpn, and then do smart stuff, like "move
> > small packets in front of full-sized TCP packets", "random early drop"
> > and all that stuff people have done research on over the last 20 years.  
> > 
> > But that's quite a bit of work...
> 
> On top of that, adding another queue is likely going to increase the
> bufferbloat effect. We already have the kernel doing its own stuff and
> we shouldn't try to add more "smart" things on top.

You need to significantly reduce the socket buffers to make it meaningful,
yes (= stop the kernel from doing its own stuff, which is mostly "buffering"
of "packets inside a TCP stream").

gert

-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025    g...@net.informatik.tu-muenchen.de




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-10 Thread Antonio Quartulli


On 10/11/17 03:00, Gert Doering wrote:
> Hi,
> 
> On Thu, Nov 09, 2017 at 01:21:56PM -0500, Selva wrote:
>> So I do feel that high latency under load is not a fundamental limitation
>> -- probably
>> openvpn with --proto tcp is not trying hard to manage the queue smartly?
> 
> It isn't.  It's just stuffing packets into the tcp stream as they come
> out the tun/tap fd.
> 
> What we *could* do is make the socket buffer much smaller and have a
> larger internal queue in openvpn, and then do smart stuff, like "move
> small packets in front of full-sized TCP packets", "random early drop"
> and all that stuff people have done research on over the last 20 years.  
> 
> But that's quite a bit of work...

On top of that, adding another queue is likely going to increase the
bufferbloat effect. We already have the kernel doing its own stuff and
we shouldn't try to add more "smart" things on top.

(not relevant for this thread itself, but I thought it was worth
pointing out)

Cheers,

-- 
Antonio Quartulli





Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread David Sommerseth
On 09/11/17 11:08, Gof via Openvpn-users wrote:
> As to port sharing, I can disable it, but isn't it used only during initial 
> handshake?
Port sharing is a feature for the server side to "hide" OpenVPN behind
an existing SSL/TLS based service (typically https).  So packets which
carry an OpenVPN signature will be processed by the OpenVPN process.
Anything else will be sent to the provided IP address and port and the
OpenVPN process will just act as a proxy.

This happens on _all_ packets - OpenVPN packets and anything else, not
just some or just during the initial handshake.
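
In config-file terms that looks like this (an illustrative sketch; the
poster's actual directives appear later in the thread):

#v+
port        443
proto       tcp-server
# forward anything that doesn't look like OpenVPN to the local sshd
port-share  127.0.0.1 22
#v-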


--
kind regards,

David Sommerseth





Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gert Doering
Hi,

On Thu, Nov 09, 2017 at 01:21:56PM -0500, Selva wrote:
> So I do feel that high latency under load is not a fundamental limitation
> -- probably
> openvpn with --proto tcp is not trying hard to manage the queue smartly?

It isn't.  It's just stuffing packets into the tcp stream as they come
out the tun/tap fd.

What we *could* do is make the socket buffer much smaller and have a
larger internal queue in openvpn, and then do smart stuff, like "move
small packets in front of full-sized TCP packets", "random early drop"
and all that stuff people have done research on over the last 20 years.  

But that's quite a bit of work...
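
Purely as an illustration of the "small packets jump ahead" part (this is
not OpenVPN code; the names and the threshold are invented here):

#include <stddef.h>

struct pkt { size_t len; struct pkt *next; };

/* Insert small packets (pings, ACKs) ahead of the first full-sized
 * packet already queued; append bulk packets at the tail. */
static void
enqueue(struct pkt **head, struct pkt *p, size_t small_thresh)
{
    struct pkt **cur = head;
    if (p->len <= small_thresh)
    {
        while (*cur && (*cur)->len <= small_thresh)
            cur = &(*cur)->next;   /* keep earlier small packets in order */
    }
    else
    {
        while (*cur)
            cur = &(*cur)->next;   /* bulk always goes to the tail */
    }
    p->next = *cur;
    *cur = p;
}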

gert

-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025    g...@net.informatik.tu-muenchen.de




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Selva
Hi,

On Thu, Nov 9, 2017 at 11:48 AM, Gregory Sloop  wrote:

> Top posting
>
> JJK> The only thing you can do, is to run something like Traffic Control
> JJK> (tc) on the link to prioritize low latency traffic compared to bulk
> JJK> downloads. If I throttle my iperf session to use 80% of the maximum
> JJK> link speed then the ping times remain much lower. When the link is
> JJK> "100% full" with TCP traffic then the ping times increase 100fold.
>
> While I'm not going to address, specifically, OpenVPN's handling of this -
> this is just typical behavior when links get loaded. I run smokeping [with
> fping/icmp ping] to monitor a bunch of stuff - especially my client's
> external internet connections.
>
> The universal sign of a congested link is latencies going up by, easily,
> an order of magnitude, and little or no packet loss. [i.e. 30ms to 300ms,
> or even more] Once a connection reaches very high levels of utilization,
> latencies increase dramatically.

This may be true in many networks but does not look unavoidable.
I have seen the "bufferbloat"-like behaviour that Gert alluded to with many
residential DSL/ADSL providers, which got cured as we moved to metro
ethernet: similar bandwidth but no latency spikes under load.

So I do feel that high latency under load is not a fundamental limitation;
probably openvpn with --proto tcp is not trying hard to manage the queue
smartly? Whatever that means...

Selva


Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gert Doering
Hi,

On Thu, Nov 09, 2017 at 08:48:45AM -0800, Gregory Sloop wrote:
> So, the fact that OpenVPN does similar things seems unremarkable to me. 
> [But perhaps I missed something more in the thread that does make it more 
> remarkable...]

The original question was "why does this happen for openvpn over TCP and
not for openvpn over UDP?" - well, basically, "over TCP" adds a new "link"
that can get congested (queues in TCP)...

But not everyone is well-versed in buffers, queueing, and latency :-)
(and then, there are different schools of buffering - "large buffers
with smart queueing" vs. "shallow buffers, drop early, leave this to
the upper layer protocol to sort out")

gert


-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025    g...@net.informatik.tu-muenchen.de




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gregory Sloop
Top posting

JJK> The only thing you can do, is to run something like Traffic Control (tc)
JJK> on the link to prioritize low latency traffic compared to bulk 
JJK> downloads. If I throttle my iperf session to use 80% of the maximum link
JJK> speed then the ping times remain much lower. When the link is "100% 
JJK> full" with TCP traffic then the ping times increase 100fold.

While I'm not going to address, specifically, OpenVPN's handling of this - this 
is just typical behavior when links get loaded. I run smokeping [with 
fping/icmp ping] to monitor a bunch of stuff - especially my client's external 
internet connections.

The universal sign of a congested link is latencies going up by, easily, an 
order of magnitude, and little or no packet loss. [i.e. 30ms to 300ms, or even 
more] Once a connection reaches very high levels of utilization, latencies 
increase dramatically. 

So, the fact that OpenVPN does similar things seems unremarkable to me. 
[But perhaps I missed something more in the thread that does make it more 
remarkable...]

HTH
-Greg

JJK> Hi,

JJK> On 09/11/17 11:08, Gof via Openvpn-users wrote:

>>> you're using bridging + tap + proto tcp + port sharing on a VPS and are
>>> expecting good latency? hmmm there are many reasons why that combination
>>> will NOT give you any performance.
>> Bridge is used only to link TCP and UDP clients. All client machines are
>> mine and used by me alone, and 99% of the time don't generate any traffic,
>> they're only there so I can log into them. During my tests I used only the
>> two machines I did the test on.

>> Why might tap be a worse idea than tun?
JJK> tap has a slightly higher overhead compared to tun, but it would not 
JJK> explain the high latency during a transfer.

>> As to port sharing, I can disable it, but isn't it used only during initial
>> handshake?

>> As to the bridge, TAP and VPS, it performs very well with UDP-connected
>> clients, so I suspect TCP alone...

>>> However, I see an increase in ping time in my setup as well:
>>> - udp
>>> - tun
>> This increase (from 0.6ms to 4ms) is normal and perfectly acceptable... but
>> not to 3000ms, it definitely isn't only encryption/decryption latency...


JJK> as Gert was pointing out already, it's mostly related to the nature of
JJK> TCP traffic.
JJK> The good news is: I can reproduce what you are seeing at home (ADSL) as
JJK> well:
JJK> - I'm connecting to a server at work over TCP
JJK> - without any load the ping times are  ~ 7 ms , which is actually quite
JJK> good for ADSL
JJK> - when I run a long iperf session and then do a ping in the background,
JJK> the ping times go up to 800+ ms, then once the iperf is done, the ping
JJK> times go down again

JJK> The bad news: that's just the way it is with OpenVPN over TCP, I guess.
JJK> There are no parameters to tweak that would help (--tcp-nodelay makes 
JJK> things *worse*, for example). I also suspect that you (and I) are being
JJK> hit with TCP congestion delays: when the transfer is 
JJK> "interrupted" by an ICMP packet then the TCP window is reset to a much
JJK> lower value and the transfer is throttled. This is normal TCP behaviour.
JJK> However, I suspect that this throttling leads to some form of 
JJK> TCP-over-TCP congestion which then blows out the entire link, causing 
JJK> ping times to go through the roof.

JJK> The only thing you can do, is to run something like Traffic Control (tc)
JJK> on the link to prioritize low latency traffic compared to bulk 
JJK> downloads. If I throttle my iperf session to use 80% of the maximum link
JJK> speed then the ping times remain much lower. When the link is "100% 
JJK> full" with TCP traffic then the ping times increase 100fold.--


Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Jan Just Keijser

Hi,

On 09/11/17 18:08, Gert Doering wrote:
> On Thu, Nov 09, 2017 at 05:19:14PM +0100, Jan Just Keijser wrote:
>> - without any load the ping times are ~ 7 ms, which is actually quite
>> good for ADSL
>> - when I run a long iperf session and then do a ping in the background,
>> the ping times go up to 800+ ms, then once the iperf is done, the ping
>> times go down again
>
> It might be worth to experiment with --txqueuelen (on both sides)
> or --tcp-queue-limit and --sndbuf (on the side that sends lots of data
> into the TCP connection).
>
> --tcp-queue-limit will limit the amount of data OpenVPN will try to
> stuff into the TCP session, so will lead to some loss on the TCP-over-
> TCP session, which *could* end up in slower sending by the file
> transfer (etc.).
>
> --sndbuf will limit the amount of data sitting inside the kernel for
> outgoing packets.  Reducing this to, say, 16000, will reduce achievable
> throughput (because for throughput, you want large buffers to really
> sustain sending, not having to wait for the app to fill the buffer
> while the network is idle), but will also reduce the amount of data
> sitting "in front" of your pings -> better RTT.
>
> I'd try "--sndbuf 16000 --tcp-queue-limit 8" for a start (on the sending
> side) and see if that makes a noticeable difference.  Then start tuning.


That definitely is worth experimenting with: I just re-ran the test and 
in one direction I've got the latency spikes under control (ping time < 
70 ms).  This is achieved by adding "--sndbuf 32000" on the client and 
"--rcvbuf 36000 --tcp-queue-limit 8" on the server. With this, download 
speed is not affected, yet "upstream ping" (i.e. the client pinging the 
server) never exceeds 70 ms. During upload, the upstream ping now tops 
out at ~ 400 ms, which is better than before, but not as significant as 
the 700 -> 70 ms drop.
I'll keep experimenting - and thanks for pointing out those tweaking 
parameters again for me ;)


HTH,

JJK




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gert Doering
Hi,

On Thu, Nov 09, 2017 at 05:19:14PM +0100, Jan Just Keijser wrote:
> - without any load the ping times are  ~ 7 ms , which is actually quite 
> good for ADSL
> - when I run a long iperf session and then do a ping in the background, 
> the ping times go up to 800+ ms, then once the iperf is done, the ping 
> times go down again

It might be worth to experiment with --txqueuelen (on both sides)
or --tcp-queue-limit and --sndbuf (on the side that sends lots of data 
into the TCP connection).

--tcp-queue-limit will limit the amount of data OpenVPN will try to
stuff into the TCP session, so will lead to some loss on the TCP-over-
TCP session, which *could* end up in slower sending by the file
transfer (etc.).

--sndbuf will limit the amount of data sitting inside the kernel for
outgoing packets.  Reducing this to, say, 16000, will reduce achievable
throughput (because for throughput, you want large buffers to really
sustain sending, not having to wait for the app to fill the buffer
while the network is idle), but will also reduce the amount of data
sitting "in front" of your pings -> better RTT.


I'd try "--sndbuf 16000 --tcp-queue-limit 8" for a start (on the sending
side) and see if that makes a noticeable difference.  Then start tuning.
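
In config-file form these are the same options without the leading dashes
(starting values only, meant to be tuned as described above):

#v+ on the side that sends the bulk data
sndbuf           16000
tcp-queue-limit  8
#v-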

> The bad news: that's just the way it is with OpenVPN over TCP, I guess. 
> There are no parameters to tweak that would help (--tcp-nodelay makes 
> things *worse*, for example). 

Never say we have no options to tweak things :-) - we have options for
all the things (and even had bugs in --tcp-send-queue... back then, with
 :-) ).

> I also suspect that you (and I) are being 
> hit with TCP congestion delays: when the transfer is 
> "interrupted" by an ICMP packet then the TCP window is reset to a much 
> lower value and the transfer is throttled. This is normal TCP behaviour. 
> However, I suspect that this throttling leads to some form of 
> TCP-over-TCP congestion which then blows out the entire link, causing 
> ping times to go through the roof.

I suspect it's more "all available queues are filled to the upper limit
by the sending TCP-over-TCP, so the ICMP packet has to queue at the back
of this".  No QoS of any sort inside an OpenVPN TCP session.

(I'll throw the buzzword "bufferbloat" into the ring as well, which 
describes a similar effect of an ADSL line reaching ping times way over
a second if buffers are *too big* and large downloads filling everything)


> The only thing you can do, is to run something like Traffic Control (tc) 
> on the link to prioritize low latency traffic compared to bulk 
> downloads. If I throttle my iperf session to use 80% of the maximum link 
> speed then the ping times remain much lower. When the link is "100% 
> full" with TCP traffic then the ping times increase 100fold.

Right.  Limiting inside TCP to 80% (or so) of the maximum achievable 
limit will make sure the queues are never filling up.

gert
-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025    g...@net.informatik.tu-muenchen.de




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Jan Just Keijser

Hi,

On 09/11/17 11:08, Gof via Openvpn-users wrote:



>> you're using bridging + tap + proto tcp + port sharing on a VPS and are
>> expecting good latency? hmmm there are many reasons why that combination
>> will NOT give you any performance.
>
> Bridge is used only to link TCP and UDP clients. All client machines are
> mine and used by me alone, and 99% of the time don't generate any traffic,
> they're only there so I can log into them. During my tests I used only the
> two machines I did the test on.
>
> Why might tap be a worse idea than tun?

tap has a slightly higher overhead compared to tun, but it would not
explain the high latency during a transfer.

> As to port sharing, I can disable it, but isn't it used only during initial
> handshake?
>
> As to the bridge, TAP and VPS, it performs very well with UDP-connected
> clients, so I suspect TCP alone...
>
>> However, I see an increase in ping time in my setup as well:
>> - udp
>> - tun
>
> This increase (from 0.6ms to 4ms) is normal and perfectly acceptable... but
> not to 3000ms, it definitely isn't only encryption/decryption latency...


as Gert was pointing out already, it's mostly related to the nature of 
TCP traffic.
The good news is: I can reproduce what you are seeing at home (ADSL) as 
well:

- I'm connecting to a server at work over TCP
- without any load the ping times are  ~ 7 ms , which is actually quite 
good for ADSL
- when I run a long iperf session and then do a ping in the background, 
the ping times go up to 800+ ms, then once the iperf is done, the ping 
times go down again


The bad news: that's just the way it is with OpenVPN over TCP, I guess. 
There are no parameters to tweak that would help (--tcp-nodelay makes 
things *worse*, for example). I also suspect that you (and I) are being 
hit with TCP congestion delays: when the transfer is 
"interrupted" by an ICMP packet then the TCP window is reset to a much 
lower value and the transfer is throttled. This is normal TCP behaviour. 
However, I suspect that this throttling leads to some form of 
TCP-over-TCP congestion which then blows out the entire link, causing 
ping times to go through the roof.


The only thing you can do, is to run something like Traffic Control (tc) 
on the link to prioritize low latency traffic compared to bulk 
downloads. If I throttle my iperf session to use 80% of the maximum link 
speed then the ping times remain much lower. When the link is "100% 
full" with TCP traffic then the ping times increase 100fold.


HTH,

JJK




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gof via Openvpn-users
Hi again,

Thanks for all responses.

Selva:

> In case it helps: I recall seeing long latency with TCP tunnels under load.
> But don't have any TCP tunnels in real use, so never looked more into it.

Thanks, at least it shows it's not something related to my setup...

Jan:

> you're using bridging + tap + proto tcp + port sharing on a VPS and are 
> expecting good latency? hmmm there are many reasons why that combination 
> will NOT give you any performance.

Bridge is used only to link TCP and UDP clients. All client machines are 
mine and used by me alone, and 99% of the time don't generate any traffic, 
they're only there so I can log into them. During my tests I used only the 
two machines I did the test on.

Why might tap be a worse idea than tun?

As to port sharing, I can disable it, but isn't it used only during initial 
handshake?

As to the bridge, TAP and VPS, it performs very well with UDP-connected 
clients, so I suspect TCP alone...

> However, I see an increase in ping time in my setup as well:
> - udp
> - tun

This increase (from 0.6ms to 4ms) is normal and perfectly acceptable... but 
not to 3000ms, it definitely isn't only encryption/decryption latency...

Gert:

> With TCP, I expect queueing effects to add up as well - with UDP,
> OpenVPN just throws out the packet, but with TCP, there are kernel
> buffers involved, and if there's a packet getting lost, retransmits
> (= delay!!).

Aren't there any options to set that might help? Packets most probably don't 
get lost, the Internet link quality is good, and other TCP connections over 
the Internet (outside of the VPN) work well (and with low latency) during 
load on the VPN too.

> In other words: TCP is there because in some cases it's unavoidable
> because stupid people block UDP access, but as long as UDP works,
> people really should not use TCP.

Yeah, that's my case. Brain-dead corporate policies resulting in only 443/tcp 
being available (even 80/tcp is blocked by a transparent proxy). I thought 
of using 53/udp but it's blocked too. I talked to the admins and unblocking 
anything else is not an option.

> UDP has even more advantages, like "roaming to new networks and not
> losing VPN access" (like --float, automatic in recent 2.3.x servers),
> "surviving loss of NAT state in routers / carrier-grade NAT boxes", etc.

I believe that UDP is a better transport and I'm using it on most of my 
client machines, but with two hosts I'm stuck with TCP...



Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-09 Thread Gert Doering
Hi,

On Thu, Nov 09, 2017 at 01:16:39AM +0100, Jan Just Keijser wrote:
> Admittedly, not as much as you are seeing but it's definitely there and 
> it is to be expected over a VPN link: during the transfer/throughput 
> test the VPN is encrypting+decrypting like mad, which will affect 
> latency at some point.

With TCP, I expect queueing effects to add up as well - with UDP, 
OpenVPN just throws out the packet, but with TCP, there are kernel
buffers involved, and if there's a packet getting lost, retransmits
(= delay!!).

In other words: TCP is there because in some cases it's unavoidable
(stupid people block UDP access), but as long as UDP works, 
people really should not use TCP.

UDP has even more advantages, like "roaming to new networks and not
losing VPN access" (like --float, automatic in recent 2.3.x servers),
"surviving loss of NAT state in routers / carrier-grade NAT boxes", etc.

gert

-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025    g...@net.informatik.tu-muenchen.de




Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-08 Thread Jan Just Keijser

Hi,

On 08/11/17 12:30, Gof via Openvpn-users wrote:

> Really, no one has had such problems with VPNs over TCP before?

you're using bridging + tap + proto tcp + port sharing on a VPS and are 
expecting good latency? hmmm
there are many reasons why that combination will NOT give you any 
performance.


However, I see an increase in ping time in my setup as well:
- udp
- tun

and during an iperf run on raw hardware I see the ping time go up too:

64 bytes from 10.200.0.1: icmp_seq=7 ttl=64 time=0.601 ms
64 bytes from 10.200.0.1: icmp_seq=8 ttl=64 time=0.567 ms
64 bytes from 10.200.0.1: icmp_seq=9 ttl=64 time=3.01 ms
64 bytes from 10.200.0.1: icmp_seq=10 ttl=64 time=4.42 ms
64 bytes from 10.200.0.1: icmp_seq=11 ttl=64 time=2.13 ms
64 bytes from 10.200.0.1: icmp_seq=12 ttl=64 time=5.48 ms
64 bytes from 10.200.0.1: icmp_seq=13 ttl=64 time=6.30 ms
64 bytes from 10.200.0.1: icmp_seq=14 ttl=64 time=4.68 ms
64 bytes from 10.200.0.1: icmp_seq=15 ttl=64 time=5.81 ms
64 bytes from 10.200.0.1: icmp_seq=16 ttl=64 time=4.00 ms
[...]
64 bytes from 10.200.0.1: icmp_seq=23 ttl=64 time=7.11 ms
64 bytes from 10.200.0.1: icmp_seq=24 ttl=64 time=8.01 ms
64 bytes from 10.200.0.1: icmp_seq=25 ttl=64 time=4.86 ms
64 bytes from 10.200.0.1: icmp_seq=26 ttl=64 time=5.68 ms
64 bytes from 10.200.0.1: icmp_seq=27 ttl=64 time=5.31 ms
64 bytes from 10.200.0.1: icmp_seq=28 ttl=64 time=4.17 ms
64 bytes from 10.200.0.1: icmp_seq=29 ttl=64 time=0.355 ms
64 bytes from 10.200.0.1: icmp_seq=30 ttl=64 time=0.577 ms


Admittedly, not as much as you are seeing but it's definitely there and 
it is to be expected over a VPN link: during the transfer/throughput 
test the VPN is encrypting+decrypting like mad, which will affect 
latency at some point.



HTH,

JJK


On Fri, 27 Oct 2017, Gof via Openvpn-users wrote:


Hi,

I have a problem with OpenVPN and I hope you'll be able to help...

I have two OpenVPN daemons on one Linux machine - one listening on TCP and
one bound to the UDP port. They are using TAP devices that are bridged
together, and TCP additionally shares its port with ssh via "port-share".

The problem is with clients connected to the TCP server (I can't switch
them to UDP because of the firewall). During testing, I switched one UDP
client (Linux) to TCP and observed the problem only over TCP.

Ping time between them when the connection is idle is about 30 ms both over
the Internet and over VPN and that's okay.

#v+
$ ping -c 5 vps
PING vps (81.4.x.x) 56(84) bytes of data.
64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=29.9 ms
64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=31.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=31.1 ms
64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=31.1 ms

--- vps ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 29.915/30.859/31.529/0.582 ms

$ ping -c 5 vps.v
PING vps.v (172.24.44.18) 56(84) bytes of data.
64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=32.1 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=30.4 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=30.8 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=29.8 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=30.2 ms

--- vps.v ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 29.845/30.722/32.160/0.816 ms
#v-

When I start a full-speed transfer from server to client, the ping over the
VPN rises to about 3500-4000 ms, but over the Internet it stays the same.
This makes remote work nearly impossible despite there being enough Internet
capacity.

#v+
$ ping -c 5 vps.v
PING vps.v (172.24.44.18) 56(84) bytes of data.
64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=3438 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=4167 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=4110 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=3959 ms
64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=3976 ms

--- vps.v ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4141ms
rtt min/avg/max/mdev = 3438.649/3930.410/4167.983/258.253 ms, pipe 4

$ ping -c 5 vps
PING vps (81.4.x.x) 56(84) bytes of data.
64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=31.5 ms
64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=36.7 ms
64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=33.4 ms
64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.6 ms
64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=30.7 ms

--- vps ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 30.650/32.641/36.799/2.314 ms
#v-

What can be the cause of this? And how could I remedy it? I already tried
adding the TCP_NODELAY option to the socket, but it didn't help.

My full configs are below.

#v+ server config (no IP address, because it's set on the bridge)
[...]

Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-08 Thread Selva
Hi,

On Wed, Nov 8, 2017 at 6:30 AM, Gof via Openvpn-users <
openvpn-users@lists.sourceforge.net> wrote:

> Really, no one has had such problems with VPNs over TCP before?
>

In case it helps: I recall seeing long latency with TCP tunnels under load.
But I don't have any TCP tunnels in real use, so I never looked into it more.

I suspect most people use UDP, thus no replies.

Selva


Re: [Openvpn-users] tcp-client: large ping during transfers (fwd)

2017-11-08 Thread Gof via Openvpn-users
Really, no one has had such problems with VPNs over TCP before?

On Fri, 27 Oct 2017, Gof via Openvpn-users wrote:

> Hi,
> 
> I have a problem with OpenVPN and I hope you'll be able to help...
> 
> I have two OpenVPN daemons on one Linux machine - one listening on TCP and 
> one bound to the UDP port. They are using TAP devices that are bridged 
> together, and TCP additionally shares its port with ssh via "port-share".
> 
> The problem is with clients connected to the TCP server (I can't switch 
> them to UDP because of the firewall). During testing, I switched one UDP 
> client (Linux) to TCP and observed the problem only over TCP.
> 
> Ping time between them when the connection is idle is about 30 ms both over 
> the Internet and over VPN and that's okay.
> 
> #v+
> $ ping -c 5 vps
> PING vps (81.4.x.x) 56(84) bytes of data.
> 64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=29.9 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=31.5 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=31.1 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.5 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=31.1 ms
> 
> --- vps ping statistics ---
> 5 packets transmitted, 5 received, 0% packet loss, time 4006ms
> rtt min/avg/max/mdev = 29.915/30.859/31.529/0.582 ms
> 
> $ ping -c 5 vps.v
> PING vps.v (172.24.44.18) 56(84) bytes of data.
> 64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=32.1 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=30.4 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=30.8 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=29.8 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=30.2 ms
> 
> --- vps.v ping statistics ---
> 5 packets transmitted, 5 received, 0% packet loss, time 4006ms
> rtt min/avg/max/mdev = 29.845/30.722/32.160/0.816 ms
> #v-
> 
> When I start a full-speed transfer from server to client, the ping over the 
> VPN rises to about 3500-4000 ms, but over the Internet it stays the same. 
> This makes remote work nearly impossible despite there being enough Internet 
> capacity.
> 
> #v+
> $ ping -c 5 vps.v
> PING vps.v (172.24.44.18) 56(84) bytes of data.
> 64 bytes from vps.v (172.24.44.18): icmp_seq=1 ttl=64 time=3438 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=2 ttl=64 time=4167 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=3 ttl=64 time=4110 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=4 ttl=64 time=3959 ms
> 64 bytes from vps.v (172.24.44.18): icmp_seq=5 ttl=64 time=3976 ms
> 
> --- vps.v ping statistics ---
> 5 packets transmitted, 5 received, 0% packet loss, time 4141ms
> rtt min/avg/max/mdev = 3438.649/3930.410/4167.983/258.253 ms, pipe 4
> 
> $ ping -c 5 vps
> PING vps (81.4.x.x) 56(84) bytes of data.
> 64 bytes from vps (81.4.x.x): icmp_seq=1 ttl=53 time=31.5 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=2 ttl=53 time=36.7 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=3 ttl=53 time=33.4 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=4 ttl=53 time=30.6 ms
> 64 bytes from vps (81.4.x.x): icmp_seq=5 ttl=53 time=30.7 ms
> 
> --- vps ping statistics ---
> 5 packets transmitted, 5 received, 0% packet loss, time 4007ms
> rtt min/avg/max/mdev = 30.650/32.641/36.799/2.314 ms
> #v-
> 
> What can be the cause of this? And how could I remedy it? I already tried 
> adding the TCP_NODELAY option to the socket, but it didn't help.
> 
> My full configs are below.
> 
> #v+ server config (no IP address, because it's set on the bridge)
> dev             tap1
> port            443
> proto           tcp-server
> ca              /etc/openvpn/ca.crt
> cert            /etc/openvpn/svpst.crt
> key             /etc/openvpn/svpst.key
> dh              /etc/openvpn/dh2048.pem
> crl-verify      /etc/openvpn/crl.pem
> mode            server
> tls-server
> client-to-client
> keepalive       10 45
> max-clients     64
> verb            4
> mute            20
> persist-key
> persist-tun
> comp-lzo        no
> user            nobody
> group           nogroup
> # socket-flags  TCP_NODELAY
> port-share      127.0.0.1 22
> cipher          AES-256-CBC
> #v-
> 
> #v+ client config
> dev             tap0
> port            443
> proto           tcp-client
> ca              /etc/openvpn/ca.crt
> cert            /etc/openvpn/pi.crt
> key             /etc/openvpn/pi.key
> remote          81.4.x.x
> ifconfig        172.24.44.20 255.255.255.0
> comp-lzo        no
> keepalive       10 45
> tls-client
> persist-key
> persist-tun
> # socket-flags  TCP_NODELAY
> user            nobody
> group           nogroup
> connect-retry   1
> connect-timeout 7
> verb            4
> mute            60
> script-security 2
> up              /etc/openvpn/pi.up.sh
> down            /etc/openvpn/pi.down.sh
> cipher          AES-256-CBC
> #v-
> 
> The mentioned up and down scripts only set default routing through the VPN 
> (I'm using two routing tables and fwmark to be able to access all ports on 
> the server except 443 through the VPN, and only 443 through the default