On 06/09/2014 02:12, Leif Hedstrom wrote:
On Sep 5, 2014, at 5:08 PM, Brian Geffon <[email protected]> wrote:
You might try setting proxy.config.http.server_tcp_init_cwnd to
something like 10.
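Something like this in records.config (standard CONFIG/INT syntax; 10 is just the suggested value):

    CONFIG proxy.config.http.server_tcp_init_cwnd INT 10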
That doesn’t work on Linux (only on OmniOS afaik). If I recall, Linux
upstream refused to take the patches that would allow application
support for this.
As of CentOS 6.4 (I think?), the ICWND defaults to 10. I don’t know
what other distros do. You can tweak this per route (or on the default
route) using the “ip route” command, along the lines of the sketch below.
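A sketch, assuming a default route via gateway 192.0.2.1 on eth0 (both
placeholders, substitute your own); “initcwnd” is the relevant option:

    ip route change default via 192.0.2.1 dev eth0 initcwnd 10

“ip route show” will confirm the change took.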
Also, one thing that has bitten me in the past is “slow start
restart”. After some idle time (I *think* 2x RTT?), a TCP connection
goes back to ICWND again. You can turn this off with a sysctl
(net.ipv4.tcp_slow_start_after_idle).
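For example (persist it in /etc/sysctl.conf if it helps):

    sysctl -w net.ipv4.tcp_slow_start_after_idle=0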
Also, look at perhaps tuning your send/recv buffers. You might need
to adjust both the kernel’s settings and the ATS configuration.
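A rough sketch of both sides; the buffer sizes here are illustrative
only, and double-check the ATS variable names against your 4.2 docs:

    # Kernel autotuning limits: min / default / max, in bytes
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # ATS, in records.config (0 = use the OS defaults)
    CONFIG proxy.config.net.sock_send_buffer_size_out INT 262144
    CONFIG proxy.config.net.sock_recv_buffer_size_in INT 262144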
Cheers,
— Leif
Thanks for the replies.
It looks like changing the ICWND on the route and turning off
net.ipv4.tcp_slow_start_after_idle resolved the issue.
--
Fabien
Brian
On Fri, Sep 5, 2014 at 3:51 PM, Fabien Duranti <[email protected]> wrote:
Hello,
I am running a proof of concept with ATS 4.2.2 in reverse proxy
mode to improve user experience when accessing a web application
over a high-latency link (255 ms RTT).
The setup is the following:
client -> ATS1 --- High latency link --- > ATS2 -> origin_server
All connections use SSL.
Everything behaves as expected when client requests are small:
once connections are established, response times pretty much
correspond to RTT + origin response time + negligible ATS
processing time.
However, when requests are bigger than roughly 3000 bytes
(typically POSTs), response times look more like 2x RTT +
origin response time.
Running a network capture showed that when ATS1 sends a big
request, two 1500-byte packets go out on the wire, then ATS1
waits for ATS2’s ACK, then sends another small packet with the
remainder of the request. That wait for the ACK pretty much
doubles the response time.
I immediately suspected that the TCP tuning of the operating
system (RHEL6) was wrong, but running the exact same request
with curl on ATS1 didn’t show that. On the wire we would instead
see two fragmented packets of 2650 bytes + 800 bytes, and the
response from ATS2 coming shortly after the RTT.
Is there any misconfiguration in ATS that would explain this? I
tried proxy.config.ssl.max_record_size = 0, then 1000, then 4000,
but didn’t see any difference.
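(For reference, in records.config that is:

    CONFIG proxy.config.ssl.max_record_size INT 4000

with 4000 being the last value I tried.)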
Thanks!
--
Fabien Duranti