Hello,
I am running a proof of concept with ATS 4.2.2 in reverse proxy mode to
improve user experience when accessing a web application over a
high-latency link (255 ms RTT).
The setup is the following:
client -> ATS1 --- high-latency link ---> ATS2 -> origin_server
All connections use SSL.
Everything behaves as expected when client requests are small: once
connections are established, response times pretty much correspond to
RTT + origin response time + negligible ATS processing time.
However, when requests are bigger than roughly 3000 bytes (typically
POSTs), response times look more like 2x RTT + origin response time,
i.e. around 510 ms of network time instead of 255 ms.
A network capture showed that when ATS1 sends a large request, two
1500-byte packets go out on the wire, then ATS1 waits for ATS2's ACK
before sending one more small packet with the remainder of the request.
That wait for the ACK is what pretty much doubles the response time.
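For reference, the capture was taken on ATS1 with something like the
following (the interface name and ATS2 address are placeholders):

  tcpdump -i eth0 -s 0 -w ats1-ats2.pcap host <ATS2_IP> and port 443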
My first thought was that the TCP tuning of the operating system
(RHEL 6) might be wrong, but running the exact same request with curl
from ATS1 doesn't show this behavior: on the wire we instead see two
fragments of 2650 bytes + 800 bytes, and the response from ATS2 comes
back shortly after one RTT.
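The curl test looked roughly like this (the URL and the body file are
placeholders for the real application endpoint and a ~3 KB POST body):

  curl -v -o /dev/null --data-binary @post-3k.bin https://<ATS2_host>/app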
Is there any misconfiguration in ATS that would explain this? I tried
proxy.config.ssl.max_record_size = 0, then 1000, then 4000, but didn't
see any difference.
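For completeness, that setting in records.config currently reads (4000
being the last value I tried):

  CONFIG proxy.config.ssl.max_record_size INT 4000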
Thanks!
--
Fabien Duranti