On 11/22/2013 06:57 AM, Adam W. Dace wrote:
Also, once you've gotten past your immediate problem and are looking
to deploy, my wiki page may help:
https://cwiki.apache.org/confluence/display/TS/WebProxyCacheTuning
Thank you for the link; setting CONFIG proxy.config.http.chunking.size to 64k
did the trick.
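(For reference, assuming that change lives in records.config like the other
CONFIG lines, the 64k setting spelled out in bytes would look roughly like
this; the value is illustrative, not a recommendation:)

    # maximum chunk size used for chunked transfer encoding, in bytes (64k)
    CONFIG proxy.config.http.chunking.size INT 65536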
To call it "best practices" would be a bit much, but I spent quite a
bit of time simply tuning ATS for my own uses.
The page is finally stable (i.e., I'm done now) and I'm quite pleased.
I'm hoping once the next release is out the door
I can start bugging the committers to take a look and review it.
Regards,
On Thu, Nov 21, 2013 at 9:53 PM, Adam W. Dace
<[email protected]> wrote:
If I'm not mistaken, that usually indicates the -incoming- HTTP
connection limit has been reached.
Unless you've modified the default config, that usually clocks in
at 30,000 simultaneous connections.
Maybe you can tweak the test that's generating those connections
and have them request fewer, bigger objects.
You also might want to take a look at
proxy.config.net.connections_throttle in records.config.
I'm not sure what the impact of raising it might be, though.
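(For reference, the line in question looks something like the sketch below;
30000 is the stock value, and raising it here is untested:)

    # overall cap on concurrent network connections before ATS starts throttling
    CONFIG proxy.config.net.connections_throttle INT 30000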
On Thu, Nov 21, 2013 at 2:46 PM, Pavel Kazlenka
<[email protected]> wrote:
You are right, I see
>Server {0x2b74d72c1700} WARNING: too many connections, throttling
in diags.log
My test simulates 1k origin servers and 1k user-agents on 500
clients (2 agents per IP). Which kind of connections is this
warning about (client or server side), and what can be
improved here?
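(One possible way to tell which side is piling up, assuming the stock ATS 4.x
stat names, is to read the connection counters while the test runs:)

    traffic_line -r proxy.process.http.current_client_connections
    traffic_line -r proxy.process.http.current_server_connections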
On 11/21/2013 10:42 PM, Leif Hedstrom wrote:
On Nov 21, 2013, at 11:50 AM, Pavel Kazlenka
<[email protected]> wrote:
Hi gentlemen,
I'm trying to estimate the maximum performance of ATS
4.0.2 on a single server in forward proxy mode.
Server hardware is a 6-core CPU (plus 6 more with
Hyper-Threading), 12 GB of RAM and two 10G NICs (one in
the client LAN and another in the server LAN).
ATS is configured with hwloc support; caching is
disabled, and Squid blob logging is disabled too.
I started from a config close to the default:
http://pastebin.com/AVQnJ4VL
But whatever I tried, I could not get ATS to sustain more
than 500 Mbit/s (6k requests/s in my test) without it
starting to drop requests. I tried to:
- limit the number of worker threads to the number of cores
and let ATS decide how to map threads to cores;
- leave 6 cores to NIC interrupts and bind the ATS worker
threads plus the accept thread to the other cores, so that
no core switches between tasks (ATS/interrupts);
- play with memory-related config variables:
system.mmap_max, thread.default.stacksize,
allocator.thread_freelist_size, etc. (a records.config sketch follows below).
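(For reference, those knobs live in records.config; a rough sketch of that
kind of tuning is below. The exec_thread lines are assumed as the usual way
to pin the thread count, and every value is a placeholder rather than a
recommendation:)

    # pin the worker thread count instead of letting ATS autoconfigure it
    CONFIG proxy.config.exec_thread.autoconfig INT 0
    CONFIG proxy.config.exec_thread.limit INT 6
    # memory-related knobs mentioned above (placeholder values)
    CONFIG proxy.config.system.mmap_max INT 2097152
    CONFIG proxy.config.thread.default.stacksize INT 1048576
    CONFIG proxy.config.allocator.thread_freelist_size INT 512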
Hmmmm, that sounds bad. Have you verified that the origin
side can go beyond this? Is there anything in the logs
about e.g. connection throttling, or anything else? Is it
limiting the number of origin connections?
I’ll see if I can set up something in our lab to test this;
it’s a bit unwieldy right now, so I’m not sure I can get
access to something with this sort of capacity.
— Leif
--
____________________________________________________________
Adam W. Dace <[email protected]>
Phone: (815) 355-5848
Instant Messenger: AIM & Yahoo! IM - colonelforbin74 | ICQ - #39374451
Microsoft Messenger - [email protected]
Google Profile: https://plus.google.com/u/0/109309036874332290399/about