Thanks for the answer!
I personally would first try to resolve the connection issue. The logs
are warning of too many connections. After that I would start debugging
the cache control.
That's what we did. The first two points happened when there was only one
user connecting to ts.
I was simply using curl to make sure caching works. My problem is that I
cannot determine the exact caching algorithm. When does ts decide that a
page should be cached? How does the size of the document influence this
decision?
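For reference, my curl checks boil down to something like this (a rough
Python equivalent, not what I actually run; I'm assuming ts sits as a
reverse proxy on localhost:8080, and the URL is just a placeholder):

    # Fetch a page through ts and dump the headers that usually matter for
    # the cache decision, plus the ones that reveal a cached copy (Age, Via).
    import urllib.request

    URL = "http://localhost:8080/some/test/page"  # placeholder

    with urllib.request.urlopen(URL) as resp:
        for name in ("Cache-Control", "Expires", "Content-Length", "Age", "Via"):
            print(f"{name}: {resp.getheader(name)}")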
After thinking about point number 2 a little more, I wondered whether I
might be getting the response back faster than it gets into the cache.
Is it possible that a frequently changed resource will never get into
the cache?
For example:
The client makes a request, the cache decides it is a miss, sends the
request to the server and passes the response back to the client. The
client receives the response and immediately makes the same request
again. The cache hasn't had time to save the first response to disk, so
it marks the new request as a miss and forwards it to the server.
Because response2 is different from response1, the cache starts to
insert it into the cache, and then the client makes another request...
My colleague thinks this is not the case, but I still had to ask. :-D
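A minimal way to test that idea would be to fire the same request again
as soon as the previous one returns and count how often the reply looks
cached (here I just look for an Age header; the URL is a placeholder as
above):

    import urllib.request

    URL = "http://localhost:8080/often-changing/page"  # placeholder

    hits = 0
    for _ in range(50):
        with urllib.request.urlopen(URL) as resp:
            resp.read()
            if resp.getheader("Age") is not None:
                hits += 1
        # intentionally no delay: the next request goes out immediately

    print(f"{hits}/50 responses carried an Age header")

If the count stays at zero here but climbs once a small delay is added
between requests, that would point at the write race described above; if
it climbs even without the delay, my colleague is probably right.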
Can you limit your connections from your script to see if there is a
consistent throttle in place?
Yes, we can. But the real problem is that ts didn't stabilize after we
stopped the script.
Our test generates 200, 400, 700 and 1000 concurrent requests. The logs
say that ts is being restarted, which is fine, but why can't I connect
to it afterwards?
Netstat even says that ts is listening, I just get no response. :-(
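If it helps, capping the connections on our side would look roughly like
this (a sketch only, not our real script; URL, timeout and request
counts are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    URL = "http://localhost:8080/some/test/page"  # placeholder

    def one_request(_):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                resp.read()
                return resp.status
        except Exception as exc:
            return exc

    # Step through the same concurrency levels our test uses and see at
    # which level ts stops answering.
    for limit in (200, 400, 700, 1000):
        with ThreadPoolExecutor(max_workers=limit) as pool:
            results = list(pool.map(one_request, range(limit * 5)))
        failed = sum(1 for r in results if not isinstance(r, int))
        print(f"limit {limit}: {failed}/{len(results)} requests failed")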
To make it clear: the throttling problem is from a separate test, not
from these caching experiments.
thanks in advance,
m.