On Oct 19, 2009, at 10:45 AM, Simon Eisenmann wrote:
On Monday, 19 Oct 2009, at 10:37 -0400, Adam Kocoloski wrote:
So, until JIRA comes back online I'll follow up with that here. I
think I could see how repeated pull replications in rapid succession
could end up blowing through sockets. Each pull replication sets up
one new connection for the _changes feed and tears it down at the end
(everything else replication-related goes through a connection pool).
Do enough of those very short requests and you could end up with lots
of connections in TIME_WAIT and eventually run out of sockets. FWIW,
the default Erlang limit is slightly less than 1024. If your
update_notification process uses a new connection for every POST to
_replicate, you'll hit the system limit (also 1024 in Ubuntu, IIRC)
twice as fast.
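[Not part of the original exchange, just an illustration of the pattern Adam describes: funnel every POST to _replicate through one shared connection pool instead of opening a fresh socket per request, so repeated short requests don't pile up connections in TIME_WAIT. This is a minimal sketch assuming a Python client and the third-party requests library; the CouchDB URL and database names are placeholders.]

    import json
    import requests

    # One Session keeps a small pool of keep-alive connections and reuses
    # them across requests, rather than leaving a closed socket in
    # TIME_WAIT after every POST.
    session = requests.Session()

    def trigger_pull_replication(source, target,
                                 couch="http://127.0.0.1:5984"):
        """POST to _replicate over the shared session's connection pool."""
        resp = session.post(
            "%s/_replicate" % couch,
            data=json.dumps({"source": source, "target": target}),
            headers={"Content-Type": "application/json"},
        )
        resp.raise_for_status()
        return resp

    # Repeated calls reuse the same underlying sockets where possible.
    for _ in range(100):
        trigger_pull_replication("http://remote:5984/db", "db")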
I see what you mean. Though I am using a connection pool in the update
notification as well. I verified that I am not leaking connections on
the client side. The only reason I have to throw away and open a new
connection is if the response is returned chunked.

I see the reasons for chunked transfer encoding. Is there a way to
disable it on connections so I can make sure I have a fixed connection
pool size on the client?
Hi Simon, I'm not sure I follow why chunked responses require new
connections, but in any event, I'm fairly certain that responses to
_replicate are not chunked.
(Digression more appropriate for dev@) Does the client really have any
control over whether the responses are chunked? I guess the "right"
way to force no chunking from the client side would be to make an
HTTP/1.0 request (with Connection: keep-alive if you still want the
connection pool). We should check at some point to see if that
actually works.
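[For reference, a rough sketch of what such a request could look like on the wire. An HTTP/1.0 client cannot accept chunked responses, and Connection: keep-alive asks the server to leave the socket open, which is the combination Adam suggests. This uses a raw socket in Python; the host, port, and database names are placeholders, and the response handling is deliberately simplified.]

    import json
    import socket

    def post_replicate_http10(host, port, source, target):
        """Send POST /_replicate as HTTP/1.0 with Connection: keep-alive."""
        body = json.dumps({"source": source, "target": target}).encode("utf-8")
        request = (
            "POST /_replicate HTTP/1.0\r\n"
            "Host: %s:%d\r\n"
            "Content-Type: application/json\r\n"
            "Content-Length: %d\r\n"
            "Connection: keep-alive\r\n"
            "\r\n" % (host, port, len(body))
        ).encode("ascii") + body

        sock = socket.create_connection((host, port))
        try:
            sock.sendall(request)
            # Single recv is a simplification; with a Content-Length (no
            # chunking) response, a real client could parse the headers and
            # keep the socket around for the next request instead of
            # closing it here.
            return sock.recv(65536).decode("utf-8", "replace")
        finally:
            sock.close()

    print(post_replicate_http10("127.0.0.1", 5984,
                                "http://remote:5984/db", "db"))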
Adam