I've run some tests and I'm fairly sure the reason I was getting 5MB stuck
on the nginx side is that the upstream socket uses the default socket
buffer sizes for RCVBUF, and by default that ends up as a 5MB receive buffer.
I added logging to check that value, and even after I configured sndbuf and
rcvbuf inside listen
Yes, I tested that and it appears to be the case. However, I don't see where
nginx sets rcvbuf on the upstream socket, since that one cannot be inherited.
Somehow, even with the SND/RCV buffers set to low values and buffering
disabled, I get around 2.5MB stuck on the nginx side. With my own simple proxy I get
In nginx docs I see sndbuf option in listen directive.
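For reference, this is how those parameters appear in the listen directive (values here are illustrative; as I understand it, sockets returned by accept() inherit the listening socket's buffer sizes, which is presumably why the option exists there):

```nginx
server {
    # rcvbuf=/sndbuf= set SO_RCVBUF/SO_SNDBUF on the listening socket;
    # accepted client connections inherit these sizes. They do not
    # affect the upstream sockets nginx opens towards the backend.
    listen 8080 rcvbuf=32k sndbuf=32k;
}
```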
Is there something I don't understand about it, or do the nginx developers
misunderstand the meaning of sndbuf? I do not see the point of setting sndbuf
on a listening socket. It just does not make any sense!
sndbuf/rcvbuf is needed perhaps
It looks like the buffers are bigger for localhost, but even when it is not
localhost I still get 5MB stuck in socket buffers. I was only able to get
perfect results by writing my own proxy in C and using some obscure Node.js
code to avoid buffering.
In any case, if nginx does not provide a way to control
I added logging to my proxy test and compared the results with a Wireshark
trace at a random point in time (t=511s), and the numbers match exactly
between the logs and Wireshark.
This is a log line from my test proxy:
time: 511s, bytesSent:5571760, down:{ SND:478720 OUTQ:280480 } up:{
RCV:5109117
I wrote my own proxy, and it appears that the data is all stuck in socket
buffers. If SNDBUF isn't set explicitly, the OS will resize it when you try
to write more data than the remote end can accept. Overall, in my tests I
see this buffer grow to 2.5MB, and in Wireshark I see the difference grow
up to 5MB. As
> You should check tcpdump (or wireshark) to see where actually 12.5MB
> of data have been stuck.
Wireshark confirms my assumption: all the data is buffered by nginx.
Moreover, I see some buggy behavior, and I've seen it happen quite often.
This is a localhost TCP screenshot:
Hello Valentin,
> 1. Write socket buffer in kernel on node.js side where node.js writes
data.
we can take this out of the equation, since I measure my end time by the
event fired when the socket is closed on the Node.js side (I use HTTP/1.0
from nginx to node to keep this case simple).
> 2. Read socket
> Depending on the compromises you are willing to make, to accuracy or
> convenience, you may be able to come up with something good enough.
I have a more or less working solution; nginx breaks it, and I'm trying to
figure out how to fix that.
> Yes. That is (part of) what a proxy does. Even
> X-Accel-Buffering: no
> That will disable nginx's buffering for the request.
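For reference, my understanding from the docs is that the same thing can be forced from the nginx config side rather than per-response (the upstream name here is a placeholder):

```nginx
location / {
    proxy_pass http://backend;  # placeholder upstream name

    # Disables response buffering for this location; the backend can
    # request the same per-response via "X-Accel-Buffering: no".
    proxy_buffering off;
}
```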
At first it looked like exactly what I was looking for (after reading the
nginx docs), but after trying it I observed that it had no effect.
In the code that writes the headers I added