Re: Buffering issues with nginx

2017-07-31 Thread Dan34
I've run some tests and I'm pretty sure the reason I was getting 5MB stuck
on the nginx side is that the upstream socket's RCVBUF uses the default
socket buffers, which by default end up at 5MB.

I added logs to check that value, and even after I configured sndbuf and
rcvbuf inside the listen directive I was still getting a 5MB buffer on the
upstream socket. After I configured these buffers on the upstream connection
(by fiddling with the nginx code) I got immediate results: the 5MB delay
disappeared right away.
Similar to 'X-Accel-Buffering', I added X-Accel-Up-RCVBUF and
X-Accel-Down-SNDBUF headers, and they seem to work as expected.

I'm testing this scenario: downstream has limited bandwidth and upstream
(node) can generate data much faster. My goal is to ensure that the overall
read speed from upstream is limited by downstream, so that nginx doesn't try
to read faster than downstream can consume. Basically I'm OK with nginx
buffering some constant amount of data (e.g. not more than 1 second of data
at downstream speed).

Even after fixing that, nginx doesn't work as well as the simple
single-threaded vanilla test proxy that I wrote for testing.
That vanilla proxy delivers perfect results for a simple reason: I set the
upstream and downstream buffers to some low value (e.g. 128KB) and then use
blocking recv and blocking send in the same thread. Whatever it reads from
upstream it sends downstream right away in the same loop.

Any reason why this would not work with nginx? I don't see why it wouldn't
work with async sockets the same way it does with the blocking read/send
loop in the vanilla proxy.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275758#msg-275758

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Buffering issues with nginx

2017-07-30 Thread Dan34
Yes, I tested that and it appears to be the case. However, I don't see where
nginx sets rcvbuf on the upstream socket, as this one cannot be inherited.
Somehow, even with SND/RCV buffers set to low values and buffering disabled,
I get around 2.5MB stuck on the nginx side. With my own simple proxy I get
perfect results: when the socket buffers are low, no data accumulates on the
proxy side.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275744#msg-275744



Re: Buffering issues with nginx

2017-07-29 Thread Dan34
In the nginx docs I see an sndbuf option in the listen directive.
Either there is something I don't understand about it, or the nginx
developers don't understand the meaning of sndbuf, but I see no point in
setting sndbuf on a listening socket; it just doesn't make sense to me.
sndbuf/rcvbuf is needed for an upstream proxy connection, or for a connected
socket (accepted from outside), but setting it on a listening socket seems
meaningless.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275730#msg-275730



Re: Buffering issues with nginx

2017-07-29 Thread Dan34
It looks like the buffers are bigger for localhost, but even when it's not
localhost I still get 5MB stuck in socket buffers. I was only able to get
perfect results by writing my own proxy in C and writing some obscure
node.js code to avoid buffering.

In any case, if nginx does not provide a way to control socket buffers I
cannot use it. For example, 'X-Accel-Buffering: no' supposedly disables
buffering (I didn't see any effect from it anyway), so I wanted to add some
kind of headers to tell nginx what buffers to set per connection. In my case
I do regular reverse-proxy work with nginx, but on certain connections I
need exact control, and nginx doesn't provide any. haproxy, for example,
worked much better for me, but its sndbuf/rcvbuf settings are global, which
is equally unacceptable.

Would it be easy to add headers like X-Accel-Up-Rcvbuf: 12345 and
X-Accel-Down-Sndbuf: 4567, so that on receiving them from the upstream nginx
would configure the sockets used by that connection?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275729#msg-275729



Re: Buffering issues with nginx

2017-07-24 Thread Dan34
I added some logging to my test proxy and compared the results with a
wireshark trace at some random point in time (t=511s), and the numbers match
exactly between the logs and wireshark.

This is a log line from my test proxy:

time: 511s, bytesSent:5571760, down:{ SND:478720 OUTQ:280480 } up:{ RCV:5109117 INQ:3837047 }

SND is SNDBUF, RCV is RCVBUF, OUTQ is SIOCOUTQ, INQ is SIOCINQ/FIONREAD.
"down" is the link between wget and the proxy, and "up" is the link from the
proxy to node on localhost.

At the same time in wireshark, the up link has 9409056 bytes ACKed and the
down link has 5291529 bytes ACKed, i.e. 9409056-5291529=4117527 bytes have
accumulated inside the proxy process. This is exactly the number of bytes
stuck in the socket queues on the up+down links (280480 + 3837047 = 4117527).

From the logs it also looks like when INQ and OUTQ reach values close to
SND/RCV, wireshark shows TCP packets with window size = 0 to stop any
transmission.
Presumably, if I set SND+RCV to 64KB then total buffering should not exceed
128KB inside the proxy buffers.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275640#msg-275640



Re: Buffering issues with nginx

2017-07-24 Thread Dan34
I wrote my own proxy and it appears that the data is all stuck in socket
buffers. If SNDBUF isn't set, the OS will resize it when you try to write
more data than the remote can accept. Overall, in my tests I see this buffer
grow to 2.5MB, while in wireshark I see the difference grow up to 5MB. Since
the SO_SNDBUF docs state that the actual internal value is usually twice as
large as what SO_SNDBUF reports, it starts to make sense where this max 5MB
difference goes.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275635#msg-275635



Re: Buffering issues with nginx

2017-07-22 Thread Dan34
> You should check tcpdump (or wireshark) to see where actually 12.5MB
> of data have been stuck.

Wireshark confirms my assumption: all the data is buffered by nginx.
Moreover, I see some buggy behavior, and I've seen it happen quite often.

This is localhost tcp screenshot: http://i.imgur.com/9Rz6Acs.png

You can see that after 1327 seconds nginx has ACKed 18.5MB (which is
13.9KB/s). node actually writes at 20KB/s to the socket and internally
buffers all unsent data. At this point node stops sending any data, and in
30 seconds nginx closes the socket (at 1399s).
nginx then goes on to deliver all the data it has buffered, and when it
finishes sending the 18.5MB it got from node before the TCP connection was
closed, it also closes the connection to wget. wget simply restarts the file
transfer with a new HTTP range request to download starting from 18.5MB; at
around 1820s you can see on this screenshot that nginx sends a new GET
request to node (that's the range request).


Here you can see outgoing packets from node around the same time nginx
closed the socket to node at 1399s: http://i.imgur.com/pdnDIFS.png
You can see that by this time the remote (wget) had ACKed exactly 14MB (as I
run wget with a 10KB/s rate limit).

So, even leaving TCP buffers aside, nginx buffers something like 5MB of
data. Moreover, when I review the node->nginx packet capture, nginx was
clearly reading at full speed (the 20KB/s limit on the node side), and then
at some point something triggered nginx to stop reading at full speed. This
happened at 391s, at which point nginx had ACKed 7.8MB (which is exactly
20KB/s). At the same time wget had ACKed only 4MB, so at that point nginx
was buffering around 4MB and started to slow down its read speed from node.

So the configs have no effect. What else should I check? Effectively, in
this scenario nginx should also read from node at 10KB/s (plus some fixed
buffer), and this doesn't seem to work properly in nginx.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275611#msg-275611



Re: Buffering issues with nginx

2017-07-21 Thread Dan34
Hello Valentin,

> 1. Write socket buffer in kernel on node.js side where node.js writes
data.

we can throw this out of the equation, as I measure my end time by the event
when the socket is closed on the node.js side (I use HTTP/1.0 from nginx to
node to keep this case simple).

> 2. Read socket buffer in kernel for node.js connection from what nginx
reads data.

SO_RCVBUF shouldn't be over 64KB by default. What does nginx use, and is
there a config that controls it? Still, this shouldn't be a big issue; I'm
fine with a constant buffer like that.


> 3. Heap memory buffer to that nginx reads data from kernel socket buffer
(controlled by proxy_buffers
> and proxy_buffer_size directives).
> 
> No buffering here means that nginx doesn't keep that data in buffers
> for some time, but writes it immediately to write socket buffer in kernel
> for client connection.

I'm trying to configure these to be skipped or used minimally. E.g. I don't
want any data to be held in them.

> 4. Write socket buffer in kernel for client connection where nginx writes
data.

SO_SNDBUF shouldn't be over 64KB by default either; perhaps nginx changes it
as well. What value does nginx use, and is there a config that controls it?

> 5. Read socket buffer in kernel for client connection from what wget reads
data.

We can throw this out of the equation and assume these aren't used, as for
my test I use the final time when wget finishes and prints its stats. There
is a highly unlikely chance that wget actually reads data twice as fast from
the network but shows a slower speed in its CLI results, "waiting" for data
that has already been received into local buffers. I don't think this can
happen, but I could check with wireshark just in case.


In short, these could affect my case: SO_RCVBUF and SO_SNDBUF on the nginx
side, plus whatever buffering nginx uses for handling data. I ran the same
test with 25MB of data and got an identical result: 12.5MB was buffered on
the nginx side. Those factors cannot really add up to 12.5MB and 10 minutes
of time.
There is a wild possibility that TCP window scaling resulted in some huge
window on the node->nginx side and that 12MB ended up stored in the TCP
window itself, but I'm not sure whether the TCP window is accounted for in
SO_RCVBUF or whether RCVBUF is extra on top of TCP's internals.

So... any ideas how nginx ends up buffering 12.5MB of data?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275608#msg-275608



Re: Buffering issues with nginx

2017-07-21 Thread Dan34
> Depending on the compromises you are willing to make, to accuracy or
> convenience, you may be able to come up with something good enough.

I have a more or less working solution. nginx breaks it and I'm trying to
figure out how to fix it.


> Yes. That is (part of) what a proxy does. Even without nginx as a
> reverse-proxy, your client might be talking through one or more proxy
> servers. You will never know whether your response got to the actual
> end client, without some extra verification step that only the end
> client does.

I don't care about the case where there are other proxies; I care about the
bytes that left my server. Specifically, bytes that left my server and were
ACKed by the next hop (either the final user or some proxy in between).
Verification isn't an option.

> > When I updated some of these buffering configs things improved, but
> > still were failing with smaller uploads that are still fully buffered
> > by nginx.

> proxy_buffers and proxy_buffer_size can be tuned (lowered, in this case,
> probably) to slow down nginx's receive-rate from your upstream.
> 
> If you can show one working configuration with a description of how
> it does not do what you want it to do, possibly someone can offer some
> advice on what to change.

I tried 'proxy_buffering off;' and it didn't make a difference. I'm fairly
confident that it's a bug in nginx, or some "feature" that doesn't get
disabled by any config.

Here's the full config that I use:

location / {
    proxy_pass http://localhost:80;
    #proxy_http_version 1.1;
    #proxy_http_version 1.0;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    #proxy_set_header Upgrade $http_upgrade;
    #proxy_set_header Connection 'upgrade';
    #proxy_set_header Connection 'close';
    proxy_buffering off;
    proxy_request_buffering off;
    #proxy_buffer_size 4k;
    #proxy_buffers 3 4k;
    proxy_no_cache 1;
    proxy_set_header Host $host;
    #proxy_cache_bypass $http_upgrade;
    proxy_hide_header X-Powered-By;
    proxy_max_temp_file_size 0;
}
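For what it's worth, the buffer-size knobs nginx does document are the sndbuf/rcvbuf parameters of the listen directive (they affect the client-facing socket, not the upstream one). A sketch of how that would look here; port and sizes are just examples:

```nginx
# Hypothetical sketch combining the listen-socket buffer parameters
# with the proxy settings from the config above.
server {
    listen 8080 sndbuf=32k rcvbuf=32k;
    location / {
        proxy_pass http://localhost:80;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_max_temp_file_size 0;
    }
}
```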

I run nginx on 8080 for testing, since it's not suitable for live use on 80
in my case, and I'm trying to figure out how to fix it.
And here's why I believe that there is a bug.

In my case, I wrote test code on the node side that serves some binary
content, and I can control the speed at which node serves it. On the
receiving end (on the other side of the planet) I use wget with
--limit-rate. In the test I'm trying to fix, I send 5MB from node.js at
20KB/s, and the client requesting that binary data reads it at 10KB/s.
Obviously the overall speed has to be 10KB/s, as it's limited by the client
requesting the data.

What happens is that the entire connection from nginx to node is closed
after node sends all its data to nginx. In my test, 5MB should take
approximately 500s to deliver, but node sees the TCP connection closed 255s
from the start (when there are still 250 seconds to go and 2.5MB is still
stuck on the nginx side). So no matter what I do, nginx totally breaks my
scenario; it does not obey any configs and still buffers 2.5MB.

Just in case any nginx devs read this: I have version 1.12.1 on Ubuntu.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275605#msg-275605



Re: Buffering issues with nginx

2017-07-21 Thread Dan34
> X-Accel-Buffering: no
> That will disable nginx's buffering for the request.

At first this looked like exactly what I was looking for (after reading the
nginx docs), but after trying it I observed no effect.
In the code that writes the headers I added res.setHeader('X-Accel-Buffering',
'no');

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275526,275603#msg-275603
