On Sat, Mar 16, 2013 at 10:18:31PM +0400, Ruslan Ermilov wrote:
On Sat, Mar 16, 2013 at 10:06:34PM +0400, ivan babrou wrote:
Maybe you're right about moving gdImageInterlace
from ngx_http_image_out. Should I fix something else or is it okay now?
I like the patch in the form I sent it
Hello
I was trying to add some custom headers via add_header directive to webdav
response and run into the problem: if response code was 201 (file created),
then custom headers weren't added. The reason is that the big if statement in
ngx_http_headers_filter doesn't check for NGX_HTTP_CREATED. Is it
Oh, never mind. This is fixed in recent nginx version.
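For anyone on an older build: newer nginx versions include 201 (Created) in the set of status codes that add_header applies to, and later releases also added an always parameter that attaches the header to responses of any code. A minimal sketch (header name, value, and location are illustrative):

```nginx
location /dav {
    dav_methods PUT DELETE MKCOL COPY MOVE;

    # "always" (where supported) attaches the header to every response,
    # including 201 Created from a WebDAV PUT.
    add_header X-Upload-Status "done" always;
}
```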
On Tue, Mar 19, 2013 at 2:11 PM, Dmitry Petrov dmitry.petr...@gmail.com wrote:
Hello
I was trying to add some custom headers via add_header directive to webdav
response and run into the problem: if response code was 201 (file created),
Hi all,
I'm using nginx as a frontend for my SCGI application and I want to
handle authentication in my SCGI code. I have to deal with POST
requests. Is it ok that nginx sends 401 Unauthorized after sending
100 Continue?
Are both requests below correct?
I'm asking because of this curl message:
Hi,
On 19.03.2013 16:31, Luka Perkov wrote:
Hi all,
I'm using nginx as a frontend for my SCGI application and I want to
handle authentication in my SCGI code. I have to deal with POST
requests. Is it ok that nginx sends 401 Unauthorized after sending
100 Continue?
Are both requests below
On 19 Mar 2013, at 12:31, Luka Perkov wrote:
Hi all,
I'm using nginx as a frontend for my SCGI application and I want to
handle authentication in my SCGI code. I have to deal with POST
requests. Is it ok that nginx sends 401 Unauthorized after sending
100 Continue?
Are both requests
Hi all.
There is a server running nginx as a frontend. Excerpts from the
config:
===
worker_processes 8;
events {
worker_connections 4096;
}
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=all:512m inactive=1d
max_size=6g;
proxy_cache_key
Hello.
The cache-related part of nginx.conf:
proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=cache:64m
max_size=1m inactive=600m;
proxy_temp_path /tmp/nginx;
The virtual server config:
location = / {
proxy_cache cache;
proxy_cache_key $uri;
proxy_cache_valid 200
Nginx 1.1.19. Judging by the config, the backend should only be hit once a
minute. And so it is, but sometimes, quite often, two or more requests get
through. It happens that so many requests reach the backend that, because of
its slow responses, nginx ends up sending all requests to the backend.
Hello!
Try adding this to the config:
proxy_cache_use_stale updating;
From the documentation:
Additionally, the updating parameter permits using a stale cached response
if it is currently being updated. This allows minimizing the number of
accesses to proxied
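In context, the directive slots in next to the cache settings quoted earlier; a minimal sketch, assuming the elided proxy_cache_valid was "200 1m" (matching the once-a-minute expectation) and an illustrative upstream name:

```nginx
location = / {
    proxy_cache       cache;
    proxy_cache_key   $uri;
    proxy_cache_valid 200 1m;

    # While one request is refreshing the expired entry, everyone else
    # gets the stale copy instead of piling onto the backend.
    proxy_cache_use_stale updating;

    proxy_pass http://backend;  # illustrative upstream
}
```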
On 19.03.2013 14:30, Maksim Kulik wrote:
Try adding this to the config:
proxy_cache_use_stale updating;
Thanks, that's exactly what we missed.
--
Sergey Panin
___
nginx-ru mailing list
nginx-ru@nginx.org
Hello!
On Mon, Mar 18, 2013 at 10:49:59PM +0400, Oleg wrote:
On Mon, Mar 18, 2013 at 08:00:55PM +0400, Maxim Dounin wrote:
Hello!
Can only a module in the NGX_HTTP_CONTENT_PHASE phase send an http redirect,
or can redirects
also be sent from the NGX_HTTP_ACCESS_PHASE phase?
You can.
On Tue, Mar 19, 2013 at 02:55:21PM +0400, Maxim Dounin wrote:
Hello!
As far as I understand, that will produce garbage in the output: first a 302
response without a body, and then the response to the original request. Take
a look at the response with telnet.
Yes :-). I assumed as much, but forgot to check.
Some 'ba' characters in
And do the headers allow the response to be cached?
On 19.03.2013 12:23, sitsalavat nginx-fo...@nginx.us wrote:
Hi all.
There is a server running nginx as a frontend. Excerpts from the
config:
===
worker_processes 8;
events {
worker_connections 4096;
Thanks!
The documentation really lacks 'see also' cross-references... :-)
Tuesday, March 19, 2013, 17:25 +04:00 from Maxim Dounin mdou...@mdounin.ru:
Hello!
On Tue, Mar 19, 2013 at 09:07:43AM +0400, Nicholas Kostirya wrote:
[...]
And in such a configuration, when the backend returns a response with
On Tue, Mar 19, 2013 at 09:25:09AM +0530, Geo P.C. wrote:
Hi there,
We have 3 servers with Nginx as webserver. The setup is as follows:
So on the proxy server we need to set things up so that requests to
geotest.com and all its subdirectories like geotest.com/* go to app server 1,
except while
Hello Jay,
On Mar 19, 2013, at 2:09, Jay Oster j...@kodewerx.org wrote:
Hi again!
On Sun, Mar 17, 2013 at 2:17 AM, Jason Oster j...@kodewerx.org wrote:
Hello Andrew,
On Mar 16, 2013, at 8:05 AM, Andrew Alexeev and...@nginx.com wrote:
Jay,
You mean you keep seeing SYN-ACK loss
Hello!
We are running some application servers (Grails) and using nginx as a reverse
proxy in front of them for caching and load balancing purposes.
Everything is working as expected, but now that we have received our SSL
certificate, I am failing to route the SSL requests through nginx (I did
understand that
Hello!
On Tue, Mar 19, 2013 at 06:32:04AM -0400, gvag wrote:
Hi guys,
I am trying to find if it is possible to proxy a websocket request based on
the Sec-Websocket-Protocol. More specifically, is there a way to check the
Sec-Websocket-Protocol
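One approach (a sketch; the upstream addresses and the "chat" protocol value are made up) is to route on the request header via map, since the header is available as the variable $http_sec_websocket_protocol:

```nginx
# Pick an upstream address based on the Sec-WebSocket-Protocol
# request header. Addresses and the "chat" value are illustrative.
map $http_sec_websocket_protocol $ws_backend {
    default "http://127.0.0.1:8001";
    "chat"  "http://127.0.0.1:8002";
}

server {
    listen 80;

    location /ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass $ws_backend;
    }
}
```

Since proxy_pass takes a variable here, literal IP addresses avoid the need for a resolver directive.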
On Tue, Mar 19, 2013 at 10:43 PM, Peter Booth peter_bo...@s5a.com wrote:
The code does the following:
1. removes an HTTP header named SWSSLHDR
2. replaces it with SWSSLHDR: port, where the port is the local port of
the current context's TCP connection, presumably the port that your F5
virtual
Peter Booth wrote on 03/19/2013 10:43:12 AM:
The code does the following:
1. removes an HTTP header named SWSSLHDR
2. replaces it with SWSSLHDR: port, where the port is the local port of
the current context's TCP connection, presumably the port that your F5
virtual server is listening on.
Ok,
It's getting better :-)
I could get it to listen on 443 by using
listen *:443 default_server ssl;
listen *:80;
(star colon port)
however the server still says
ERR_CONNECTION_REFUSED
and nothing appears in the access log for https .. any help would be highly
appreciated ..
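For comparison, a minimal HTTPS server block (certificate paths are placeholders). ERR_CONNECTION_REFUSED with an empty access log usually means nothing is listening on 443 at all, so it is worth checking `nginx -t` and whether the reload actually succeeded:

```nginx
server {
    listen *:80;
    listen *:443 default_server ssl;

    # Placeholder paths; nginx refuses to start the SSL listener
    # without a valid certificate/key pair.
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
}
```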
You might find that you get the most traction with OpenResty; it's an nginx
bundle project that includes ngx_lua,
HttpHeadersMoreModule and a bunch of other modules that are great for
transforming requests
and implementing F5-like logic. I have been using it for six months and it's
saved me a bunch
peter wrote on 03/19/2013 01:54:20 PM:
You might find that you get the most traction with OpenResty; it's an
nginx bundle project that includes ngx_lua,
HttpHeadersMoreModule and a bunch of other modules that are great
for transforming requests
and implementing F5-like logic. I have been
Thanks Maxim, I got what you mean.
Since I'm using FastCGI, I put something like this:
fastcgi_param HTTP_COOKIE $http_cookie; mycookie=$cookie_note;
(I populated cookie_note in my filter already; this was done for logging
purposes, so it is just a reuse of an existing facility)
More
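One caveat with the line above: in nginx configuration an unquoted ";" terminates the directive, so the value has to be quoted for the embedded semicolon to reach the backend. A sketch of what was probably intended (mycookie and $cookie_note are the poster's own names):

```nginx
# Quote the value so the embedded ";" is part of the parameter
# instead of ending the directive early.
fastcgi_param HTTP_COOKIE "$http_cookie; mycookie=$cookie_note";
```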
On Tue, Mar 19, 2013 at 07:42:25PM +0530, Geo P.C. wrote:
Hi there,
location / {
proxy_pass http://192.168.0.1/; #app1
}
location /cms {
proxy_pass http://192.168.0.2/; #
}
1. geotest.com -> Working fine, getting the contents of the app1 server
2.
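One thing worth noting about the config above: because proxy_pass carries a URI part ("/"), the prefix matched by the location is replaced, so with "location /cms" a request for /cms/page reaches app server 2 as //page. If the backend expects the original path, drop the URI part (a sketch using the poster's addresses):

```nginx
location /cms {
    # No URI part after the address: /cms/page is forwarded
    # upstream unchanged as /cms/page.
    proxy_pass http://192.168.0.2;
}
```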
I have an application behind nginx (Rails on Unicorn if it matters) which
listens on a UNIX socket.
It all works nice, especially when load is low. When there is some load on
the server though (say 70%), I randomly(?) get a bunch of 502 responses --
usually in batches.
Thus the question: when
In my experience, Nginx returns 502 when the upstream server (Unicorn in
your case) doesn't respond or terminates the connection unexpectedly.
_Nik
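To expand on that: under load, a Unicorn behind a UNIX socket typically produces 502s when its listen backlog overflows and the kernel starts refusing connections. The nginx side is usually shaped like this (the socket path is a guess; fail_timeout=0 follows Unicorn's sample configs, telling nginx to keep retrying the socket rather than marking it failed):

```nginx
upstream unicorn {
    # Socket path is illustrative; fail_timeout=0 is what Unicorn's
    # sample nginx config recommends for a single local backend.
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;

    location / {
        proxy_pass http://unicorn;
    }
}
```

Raising Unicorn's :backlog (and worker count) is usually where the actual fix lives.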
On 3/19/2013 1:29 PM, fastcatch wrote:
I have an application behind nginx (Rails on Unicorn if it matters) which
listens on a UNIX socket.
It
Hi Andrei!
On Tue, Mar 19, 2013 at 2:49 AM, Andrei Belov de...@nginx.com wrote:
Hello Jay,
If I understand you right, the issue can be reproduced in the following cases:
1) client and server are on different EC2 instances, public IPs are used;
2) client and server are on different EC2 instances,
Valentin has already been working on this, and I believe he'll be able to
provide a little bit more generic patch.
Ok, well I might just use ours for now, but won't develop it any
further.
Any idea on a time frame for this more official patch?
Rob
On Mar 20, 2013, at 5:47, Matthieu Tourne wrote:
I just found an interesting behavior in Nginx while looking at a request that
was causing an error in my code.
For a request with no HTTP/xx version, Nginx will return no HTTP response
headers.
From what I gathered, this is just Nginx
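That matches HTTP/0.9 semantics: a request line without an HTTP version token is parsed as a 0.9 request, and a 0.9 response is the body alone, with no status line and no headers. Roughly (illustrative exchange):

```
GET /index.html              <- no "HTTP/1.x" token: treated as HTTP/0.9

<html>...</html>             <- response is body only; no status line,
                                no response headers
```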