Re: Development

2014-07-01 Thread David Carlier
Hi Filipe,

What if I look into this one: http://trac.nginx.org/nginx/ticket/485

Would this kind of change to the ngx_http_request_t struct be accepted (and,
beyond that, changes to the auth basic module, upstream, ...)?

Thanks in advance.


On 30 June 2014 09:05, Filipe Da Silva fdasilv...@gmail.com wrote:

 Hi,

 In short: http://nginx.org/en/docs/contributing_changes.html

 And the patch must be generated with this option set in your hgrc file:

 [diff]
 showfunc = True

 Rgds,
 Filipe

 2014-06-28 8:22 GMT+02:00 David Carlier dcarl...@afilias.info:
  Hi all,
  I work as a C/C++ developer for a company that makes nginx modules, and I
  would like to know if I can contribute a bit.
 
  Kind regards.
  David CARLIER
 
  dotMobi / Afilias Technologies DUBLIN
 

 ___
 nginx-devel mailing list
 nginx-devel@nginx.org
 http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Two sub_filter directives

2014-07-01 Thread Sargas
Maxim, could you elaborate a bit on the caveats with unix sockets?


On 30 June 2014 at 21:54, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Sun, Jun 29, 2014 at 09:07:55PM +0400, Maxim Kozlov wrote:

  I have no desire to use third-party modules.
 
   A simple workaround is to add an extra proxying step
  Set up an additional virtual server? What is then the better way to
 proxy
  -- to a port or to a unix socket?

 Do whatever is more convenient/clearer for you.  From a technical
 point of view, even a separate location is enough:

 location / {
     sub_filter ...
     proxy_pass http://localhost/sub2/;
 }

 location /sub2/ {
     sub_filter ...
     ...
 }

 As for unix sockets, they let you gain a bit of performance, but
 there are occasional caveats, so I would not recommend using them
 without a real need.
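For reference, proxying to a unix socket differs from a TCP port only in the server address syntax; a minimal hedged sketch (the socket path and upstream name are placeholders, not from the thread):

```nginx
# Hypothetical upstream listening on a unix socket instead of a TCP port.
upstream backend {
    server unix:/var/run/backend.sock;
}

server {
    listen 80;

    location / {
        # Same proxy_pass syntax as for TCP upstreams.
        proxy_pass http://backend;
    }
}
```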

 --
 Maxim Dounin
 http://nginx.org/

 ___
 nginx-ru mailing list
 nginx-ru@nginx.org
 http://mailman.nginx.org/mailman/listinfo/nginx-ru


Re: Puzzling cache size behavior

2014-07-01 Thread Давид Мзареулян
That is unlikely to be the cause: my block size is 1 KB, the average file
size in the second cache is about 100 KB, and there are on the order of half
a million files. So the discrepancy could be at most half a gigabyte.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251307,251352#msg-251352


Re: rewrite xbt

2014-07-01 Thread Klim81
Maxim Dounin, thank you so much! I went with the return variant.
I spent several days experimenting, about half of them searching for an
answer to this problem.
Many thanks!

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251323,251363#msg-251363


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Andruxa
Maxim, thanks for the reply!

Indeed, the timer had been increased earlier on purpose, although we were
solving a slightly different problem.
The backend used to handle long-running requests with the connection kept
open, so all the timeouts were increased. We gradually sorted that problem
out, moved the long processes into the background, and now close the
connection right away, but apparently the setting was never reverted.

The load on the backend is low by every measure, and there are not that
many requests. We use munin for monitoring.

What is unclear right now:
1) The backend logs contain none of the requests that are marked 499 on the
frontend.
So the request was either not sent or never arrived.

2) Once the problem starts to show up, it reproduces consistently until
nginx is restarted.

3) All requests between the frontend and the backend go over http.
We found that if a resource requested from the frontend over http
consistently fails to come back, it may come back when requested over https.

4) Besides 499, status 401 was returned for some resources.
A debug log can be viewed here:
https://www.dropbox.com/s/6xw5frcmrypzk8p/filtered_401_renamed.log
Again, there is nothing in the backend logs. Incidentally, the backend runs
nginx 1.5.13.
And again, after restarting nginx on the frontend, everything is served as
expected.

The logs contain lines like these:
readv: 1:3560
readv() not ready (11: Resource temporarily unavailable)
Could such errors stem from connection problems between the frontend and
the backend?


Could you advise what else is worth checking?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251304,251365#msg-251365


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Andruxa
My colleagues and I spotted a hint in a Novell article:
http://www.novell.com/support/kb/doc.php?id=7007165

We tried watching the parameter change with tcpdump enabled -- the counter
does not grow.

Could you suggest what other diagnostic tools exist, and how this counter
reading could be fixed?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251304,251366#msg-251366


Re: Puzzling cache size behavior

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 02:36:19AM -0400,  Давид Мзареулян wrote:

 That is unlikely to be the cause: my block size is 1 KB, the average file
 size in the second cache is about 100 KB, and there are on the order of
 half a million files. So the discrepancy could be at most half a gigabyte.

Other possibilities:

- an incorrect in-memory size due to manual deletions (cured by 
  reloading the cache, e.g. as part of the upgrade procedure);

- surprises when using non-standard filesystems 
  (for example, xfs with a large allocsize does strange things, see 
  http://trac.nginx.org/nginx/ticket/157).

-- 
Maxim Dounin
http://nginx.org/


Re: Puzzling cache size behavior

2014-07-01 Thread Давид Мзареулян
Hmm, this looks like the "surprises" case. I am on XFS, as it happens.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251307,251375#msg-251375


Re: Puzzling cache size behavior

2014-07-01 Thread Давид Мзареулян
That said, I do not have allocsize set. Here are my parameters, in case
they help defeat this bug in the future:

$ mount -l
...
/dev/xvdf1 on /mnt1 type xfs (rw,noatime,nodiratime)

$ xfs_info /mnt1
meta-data=/dev/xvdf1 isize=256   agcount=4, agsize=26214144 blks
 =   sectsz=512   attr=2
data =   bsize=1024   blocks=104856576, imaxpct=25
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal   bsize=1024   blocks=32768, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251307,251377#msg-251377


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Vitaliy Okulov
If the counter did not go up while tcpdump was running, then either the
backlog needs to be increased, plus check the current NIC settings on the
frontend, the backend, and every switch along the path: full duplex
everywhere, NIC speed matching the switch port speed, and no errors
anywhere.
Alternatively, enable logging of request processing time in Apache:
httpd.apache.org/docs/2.2/mod/mod_log_config.html - the %T parameter - and
look for requests close to the timeout values configured in nginx.
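A sketch of what this could look like in the Apache config (the format name
"timed" and the log path are placeholders; %T logs whole seconds, while %D
would give microseconds):

```apache
# Append request processing time in seconds (%T) to a combined-style format.
LogFormat "%h %l %u %t \"%r\" %>s %b %T" timed
CustomLog /var/log/apache2/access_timed.log timed
```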




On 1 July 2014 at 13:44, Andruxa nginx-fo...@nginx.us wrote:

 My colleagues and I spotted a hint in a Novell article:
 http://www.novell.com/support/kb/doc.php?id=7007165

 We tried watching the parameter change with tcpdump enabled -- the
 counter does not grow.

 Could you suggest what other diagnostic tools exist, and how this counter
 reading could be fixed?

 Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251304,251366#msg-251366


Re: Puzzling cache size behavior

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 08:10:29AM -0400,  Давид Мзареулян wrote:

 That said, I do not have allocsize set. Here are my parameters, in case
 they help defeat this bug in the future:

The default allocsize, if the internet is to be believed, is 64k; with a 
typical response size of 100k, that is enough to produce the size 
discrepancy observed in your case.
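If one wanted to rule allocsize out explicitly, it can be pinned in the
mount options; a hypothetical fstab entry matching the xfs_info output
quoted earlier (the 64k value is only illustrative):

```
/dev/xvdf1  /mnt1  xfs  rw,noatime,nodiratime,allocsize=64k  0  0
```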

-- 
Maxim Dounin
http://nginx.org/


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Andruxa
Vitaliy, thanks for the information! We will try to set it up and keep
watching.

I have just discovered that I made a mistake and may have caused some
confusion about the counters:
I read the figures off the wrong adapter (I took the figures for the
management adapter).

Here are the figures on the main one:
  RX packets:612896756 errors:0 dropped:2060993 overruns:0 frame:0
  TX packets:631594750 errors:0 dropped:0 overruns:0 carrier:0

Here the dropped/total ratio is much lower.

What ratio can be considered acceptable?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251304,251388#msg-251388


Re: Two sub_filter directives

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 09:13:47AM +0300, Sargas wrote:

 Maxim, could you elaborate a bit on the caveats with unix sockets?

For example, at one time at least two different operating 
systems had problems with sendfile() over unix sockets, which 
led to trouble when sending requests with a large body.

At the moment I am not aware of any widespread problems with 
unix sockets (except, perhaps, for issues with one experimental 
sendfile() patch for FreeBSD that glebius@ is working on, but 
that is why it is called experimental).  Still, as already said, 
caveats do happen.

-- 
Maxim Dounin
http://nginx.org/


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Pavel V.

Hello, Andruxa.
On 1 July 2014 at 20:16:15 you wrote:

 Vitaliy, thanks for the information! We will try to set it up and keep
 watching.

 I have just discovered that I made a mistake and may have caused some
 confusion about the counters:
 I read the figures off the wrong adapter (I took the figures for the
 management adapter).

 Here are the figures on the main one:
   RX packets:612896756 errors:0 dropped:2060993 overruns:0 frame:0
   TX packets:631594750 errors:0 dropped:0 overruns:0 carrier:0

 Here the dropped/total ratio is much lower.
 What ratio can be considered acceptable?

dropped/total = 0.


-- 
Best regards,
 Pavel  mailto:pavel2...@ngs.ru


Re: Nginx returns 499 when proxying after several hours of uptime

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 05:30:03AM -0400, Andruxa wrote:

 Maxim, thanks for the reply!
 
 Indeed, the timer had been increased earlier on purpose, although we were
 solving a slightly different problem.
 The backend used to handle long-running requests with the connection kept
 open, so all the timeouts were increased. We gradually sorted that
 problem out, moved the long processes into the background, and now close
 the connection right away, but apparently the setting was never reverted.
 
 The load on the backend is low by every measure, and there are not that
 many requests. We use munin for monitoring.
 
 What is unclear right now:
 1) The backend logs contain none of the requests that are marked 499 on
 the frontend.
 So the request was either not sent or never arrived.

As I already wrote, it is obvious from the frontend logs that the 
connection to the backend is not being established.  Naturally, it 
will not show up in the backend logs.

You need to look at the listen queue on the backend (BSD - netstat 
-Lan, Linux - ss -nlt), and if everything is fine there - whether 
the corresponding SYN packets reach it at all.
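The checks described above could be run roughly like this (interface name
and port 80 are placeholders for the backend's actual values):

```shell
# Linux: list listening TCP sockets; Recv-Q is the current accept-queue
# depth, Send-Q the configured backlog.
ss -nlt

# FreeBSD: show listen queues (current/incomplete/maximum).
# netstat -Lan

# If the queue looks fine, capture SYNs on the backend to see whether
# they arrive at all:
# tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0 and port 80'
```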

 2) Once the problem starts to show up, it reproduces consistently until
 nginx is restarted.
 
 3) All requests between the frontend and the backend go over http.
 We found that if a resource requested from the frontend over http
 consistently fails to come back, it may come back when requested over
 https.
 
 4) Besides 499, status 401 was returned for some resources.
 A debug log can be viewed here:
 https://www.dropbox.com/s/6xw5frcmrypzk8p/filtered_401_renamed.log
 Again, there is nothing in the backend logs. Incidentally, the backend
 runs nginx 1.5.13.
 And again, after restarting nginx on the frontend, everything is served
 as expected.

I cannot say anything about the 401, but judging by the other 
symptoms, if the backend runs nginx, I would say there is a 
stateful firewall between the frontend and the backend that is 
running out of states.  Or you are running out of local ports.  
If so, the listen queue on the backend will look fine, and the 
SYN packets will simply not arrive.

 The logs contain lines like these:
 readv: 1:3560
 readv() not ready (11: Resource temporarily unavailable)
 Could such errors stem from connection problems between the frontend and
 the backend?

Those are not errors.

-- 
Maxim Dounin
http://nginx.org/


Re: Two sub_filter directives

2014-07-01 Thread Sargas
Thank you; I have not run into anything like that yet, but now I will know
where to look if a similar problem arises. Is the experimental patch the
async sendfile one?


On 1 July 2014 at 16:27, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Tue, Jul 01, 2014 at 09:13:47AM +0300, Sargas wrote:

  Maxim, could you elaborate a bit on the caveats with unix sockets?

 For example, at one time at least two different operating
 systems had problems with sendfile() over unix sockets, which
 led to trouble when sending requests with a large body.

 At the moment I am not aware of any widespread problems with
 unix sockets (except, perhaps, for issues with one experimental
 sendfile() patch for FreeBSD that glebius@ is working on, but
 that is why it is called experimental).  Still, as already said,
 caveats do happen.

 --
 Maxim Dounin
 http://nginx.org/


Re: Two sub_filter directives

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 07:39:25PM +0300, Sargas wrote:

 Thank you; I have not run into anything like that yet, but now I will
 know where to look if a similar problem arises. Is the experimental patch
 the async sendfile one?

Yes.

-- 
Maxim Dounin
http://nginx.org/


WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
Good afternoon! Please advise: I cannot get the admin area to work over
SSL; the browser keeps reporting a redirect loop. What could the cause be?
Please help me correct the configuration.

server {
    server_name mysite.com www.mysite.com;
    listen 192.168.1.100;
    listen 192.168.1.100:443 ssl;
    set $root_path /home/testing/data/www/mysite.com;

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
        root $root_path;
        access_log /home/nginx-logs/testing isp;
        access_log /home/httpd-logs/mysite.com.access.log ;
        error_page 404 = @fallback;
        add_header Access-Control-Allow-Origin *;
    }

    location / {
        proxy_pass http://192.168.1.100:81;
        proxy_redirect http://192.168.1.100:81/ /;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location ~* ^/(webstat|awstats|webmail|myadmin|pgadmin)/ {
        proxy_pass http://192.168.1.100:81;
        proxy_redirect http://192.168.1.100:81/ /;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location @fallback {
        proxy_pass http://192.168.1.100:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }

    include /usr/local/ispmgr/etc/nginx.inc;
    ssl_certificate /home/httpd-cert/testing/mysite.com.crt;
    ssl_certificate_key /home/httpd-cert/testing/mysite.com.key;
    charset UTF-8;
    disable_symlinks if_not_owner from=$root_path;
}

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251401#msg-251401


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread Maksim Kulik
I think this may help:
http://kulmaks.by/работа-wordpress-на-nginx-apache-постоянный-301-редирект/

And SSL is not to blame here at all.

On 1 July 2014 at 19:45, hitarcherru nginx-fo...@nginx.us wrote:

 Good afternoon! Please advise: I cannot get the admin area to work over
 SSL; the browser keeps reporting a redirect loop. What could the cause
 be? Please help me correct the configuration.

  Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251401,251401#msg-251401


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
So what exactly do I need to add to or change in my config?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251404#msg-251404


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
because without https:// support everything works fine

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251406#msg-251406


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread Maksim Kulik
Instead of the second and third location blocks, add:

location / {
    index index.php index.html index.htm;
    try_files $uri $uri/ @fallback;
}

location ~[^?]*/$ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://192.168.1.100:81;
}

That should, in theory, be enough.



On 1 July 2014 at 20:50, hitarcherru nginx-fo...@nginx.us wrote:

 So what exactly do I need to add to or change in my config?

 Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251401,251404#msg-251404


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread Maksim Kulik
Could we take a look at the Apache config? (It is acting as the backend, right?)
On 01.07.2014 at 21:07, hitarcherru nginx-fo...@nginx.us wrote:

 because without https:// support everything works fine

 Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251401,251406#msg-251406


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
I have made the corrections; it still does not work.

server {
    server_name xn8sbwecba5aajmm6f.xn--p1ai www.xn8sbwecba5aajmm6f.xn--p1ai;
    listen 78.108.92.172;
    listen 78.108.92.172:443 ssl;
    set $root_path /home/zapalsky/data/www/xn8sbwecba5aajmm6f.xn--p1ai;

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
        root $root_path;
        access_log /home/nginx-logs/zapalsky isp;
        access_log /home/httpd-logs/xn8sbwecba5aajmm6f.xn--p1ai.access.log ;
        error_page 404 = @fallback;
        add_header Access-Control-Allow-Origin *;
    }

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ @fallback;
    }

    location ~[^?]*/$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://78.108.92.172:81;
    }

    include /usr/local/ispmgr/etc/nginx.inc;
    ssl_certificate /home/httpd-cert/zapalsky/xn8sbwecba5aajmm6f.xn--p1ai.crt;
    ssl_certificate_key /home/httpd-cert/zapalsky/xn8sbwecba5aajmm6f.xn--p1ai.key;
    charset UTF-8;
    disable_symlinks if_not_owner from=$root_path;
}

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251409#msg-251409


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
Sure, here it is:

<Directory /home/data/www/site.com>
    Options -ExecCGI -Includes
    php_admin_value open_basedir /home/data:.
    php_admin_flag engine on
</Directory>

<VirtualHost 192.168.1.100:81>
    ServerName site.com
    AssignUserID user user
    CustomLog /home/httpd-logs/site.access.log combined
    DocumentRoot /home/data/www/site
    ErrorLog /home/httpd-logs/site.error.log
    ServerAdmin hitarc...@gmail.com
    ServerAlias www.site
    AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
    AddType application/x-httpd-php-source .phps
    php_admin_value open_basedir /home/data:.
    php_admin_value sendmail_path /usr/sbin/sendmail -t -i -f hitarc...@gmail.com
    php_admin_value upload_tmp_dir /home/data/mod-tmp
    php_admin_value session.save_path /home/data/mod-tmp
    AddDefaultCharset UTF-8
</VirtualHost>

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251410#msg-251410


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread Maksim Kulik
Your earlier point that it works fine over plain http was significant, and
now we need to look at the backend config.
On 01.07.2014 at 21:30, hitarcherru nginx-fo...@nginx.us wrote:

 I have made the corrections; it still does not work.

 server {
     server_name xn8sbwecba5aajmm6f.xn--p1ai www.xn8sbwecba5aajmm6f.xn--p1ai;
     listen 78.108.92.172;
     listen 78.108.92.172:443 ssl;
     set $root_path /home/zapalsky/data/www/xn8sbwecba5aajmm6f.xn--p1ai;

     location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
         root $root_path;
         access_log /home/nginx-logs/zapalsky isp;
         access_log /home/httpd-logs/xn8sbwecba5aajmm6f.xn--p1ai.access.log ;
         error_page 404 = @fallback;
         add_header Access-Control-Allow-Origin *;
     }

     location / {
         index index.php index.html index.htm;
         try_files $uri $uri/ @fallback;
     }

     location ~[^?]*/$ {
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_pass http://78.108.92.172:81;
     }

     include /usr/local/ispmgr/etc/nginx.inc;
     ssl_certificate /home/httpd-cert/zapalsky/xn8sbwecba5aajmm6f.xn--p1ai.crt;
     ssl_certificate_key /home/httpd-cert/zapalsky/xn8sbwecba5aajmm6f.xn--p1ai.key;
     charset UTF-8;
     disable_symlinks if_not_owner from=$root_path;
 }

 Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251401,251409#msg-251409


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
Well, what do you say?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251412#msg-251412


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread Maksim Kulik
Try adding this line to the Apache config:

SetEnvIf X-Forwarded-Proto https HTTPS=on

And, just in case, have a read of http://habrahabr.ru/post/142363/
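If changing the Apache config is not an option, a commonly used
WordPress-side equivalent is to flag HTTPS in wp-config.php based on the
X-Forwarded-Proto header the nginx frontend already sends (a sketch; place
it near the top of wp-config.php, before the settings are used):

```php
/* Trust the X-Forwarded-Proto header set by the nginx frontend so
 * WordPress knows the original request was HTTPS and does not keep
 * redirecting back to https:// in a loop. */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
    && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
```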


2014-07-01 21:42 GMT+03:00 hitarcherru nginx-fo...@nginx.us:

 Well, what do you say?

 Posted at Nginx Forum:
 http://forum.nginx.org/read.php?21,251401,251412#msg-251412


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
Now a 500 Internal Server Error message appears when requesting https://

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251414#msg-251414


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
wp-login.php?redirect_to=https%3A%2F%2Fsite.com%2Fwp-admin%2Freauth=1

some garbage in the URL

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251415#msg-251415


Re: WordPress + Nginx: encrypted admin area problem?

2014-07-01 Thread hitarcherru
Maybe something else needs to be added to .htaccess?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?21,251401,251416#msg-251416


Re: SSL slow on nginx

2014-07-01 Thread khav
Thanks Maxim and GreenGecko for the insights.

The worker process count does match my number of CPU cores (running on 8
cores atm).

How can I find out the number of handshakes per second occurring on the
server?

The openssl speed results have been posted at http://pastebin.com/hNeVhJfa
for readability.

You can find my full list of ssl ciphers here ---
http://pastebin.com/7xJRJgJC

If you can suggest faster ciphers with the same level of compatibility,
that would be awesome.

Will a faster CPU actually solve the issue?
My CPU load never went above 0.50 as far as I know, and the average is
around 0.30.
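One rough way to answer the handshakes-per-second question is to benchmark
from a client (the hostname here is a placeholder; this opens real
connections, so aim it at a test machine):

```shell
# Count how many new TLS sessions complete in 10 seconds; -new forces a
# full handshake per connection (omit it to measure session resumption).
openssl s_time -connect example.com:443 -new -time 10
```

Server-side, logging nginx's $ssl_session_reused variable in the access log
is one way to distinguish full handshakes from resumed sessions over time.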

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,251277,251353#msg-251353

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Jonathan Matthews
On 1 Jul 2014 07:58, Lucas Rolff lu...@slcoding.com wrote:

 Hi guys,

 I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4).

 Sadly I've found that, after upgrading, proxy_pass_header seems to have
stopped working, meaning no headers are passed from the upstream at all.

You need to read the proxy_pass_header and proxy_hide_header reference
documentation. You're using it wrongly, possibly because you've assumed it
takes generic parameters instead of very specific ones.

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff

Well, it used to work before 1.6.0..

For me 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header 
shows that I should do:


proxy_pass_header Cache-Control;

So that should be correct

Best regards,
Lucas Rolff

Jonathan Matthews wrote:

 On 1 Jul 2014 07:58, Lucas Rolff lu...@slcoding.com wrote:

  Hi guys,
 
  I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4).
 
  Sadly I've found out, after upgrading proxy_pass_header seems to stop
 working, meaning no headers is passed from the upstream at all

 You need to read the proxy_pass_header and proxy_hide_header reference
 documentation. You're using it wrongly, possibly because you've assumed
 it takes generic parameters instead of very specific ones.



Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Jonathan Matthews
On 1 Jul 2014 10:20, Lucas Rolff lu...@slcoding.com wrote:

 Well, it used to work before 1.6.0..

 For me
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header
shows that I should do:

 proxy_pass_header Cache-Control;

 So that should be correct

No. You have misread the documentation.

proxy_pass_header accepts a very limited set of headers whereas your use of
it assumes it is generic.

Please carefully *re*read the _pass_ AND _hide_ documentation as I
suggested.

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Jonathan Matthews
On 1 Jul 2014 10:34, Lucas Rolff lu...@slcoding.com wrote:

 Do you have a link to documentation that has info about this, then?
Because in the link below, and in
http://wiki.nginx.org/HttpProxyModule#proxy_pass_header, there's nothing
about what it accepts.

How about the doc you already found, and then the link that it contains:

 On 1 Jul 2014 10:20, Lucas Rolff lu...@slcoding.com wrote:
  For me
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
So.. where is the part that states I can't use proxy_pass_header 
Cache-Control, or Expires? :)))


Maybe I'm just stupid

Best regards,
Lucas Rolff

Jonathan Matthews wrote:

 On 1 Jul 2014 10:34, Lucas Rolff lu...@slcoding.com wrote:

  Do you have a link to a documentation that has info about this then?
 Because in the below link, and in
 http://wiki.nginx.org/HttpProxyModule#proxy_pass_header theres nothing
 about what it accepts.

 How about the doc you already found, and then the link that it contains:

  On 1 Jul 2014 10:20, Lucas Rolff lu...@slcoding.com wrote:
   For me
 http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header



Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Jonathan Matthews
On 1 Jul 2014 11:01, Lucas Rolff lu...@slcoding.com wrote:

 So.. Where is the thing that states I can't use proxy_pass_header
cache-control, or expires? :)))

The proxy_hide_header and  proxy_pass_header reference docs.

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
I've verified that 1.4.4 works as it should: I receive the Cache-Control 
and Expires headers sent from the upstream (Apache 2.4 in this case). 
Upgrading to nginx 1.6.0 breaks this - no config changes, nothing.

But thanks for the explanation, Robert!
I'll try to investigate further to see if I can find the root cause, 
since it seems very odd to me that the headers are suddenly not sent to 
the client anymore.


Best regards,
Lucas Rolff

Robert Paprocki wrote:

Can we move past passive-aggressive posting to a public mailing list and
actually try to accomplish something?

The nginx docs indicate the following about proxy_pass_header:

Permits passing otherwise disabled header fields from a proxied server
to a client.

'otherwise disabled header fields' are documented as the following (from
proxy_hide_header docs):

By default, nginx does not pass the header fields “Date”, “Server”,
“X-Pad”, and “X-Accel-...” from the response of a proxied server to a
client.

So I don't know why you would need to have proxy_pass_header
Cache-Control in the first place, since this wouldn't seem to be dropped
by default from the response of a proxied server to a client.
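
As a sketch of how the two directives interact (server and header names
here are placeholders, not taken from this thread):

```nginx
location / {
    proxy_pass http://backend;

    # Cache-Control and Expires are NOT in the default hidden set,
    # so they are passed to the client without any extra directives.

    # Re-enable a default-hidden header (here: the backend's Server):
    proxy_pass_header Server;

    # Hide a header that would otherwise be passed:
    proxy_hide_header X-Powered-By;
}
```

proxy_pass_header only makes sense for the default-hidden fields (or
fields hidden via proxy_hide_header); for anything else it is a no-op.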

Have you tried downgrading back to 1.4.4 to confirm whatever problem
you're having doesn't exist within some other part of your
infrastructure that was potentially changed as part of your upgrade?


On 07/01/2014 01:09 AM, Jonathan Matthews wrote:

On 1 Jul 2014 11:01, Lucas Rolfflu...@slcoding.com
mailto:lu...@slcoding.com  wrote:

So.. Where is the thing that states I can't use proxy_pass_header

cache-control, or expires? :)))

The proxy_hide_header and  proxy_pass_header reference docs.




Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
I've been investigating, and it seems to be related to 1.6 or so, because 
1.4.2 and 1.4.4 work perfectly with the config in the first email.


Can anyone reproduce this as well?

Best regards,
Lucas R

Robert Paprocki wrote:

[..]

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Valentin V. Bartenev
On Tuesday 01 July 2014 10:30:47 Lucas Rolff wrote:
 I've verified that 1.4.4 works as it should, I receive the cache-control 
 and expires headers sent from upstream (Apache 2.4 in this case), 
 upgrading to nginx 1.6.0 breaks this, no config changes, nothing.
 
 But thanks for the explanation Robert!
 I'll try to investigate it further to see if I can find the root cause, 
 since it seems very odd to me that these headers are suddenly not sent 
 to the client anymore.
 
[..]

They may not be sent because your backend stopped returning them for some 
reason.  Try to investigate what happens on the wire between your backend
and nginx.

  wbr, Valentin V. Bartenev



Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff

nginx:

curl -I http://domain.com/wp-content/uploads/2012/05/forside.png
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 01 Jul 2014 10:42:06 GMT
Content-Type: image/png
Content-Length: 87032
Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: 51399b28-153f8
Accept-Ranges: bytes

Backend:

curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png
HTTP/1.1 200 OK
Date: Tue, 01 Jul 2014 10:42:30 GMT
Server: Apache
Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
Accept-Ranges: bytes
Content-Length: 87032
Cache-Control: max-age=2592000
Expires: Thu, 31 Jul 2014 10:42:30 GMT
Content-Type: image/png

So backend returns the headers just fine.

Best regards,
Lucas Rolff


Valentin V. Bartenev wrote:

On Tuesday 01 July 2014 10:30:47 Lucas Rolff wrote:

I've verified that 1.4.4 works as it should, I receive the cache-control
and expires headers sent from upstream (Apache 2.4 in this case),
upgrading to nginx 1.6.0 breaks this, no config changes, nothing.

But thanks for the explanation Robert!
I'll try to investigate it further to see if I can find the root cause,
since it seems very odd to me that these headers are suddenly not sent to the client
anymore.


[..]

They may not be sent because your backend stopped returning them for some
reason.  Try to investigate what happens on the wire between your backend
and nginx.

   wbr, Valentin V. Bartenev


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Maxim Dounin
Hello!

On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote:

 Regarding:
 
  In http, responses are not guaranteed to be the same.  Each 
  response can be unique, and you can't assume responses have to be 
  identical even if their URLs match.
 
 Yes, but potentially unique does not imply that, upon the first valid ok or 
 valid partial response, it will likely be productive to continue to open 
 further such channels unless the first is no longer responsive; doing so will 
 most likely be counterproductive, only wasting limited resources by 
 establishing redundant channels - being seemingly why proxy_cache_lock was 
 introduced, as you initially suggested.

Again: responses are not guaranteed to be the same, and unless 
you are using cache (and hence proxy_cache_key and various header 
checks to ensure responses are at least interchangeable), the only 
thing you can do is to proxy requests one by one.

If you are using cache, then there is proxy_cache_key to identify 
a resource requested, and proxy_cache_lock to prevent multiple 
parallel requests to populate the same cache node (and 
proxy_cache_use_stale updating to prevent multiple requests when 
updating a cache node).
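
Put together, the three mechanisms above look roughly like this in a
configuration (a sketch with placeholder names, not a drop-in):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m max_size=1g;

server {
    location / {
        proxy_pass http://backend;

        proxy_cache     one;
        # Identifies the requested resource; responses sharing a key
        # must be interchangeable.
        proxy_cache_key $scheme$proxy_host$request_uri;

        # Only one request at a time populates a given new cache element.
        proxy_cache_lock on;

        # While an expired element is being refreshed by one request,
        # others get the stale copy instead of hitting the backend.
        proxy_cache_use_stale updating;
    }
}
```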

In theory, cache code can be improved (compared to what we 
currently have) to introduce sending of a response being loaded 
into a cache to multiple clients.  I.e., stop waiting for a cache 
lock once we've got the response headers, and stream the response 
body being loaded to all clients waiting for it.  This should/can 
help when loading large files into a cache, when waiting with 
proxy_cache_lock for a complete response isn't cheap.  In 
practice, introducing such a code isn't cheap either, and it's not 
about using other names for temporary files.

-- 
Maxim Dounin
http://nginx.org/



Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Robert Paprocki
You need to examine traffic over the wire between the proxy and the
origin as you send a request from an outside client to the proxy. This
will allow you to see if the origin is even returning the expected
headers to the proxy, or if the proxy is seeing a different response
than a direct client is.

On 07/01/2014 04:00 AM, Lucas Rolff wrote:
 
 So backend returns the headers just fine.



proxy_cache not serving file from edge server !!

2014-07-01 Thread shahzaib shahzaib
We've an origin and edge server with nginx-1.6. The origin web server (located
in the U.S.) is configured with the nginx geo module, and the edge (local ISP) is
configured with proxy_cache in order to cache files from the origin server and
serve them from there later. We're using the following method for caching with
proxy_cache :-

1. client (1.1.1.1) sends mp4 request to origin webserver and geo_module in
origin checks, if the ip is 1.1.1.1 then pass that client to the edge
server using proxy_pass.

2. Edge, checks if the file is in proxy_cache than it should serve the file
locally and if file is not in proxy_cache, it'll pass back the request to
origin server and client will be served from origin server as well as
requested file will also be cached in local server, so next time the edge
will not have to pass request again to origin server and serve the same
file via locally.

But it looks like our caching is not working as expected. Our ISP is
complaining that whenever the edge server serves a file, instead of serving
that file to the local client (1.1.1.1), it sends the traffic back towards the
origin server (U.S.), and all outgoing bandwidth is going back to the U.S.
instead of to local clients (so, of course, no bandwidth is being saved).

So I want to ask: if the origin server is passing the request to the edge server,
the cached file should be served locally, but the request goes back to the
origin server even when the cache status is HIT. Following are my configs :-

ORIGIN :-

geo $TW {
  default 0;
1.1.1.1 1;

}



server {
listen  80;
server_name  origin.files.com origin.gear.net  origin.gear.com;
location / {
root   /var/www/html/files;
index index.html index.htm index.php;

}


location ~ \.(mp4|jpg)$ {

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
if ($TW) {
proxy_pass http://tw002.edge.com:80;
}
 mp4;
root /var/www/html/files;


expires 7d;
valid_referers none blocked  video.pk *.video.pk blog.video.pk *.
facebook.com *.twitter.com *.files.com *.gear.net video.tv *.video.tv
videomedia.tv www.videomedia.tv embed.videomedia.tv;
if ($invalid_referer) {
return   403;
}
}

 # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
root /var/www/html/files;
fastcgi_pass   127.0.0.1:9000;
   fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME
$document_root$fastcgi_script_name;
includefastcgi_params;
}

location ~ /\.ht {
deny  all;
}
}

EDGE :-

#proxy_ignore_headers Set-Cookie;
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m
loader_threshold=200 loader_files=500 inactive=1d
max_size=62g;


server {

listen   80;
server_name  tw002.edge.com;
root /var/www/html/files;
location ~ \.(mp4|jpeg|jpg)$ {
   root   /var/www/html/files;
mp4;
try_files $uri @getfrom_origin;

}


location @getfrom_origin {
proxy_pass http://origin.files.com:80;
#   proxy_cache_valid 200 302   60m;
proxy_cache_valid  15d;
proxy_cache static;
proxy_cache_min_uses 1;
}



}
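
A quick way to check where responses are really coming from is to expose
the cache status on the edge (a sketch; the X-Cache-Status header name is
an arbitrary choice, and the 200 code in proxy_cache_valid is assumed):

```nginx
location @getfrom_origin {
    proxy_pass http://origin.files.com:80;
    proxy_cache static;
    proxy_cache_valid 200 15d;

    # MISS/HIT/EXPIRED etc. in every response makes it easy to see
    # with curl -I whether the edge cache is actually being used:
    add_header X-Cache-Status $upstream_cache_status;
}
```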

Help will be highly appreciated.

Re: proxy_cache not serving file from edge server !!

2014-07-01 Thread shahzaib shahzaib
Our caching method is :-

client -> origin -> edge.


On Tue, Jul 1, 2014 at 4:57 PM, shahzaib shahzaib shahzaib...@gmail.com
wrote:

 [..]

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 01:00:05PM +0200, Lucas Rolff wrote:

 nginx:
 
 curl -I http://domain.com/wp-content/uploads/2012/05/forside.png
 HTTP/1.1 200 OK
 Server: nginx
 Date: Tue, 01 Jul 2014 10:42:06 GMT
 Content-Type: image/png
 Content-Length: 87032
 Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
 Connection: keep-alive
 Vary: Accept-Encoding
 ETag: 51399b28-153f8
 Accept-Ranges: bytes
 
 Backend:
 
 curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png
 HTTP/1.1 200 OK
 Date: Tue, 01 Jul 2014 10:42:30 GMT
 Server: Apache
 Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
 Accept-Ranges: bytes
 Content-Length: 87032
 Cache-Control: max-age=2592000
 Expires: Thu, 31 Jul 2014 10:42:30 GMT
 Content-Type: image/png
 
 So backend returns the headers just fine.

The response returned by nginx is a static file served by nginx 
itself.  Note the ETag header returned, and the location 
~*.*\.(3gp|gif|jpg|jpeg|png|... in your config - it looks like 
the file exists on the filesystem, and returned directly as per 
configuration.  There is no surprise the response doesn't have any 
headers which are normally returned by your backend.

(And yes, all proxy_pass_header directives in your config are 
meaningless and should be removed.)
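
If the goal is to always have these files served by the backend together
with its Cache-Control/Expires headers, the static-file location can be
replaced with a plain proxy one - a sketch, with the backend port assumed
from the curl test earlier in the thread:

```nginx
location ~* \.(3gp|gif|jpg|jpeg|png)$ {
    # No root/try_files here: everything goes to Apache, so its
    # Cache-Control and Expires headers reach the client unchanged.
    proxy_pass http://127.0.0.1:8081;
    proxy_set_header Host $http_host;
}
```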

-- 
Maxim Dounin
http://nginx.org/



Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
As it appears a downstream response is not cached until first completely read 
into a temp_file (which for a large file may require 100's if not 1,000's of MB 
to be transferred), there appears to be no cache node formed from which to lock 
or serve stale responses, and thereby until the first cache node is 
usably created, proxy_cache_lock has nothing to lock requests to?

The code does not appear to be forming a cache node using the designated 
cache_key until the requested downstream element has completed transfer as 
you've noted?

For the scheme to work, a lockable cache_node would need to be formed immediately 
upon the first unique cache_key request, and not wait until the transfer of the 
requested item being stored into a temp_file is complete; as otherwise multiple 
redundant active streams between nginx and a backend server may be formed, each 
most likely transferring the same information needlessly; being what 
proxy_cache_lock was seemingly introduced to prevent (but it doesn't)?

On Jul 1, 2014, at 7:01 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 [..]


Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
But if files were served from the backend, I would assume the 
$upstream_response_time variable in nginx would return something other than 
a dash in 1.4.4.

Like this, using log_format:
'$request $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time $upstream_response_time';



GET /css/colors.css HTTP/1.1 304 0 http://viewabove.dk/?page_id=2; 
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36 0.000 -


Again, the configs are exactly the same - same operating system, same 
permissions, same site - so it seems odd to me, especially because nothing 
has been listed in the change logs about this 'fix': it was there in earlier 
versions, and the file was actually served by nginx even when it did fetch 
headers from the backend.


Best regards,
Lucas Rolff

Valentin V. Bartenev wrote:

On Tuesday 01 July 2014 14:33:54 Lucas Rolff wrote:

Hmm, okay..

Then I'll go back to an old buggy version of nginx which gives me the
possibility to use the headers from Backend!


[..]

It doesn't do this either.  Probably, it just has a different configuration or
permissions, which results in try_files always failing, and all requests being
served from your backend.

   wbr, Valentin V. Bartenev


Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 02:33:54PM +0200, Lucas Rolff wrote:

 Hmm, okay..
 
 Then I'll go back to an old buggy version of nginx which gives me the
 possibility to use the headers from Backend!

You don't need to go back (and I doubt it will help) - if you 
don't want nginx to serve files directly, just don't configure it 
to do so.  Just commenting out the location in question will do 
the trick.

It may also be a good idea to re-read the configuration you are 
using to make sure you understand what it does.  It looks like 
most, if not all, of your questions are the result of misunderstanding 
what's written in your nginx.conf.

-- 
Maxim Dounin
http://nginx.org/



Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:

 As it appears a downstream response is not cached until first 
 completely read into a temp_file (which for a large file may 
 require 100's if not 1,000's of MB be transferred), there 
 appears to be no cache node formed which to lock or serve 
 stale responses from, and thereby until the first cache node 
 is useably created, proxy_cache_lock has nothing to lock 
 requests to?
 
 The code does not appear to be forming a cache node using the 
 designated cache_key until the requested downstream element has 
 completed transfer as you've noted?

Your reading of the code is incorrect.

A node in shared memory is created on a request start, and this is 
enough for proxy_cache_lock to work.  On the request completion, 
the temporary file is placed into the cache directory, and the 
node is updated to reflect that the cache file exists and can be 
used.

-- 
Maxim Dounin
http://nginx.org/



Re: SSL slow on nginx

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 03:10:07AM -0400, khav wrote:

 Thanks Maxim and GreenGecko for the insights
 
 
 The worker process does match my number of cpu cores (running on 8 cores
 atm)

Good.  It may also be a good idea to make sure you don't have 
multi_accept enabled, just in case.

 How can I know the number of handshakes per second occurring on the
 server?

First of all, count the number of connections per second (and 
requests per second) - it should be trivial, and may be extracted 
even with the nginx stub_status module.  I would generally recommend 
using logs though.  With logs, you should also be able to count the 
number of uncached handshakes - by using the $ssl_session_reused 
variable and the $connection_requests one.

See here:

http://nginx.org/r/$ssl_session_reused
http://nginx.org/r/$connection_requests
http://nginx.org/r/log_format
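
A minimal log_format along these lines (the format string and log path
are arbitrary examples):

```nginx
log_format ssl_stats '$remote_addr [$time_local] "$request" $status '
                     'reused=$ssl_session_reused reqs=$connection_requests';

server {
    listen 443 ssl;
    access_log /var/log/nginx/ssl_stats.log ssl_stats;
}
```

Lines where $ssl_session_reused is "." and $connection_requests is 1
correspond to full (uncached) handshakes, so counting them per second
with grep/awk gives the handshake rate.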

 The openssl speed result have been posted on http://pastebin.com/hNeVhJfa
 for readability

So, basically, your server is able to do about 800 plain RSA 
handshakes per second per core, 6400 handshakes total.
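
The arithmetic behind those figures is just a per-core rate times the
core count; a trivial sketch using the numbers quoted above:

```python
# Estimate full TLS handshake capacity from `openssl speed` output.
# 800 sign/s per core is the plain-RSA figure quoted above; 8 cores
# matches the worker_processes setting discussed in this thread.
rsa_handshakes_per_core = 800
cores = 8

total_handshakes_per_sec = rsa_handshakes_per_core * cores
print(total_handshakes_per_sec)  # 6400
```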

But as previously noted, things can be very much worse with DH 
ciphers, especially if you are using 2048 bit dhparams (or 
larger).

 If you can suggest faster ciphers with same level of compatibility , i
 would be awesome

It may be a good idea to disable DH regardless of the level of 
compatibility.  It's just too slow.
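
One way to do that is to drop the DHE suites from the cipher string
while keeping ECDHE (fast EC key exchange, still forward-secret) and
plain RSA key exchange as a fallback for old clients. A sketch using
OpenSSL cipher-string syntax; validate the result against an SSL Labs
scan before deploying:

```nginx
ssl_prefer_server_ciphers on;
# ECDHE first, RSA key exchange as a fallback, and the slow DHE/EDH
# suites excluded entirely:
ssl_ciphers HIGH:!aNULL:!MD5:!DHE:!EDH;
```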

 Will a faster cpu actually solve the issue?
 My cpu load never reached a value above 0.50 as far as I know, and the
 average is like 0.30

You mean - 50% CPU usage across all CPUs?  That looks high 
enough, though not critical.  But it may be a good idea to look 
into per-CPU stats, as well as per-process CPU usage.

Note well, CPU is a bottleneck I assumed based on few external 
tests.  It may not be a CPU, but, e.g., a packet loss somewhere.  
And, as I already said, numbers shown by Pingdom are close to 
theoretical minimum, and I don't think there is much room for 
improvement.  The one extra RTT probably deserves investigation, 
but I can't say it's an issue - it might be even legitimate.

-- 
Maxim Dounin
http://nginx.org/



Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Then how could multiple streams and corresponding temp_files ever be created 
upon successive requests for the same $uri with "proxy_cache_key $uri" and 
"proxy_cache_lock on;", if all subsequent requests are locked to the same 
cache_node created by the first request even prior to its completion?

You've previously noted:

 In theory, cache code can be improved (compared to what we 
 currently have) to introduce sending of a response being loaded 
 into a cache to multiple clients.  I.e., stop waiting for a cache 
 lock once we've got the response headers, and stream the response 
 body being load to all clients waited for it.  This should/can 
 help when loading large files into a cache, when waiting with 
 proxy_cache_lock for a complete response isn't cheap.  In 
 practice, introducing such a code isn't cheap either, and it's not 
 about using other names for temporary files.

Being what I apparently incorrectly understood proxy_cache_lock to actually do.

So if not the above, what does proxy_cache_lock actually do upon receipt of 
subsequent requests for the same $uri?


On Jul 1, 2014, at 9:20 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 [..]


Re: SSL slow on nginx

2014-07-01 Thread khav
I am currently using 1024-bit dhparams for maximum compatibility.


Here is my ssllabs report :
https://www.ssllabs.com/ssltest/analyze.html?d=filterbypass.me

If I remove DH from my cipher suites, will the handshake simulation still
succeed for all browsers listed in the ssllabs report above?


What is the best cipher suite according to you that is both fast and has
maximum compatibility ?

Thanks again

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,251277,251396#msg-251396



Re: dav and dav_ext, mp4 module, PROPFIND not working for files

2014-07-01 Thread Roman Arutyunyan

On 01 Jul 2014, at 09:35, audvare nginx-fo...@nginx.us wrote:

 Roman Arutyunyan Wrote:
 ---
 
 Currently nginx does not seem to be able to do what you want.  If
 you’re ready to patch
 the source here’s the patch fixing the issue.
 
 diff -r 0dd77ef9f114 src/http/modules/ngx_http_mp4_module.c
 --- a/src/http/modules/ngx_http_mp4_module.c    Fri Jun 27 13:06:09 2014 +0400
 +++ b/src/http/modules/ngx_http_mp4_module.c    Mon Jun 30 19:10:59 2014 +0400
 @@ -431,7 +431,7 @@ ngx_http_mp4_handler(ngx_http_request_t *r)
      ngx_http_core_loc_conf_t  *clcf;
 
      if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) {
 -        return NGX_HTTP_NOT_ALLOWED;
 +        return NGX_DECLINED;
      }
 
      if (r->uri.data[r->uri.len - 1] == '/') {
 
 
 Thanks. This works well.
 
  HTTP/1.1 207 Multi-Status
 <?xml version="1.0" encoding="utf-8" ?>
 <D:multistatus xmlns:D="DAV:">
 <D:response>
 <D:href>/video/avgn/t_screwattack_avgn_bugsbcc_901_gt.mp4</D:href>
 <D:propstat>
 <D:prop>
 ...
 
 Is there any chance this will make it into upstream so I don't have to keep
 on patching?
 
 Not that I mind that much, because with Gentoo and user patches it is
 extremely easy, but I guess I would of course be concerned that the code may
 change drastically such that the patch stops working.

Committing this into upstream is not planned.


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Maxim Dounin
Hello!

On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:

 Then how could multiple streams and corresponding temp_files 
 ever be created upon successive requests for the same $uri with 
 proxy_cache_key $uri and proxy_cache_lock on; if all 
 subsequent requests are locked to the same cache_node created by 
 the first request even prior to its completion?

Quoting documentation, http://nginx.org/r/proxy_cache_lock:

: When enabled, only one request at a time will be allowed to 
: populate a new cache element identified according to the 
: proxy_cache_key directive by passing a request to a proxied 
: server. Other requests of the same cache element will either wait 
: for a response to appear in the cache or the cache lock for this 
: element to be released, up to the time set by the 
: proxy_cache_lock_timeout directive.

So, there are at least two cases prior to its completion which 
are explicitly documented:

1. If the cache lock is released - this happens, e.g., if the 
   response isn't cacheable according to the response headers.

2. If proxy_cache_lock_timeout expires.
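
Both knobs are plain directives, e.g. (the values are illustrative only,
not a recommendation):

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache one;

    proxy_cache_lock on;
    # How long other requests for the same cache element wait for the
    # lock holder before they give up and go to the backend themselves:
    proxy_cache_lock_timeout 10s;
}
```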

-- 
Maxim Dounin
http://nginx.org/



Re: No CORS Workaround - SSL Proxy

2014-07-01 Thread Eric Swenson
Hello Maxim,


On 6/22/14, 7:32 AM, Maxim Dounin mdou...@mdounin.ru wrote:

If there is nothing in error logs, and you are getting 502 errors,
then there are two options:

1. The 502 errors are returned by your backend, not generated by
   nginx.

2. You did something wrong while configuring error logs and/or you
   are looking into a wrong log.

In this particular case, I would suggest the latter.

I¹ve verified that my error log are configured fine ‹ I do get errors
reported in my configured error log ‹ but nothing at the time that nginx
returns 502 errors to the client.

I¹ve checked the upstream server¹s logs, even when configured with debug
logging, and never see any requests making it to the upstream server when
nginx returns a 502 to the client.

If the issue were with the upstream server, why would simply restarting
nginx cause everything to proceed normally? I never have to touch the
upstream server (which, by the way, is serving other requests successfully
from other proxies at the same time as the nginx proxy that returns 502s is
doing so).

-- Eric



Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Thank you for your patience.

I mistakenly thought the 5-second default value of proxy_cache_lock_timeout 
was the maximum delay allowed between successive responses from the backend 
server in satisfaction of the reverse proxy request being cached prior to the 
cache lock being released, not the maximum delay for the response to be 
completely received and cached, as it actually is.

Now that I understand, please consider raising the default value considerably, 
or, more ideally, scaling it with the size of the item being cached and 
possibly some measure of the activity of the stream; in most circumstances, 
redundant streams should never be opened, as they tend only to make matters 
worse.

Thank you.

On Jul 1, 2014, at 12:40 PM, Maxim Dounin mdou...@mdounin.ru wrote:
 On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
 Then how could multiple streams and corresponding temp_files 
 ever be created upon successive requests for the same $uri with 
 "proxy_cache_key $uri" and "proxy_cache_lock on"; if all 
 subsequent requests are locked to the same cache_node created by 
 the first request even prior to its completion?
 
 Quoting documentation, http://nginx.org/r/proxy_cache_lock:
 
 : When enabled, only one request at a time will be allowed to 
 : populate a new cache element identified according to the 
 : proxy_cache_key directive by passing a request to a proxied 
 : server. Other requests of the same cache element will either wait 
 : for a response to appear in the cache or the cache lock for this 
 : element to be released, up to the time set by the 
 : proxy_cache_lock_timeout directive.
 
 So, there are at least two cases prior to its completion which 
 are explicitly documented:
 
 1. If the cache lock is released - this happens, e.g., if the 
   response isn't cacheable according to the response headers.
 
 2. If proxy_cache_lock_timeout expires.
 
 -- 
 Maxim Dounin
 http://nginx.org/



Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Lastly, is there any way to get proxy_store to work in combination with 
proxy_cache, possibly by enabling the completed temp_file to be saved as a 
proxy_store file within its logical URI path hierarchy, with the cache file 
descriptor aliased to it, or vice versa?

(As it's often nice to be able to view/access cached files within their natural 
URI hierarchy, which is virtually impossible if they are stored under their 
hashed names alone; and not lose the benefit of being able to lock multiple 
pending requests to the same cache_node being fetched, so as to minimize 
otherwise-redundant downstream requests prior to the file being cached.)
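For context, the two mechanisms are configured independently today, and there
is no built-in way to alias one store to the other. A sketch of each (paths
and the upstream address are hypothetical):

```nginx
# proxy_cache keeps files under hashed names inside the cache zone,
# with locking, expiry, and revalidation.
location /cached/ {
    proxy_cache one;
    proxy_pass  http://127.0.0.1:8080;
}

# proxy_store mirrors responses into a plain directory tree that
# follows the URI hierarchy, but offers no locking or expiry.
location /stored/ {
    root        /data/mirror;
    proxy_store on;
    proxy_pass  http://127.0.0.1:8080;
}
```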


On Jul 1, 2014, at 4:11 PM, Paul Schlie sch...@comcast.net wrote:

 Thank you for your patience.
 
 I mistakenly thought the 5-second default value of proxy_cache_lock_timeout 
 was the maximum delay allowed between successive responses from the backend 
 server in satisfaction of the reverse proxy request being cached prior to 
 the cache lock being released, not the maximum delay for the response to be 
 completely received and cached, as it actually is.
 
 Now that I understand, please consider raising the default value considerably, 
 or, more ideally, scaling it with the size of the item being cached and 
 possibly some measure of the activity of the stream; in most circumstances, 
 redundant streams should never be opened, as they tend only to make matters 
 worse.
 
 Thank you.
 
 On Jul 1, 2014, at 12:40 PM, Maxim Dounin mdou...@mdounin.ru wrote:
 On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
 Then how could multiple streams and corresponding temp_files 
 ever be created upon successive requests for the same $uri with 
 "proxy_cache_key $uri" and "proxy_cache_lock on"; if all 
 subsequent requests are locked to the same cache_node created by 
 the first request even prior to its completion?
 
 Quoting documentation, http://nginx.org/r/proxy_cache_lock:
 
 : When enabled, only one request at a time will be allowed to 
 : populate a new cache element identified according to the 
 : proxy_cache_key directive by passing a request to a proxied 
 : server. Other requests of the same cache element will either wait 
 : for a response to appear in the cache or the cache lock for this 
 : element to be released, up to the time set by the 
 : proxy_cache_lock_timeout directive.
 
 So, there are at least two cases prior to its completion which 
 are explicitly documented:
 
 1. If the cache lock is released - this happens, e.g., if the 
  response isn't cacheable according to the response headers.
 
 2. If proxy_cache_lock_timeout expires.
 
 -- 
 Maxim Dounin
 http://nginx.org/
 



peer closed connection in SSL handshake while SSL handshaking

2014-07-01 Thread gp
Hello,

I am seeing an odd thing in the error logs. We are developing an API, and
when our mobile devices first hit the nginx server after waking up, the
mobile device rejects the SSL cert. In the logs, we see the connection being
closed during the SSL handshake.

[info] 1450#0: *16 peer closed connection in SSL handshake while SSL
handshaking, client: IP, server: 0.0.0.0:443

Oddly enough, if we hit the API again (or any subsequent time before the
device is turned off), this problem does not reoccur - only on the first
access.

The sites are configured pretty vanilla right now:
server {
    server_name SERVERNAME;
    listen 443;
    ssl on;
    ssl_certificate ssl/newRSA.crt;
    ssl_certificate_key ssl/newRSA.key;
    root /www;
    index index.html index.htm index.php;
}

If anybody has any pointers, that would be great.

Thanks

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,251423,251423#msg-251423



Re: peer closed connection in SSL handshake while SSL handshaking

2014-07-01 Thread gp
I forgot to mention that this is running on Ubuntu 12.04LTS, with nginx
version: nginx/1.6.0.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,251423,251424#msg-251424



Re: changes to ngx.arg[1] not getting reflected in final response

2014-07-01 Thread jdewald
vamshi Wrote:
---
 header_filter_by_lua '
 ngx.header.content_length = nil
 ngx.header.set_cookie = nil
 
 if ngx.header.location then
 local _location = ngx.header.location
 _location = ngx.escape_uri(_location)
 _location = "http://10.0.9.44/?_redir_=" ..
 _location
 ngx.header.location = _location
 end
 ';
 
 body_filter_by_lua '
 
 local escUri = function (m)
 local _esc = "href=\"http://10.0.9.44/?_redir_="
 .. ngx.escape_uri(m[1]) .. "\""
 print(_esc)
 return _esc
 end
 
 local chunk, eof = ngx.arg[1], ngx.arg[2]
 local buffered = ngx.ctx.buffered
 if not buffered then
 buffered = {}
 ngx.ctx.buffered = buffered
 end
 
 if chunk ~= "" then
 buffered[#buffered + 1] = chunk
 ngx.arg[1] = nil
 end
 
 if eof then
 local whole = table.concat(buffered)
 ngx.ctx.buffered = nil
 local newStr, n, err = ngx.re.gsub(whole,
 "href=\"(.*)\"", escUri, "i")
 ngx.arg[1] = whole
 print(whole)
 end
 ';
 ... 
 
 
 As you can see, print(_esc) show that the URL was successfully
 URLencoded. Yet, the print(whole) line does not reflect the gsub()
 
 What could be issue here ?
 
 -Vamshi

gsub returns the result of the substitution; it does not modify the string in
place. You should be outputting/assigning newStr, not whole.
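Concretely, the tail of the body_filter_by_lua block would become something
like this (a sketch of the suggested fix, matching the snippet above):

```lua
if eof then
    local whole = table.concat(buffered)
    ngx.ctx.buffered = nil
    -- ngx.re.gsub returns the rewritten copy; "whole" is left unchanged
    local newStr, n, err = ngx.re.gsub(whole, "href=\"(.*)\"", escUri, "i")
    ngx.arg[1] = newStr
    print(newStr)
end
```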

Cheers,
Josh

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,251248,251425#msg-251425



Re: peer closed connection in SSL handshake while SSL handshaking

2014-07-01 Thread Kurt Cancemi
Hello,

Could your issue be caused by this bug:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/965371? It looks
like Ubuntu is not going to fix this bug in Precise. Also see
http://serverfault.com/questions/436737/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server,
where the poster had the same problem and resolved it by downgrading
openssl.

There are a few solutions if you think this is your problem.

(This is a bug in OpenSSL that has been fixed in later versions.)

1. Upgrade your system openssl library. (I wouldn't recommend doing that
though as it may break other packages.)
2. Compile nginx with the latest openssl library. (Negative is that you
have to maintain your own packages and monitor for openssl security
vulnerabilities.)
3. Upgrade your Linux distribution to 14.04 LTS.
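For option 2, nginx can be built against a statically linked OpenSSL via the
--with-openssl configure flag. A sketch (version numbers and paths below are
assumptions; adjust to the current releases):

```shell
# Build nginx against a locally unpacked OpenSSL source tree,
# leaving the system OpenSSL untouched.
tar xzf nginx-1.6.0.tar.gz && cd nginx-1.6.0
./configure --with-http_ssl_module --with-openssl=../openssl-1.0.1h
make && make install
```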

---
Kurt Cancemi
http://www.getwnmp.org