Re: nginx-1.27.0 - shasum does not match with expected shasum from pkg-oss

2024-05-29 Thread Igor Ippolitov

On 29/05/2024 17:22, Igor Ippolitov wrote:

On 29/05/2024 16:52, Daniel Jagszent wrote:

Hello,

the SHA512 of https://nginx.org/download/nginx-1.27.0.tar.gz (downloaded
2024-05-29 15:42:02 UTC) is
251bfe65c717a8027ef05caae2ab2ea73b9b544577f539a1d419fe6adf0bcc846b73b58f54ea3f102df79aaf340e4fa56793ddadea3cd61bcbbe2364ef94bacb 



This does not match the shasum expected here
https://hg.nginx.org/pkg-oss/file/tip/contrib/src/nginx/SHA512SUMS#l57
___

Daniel,

Thank you for spotting this.

Indeed, due to a last-minute change to the CHANGES wording, the checksum
of the published archives differs from what pkg-oss expects.

There are no changes to the code itself, though.
We are working on publishing new packages and an updated pkg-oss
repository with corrected checksums.


I will update you later when packages and pkg-oss are published.

Kind regards,
Igor
___


Daniel,

The pkg-oss repository and the packages themselves have now been published
and should match the sources precisely.


Kind regards,
Igor.
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.27.0 - shasum does not match with expected shasum from pkg-oss

2024-05-29 Thread Igor Ippolitov

On 29/05/2024 16:52, Daniel Jagszent wrote:

Hello,

the SHA512 of https://nginx.org/download/nginx-1.27.0.tar.gz (downloaded
2024-05-29 15:42:02 UTC) is
251bfe65c717a8027ef05caae2ab2ea73b9b544577f539a1d419fe6adf0bcc846b73b58f54ea3f102df79aaf340e4fa56793ddadea3cd61bcbbe2364ef94bacb

This does not match the shasum expected here
https://hg.nginx.org/pkg-oss/file/tip/contrib/src/nginx/SHA512SUMS#l57
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Daniel,

Thank you for spotting this.

Indeed, due to a last-minute change to the CHANGES wording, the checksum
of the published archives differs from what pkg-oss expects.

There are no changes to the code itself, though.
We are working on publishing new packages and an updated pkg-oss
repository with corrected checksums.


I will update you later when packages and pkg-oss are published.

Kind regards,
Igor
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


nginx-1.27.0 - shasum does not match with expected shasum from pkg-oss

2024-05-29 Thread Daniel Jagszent
Hello,

the SHA512 of https://nginx.org/download/nginx-1.27.0.tar.gz (downloaded
2024-05-29 15:42:02 UTC) is
251bfe65c717a8027ef05caae2ab2ea73b9b544577f539a1d419fe6adf0bcc846b73b58f54ea3f102df79aaf340e4fa56793ddadea3cd61bcbbe2364ef94bacb

This does not match the shasum expected here
https://hg.nginx.org/pkg-oss/file/tip/contrib/src/nginx/SHA512SUMS#l57
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[nginx-ru-announce] nginx-1.26.1

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.26.1                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfix: in HTTP/3.


-- 
Sergey Kandaurov

___
nginx-ru-announce mailing list
nginx-ru-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru-announce


[nginx-ru-announce] nginx security advisory (CVE-2024-31079, CVE-2024-32760, CVE-2024-34161, CVE-2024-35200)

2024-05-29 Thread Sergey Kandaurov
Hello!

Four security issues were identified in the nginx HTTP/3 implementation,
which might allow an attacker that uses a specially crafted QUIC session
to cause a worker process crash (CVE-2024-31079, CVE-2024-32760,
CVE-2024-35200), worker process memory disclosure on systems with MTU
larger than 4096 bytes (CVE-2024-34161), or might have potential other
impact (CVE-2024-31079, CVE-2024-32760).

The issues affect nginx compiled with the experimental ngx_http_v3_module
(not compiled by default) if the "quic" option of the "listen" directive
is used in a configuration file.

The issues affect nginx 1.25.0-1.25.5, 1.26.0.
The issues are fixed in nginx 1.27.0, 1.26.1.

Thanks to Nils Bars of CISPA.


-- 
Sergey Kandaurov

___
nginx-ru-announce mailing list
nginx-ru-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru-announce


[nginx-ru-announce] nginx-1.27.0

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.27.0                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Feature: variables support in the "proxy_limit_rate",
       "fastcgi_limit_rate", "scgi_limit_rate", and "uwsgi_limit_rate"
       directives.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfixes in HTTP/3.


-- 
Sergey Kandaurov

___
nginx-ru-announce mailing list
nginx-ru-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru-announce


[nginx-announce] nginx security advisory (CVE-2024-31079, CVE-2024-32760, CVE-2024-34161, CVE-2024-35200)

2024-05-29 Thread Sergey Kandaurov
Hello!

Four security issues were identified in the nginx HTTP/3 implementation, which
might allow an attacker that uses a specially crafted QUIC session to cause
a worker process crash (CVE-2024-31079, CVE-2024-32760, CVE-2024-35200),
worker process memory disclosure on systems with MTU larger than 4096
bytes (CVE-2024-34161), or might have potential other impact (CVE-2024-31079,
CVE-2024-32760).

The issues affect nginx compiled with the experimental ngx_http_v3_module
(not compiled by default) if the "quic" option of the "listen" directive
is used in a configuration file.
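
For illustration, a minimal sketch of the affected configuration shape
(the certificate paths are placeholders; only the listener with the "quic"
parameter is relevant):

    server {
        listen 443 ssl;              # TCP/TLS listener alone: not affected
        listen 443 quic reuseport;   # QUIC/HTTP/3 listener: affected pattern
        ssl_certificate     /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;
    }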

The issues affect nginx 1.25.0-1.25.5, 1.26.0.
The issues are fixed in nginx 1.27.0, 1.26.1.

Thanks to Nils Bars of CISPA.


-- 
Sergey Kandaurov
___
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


[nginx-announce] nginx-1.26.1

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.26.1                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfix: in HTTP/3.


-- 
Sergey Kandaurov
_______
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


[nginx-announce] nginx-1.27.0

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.27.0                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Feature: variables support in the "proxy_limit_rate",
       "fastcgi_limit_rate", "scgi_limit_rate", and "uwsgi_limit_rate"
       directives.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfixes in HTTP/3.


-- 
Sergey Kandaurov
___
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


nginx security advisory (CVE-2024-31079, CVE-2024-32760, CVE-2024-34161, CVE-2024-35200)

2024-05-29 Thread Sergey Kandaurov
Hello!

Four security issues were identified in the nginx HTTP/3 implementation,
which might allow an attacker that uses a specially crafted QUIC session
to cause a worker process crash (CVE-2024-31079, CVE-2024-32760,
CVE-2024-35200), worker process memory disclosure on systems with MTU
larger than 4096 bytes (CVE-2024-34161), or might have potential other
impact (CVE-2024-31079, CVE-2024-32760).

The issues affect nginx compiled with the experimental ngx_http_v3_module
(not compiled by default) if the "quic" option of the "listen" directive
is used in a configuration file.

The issues affect nginx 1.25.0-1.25.5, 1.26.0.
The issues are fixed in nginx 1.27.0, 1.26.1.

Thanks to Nils Bars of CISPA.


-- 
Sergey Kandaurov

___
nginx-ru mailing list
nginx-ru@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru


nginx-1.26.1

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.26.1                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfix: in HTTP/3.


-- 
Sergey Kandaurov

___
nginx-ru mailing list
nginx-ru@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru


nginx-1.27.0

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.27.0                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Feature: variables support in the "proxy_limit_rate",
       "fastcgi_limit_rate", "scgi_limit_rate", and "uwsgi_limit_rate"
       directives.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfixes in HTTP/3.


-- 
Sergey Kandaurov

___
nginx-ru mailing list
nginx-ru@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru


nginx security advisory (CVE-2024-31079, CVE-2024-32760, CVE-2024-34161, CVE-2024-35200)

2024-05-29 Thread Sergey Kandaurov
Hello!

Four security issues were identified in the nginx HTTP/3 implementation, which
might allow an attacker that uses a specially crafted QUIC session to cause
a worker process crash (CVE-2024-31079, CVE-2024-32760, CVE-2024-35200),
worker process memory disclosure on systems with MTU larger than 4096
bytes (CVE-2024-34161), or might have potential other impact (CVE-2024-31079,
CVE-2024-32760).

The issues affect nginx compiled with the experimental ngx_http_v3_module
(not compiled by default) if the "quic" option of the "listen" directive
is used in a configuration file.

The issues affect nginx 1.25.0-1.25.5, 1.26.0.
The issues are fixed in nginx 1.27.0, 1.26.1.

Thanks to Nils Bars of CISPA.


-- 
Sergey Kandaurov
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


nginx-1.26.1

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.26.1                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfix: in HTTP/3.


-- 
Sergey Kandaurov
_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


nginx-1.27.0

2024-05-29 Thread Sergey Kandaurov
Changes with nginx 1.27.0                                        29 May 2024

    *) Security: when using HTTP/3, processing of a specially crafted QUIC
       session might cause a worker process crash, worker process memory
       disclosure on systems with MTU larger than 4096 bytes, or might have
       potential other impact (CVE-2024-32760, CVE-2024-31079,
       CVE-2024-35200, CVE-2024-34161).
       Thanks to Nils Bars of CISPA.

    *) Feature: variables support in the "proxy_limit_rate",
       "fastcgi_limit_rate", "scgi_limit_rate", and "uwsgi_limit_rate"
       directives.

    *) Bugfix: reduced memory consumption for long-lived requests if "gzip",
       "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.

    *) Bugfix: nginx could not be built by gcc 14 if the --with-atomic
       option was used.
       Thanks to Edgar Bonet.

    *) Bugfixes in HTTP/3.


-- 
Sergey Kandaurov
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[nginx] release-1.26.1 tag

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/dd5bc1844be3
branches:  stable-1.26
changeset: 9268:dd5bc1844be3
user:  Sergey Kandaurov 
date:  Tue May 28 17:28:07 2024 +0400
description:
release-1.26.1 tag

diffstat:

 .hgtags |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r a63c124e34bc -r dd5bc1844be3 .hgtags
--- a/.hgtags   Tue May 28 17:26:54 2024 +0400
+++ b/.hgtags   Tue May 28 17:28:07 2024 +0400
@@ -479,3 +479,4 @@ 294a3d07234f8f65d7b0e0b0e2c5b05c12c5da0a
 173a0a7dbce569adbb70257c6ec4f0f6bc585009 release-1.25.4
 8618e4d900cc71082fbe7dc72af087937d64faf5 release-1.25.5
 a58202a8c41bf0bd97eef1b946e13105a105520d release-1.26.0
+a63c124e34bcf2d1d1feb8d40ff075103b967c4c release-1.26.1
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] nginx-1.26.1-RELEASE

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/a63c124e34bc
branches:  stable-1.26
changeset: 9267:a63c124e34bc
user:  Sergey Kandaurov 
date:  Tue May 28 17:26:54 2024 +0400
description:
nginx-1.26.1-RELEASE

diffstat:

 docs/xml/nginx/changes.xml |  56 ++
 1 files changed, 56 insertions(+), 0 deletions(-)

diffs (66 lines):

diff -r 5b3f409d55f0 -r a63c124e34bc docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml        Tue May 28 17:20:45 2024 +0400
+++ b/docs/xml/nginx/changes.xml        Tue May 28 17:26:54 2024 +0400
@@ -5,6 +5,62 @@
 <change_log title="nginx">
 
 
+<changes ver="1.26.1" date="2024-05-29">
+
+<change type="security">
+<para lang="ru">
+при использовании HTTP/3 обработка специально созданной QUIC-сессии могла
+приводить к падению рабочего процесса, отправке клиенту содержимого памяти
+рабочего процесса на системах с MTU больше 4096 байт, а также потенциально
+могла иметь другие последствия
+(CVE-2024-32760, CVE-2024-31079, CVE-2024-35200, CVE-2024-34161).
+Спасибо Nils Bars из CISPA.
+</para>
+<para lang="en">
+when using HTTP/3, processing of a specially crafted QUIC session might
+cause a worker process crash, worker process memory disclosure on systems
+with MTU larger than 4096 bytes, or might have potential other impact
+(CVE-2024-32760, CVE-2024-31079, CVE-2024-35200, CVE-2024-34161).
+Thanks to Nils Bars of CISPA.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+уменьшено потребление памяти для долгоживущих запросов,
+если используются директивы gzip, gunzip, ssi, sub_filter или grpc_pass.
+</para>
+<para lang="en">
+reduced memory consumption for long-lived requests
+if "gzip", "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+nginx не собирался gcc 14,
+если использовался параметр --with-atomic.
+Спасибо Edgar Bonet.
+</para>
+<para lang="en">
+nginx could not be built by gcc 14
+if the --with-atomic option was used.
+Thanks to Edgar Bonet.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+в HTTP/3.
+</para>
+<para lang="en">
+in HTTP/3.
+</para>
+</change>
+
+</changes>
+
+
 <changes ver="1.26.0" date="2024-04-23">
 
 <change>
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: fixed dynamic table overflow.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/ed593e26c79a
branches:  stable-1.26
changeset: 9263:ed593e26c79a
user:  Roman Arutyunyan 
date:  Tue May 28 17:18:50 2024 +0400
description:
HTTP/3: fixed dynamic table overflow.

While inserting a new entry into the dynamic table, first the entry is added,
and then older entries are evicted until table size is within capacity.  After
the first step, the number of entries may temporarily exceed the maximum
calculated from capacity by one entry, which previously caused table overflow.

The easiest way to trigger the issue is to keep adding entries with empty names
and values until first eviction.

The issue was introduced by 987bee4363d1.

diffstat:

 src/http/v3/ngx_http_v3_table.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 08f8e9c33a08 -r ed593e26c79a src/http/v3/ngx_http_v3_table.c
--- a/src/http/v3/ngx_http_v3_table.c   Tue May 28 17:18:28 2024 +0400
+++ b/src/http/v3/ngx_http_v3_table.c   Tue May 28 17:18:50 2024 +0400
@@ -308,7 +308,7 @@ ngx_http_v3_set_capacity(ngx_connection_
 prev_max = dt->capacity / 32;
 
 if (max > prev_max) {
-elts = ngx_alloc(max * sizeof(void *), c->log);
+elts = ngx_alloc((max + 1) * sizeof(void *), c->log);
 if (elts == NULL) {
 return NGX_ERROR;
 }
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: fixed handling of zero-length literal field line.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/5b3f409d55f0
branches:  stable-1.26
changeset: 9266:5b3f409d55f0
user:  Sergey Kandaurov 
date:  Tue May 28 17:20:45 2024 +0400
description:
HTTP/3: fixed handling of zero-length literal field line.

Previously, st->value was passed with NULL data pointer to header handlers.

diffstat:

 src/http/v3/ngx_http_v3_parse.c |  3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diffs (27 lines):

diff -r b32b516f36b1 -r 5b3f409d55f0 src/http/v3/ngx_http_v3_parse.c
--- a/src/http/v3/ngx_http_v3_parse.c   Tue May 28 17:19:21 2024 +0400
+++ b/src/http/v3/ngx_http_v3_parse.c   Tue May 28 17:20:45 2024 +0400
@@ -810,6 +810,7 @@ ngx_http_v3_parse_field_lri(ngx_connecti
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
@@ -932,6 +933,7 @@ ngx_http_v3_parse_field_l(ngx_connection
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
@@ -1072,6 +1074,7 @@ ngx_http_v3_parse_field_lpbi(ngx_connect
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: ignore CRYPTO frames after handshake completion.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/e4e9d7003b31
branches:  stable-1.26
changeset: 9264:e4e9d7003b31
user:  Roman Arutyunyan 
date:  Tue May 28 17:19:08 2024 +0400
description:
QUIC: ignore CRYPTO frames after handshake completion.

Sending handshake-level CRYPTO frames after the client's Finished message could
lead to memory disclosure and a potential segfault, if those frames are sent in
one packet with the Finished frame.

diffstat:

 src/event/quic/ngx_event_quic_ssl.c |  5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diffs (15 lines):

diff -r ed593e26c79a -r e4e9d7003b31 src/event/quic/ngx_event_quic_ssl.c
--- a/src/event/quic/ngx_event_quic_ssl.c   Tue May 28 17:18:50 2024 +0400
+++ b/src/event/quic/ngx_event_quic_ssl.c   Tue May 28 17:19:08 2024 +0400
@@ -326,6 +326,11 @@ ngx_quic_handle_crypto_frame(ngx_connect
 ngx_quic_crypto_frame_t  *f;
 
 qc = ngx_quic_get_connection(c);
+
+if (!ngx_quic_keys_available(qc->keys, pkt->level, 0)) {
+return NGX_OK;
+}
+
 ctx = ngx_quic_get_send_ctx(qc, pkt->level);
f = &frame->u.crypto;
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: ngx_quic_buffer_t use-after-free protection.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/b32b516f36b1
branches:  stable-1.26
changeset: 9265:b32b516f36b1
user:  Roman Arutyunyan 
date:  Tue May 28 17:19:21 2024 +0400
description:
QUIC: ngx_quic_buffer_t use-after-free protection.

Previously the last chain field of ngx_quic_buffer_t could still reference freed
chains and buffers after calling ngx_quic_free_buffer().  While normally an
ngx_quic_buffer_t object should not be used after freeing, resetting last_chain
field would prevent a potential use-after-free.

diffstat:

 src/event/quic/ngx_event_quic_frames.c |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (11 lines):

diff -r e4e9d7003b31 -r b32b516f36b1 src/event/quic/ngx_event_quic_frames.c
--- a/src/event/quic/ngx_event_quic_frames.cTue May 28 17:19:08 2024 +0400
+++ b/src/event/quic/ngx_event_quic_frames.cTue May 28 17:19:21 2024 +0400
@@ -648,6 +648,7 @@ ngx_quic_free_buffer(ngx_connection_t *c
 ngx_quic_free_chain(c, qb->chain);
 
 qb->chain = NULL;
+qb->last_chain = NULL;
 }
 
 
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: decoder stream pre-creation.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/08f8e9c33a08
branches:  stable-1.26
changeset: 9262:08f8e9c33a08
user:  Roman Arutyunyan 
date:  Tue May 28 17:18:28 2024 +0400
description:
HTTP/3: decoder stream pre-creation.

Previously a decoder stream was created on demand for sending Section
Acknowledgement, Stream Cancellation and Insert Count Increment.  If conditions
for sending any of these instructions never happen, a decoder stream is not
created at all.  These conditions include client not using the dynamic table and
no streams abandoned by server (RFC 9204, Section 2.2.2.2).  However RFC 9204,
Section 4.2 defines only one condition for not creating a decoder stream:

   An endpoint MAY avoid creating a decoder stream if its decoder sets
   the maximum capacity of the dynamic table to zero.

The change enables pre-creation of the decoder stream at HTTP/3 session
initialization if maximum dynamic table capacity is not zero.  Note that this
value is currently hardcoded to 4096 bytes and is not configurable, so the
stream is now always created.

Also, the change fixes a potential stack overflow when creating a decoder
stream in ngx_http_v3_send_cancel_stream() while draining a request stream by
ngx_drain_connections().  Creating a decoder stream involves calling
ngx_get_connection(), which calls ngx_drain_connections(), which will drain the
same request stream again.  If client's MAX_STREAMS for uni stream is high
enough, these recursive calls will continue until we run out of stack.
Otherwise, decoder stream creation will fail at some point and the request
stream connection will be drained.  This may result in use-after-free, since
this connection could still be referenced up the stack.

diffstat:

 src/http/v3/ngx_http_v3_request.c |  20 ++--
 src/http/v3/ngx_http_v3_uni.c |   4 +---
 src/http/v3/ngx_http_v3_uni.h |   2 ++
 3 files changed, 17 insertions(+), 9 deletions(-)

diffs (73 lines):

diff -r 04bc350b2919 -r 08f8e9c33a08 src/http/v3/ngx_http_v3_request.c
--- a/src/http/v3/ngx_http_v3_request.c Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_request.c Tue May 28 17:18:28 2024 +0400
@@ -134,7 +134,17 @@ ngx_http_v3_init(ngx_connection_t *c)
 }
 }
 
-return ngx_http_v3_send_settings(c);
+if (ngx_http_v3_send_settings(c) != NGX_OK) {
+return NGX_ERROR;
+}
+
+if (h3scf->max_table_capacity > 0) {
+if (ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_DECODER) == NULL) {
+return NGX_ERROR;
+}
+}
+
+return NGX_OK;
 }
 
 
@@ -398,14 +408,12 @@ ngx_http_v3_wait_request_handler(ngx_eve
 void
 ngx_http_v3_reset_stream(ngx_connection_t *c)
 {
-ngx_http_v3_session_t   *h3c;
-ngx_http_v3_srv_conf_t  *h3scf;
-
-h3scf = ngx_http_v3_get_module_srv_conf(c, ngx_http_v3_module);
+ngx_http_v3_session_t  *h3c;
 
 h3c = ngx_http_v3_get_session(c);
 
-if (h3scf->max_table_capacity > 0 && !c->read->eof && !h3c->hq
+if (!c->read->eof && !h3c->hq
+&& h3c->known_streams[NGX_HTTP_V3_STREAM_SERVER_DECODER]
 && (c->quic->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0)
 {
 (void) ngx_http_v3_send_cancel_stream(c, c->quic->id);
diff -r 04bc350b2919 -r 08f8e9c33a08 src/http/v3/ngx_http_v3_uni.c
--- a/src/http/v3/ngx_http_v3_uni.c Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_uni.c Tue May 28 17:18:28 2024 +0400
@@ -20,8 +20,6 @@ static void ngx_http_v3_close_uni_stream
 static void ngx_http_v3_uni_read_handler(ngx_event_t *rev);
 static void ngx_http_v3_uni_dummy_read_handler(ngx_event_t *wev);
 static void ngx_http_v3_uni_dummy_write_handler(ngx_event_t *wev);
-static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
-ngx_uint_t type);
 
 
 void
@@ -307,7 +305,7 @@ ngx_http_v3_uni_dummy_write_handler(ngx_
 }
 
 
-static ngx_connection_t *
+ngx_connection_t *
 ngx_http_v3_get_uni_stream(ngx_connection_t *c, ngx_uint_t type)
 {
 u_char buf[NGX_HTTP_V3_VARLEN_INT_LEN];
diff -r 04bc350b2919 -r 08f8e9c33a08 src/http/v3/ngx_http_v3_uni.h
--- a/src/http/v3/ngx_http_v3_uni.h Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_uni.h Tue May 28 17:18:28 2024 +0400
@@ -19,6 +19,8 @@ ngx_int_t ngx_http_v3_register_uni_strea
 
 ngx_int_t ngx_http_v3_cancel_stream(ngx_connection_t *c, ngx_uint_t stream_id);
 
+ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
+ngx_uint_t type);
 ngx_int_t ngx_http_v3_send_settings(ngx_connection_t *c);
 ngx_int_t ngx_http_v3_send_goaway(ngx_connection_t *c, uint64_t id);
 ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c,
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: client transport parameter data length checking.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/04bc350b2919
branches:  stable-1.26
changeset: 9261:04bc350b2919
user:  Sergey Kandaurov 
date:  Tue May 28 17:17:19 2024 +0400
description:
QUIC: client transport parameter data length checking.

diffstat:

 src/event/quic/ngx_event_quic_transport.c |  8 
 1 files changed, 8 insertions(+), 0 deletions(-)

diffs (18 lines):

diff -r b317a71f75ae -r 04bc350b2919 src/event/quic/ngx_event_quic_transport.c
--- a/src/event/quic/ngx_event_quic_transport.c Thu May 23 19:15:38 2024 +0400
+++ b/src/event/quic/ngx_event_quic_transport.c Tue May 28 17:17:19 2024 +0400
@@ -1750,6 +1750,14 @@ ngx_quic_parse_transport_params(u_char *
 return NGX_ERROR;
 }
 
+if ((size_t) (end - p) < len) {
+ngx_log_error(NGX_LOG_INFO, log, 0,
+  "quic failed to parse"
+  " transport param id:0x%xL, data length %uL too 
long",
+  id, len);
+return NGX_ERROR;
+}
+
 rc = ngx_quic_parse_transport_param(p, p + len, id, tp);
 
 if (rc == NGX_ERROR) {
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Optimized chain link usage (ticket #2614).

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/b317a71f75ae
branches:  stable-1.26
changeset: 9260:b317a71f75ae
user:  Roman Arutyunyan 
date:  Thu May 23 19:15:38 2024 +0400
description:
Optimized chain link usage (ticket #2614).

Previously chain links could sometimes be dropped instead of being reused,
which could result in increased memory consumption during long requests.

A similar chain link issue in ngx_http_gzip_filter_module was fixed in
da46bfc484ef (1.11.10).

Based on a patch by Sangmin Lee.

diffstat:

 src/core/ngx_output_chain.c  |  10 --
 src/http/modules/ngx_http_grpc_module.c  |   5 -
 src/http/modules/ngx_http_gunzip_filter_module.c |  18 ++
 src/http/modules/ngx_http_gzip_filter_module.c   |  10 +++---
 src/http/modules/ngx_http_ssi_filter_module.c|   8 ++--
 src/http/modules/ngx_http_sub_filter_module.c|   8 ++--
 6 files changed, 45 insertions(+), 14 deletions(-)

diffs (158 lines):

diff -r 31fe21f04103 -r b317a71f75ae src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c   Thu May 16 11:15:10 2024 +0200
+++ b/src/core/ngx_output_chain.c   Thu May 23 19:15:38 2024 +0400
@@ -117,7 +117,10 @@ ngx_output_chain(ngx_output_chain_ctx_t 
 
 ngx_debug_point();
 
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in = cl->next;
+
+ngx_free_chain(ctx->pool, cl);
 
 continue;
 }
@@ -203,7 +206,10 @@ ngx_output_chain(ngx_output_chain_ctx_t 
 /* delete the completed buf from the ctx->in chain */
 
 if (ngx_buf_size(ctx->in->buf) == 0) {
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in = cl->next;
+
+ngx_free_chain(ctx->pool, cl);
 }
 
 cl = ngx_alloc_chain_link(ctx->pool);
diff -r 31fe21f04103 -r b317a71f75ae src/http/modules/ngx_http_grpc_module.c
--- a/src/http/modules/ngx_http_grpc_module.c   Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_grpc_module.c   Thu May 23 19:15:38 2024 +0400
@@ -1231,7 +1231,7 @@ ngx_http_grpc_body_output_filter(void *d
 ngx_buf_t  *b;
 ngx_int_t   rc;
 ngx_uint_t  next, last;
-ngx_chain_t*cl, *out, **ll;
+ngx_chain_t*cl, *out, *ln, **ll;
 ngx_http_upstream_t*u;
 ngx_http_grpc_ctx_t*ctx;
 ngx_http_grpc_frame_t  *f;
@@ -1459,7 +1459,10 @@ ngx_http_grpc_body_output_filter(void *d
 last = 1;
 }
 
+ln = in;
 in = in->next;
+
+ngx_free_chain(r->pool, ln);
 }
 
 ctx->in = in;
diff -r 31fe21f04103 -r b317a71f75ae src/http/modules/ngx_http_gunzip_filter_module.c
--- a/src/http/modules/ngx_http_gunzip_filter_module.c  Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_gunzip_filter_module.c  Thu May 23 19:15:38 2024 +0400
@@ -333,6 +333,8 @@ static ngx_int_t
 ngx_http_gunzip_filter_add_data(ngx_http_request_t *r,
 ngx_http_gunzip_ctx_t *ctx)
 {
+ngx_chain_t  *cl;
+
 if (ctx->zstream.avail_in || ctx->flush != Z_NO_FLUSH || ctx->redo) {
 return NGX_OK;
 }
@@ -344,8 +346,11 @@ ngx_http_gunzip_filter_add_data(ngx_http
 return NGX_DECLINED;
 }
 
-ctx->in_buf = ctx->in->buf;
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in_buf = cl->buf;
+ctx->in = cl->next;
+
+ngx_free_chain(r->pool, cl);
 
 ctx->zstream.next_in = ctx->in_buf->pos;
 ctx->zstream.avail_in = ctx->in_buf->last - ctx->in_buf->pos;
@@ -374,6 +379,7 @@ static ngx_int_t
 ngx_http_gunzip_filter_get_buf(ngx_http_request_t *r,
 ngx_http_gunzip_ctx_t *ctx)
 {
+ngx_chain_t *cl;
 ngx_http_gunzip_conf_t  *conf;
 
 if (ctx->zstream.avail_out) {
@@ -383,8 +389,12 @@ ngx_http_gunzip_filter_get_buf(ngx_http_
 conf = ngx_http_get_module_loc_conf(r, ngx_http_gunzip_filter_module);
 
 if (ctx->free) {
-ctx->out_buf = ctx->free->buf;
-ctx->free = ctx->free->next;
+
+cl = ctx->free;
+ctx->out_buf = cl->buf;
+ctx->free = cl->next;
+
+ngx_free_chain(r->pool, cl);
 
 ctx->out_buf->flush = 0;
 
diff -r 31fe21f04103 -r b317a71f75ae src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c  Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_gzip_filter_module.c  Thu May 23 19:15:38 2024 +0400
@@ -985,10 +985,14 @@ static void
 ngx_http_gzip_filter_free_copy_buf(ngx_http_request_t *r,
 ngx_http_gzip_ctx_t *ctx)
 {
-ngx_chain_t  *cl;
+ngx_chain_t  *cl, *ln;
 
-for (cl = ctx->copied; cl; cl = cl->next) {
-ngx_pfree(r->pool

[nginx] Configure: fixed building libatomic test.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/31fe21f04103
branches:  stable-1.26
changeset: 9259:31fe21f04103
user:  Edgar Bonet 
date:  Thu May 16 11:15:10 2024 +0200
description:
Configure: fixed building libatomic test.

Using "long *" instead of "AO_t *" leads either to -Wincompatible-pointer-types
or -Wpointer-sign warnings, depending on whether long and size_t are compatible
types (e.g., ILP32 versus LP64 data models).  Notably, -Wpointer-sign warnings
are enabled by default in Clang only, and -Wincompatible-pointer-types is an
error starting from GCC 14.

Signed-off-by: Edgar Bonet 

diffstat:

 auto/lib/libatomic/conf |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 73770db03e73 -r 31fe21f04103 auto/lib/libatomic/conf
--- a/auto/lib/libatomic/conf   Tue May 28 17:14:08 2024 +0400
+++ b/auto/lib/libatomic/conf   Thu May 16 11:15:10 2024 +0200
@@ -19,7 +19,7 @@ else
   #include "
 ngx_feature_path=
 ngx_feature_libs="-latomic_ops"
-ngx_feature_test="long  n = 0;
+ngx_feature_test="AO_t  n = 0;
  if (!AO_compare_and_swap(&n, 0, 1))
  return 1;
  if (AO_fetch_and_add(&n, 1) != 1)
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Version bump.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/73770db03e73
branches:  stable-1.26
changeset: 9258:73770db03e73
user:  Sergey Kandaurov 
date:  Tue May 28 17:14:08 2024 +0400
description:
Version bump.

diffstat:

 src/core/nginx.h |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (14 lines):

diff -r cdf74ac25b47 -r 73770db03e73 src/core/nginx.h
--- a/src/core/nginx.h  Tue Apr 23 18:04:32 2024 +0400
+++ b/src/core/nginx.h  Tue May 28 17:14:08 2024 +0400
@@ -9,8 +9,8 @@
 #define _NGINX_H_INCLUDED_
 
 
-#define nginx_version  1026000
-#define NGINX_VERSION  "1.26.0"
+#define nginx_version  1026001
+#define NGINX_VERSION  "1.26.1"
 #define NGINX_VER  "nginx/" NGINX_VERSION
 
 #ifdef NGX_BUILD
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] release-1.27.0 tag

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/02e9411009b9
branches:  
changeset: 9257:02e9411009b9
user:  Sergey Kandaurov 
date:  Tue May 28 17:22:30 2024 +0400
description:
release-1.27.0 tag

diffstat:

 .hgtags |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r 2166e329fb4e -r 02e9411009b9 .hgtags
--- a/.hgtags   Tue May 28 17:19:38 2024 +0400
+++ b/.hgtags   Tue May 28 17:22:30 2024 +0400
@@ -478,3 +478,4 @@ 1d839f05409d1a50d0f15a2bf36547001f99ae40
 294a3d07234f8f65d7b0e0b0e2c5b05c12c5da0a release-1.25.3
 173a0a7dbce569adbb70257c6ec4f0f6bc585009 release-1.25.4
 8618e4d900cc71082fbe7dc72af087937d64faf5 release-1.25.5
+2166e329fb4ed7d6da7c823ee6499f7d06d7bc00 release-1.27.0
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] nginx-1.27.0-RELEASE

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/2166e329fb4e
branches:  
changeset: 9256:2166e329fb4e
user:  Sergey Kandaurov 
date:  Tue May 28 17:19:38 2024 +0400
description:
nginx-1.27.0-RELEASE

diffstat:

 docs/xml/nginx/changes.xml |  68 ++
 1 files changed, 68 insertions(+), 0 deletions(-)

diffs (78 lines):

diff -r ebdeca3b392b -r 2166e329fb4e docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml        Tue May 28 17:20:45 2024 +0400
+++ b/docs/xml/nginx/changes.xml        Tue May 28 17:19:38 2024 +0400
@@ -5,6 +5,74 @@
 <change_log title="nginx">
 
 
+<changes ver="1.27.0" date="2024-05-29">
+
+<change type="security">
+<para lang="ru">
+при использовании HTTP/3 обработка специально созданной QUIC-сессии могла
+приводить к падению рабочего процесса, отправке клиенту содержимого памяти
+рабочего процесса на системах с MTU больше 4096 байт, а также потенциально
+могла иметь другие последствия
+(CVE-2024-32760, CVE-2024-31079, CVE-2024-35200, CVE-2024-34161).
+Спасибо Nils Bars из CISPA.
+</para>
+<para lang="en">
+when using HTTP/3, processing of a specially crafted QUIC session might
+cause a worker process crash, worker process memory disclosure on systems
+with MTU larger than 4096 bytes, or might have potential other impact
+(CVE-2024-32760, CVE-2024-31079, CVE-2024-35200, CVE-2024-34161).
+Thanks to Nils Bars of CISPA.
+</para>
+</change>
+
+<change type="feature">
+<para lang="ru">
+директивы proxy_limit_rate, fastcgi_limit_rate,
+scgi_limit_rate и uwsgi_limit_rate поддерживают переменные.
+</para>
+<para lang="en">
+variables support
+in the "proxy_limit_rate", "fastcgi_limit_rate", "scgi_limit_rate",
+and "uwsgi_limit_rate" directives.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+уменьшено потребление памяти для долгоживущих запросов,
+если используются директивы gzip, gunzip, ssi, sub_filter или grpc_pass.
+</para>
+<para lang="en">
+reduced memory consumption for long-lived requests
+if "gzip", "gunzip", "ssi", "sub_filter", or "grpc_pass" directives are used.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+nginx не собирался gcc 14,
+если использовался параметр --with-atomic.
+Спасибо Edgar Bonet.
+</para>
+<para lang="en">
+nginx could not be built by gcc 14
+if the --with-atomic option was used.
+Thanks to Edgar Bonet.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+Исправления в HTTP/3.
+</para>
+<para lang="en">
+Bugfixes in HTTP/3.
+</para>
+</change>
+
+</changes>
+
+
 <changes ver="1.25.5" date="2024-04-16">
 
 <change type="feature">
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: fixed handling of zero-length literal field line.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/ebdeca3b392b
branches:  
changeset: 9255:ebdeca3b392b
user:  Sergey Kandaurov 
date:  Tue May 28 17:20:45 2024 +0400
description:
HTTP/3: fixed handling of zero-length literal field line.

Previously, st->value was passed with NULL data pointer to header handlers.

diffstat:

 src/http/v3/ngx_http_v3_parse.c |  3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diffs (27 lines):

diff -r 55a6a45b7fa9 -r ebdeca3b392b src/http/v3/ngx_http_v3_parse.c
--- a/src/http/v3/ngx_http_v3_parse.c   Tue May 28 17:19:21 2024 +0400
+++ b/src/http/v3/ngx_http_v3_parse.c   Tue May 28 17:20:45 2024 +0400
@@ -810,6 +810,7 @@ ngx_http_v3_parse_field_lri(ngx_connecti
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
@@ -932,6 +933,7 @@ ngx_http_v3_parse_field_l(ngx_connection
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
@@ -1072,6 +1074,7 @@ ngx_http_v3_parse_field_lpbi(ngx_connect
 
 st->literal.length = st->pint.value;
 if (st->literal.length == 0) {
+st->value.data = (u_char *) "";
 goto done;
 }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: ngx_quic_buffer_t use-after-free protection.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/55a6a45b7fa9
branches:  
changeset: 9254:55a6a45b7fa9
user:  Roman Arutyunyan 
date:  Tue May 28 17:19:21 2024 +0400
description:
QUIC: ngx_quic_buffer_t use-after-free protection.

Previously the last chain field of ngx_quic_buffer_t could still reference freed
chains and buffers after calling ngx_quic_free_buffer().  While normally an
ngx_quic_buffer_t object should not be used after freeing, resetting last_chain
field would prevent a potential use-after-free.

diffstat:

 src/event/quic/ngx_event_quic_frames.c |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (11 lines):

diff -r cf66916bc6a3 -r 55a6a45b7fa9 src/event/quic/ngx_event_quic_frames.c
--- a/src/event/quic/ngx_event_quic_frames.cTue May 28 17:19:08 2024 +0400
+++ b/src/event/quic/ngx_event_quic_frames.cTue May 28 17:19:21 2024 +0400
@@ -648,6 +648,7 @@ ngx_quic_free_buffer(ngx_connection_t *c
 ngx_quic_free_chain(c, qb->chain);
 
 qb->chain = NULL;
+qb->last_chain = NULL;
 }
 
 
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: ignore CRYPTO frames after handshake completion.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/cf66916bc6a3
branches:  
changeset: 9253:cf66916bc6a3
user:  Roman Arutyunyan 
date:  Tue May 28 17:19:08 2024 +0400
description:
QUIC: ignore CRYPTO frames after handshake completion.

Sending handshake-level CRYPTO frames after the client's Finished message could
lead to memory disclosure and a potential segfault, if those frames are sent in
one packet with the Finished frame.

diffstat:

 src/event/quic/ngx_event_quic_ssl.c |  5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diffs (15 lines):

diff -r a0cbbdeebccd -r cf66916bc6a3 src/event/quic/ngx_event_quic_ssl.c
--- a/src/event/quic/ngx_event_quic_ssl.c   Tue May 28 17:18:50 2024 +0400
+++ b/src/event/quic/ngx_event_quic_ssl.c   Tue May 28 17:19:08 2024 +0400
@@ -326,6 +326,11 @@ ngx_quic_handle_crypto_frame(ngx_connect
 ngx_quic_crypto_frame_t  *f;
 
 qc = ngx_quic_get_connection(c);
+
+if (!ngx_quic_keys_available(qc->keys, pkt->level, 0)) {
+return NGX_OK;
+}
+
 ctx = ngx_quic_get_send_ctx(qc, pkt->level);
f = &frame->u.crypto;
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: fixed dynamic table overflow.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/a0cbbdeebccd
branches:  
changeset: 9252:a0cbbdeebccd
user:  Roman Arutyunyan 
date:  Tue May 28 17:18:50 2024 +0400
description:
HTTP/3: fixed dynamic table overflow.

While inserting a new entry into the dynamic table, first the entry is added,
and then older entries are evicted until table size is within capacity.  After
the first step, the number of entries may temporarily exceed the maximum
calculated from capacity by one entry, which previously caused table overflow.

The easiest way to trigger the issue is to keep adding entries with empty names
and values until first eviction.

The issue was introduced by 987bee4363d1.

diffstat:

 src/http/v3/ngx_http_v3_table.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 387470a87c8a -r a0cbbdeebccd src/http/v3/ngx_http_v3_table.c
--- a/src/http/v3/ngx_http_v3_table.c   Tue May 28 17:18:28 2024 +0400
+++ b/src/http/v3/ngx_http_v3_table.c   Tue May 28 17:18:50 2024 +0400
@@ -308,7 +308,7 @@ ngx_http_v3_set_capacity(ngx_connection_
 prev_max = dt->capacity / 32;
 
 if (max > prev_max) {
-elts = ngx_alloc(max * sizeof(void *), c->log);
+elts = ngx_alloc((max + 1) * sizeof(void *), c->log);
 if (elts == NULL) {
 return NGX_ERROR;
 }
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: decoder stream pre-creation.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/387470a87c8a
branches:  
changeset: 9251:387470a87c8a
user:  Roman Arutyunyan 
date:  Tue May 28 17:18:28 2024 +0400
description:
HTTP/3: decoder stream pre-creation.

Previously a decoder stream was created on demand for sending Section
Acknowledgement, Stream Cancellation and Insert Count Increment.  If conditions
for sending any of these instructions never happen, a decoder stream is not
created at all.  These conditions include client not using the dynamic table and
no streams abandoned by server (RFC 9204, Section 2.2.2.2).  However RFC 9204,
Section 4.2 defines only one condition for not creating a decoder stream:

   An endpoint MAY avoid creating a decoder stream if its decoder sets
   the maximum capacity of the dynamic table to zero.

The change enables pre-creation of the decoder stream at HTTP/3 session
initialization if maximum dynamic table capacity is not zero.  Note that this
value is currently hardcoded to 4096 bytes and is not configurable, so the
stream is now always created.

Also, the change fixes a potential stack overflow when creating a decoder
stream in ngx_http_v3_send_cancel_stream() while draining a request stream by
ngx_drain_connections().  Creating a decoder stream involves calling
ngx_get_connection(), which calls ngx_drain_connections(), which will drain the
same request stream again.  If client's MAX_STREAMS for uni stream is high
enough, these recursive calls will continue until we run out of stack.
Otherwise, decoder stream creation will fail at some point and the request
stream connection will be drained.  This may result in use-after-free, since
this connection could still be referenced up the stack.

diffstat:

 src/http/v3/ngx_http_v3_request.c |  20 ++--
 src/http/v3/ngx_http_v3_uni.c |   4 +---
 src/http/v3/ngx_http_v3_uni.h |   2 ++
 3 files changed, 17 insertions(+), 9 deletions(-)

diffs (73 lines):

diff -r 371b6a7d0673 -r 387470a87c8a src/http/v3/ngx_http_v3_request.c
--- a/src/http/v3/ngx_http_v3_request.c Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_request.c Tue May 28 17:18:28 2024 +0400
@@ -134,7 +134,17 @@ ngx_http_v3_init(ngx_connection_t *c)
 }
 }
 
-return ngx_http_v3_send_settings(c);
+if (ngx_http_v3_send_settings(c) != NGX_OK) {
+return NGX_ERROR;
+}
+
+if (h3scf->max_table_capacity > 0) {
+if (ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_DECODER) == NULL) {
+return NGX_ERROR;
+}
+}
+
+return NGX_OK;
 }
 
 
@@ -398,14 +408,12 @@ ngx_http_v3_wait_request_handler(ngx_eve
 void
 ngx_http_v3_reset_stream(ngx_connection_t *c)
 {
-ngx_http_v3_session_t   *h3c;
-ngx_http_v3_srv_conf_t  *h3scf;
-
-h3scf = ngx_http_v3_get_module_srv_conf(c, ngx_http_v3_module);
+ngx_http_v3_session_t  *h3c;
 
 h3c = ngx_http_v3_get_session(c);
 
-if (h3scf->max_table_capacity > 0 && !c->read->eof && !h3c->hq
+if (!c->read->eof && !h3c->hq
+&& h3c->known_streams[NGX_HTTP_V3_STREAM_SERVER_DECODER]
 && (c->quic->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0)
 {
 (void) ngx_http_v3_send_cancel_stream(c, c->quic->id);
diff -r 371b6a7d0673 -r 387470a87c8a src/http/v3/ngx_http_v3_uni.c
--- a/src/http/v3/ngx_http_v3_uni.c Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_uni.c Tue May 28 17:18:28 2024 +0400
@@ -20,8 +20,6 @@ static void ngx_http_v3_close_uni_stream
 static void ngx_http_v3_uni_read_handler(ngx_event_t *rev);
 static void ngx_http_v3_uni_dummy_read_handler(ngx_event_t *wev);
 static void ngx_http_v3_uni_dummy_write_handler(ngx_event_t *wev);
-static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
-ngx_uint_t type);
 
 
 void
@@ -307,7 +305,7 @@ ngx_http_v3_uni_dummy_write_handler(ngx_
 }
 
 
-static ngx_connection_t *
+ngx_connection_t *
 ngx_http_v3_get_uni_stream(ngx_connection_t *c, ngx_uint_t type)
 {
 u_char buf[NGX_HTTP_V3_VARLEN_INT_LEN];
diff -r 371b6a7d0673 -r 387470a87c8a src/http/v3/ngx_http_v3_uni.h
--- a/src/http/v3/ngx_http_v3_uni.h Tue May 28 17:17:19 2024 +0400
+++ b/src/http/v3/ngx_http_v3_uni.h Tue May 28 17:18:28 2024 +0400
@@ -19,6 +19,8 @@ ngx_int_t ngx_http_v3_register_uni_strea
 
 ngx_int_t ngx_http_v3_cancel_stream(ngx_connection_t *c, ngx_uint_t stream_id);
 
+ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
+ngx_uint_t type);
 ngx_int_t ngx_http_v3_send_settings(ngx_connection_t *c);
 ngx_int_t ngx_http_v3_send_goaway(ngx_connection_t *c, uint64_t id);
 ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c,
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: client transport parameter data length checking.

2024-05-29 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/371b6a7d0673
branches:  
changeset: 9250:371b6a7d0673
user:  Sergey Kandaurov 
date:  Tue May 28 17:17:19 2024 +0400
description:
QUIC: client transport parameter data length checking.

diffstat:

 src/event/quic/ngx_event_quic_transport.c |  8 
 1 files changed, 8 insertions(+), 0 deletions(-)

diffs (18 lines):

diff -r 2e9588d65dd9 -r 371b6a7d0673 src/event/quic/ngx_event_quic_transport.c
--- a/src/event/quic/ngx_event_quic_transport.c Sat Nov 25 21:57:09 2023 +0000
+++ b/src/event/quic/ngx_event_quic_transport.c Tue May 28 17:17:19 2024 +0400
@@ -1750,6 +1750,14 @@ ngx_quic_parse_transport_params(u_char *
 return NGX_ERROR;
 }
 
+if ((size_t) (end - p) < len) {
+ngx_log_error(NGX_LOG_INFO, log, 0,
+  "quic failed to parse"
+  " transport param id:0x%xL, data length %uL too 
long",
+  id, len);
+return NGX_ERROR;
+}
+
 rc = ngx_quic_parse_transport_param(p, p + len, id, tp);
 
 if (rc == NGX_ERROR) {
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Upstream: variables support in proxy_limit_rate and friends.

2024-05-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/2e9588d65dd9
branches:  
changeset: 9249:2e9588d65dd9
user:  J Carter 
date:  Sat Nov 25 21:57:09 2023 +0000
description:
Upstream: variables support in proxy_limit_rate and friends.

diffstat:

 src/http/modules/ngx_http_fastcgi_module.c |  8 
 src/http/modules/ngx_http_proxy_module.c   |  8 
 src/http/modules/ngx_http_scgi_module.c|  8 
 src/http/modules/ngx_http_uwsgi_module.c   |  8 
 src/http/ngx_http_upstream.c   |  2 +-
 src/http/ngx_http_upstream.h   |  2 +-
 6 files changed, 18 insertions(+), 18 deletions(-)

diffs (152 lines):

diff -r f7d53c7f7014 -r 2e9588d65dd9 src/http/modules/ngx_http_fastcgi_module.c
--- a/src/http/modules/ngx_http_fastcgi_module.c  Thu May 23 19:15:38 2024 +0400
+++ b/src/http/modules/ngx_http_fastcgi_module.c  Sat Nov 25 21:57:09 2023 +0000
@@ -375,7 +375,7 @@ static ngx_command_t  ngx_http_fastcgi_c
 
 { ngx_string("fastcgi_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_fastcgi_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -2898,7 +2898,7 @@ ngx_http_fastcgi_create_loc_conf(ngx_con
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -3015,8 +3015,8 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf
   prev->upstream.buffer_size,
   (size_t) ngx_pagesize);
 
-ngx_conf_merge_size_value(conf->upstream.limit_rate,
-  prev->upstream.limit_rate, 0);
+ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
+  prev->upstream.limit_rate, NULL);
 
 
 ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
diff -r f7d53c7f7014 -r 2e9588d65dd9 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c  Thu May 23 19:15:38 2024 +0400
+++ b/src/http/modules/ngx_http_proxy_module.c  Sat Nov 25 21:57:09 2023 +0000
@@ -494,7 +494,7 @@ static ngx_command_t  ngx_http_proxy_com
 
 { ngx_string("proxy_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -3371,7 +3371,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -3515,8 +3515,8 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
   prev->upstream.buffer_size,
   (size_t) ngx_pagesize);
 
-ngx_conf_merge_size_value(conf->upstream.limit_rate,
-  prev->upstream.limit_rate, 0);
+ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
+  prev->upstream.limit_rate, NULL);
 
 ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
   8, ngx_pagesize);
diff -r f7d53c7f7014 -r 2e9588d65dd9 src/http/modules/ngx_http_scgi_module.c
--- a/src/http/modules/ngx_http_scgi_module.c   Thu May 23 19:15:38 2024 +0400
+++ b/src/http/modules/ngx_http_scgi_module.c   Sat Nov 25 21:57:09 2023 +0000
@@ -223,7 +223,7 @@ static ngx_command_t ngx_http_scgi_comma
 
 { ngx_string("scgi_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_scgi_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -1301,7 +1301,7 @@ ngx_http_scgi_create_loc_conf(ngx_conf_t
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -1413,8 +1413,8 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t 
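
As a usage illustration (not part of the commit): a minimal sketch of what
variables support in these directives enables, assuming nginx 1.27.0 or
later; the map, port, and upstream address below are hypothetical:

    map $uri $backend_rate {
        default     0;       # 0 disables the limit
        ~^/export/  100k;    # throttle only large export responses
    }

    server {
        listen 8080;

        location / {
            proxy_pass       http://127.0.0.1:8081;
            proxy_limit_rate $backend_rate;   # now accepts variables
        }
    }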
 

[nginx] Optimized chain link usage (ticket #2614).

2024-05-27 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/f7d53c7f7014
branches:  
changeset: 9248:f7d53c7f7014
user:  Roman Arutyunyan 
date:  Thu May 23 19:15:38 2024 +0400
description:
Optimized chain link usage (ticket #2614).

Previously chain links could sometimes be dropped instead of being reused,
which could result in increased memory consumption during long requests.

A similar chain link issue in ngx_http_gzip_filter_module was fixed in
da46bfc484ef (1.11.10).

Based on a patch by Sangmin Lee.

diffstat:

 src/core/ngx_output_chain.c  |  10 --
 src/http/modules/ngx_http_grpc_module.c  |   5 -
 src/http/modules/ngx_http_gunzip_filter_module.c |  18 ++
 src/http/modules/ngx_http_gzip_filter_module.c   |  10 +++---
 src/http/modules/ngx_http_ssi_filter_module.c|   8 ++--
 src/http/modules/ngx_http_sub_filter_module.c|   8 ++--
 6 files changed, 45 insertions(+), 14 deletions(-)

diffs (158 lines):

diff -r f58b6f636238 -r f7d53c7f7014 src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c   Thu May 16 11:15:10 2024 +0200
+++ b/src/core/ngx_output_chain.c   Thu May 23 19:15:38 2024 +0400
@@ -117,7 +117,10 @@ ngx_output_chain(ngx_output_chain_ctx_t 
 
 ngx_debug_point();
 
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in = cl->next;
+
+ngx_free_chain(ctx->pool, cl);
 
 continue;
 }
@@ -203,7 +206,10 @@ ngx_output_chain(ngx_output_chain_ctx_t 
 /* delete the completed buf from the ctx->in chain */
 
 if (ngx_buf_size(ctx->in->buf) == 0) {
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in = cl->next;
+
+ngx_free_chain(ctx->pool, cl);
 }
 
 cl = ngx_alloc_chain_link(ctx->pool);
diff -r f58b6f636238 -r f7d53c7f7014 src/http/modules/ngx_http_grpc_module.c
--- a/src/http/modules/ngx_http_grpc_module.c   Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_grpc_module.c   Thu May 23 19:15:38 2024 +0400
@@ -1231,7 +1231,7 @@ ngx_http_grpc_body_output_filter(void *d
 ngx_buf_t  *b;
 ngx_int_t   rc;
 ngx_uint_t  next, last;
-ngx_chain_t*cl, *out, **ll;
+ngx_chain_t*cl, *out, *ln, **ll;
 ngx_http_upstream_t*u;
 ngx_http_grpc_ctx_t*ctx;
 ngx_http_grpc_frame_t  *f;
@@ -1459,7 +1459,10 @@ ngx_http_grpc_body_output_filter(void *d
 last = 1;
 }
 
+ln = in;
 in = in->next;
+
+ngx_free_chain(r->pool, ln);
 }
 
 ctx->in = in;
diff -r f58b6f636238 -r f7d53c7f7014 src/http/modules/ngx_http_gunzip_filter_module.c
--- a/src/http/modules/ngx_http_gunzip_filter_module.c  Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_gunzip_filter_module.c  Thu May 23 19:15:38 2024 +0400
@@ -333,6 +333,8 @@ static ngx_int_t
 ngx_http_gunzip_filter_add_data(ngx_http_request_t *r,
 ngx_http_gunzip_ctx_t *ctx)
 {
+ngx_chain_t  *cl;
+
 if (ctx->zstream.avail_in || ctx->flush != Z_NO_FLUSH || ctx->redo) {
 return NGX_OK;
 }
@@ -344,8 +346,11 @@ ngx_http_gunzip_filter_add_data(ngx_http
 return NGX_DECLINED;
 }
 
-ctx->in_buf = ctx->in->buf;
-ctx->in = ctx->in->next;
+cl = ctx->in;
+ctx->in_buf = cl->buf;
+ctx->in = cl->next;
+
+ngx_free_chain(r->pool, cl);
 
 ctx->zstream.next_in = ctx->in_buf->pos;
 ctx->zstream.avail_in = ctx->in_buf->last - ctx->in_buf->pos;
@@ -374,6 +379,7 @@ static ngx_int_t
 ngx_http_gunzip_filter_get_buf(ngx_http_request_t *r,
 ngx_http_gunzip_ctx_t *ctx)
 {
+ngx_chain_t *cl;
 ngx_http_gunzip_conf_t  *conf;
 
 if (ctx->zstream.avail_out) {
@@ -383,8 +389,12 @@ ngx_http_gunzip_filter_get_buf(ngx_http_
 conf = ngx_http_get_module_loc_conf(r, ngx_http_gunzip_filter_module);
 
 if (ctx->free) {
-ctx->out_buf = ctx->free->buf;
-ctx->free = ctx->free->next;
+
+cl = ctx->free;
+ctx->out_buf = cl->buf;
+ctx->free = cl->next;
+
+ngx_free_chain(r->pool, cl);
 
 ctx->out_buf->flush = 0;
 
diff -r f58b6f636238 -r f7d53c7f7014 src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c    Thu May 16 11:15:10 2024 +0200
+++ b/src/http/modules/ngx_http_gzip_filter_module.c    Thu May 23 19:15:38 2024 +0400
@@ -985,10 +985,14 @@ static void
 ngx_http_gzip_filter_free_copy_buf(ngx_http_request_t *r,
 ngx_http_gzip_ctx_t *ctx)
 {
-ngx_chain_t  *cl;
+ngx_chain_t  *cl, *ln;
 
-for (cl = ctx->copied; cl; cl = cl->next) {
-ngx_pfree(r->pool, cl->buf->sta

Re: NGINX multiple authentication methods (one or the other) AND an IP check seems impossible

2024-05-27 Thread Gergő Vári
That works wonderfully, thank you!

On May 27, 2024 6:48:40 AM UTC, J Carter  wrote:
>Hello,
>
>[...]
>
>> ```
>> The goal is to bypass SSO if a correct HTTP Basic Auth header is present 
>> while making sure connections are only from said IPs.
>> 
>> When I disable the IP check it works flawlessly. How could I separate these 
>> requirements?
>> 
>> So (SSO or Basic Auth) and Correct IP
>
>Just use the geo module and "if" to reject unwanted IPs.
>
>"If" is evaluated prior to access & post_access phases, where auth_basic
>and co are evaluated.
>
>geo $allowed_ip {
>xxx.xxx.xxx.xxx/24 1;
>default            0;
>}
>
>...
>
>location / {
>if ($allowed_ip = 0) {
>return 403;
>}
>
>rest of config without allow/deny.
>}
>___
>nginx mailing list
>nginx@nginx.org
>https://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: NGINX multiple authentication methods (one or the other) AND an IP check seems impossible

2024-05-27 Thread J Carter
Hello,

[...]

> ```
> The goal is to bypass SSO if a correct HTTP Basic Auth header is present 
> while making sure connections are only from said IPs.
> 
> When I disable the IP check it works flawlessly. How could I separate these 
> requirements?
> 
> So (SSO or Basic Auth) and Correct IP

Just use the geo module and "if" to reject unwanted IPs.

"If" is evaluated prior to access & post_access phases, where auth_basic
and co are evaluated.

geo $allowed_ip {
xxx.xxx.xxx.xxx/24 1;
default            0;
}

...

location / {
if ($allowed_ip = 0) {
return 403;
}

rest of config without allow/deny.
}
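
Putting the answer together with the original post, a minimal sketch of the
combined layout could look as follows (the geo range, the htpasswd path and
the proxy target are placeholders; the authentik endpoints come from the
original post; untested):

geo $allowed_ip {
    xxx.xxx.xxx.xxx/24 1;
    default            0;
}

server {
    location / {
        if ($allowed_ip = 0) {
            return 403;
        }

        satisfy any;

        auth_basic           "";
        auth_basic_user_file /etc/nginx/basic_auth/htpasswd;

        auth_request /outpost.goauthentik.io/auth/nginx;
        error_page   401 = @goauthentik_proxy_signin;

        proxy_pass $forward_auth_target;
    }
}

The "if" gate runs in the rewrite phase, so unwanted IPs are rejected before
"satisfy any" lets either of the two authentication methods pass the access
phase.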
_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


NGINX multiple authentication methods (one or the other) AND an IP check seems impossible

2024-05-26 Thread Gergő Vári
```
location / {
proxy_pass $forward_auth_target;

allow x/24;
deny all;

    satisfy any; # This gets satisfied by the IP check, and auth is completely bypassed

auth_basic "";
auth_basic_user_file "/etc/nginx/basic_auth/$forward_auth_bypass";

auth_request /outpost.goauthentik.io/auth/nginx;
error_page   401 = @goauthentik_proxy_signin;

auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header   Set-Cookie $auth_cookie;
proxy_set_header X-authentik-username $authentik_username;

    auth_request_set $authentik_username $upstream_http_x_authentik_username;
auth_request_set $authentik_groups $upstream_http_x_authentik_groups;
proxy_set_header X-authentik-groups $authentik_groups;

auth_request_set $authentik_email $upstream_http_x_authentik_email;
proxy_set_header X-authentik-email $authentik_email;

auth_request_set $authentik_name $upstream_http_x_authentik_name;
proxy_set_header X-authentik-name $authentik_name;

auth_request_set $authentik_uid $upstream_http_x_authentik_uid;
proxy_set_header X-authentik-uid $authentik_uid;

auth_request_set $authentik_uid $upstream_http_x_authentik_uid;
proxy_set_header X-authentik-uid $authentik_uid;

auth_request_set $authentik_auth $upstream_http_authorization;
proxy_set_header Authorization $authentik_auth;
}

location /outpost.goauthentik.io {
proxy_pass  http:///outpost.goauthentik.io;
proxy_set_headerHost $host;
proxy_set_headerX-Original-URL $scheme://$http_host$request_uri;
add_header  Set-Cookie $auth_cookie;
auth_request_set$auth_cookie $upstream_http_set_cookie;
proxy_pass_request_body off;
proxy_set_headerContent-Length "";
proxy_ssl_verify off;
}

location @goauthentik_proxy_signin {
internal;
add_header Set-Cookie $auth_cookie;
return 302 /outpost.goauthentik.io/start?rd=$request_uri;
}
```
The goal is to bypass SSO if a correct HTTP Basic Auth header is present while 
making sure connections are only from said IPs.

When I disable the IP check it works flawlessly. How could I separate these 
requirements?

So (SSO or Basic Auth) and Correct IP
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


SVN front-end using Nginx?

2024-05-21 Thread Jeffrey Walton
Hi Everyone,

I'd like to use SVN, and Nginx as the web server for it. From what
I've found, it looks like Apache is required due to mod_dav_svn (and
the combo is Apache with Nginx proxy).

I also came across svnserve, but I am not familiar with it.

Is anyone aware of a way to use a pure Nginx environment for SVN?

Thanks in advance.
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[nginx] Configure: fixed building libatomic test.

2024-05-21 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/f58b6f636238
branches:  
changeset: 9247:f58b6f636238
user:  Edgar Bonet 
date:  Thu May 16 11:15:10 2024 +0200
description:
Configure: fixed building libatomic test.

Using "long *" instead of "AO_t *" leads either to -Wincompatible-pointer-types
or -Wpointer-sign warnings, depending on whether long and size_t are compatible
types (e.g., ILP32 versus LP64 data models).  Notably, -Wpointer-sign warnings
are enabled by default in Clang only, and -Wincompatible-pointer-types is an
error starting from GCC 14.

Signed-off-by: Edgar Bonet 

diffstat:

 auto/lib/libatomic/conf |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 89093b003fcb -r f58b6f636238 auto/lib/libatomic/conf
--- a/auto/lib/libatomic/conf   Fri May 03 20:26:05 2024 +0400
+++ b/auto/lib/libatomic/conf   Thu May 16 11:15:10 2024 +0200
@@ -19,7 +19,7 @@ else
   #include "
 ngx_feature_path=
 ngx_feature_libs="-latomic_ops"
-ngx_feature_test="long  n = 0;
+ngx_feature_test="AO_t  n = 0;
   if (!AO_compare_and_swap(&n, 0, 1))
   return 1;
   if (AO_fetch_and_add(&n, 1) != 1)
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: nginx configured as loadbalancer returning 404 not found

2024-05-17 Thread Kaushal Shriyan
On Fri, May 17, 2024 at 7:39 PM Sergey A. Osokin  wrote:

> Hi Kaushal,
>
> On Fri, May 17, 2024 at 04:49:59PM +0530, Kaushal Shriyan wrote:
> >
> > I am running nginx version 1.26 on "Ubuntu 22.04.4 LTS" I have configured
> > the nginx as load balancer and the configuration details are as follows
> >
> > # nginx -v
> > nginx version: nginx/1.26.0
> > #
> >
> > server {
> [...]
> >
> > location / {
> > # Define the upstream servers for load balancing
> > proxy_pass http://backend/;
>
> Could you please explain a reason why did you decide to use `/' after
> the backend's name in the proxy_pass directive.
>
> > # Set HTTP headers
> > proxy_set_header Host $host;
> > proxy_set_header X-Real-IP $remote_addr;
> > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> > proxy_set_header X-Forwarded-Proto $scheme;
> > }
> >
> > location /api/docs/ {
> > proxy_pass http://backend/api/docs/;
>
> It seems like '/api/docs/' can be safely removed, so
> I'd recommend to read the documentation for the proxy_pass directive, [1]
>
> 
>
> If proxy_pass is specified without a URI, the request URI is passed to the
> server in the same form as sent by a client when the original request is
> processed, or the full normalized request URI is passed when processing
> the changed URI:
>
> location /some/path/ {
> proxy_pass http://127.0.0.1;
> }
>
> 
>
> [...]
>
> > When i hit http://tead-local.com:80/api/docs/ I get http 200 response
> from
> > the backend server whereas when I try to hit using public IP :-
> > http://210.11.1.110:8085/api/docs/ I encounter http 404 not found.
> >
> > 101.0.62.200 - - [17/May/2024:16:38:24 +0530] "GET /api/docs/ HTTP/1.1"
> 404
> > 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
> > AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15
> > Ddg/17.5" "-"
>
> To see the whole picture of processing a request by nginx, I'd
> also recommend to enable a debugging log, [2].
>
> Hope that helps.
>
> References
> --
> 1. https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
> 2. https://nginx.org/en/docs/debugging_log.html
>
> --
> Sergey A. Osokin
>

Thanks Sergey for the detailed explanation. I have modified the
/etc/nginx/conf.d/loadbalancer.conf file (nginx server running in
load balancer mode). The upstream backend -> tead-local.com:80 is hosted on
a Docker-based container running the nginx service (version 1.21.6)

##loadbalancer.conf###
server {
listen 80;
server_name testbe.mydomain.com;
error_log /var/log/nginx/nginxdebug.log debug;

location / {
# Define the upstream servers for load balancing
proxy_pass http://backend;
# Set HTTP headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
   error_log /var/log/nginx/nginxlocationdebug.log debug;
}
}

upstream backend {
server tead-local.com:80;
}

##
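
One thing worth checking in a setup like this (an assumption based on the
symptoms, not a confirmed diagnosis): with "proxy_set_header Host $host;",
requests arriving via the public IP carry that IP in the Host header, and a
backend that serves tead-local.com as a name-based virtual host may then
answer from its default server with a 404. A sketch of the alternative:

location / {
    proxy_pass http://backend;
    # assumption: the backend selects its virtual host by this exact name
    proxy_set_header Host tead-local.com;
}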

# ll
total 12
drwxr-xr-x  2 root adm  93 May 18 01:05 ./
drwxrwxr-x 15 root syslog 4096 May 16 16:33 ../
-rw-r--r--  1 root root621 May 18 01:05 access.log
-rw-r--r--  1 root root594 May 18 01:05 error.log
-rw-r--r--  1 root root  0 May 18 01:05 nginxdebug.log
-rw-r--r--  1 root root  0 May 18 01:05 nginxlocationdebug.log
#

root@lb-01:/var/log/nginx# cat error.log
2024/05/18 01:05:15 [notice] 539625#539625: using the "epoll" event method
2024/05/18 01:05:15 [notice] 539625#539625: nginx/1.26.0
2024/05/18 01:05:15 [notice] 539625#539625: built by gcc 11.4.0 (Ubuntu
11.4.0-1ubuntu1~22.04)
2024/05/18 01:05:15 [notice] 539625#539625: OS: Linux 5.15.0-105-generic
2024/05/18 01:05:15 [notice] 539625#539625: getrlimit(RLIMIT_NOFILE):
1024:524288
2024/05/18 01:05:15 [notice] 539626#539626: start worker processes
2024/05/18 01:05:15 [notice] 539626#539626: start worker process 539627
2024/05/18 01:05:15 [notice] 539626#539626: start worker process 539628
root@lb-01:/var/log/nginx# ll

# cat access.log
101.0.62.200 - - [18/May/2024:01:05:19 +0530] "GET /api/docs HTTP/1.1" 404
555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/12

Re: nginx configured as loadbalancer returning 404 not found

2024-05-17 Thread Sergey A. Osokin
Hi Kaushal,

On Fri, May 17, 2024 at 04:49:59PM +0530, Kaushal Shriyan wrote:
> 
> I am running nginx version 1.26 on "Ubuntu 22.04.4 LTS" I have configured
> the nginx as load balancer and the configuration details are as follows
> 
> # nginx -v
> nginx version: nginx/1.26.0
> #
> 
> server {
[...]
> 
> location / {
> # Define the upstream servers for load balancing
> proxy_pass http://backend/;

Could you please explain the reason why you decided to use `/' after
the backend's name in the proxy_pass directive?

> # Set HTTP headers
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header X-Forwarded-Proto $scheme;
> }
> 
> location /api/docs/ {
> proxy_pass http://backend/api/docs/;

It seems like '/api/docs/' can be safely removed, so
I'd recommend reading the documentation for the proxy_pass directive, [1]



If proxy_pass is specified without a URI, the request URI is passed to the
server in the same form as sent by a client when the original request is
processed, or the full normalized request URI is passed when processing
the changed URI:

location /some/path/ {
proxy_pass http://127.0.0.1;
}
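
To illustrate the difference the trailing slash makes, a minimal sketch (the
/api/ prefix and the backend name are illustrative):

location /api/ {
    # no URI part: /api/docs/ is sent to the backend unchanged
    proxy_pass http://backend;
}

location /api/ {
    # URI part "/": the matched "/api/" prefix is replaced by "/",
    # so /api/docs/ reaches the backend as /docs/
    proxy_pass http://backend/;
}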



[...]

> When i hit http://tead-local.com:80/api/docs/ I get http 200 response from
> the backend server whereas when I try to hit using public IP :-
> http://210.11.1.110:8085/api/docs/ I encounter http 404 not found.
> 
> 101.0.62.200 - - [17/May/2024:16:38:24 +0530] "GET /api/docs/ HTTP/1.1" 404
> 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
> AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15
> Ddg/17.5" "-"

To see the whole picture of processing a request by nginx, I'd
also recommend to enable a debugging log, [2].

Hope that helps.

References
--
1. https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
2. https://nginx.org/en/docs/debugging_log.html

-- 
Sergey A. Osokin
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


nginx configured as loadbalancer returning 404 not found

2024-05-17 Thread Kaushal Shriyan
Hi,

I am running nginx version 1.26 on "Ubuntu 22.04.4 LTS". I have configured
nginx as a load balancer and the configuration details are as follows:

# nginx -v
nginx version: nginx/1.26.0
#

server {
listen 8085;
#server_name 172.30.2.11;
    server_name 210.11.1.110;

location / {
# Define the upstream servers for load balancing
proxy_pass http://backend/;
# Set HTTP headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

location /api/docs/ {
proxy_pass http://backend/api/docs/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

}

upstream backend {
server tead-local.com:80;
}

When I hit http://tead-local.com:80/api/docs/ I get an http 200 response
from the backend server, whereas when I try to hit it using the public IP,
http://210.11.1.110:8085/api/docs/, I encounter an http 404 not found.

101.0.62.200 - - [17/May/2024:16:38:24 +0530] "GET /api/docs/ HTTP/1.1" 404
153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15
Ddg/17.5" "-"




Please guide me. Thanks in advance.

Best Regards,

Kaushal
_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Eprints (using PERL CGI)over NGINX reverse proxy

2024-05-07 Thread zen zenitram
Good day!

We have an institutional repository built with EPrints. It has no problem
uploading files up to 1 GB by default when accessed locally, but when we use
NGINX as the reverse proxy it only accepts uploads up to 128 KB. Does Perl
CGI affect the upload limit over NGINX? We already set client_max_body_size
to 500M, yet it still accepts less than 128 KB per upload.

If Perl CGI affects the upload limit, how can we configure NGINX to
enable Perl CGI?

Thank you!
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[nginx] Stream pass: disabled passing from or to udp.

2024-05-03 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/89093b003fcb
branches:  
changeset: 9246:89093b003fcb
user:  Roman Arutyunyan 
date:  Fri May 03 20:26:05 2024 +0400
description:
Stream pass: disabled passing from or to udp.

Passing from udp was not possible for the most part due to preread buffer
restriction.  Passing to udp could occasionally work, but the connection would
still be bound to the original listen rbtree, which prevented it from being
deleted on connection closure.

diffstat:

 src/stream/ngx_stream_pass_module.c |  9 +
 1 files changed, 9 insertions(+), 0 deletions(-)

diffs (26 lines):

diff -r c4792b0f1976 -r 89093b003fcb src/stream/ngx_stream_pass_module.c
--- a/src/stream/ngx_stream_pass_module.c   Fri May 03 20:29:01 2024 +0400
+++ b/src/stream/ngx_stream_pass_module.c   Fri May 03 20:26:05 2024 +0400
@@ -83,6 +83,11 @@ ngx_stream_pass_handler(ngx_stream_sessi
 
 c->log->action = "passing connection to port";
 
+if (c->type == SOCK_DGRAM) {
+ngx_log_error(NGX_LOG_ERR, c->log, 0, "cannot pass udp connection");
+goto failed;
+}
+
 if (c->buffer && c->buffer->pos != c->buffer->last) {
 ngx_log_error(NGX_LOG_ERR, c->log, 0,
   "cannot pass connection with preread data");
@@ -217,6 +222,10 @@ ngx_stream_pass_cleanup(void *data)
 static ngx_int_t
 ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr)
 {
+if (ls->type == SOCK_DGRAM) {
+return NGX_DECLINED;
+}
+
 if (!ls->wildcard) {
 return ngx_cmp_sockaddr(ls->sockaddr, ls->socklen,
 addr->sockaddr, addr->socklen, 1);
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] SSL: fixed possible configuration overwrite loading "engine:" keys.

2024-05-03 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/c4792b0f1976
branches:  
changeset: 9245:c4792b0f1976
user:  Sergey Kandaurov 
date:  Fri May 03 20:29:01 2024 +0400
description:
SSL: fixed possible configuration overwrite loading "engine:" keys.

When loading certificate keys via ENGINE_load_private_key() in runtime,
it was possible to overwrite configuration on ENGINE_by_id() failure.
OpenSSL documentation doesn't describe the errors in detail; the only reason
I found, in a comment to an example, is when the engine is not available.
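
For context, a key loaded through an engine is referenced with the
"engine:name:id" syntax; a minimal sketch (the engine name and key id are
placeholders):

ssl_certificate     /etc/nginx/example.crt;
ssl_certificate_key engine:pkcs11:mykey;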

diffstat:

 src/event/ngx_event_openssl.c |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (19 lines):

diff -r 690f46d3bc1f -r c4792b0f1976 src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Fri May 03 20:28:32 2024 +0400
+++ b/src/event/ngx_event_openssl.c Fri May 03 20:29:01 2024 +0400
@@ -764,13 +764,13 @@ ngx_ssl_load_certificate_key(ngx_pool_t 
 
 engine = ENGINE_by_id((char *) p);
 
+*last++ = ':';
+
 if (engine == NULL) {
 *err = "ENGINE_by_id() failed";
 return NULL;
 }
 
-*last++ = ':';
-
 pkey = ENGINE_load_private_key(engine, (char *) last, 0, 0);
 
 if (pkey == NULL) {
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/3: fixed handling of malformed request body length.

2024-05-03 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/690f46d3bc1f
branches:  
changeset: 9244:690f46d3bc1f
user:  Sergey Kandaurov 
date:  Fri May 03 20:28:32 2024 +0400
description:
HTTP/3: fixed handling of malformed request body length.

Previously, a request body larger than declared in Content-Length resulted in
a 413 status code, because Content-Length was mistakenly used as the maximum
allowed request body, similar to client_max_body_size.  Following the HTTP/3
specification, such requests are now rejected with the 400 error as malformed.

diffstat:

 src/http/v3/ngx_http_v3_request.c |  9 +
 1 files changed, 9 insertions(+), 0 deletions(-)

diffs (19 lines):

diff -r ff0312de0112 -r 690f46d3bc1f src/http/v3/ngx_http_v3_request.c
--- a/src/http/v3/ngx_http_v3_request.c Fri May 03 20:28:22 2024 +0400
+++ b/src/http/v3/ngx_http_v3_request.c Fri May 03 20:28:32 2024 +0400
@@ -1575,6 +1575,15 @@ ngx_http_v3_request_body_filter(ngx_http
 /* rc == NGX_OK */
 
            if (max != -1 && (uint64_t) (max - rb->received) < st->length) {
+
+if (r->headers_in.content_length_n != -1) {
+ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
+  "client intended to send body data "
+  "larger than declared");
+
+return NGX_HTTP_BAD_REQUEST;
+}
+
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
   "client intended to send too large "
   "body: %O+%ui bytes",
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Version bump.

2024-05-03 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/ff0312de0112
branches:  
changeset: 9243:ff0312de0112
user:  Sergey Kandaurov 
date:  Fri May 03 20:28:22 2024 +0400
description:
Version bump.

diffstat:

 src/core/nginx.h |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (14 lines):

diff -r 49dce50fad40 -r ff0312de0112 src/core/nginx.h
--- a/src/core/nginx.h  Tue Apr 16 18:29:59 2024 +0400
+++ b/src/core/nginx.h  Fri May 03 20:28:22 2024 +0400
@@ -9,8 +9,8 @@
 #define _NGINX_H_INCLUDED_
 
 
-#define nginx_version  1025005
-#define NGINX_VERSION  "1.25.5"
+#define nginx_version  1027000
+#define NGINX_VERSION  "1.27.0"
 #define NGINX_VER  "nginx/" NGINX_VERSION
 
 #ifdef NGX_BUILD
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: I need help with our NGINX set up

2024-04-30 Thread zen zenitram
We don't use PHP; our system uses Perl CGI.
So is the problem in CGI?


On Wed, May 1, 2024, 8:57 AM Dan Swaney  wrote:

> The clue is with the URL which failed.
>
> From first look, you appear to be using a FAST CGI URL with PHP?
>
> Just a wild guess, but try using:
> ```
>
> fastcgi_param PHP_VALUE "upload_max_filesize = 500M \n post_max_size=500M"
>
> ```
>
> Here is a reference link mentioning it:
> https://serverfault.com/a/704209
>
> On Tue, Apr 23, 2024, 4:49 AM zen zenitram  wrote:
>
>> Good day!
>>
>> Here is what happens when we try to upload a file larger than 128 kb. To
>> check if it is on the server side, we ran the server without nginx and it
>> can upload larger files.
>>
>> Thank you!
>>
>>
>>
>> On Fri, Apr 19, 2024 at 6:18 PM Reinis Rozitis via nginx 
>> wrote:
>>
>>> > It only accepts maximum of 128 kb of data, but the
>>> client_max_body_size 500M;. Is there a way to locate the cause of error.
>>>
>>> Can you actually show what the "error" looks like?
>>>
>>> The default value of client_max_body_size is 1M so the 128Kb limit most
>>> likely comes from the backend  application or server which handles the POST
>>> request (as an example - PHP has its own post_max_size /
>>> upload_max_filesize  settings).
>>>
>>>
>>>
>>> p.s. while it's unlikely (as you specify the settings in particular
>>> location blocks) since you use wildcard includes it is always good to check
>>> with 'nginx -T' how the final configuration looks like. Maybe the request
>>> isn't handled in server/location block where you expect it ..
>>>
>>> rr
>>> _______
>>> nginx mailing list
>>> nginx@nginx.org
>>> https://mailman.nginx.org/mailman/listinfo/nginx
>>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> https://mailman.nginx.org/mailman/listinfo/nginx
>>
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: I need help with our NGINX set up

2024-04-30 Thread Dan Swaney
The clue is with the URL which failed.

From first look, you appear to be using a FAST CGI URL with PHP?

Just a wild guess, but try using:
```

fastcgi_param PHP_VALUE "upload_max_filesize = 500M \n post_max_size=500M"

```

Here is a reference link mentioning it:
https://serverfault.com/a/704209

On Tue, Apr 23, 2024, 4:49 AM zen zenitram  wrote:

> Good day!
>
> Here is what happens when we try to upload a file larger than 128 kb. To
> check if it is on the server side, we ran the server without nginx and it
> can upload larger files.
>
> Thank you!
>
>
>
> On Fri, Apr 19, 2024 at 6:18 PM Reinis Rozitis via nginx 
> wrote:
>
>> > It only accepts maximum of 128 kb of data, but the client_max_body_size
>> 500M;. Is there a way to locate the cause of error.
>>
>> Can you actually show what the "error" looks like?
>>
>> The default value of client_max_body_size is 1M so the 128Kb limit most
>> likely comes from the backend  application or server which handles the POST
>> request (as an example - PHP has its own post_max_size /
>> upload_max_filesize  settings).
>>
>>
>>
>> p.s. while it's unlikely (as you specify the settings in particular
>> location blocks) since you use wildcard includes it is always good to check
>> with 'nginx -T' how the final configuration looks like. Maybe the request
>> isn't handled in server/location block where you expect it ..
>>
>> rr
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> https://mailman.nginx.org/mailman/listinfo/nginx
>>
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Leaky NGINX Plugin Advice

2024-04-25 Thread Roman Arutyunyan
Hello,

As this is a development-related question, a better list for it is 
nginx-devel@nginx.org.

> On 23 Apr 2024, at 1:40 PM, Alex Hussein-Kershaw (HE/HIM) via nginx 
>  wrote:
> 
> Hi Folks,
> 
> I've inherited an nginx plugin, written against 0.7.69 that has recently been 
> moved to use nginx 1.24.0 to resolve the need to ship old versions of 
> openssl. 
> 
> I've found during performance testing that it's leaking file descriptors. 
> After a few hours running and leaking I hit my configured limit of 100k 
> worker_connections which gets written to logs, and nginx starts "reusing 
> connections".
> 
> The leaked file descriptors don't show up in the output of "ss", they look 
> like this in lsof:
> 
> $ /usr/bin/lsof -p 2875952  | grep protocol  | head -2
> nginx 2875952 user 8u sock0,8   0t0 824178 
> protocol: TCP
> nginx 2875952 user 19u sock0,8   0t0 2266802646 
> protocol: TCP
> 
> Googling suggests this may be a socket that has been created but never had a 
> "bind" or "connect" call. I've combed through our plugin code, and am 
> confident it's not responsible for making and leaking these sockets. 
> 
> I should flag two stinkers which may be responsible:
> We have "lingering_timeout" set to an hour, a hack to allow long poll / COMET 
> requests to not be torn down before responding. Stopping load and waiting for 
> an hour does drop some of these leaked fds, but not all. After leaking 17k 
> fds, I stopped my load test and saw it drop to 7k fds which appeared to 
> remain indefinitely. Is this a terrible idea? 
> Within our plugin, we are incrementing the request count field for the same 
> purpose. I'm not really sure why we need both of these, maybe I'm missing 
> something but I can't get COMET polls to work without. I believe that was 
> inspired by Nchan which does something similar. Should I be able to avoid 
> requests getting torn down via this method without lingering_timeout? 
> 
> What could be responsible for these leaked file descriptors and worker 
> connections? I'm unexperienced with nginx so any pointers of where to look 
> are greatly appreciated. 

Incrementing request counter should be done carefully and can lead to socket 
leaks.

To investigate the issue deeper, you can enable debug logging in nginx and find 
the leaked socket there by "fd:" prefix.
Then track the leaked connection by its connection number (prefixed with '*' in 
log).
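
A minimal sketch of enabling the debug log (this assumes nginx was built with
--with-debug; the path is a placeholder):

error_log /var/log/nginx/debug.log debug;

The leaked descriptor then shows up in lines carrying the "fd:" prefix, and
the whole lifetime of that connection can be followed by searching the log
for its "*<number>" connection id.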

> 
> Many thanks,
> Alex
> 
> 
> ___
> nginx mailing list
> ng...@nginx.org <mailto:ng...@nginx.org>
> https://mailman.nginx.org/mailman/listinfo/nginx


Roman Arutyunyan
a...@nginx.com




___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx-announce] nginx-1.26.0

2024-04-23 Thread Roman Arutyunyan
Changes with nginx 1.26.0    23 Apr 2024

*) 1.26.x stable branch.



Roman Arutyunyan
a...@nginx.com
___
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


[nginx] release-1.26.0 tag

2024-04-23 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/cdf74ac25b47
branches:  stable-1.26
changeset: 9242:cdf74ac25b47
user:  Roman Arutyunyan 
date:  Tue Apr 23 18:04:32 2024 +0400
description:
release-1.26.0 tag

diffstat:

 .hgtags |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r a58202a8c41b -r cdf74ac25b47 .hgtags
--- a/.hgtags   Tue Apr 23 17:40:08 2024 +0400
+++ b/.hgtags   Tue Apr 23 18:04:32 2024 +0400
@@ -478,3 +478,4 @@ 1d839f05409d1a50d0f15a2bf36547001f99ae40
 294a3d07234f8f65d7b0e0b0e2c5b05c12c5da0a release-1.25.3
 173a0a7dbce569adbb70257c6ec4f0f6bc585009 release-1.25.4
 8618e4d900cc71082fbe7dc72af087937d64faf5 release-1.25.5
+a58202a8c41bf0bd97eef1b946e13105a105520d release-1.26.0
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] nginx-1.26.0-RELEASE

2024-04-23 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/a58202a8c41b
branches:  stable-1.26
changeset: 9241:a58202a8c41b
user:  Roman Arutyunyan 
date:  Tue Apr 23 17:40:08 2024 +0400
description:
nginx-1.26.0-RELEASE

diffstat:

 docs/xml/nginx/changes.xml |  14 ++
 1 files changed, 14 insertions(+), 0 deletions(-)

diffs (24 lines):

diff -r 52f427a4c97e -r a58202a8c41b docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml	Tue Apr 23 17:31:41 2024 +0400
+++ b/docs/xml/nginx/changes.xml	Tue Apr 23 17:40:08 2024 +0400
@@ -5,6 +5,20 @@
 
 
 
+<changes ver="1.26.0" date="2024-04-23">
+
+<change type="feature">
+<para lang="ru">
+Стабильная ветка 1.26.x.
+</para>
+<para lang="en">
+1.26.x stable branch.
+</para>
+</change>
+
+</changes>
+
+
 <changes ver="1.25.5" date="2024-04-16">
 
 <change type="feature">
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Stable branch.

2024-04-23 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/52f427a4c97e
branches:  stable-1.26
changeset: 9240:52f427a4c97e
user:  Roman Arutyunyan 
date:  Tue Apr 23 17:31:41 2024 +0400
description:
Stable branch.

diffstat:

 src/core/nginx.h |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (14 lines):

diff -r 49dce50fad40 -r 52f427a4c97e src/core/nginx.h
--- a/src/core/nginx.h  Tue Apr 16 18:29:59 2024 +0400
+++ b/src/core/nginx.h  Tue Apr 23 17:31:41 2024 +0400
@@ -9,8 +9,8 @@
 #define _NGINX_H_INCLUDED_
 
 
-#define nginx_version  1025005
-#define NGINX_VERSION  "1.25.5"
+#define nginx_version  1026000
+#define NGINX_VERSION  "1.26.0"
 #define NGINX_VER  "nginx/" NGINX_VERSION
 
 #ifdef NGX_BUILD
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Leaky NGINX Plugin Advice

2024-04-23 Thread Alex Hussein-Kershaw (HE/HIM) via nginx
Hi Folks,

I've inherited an nginx plugin, written against 0.7.69 that has recently been 
moved to use nginx 1.24.0 to resolve the need to ship old versions of openssl.

I've found during performance testing that it's leaking file descriptors. After 
a few hours running and leaking I hit my configured limit of 100k 
worker_connections which gets written to logs, and nginx starts "reusing 
connections".

The leaked file descriptors don't show up in the output of "ss", they look like 
this in lsof:

$ /usr/bin/lsof -p 2875952  | grep protocol  | head -2
nginx 2875952 user 8u sock0,8   0t0 824178 
protocol: TCP
nginx 2875952 user 19u sock0,8   0t0 2266802646 
protocol: TCP

Googling suggests this may be a socket that has been created but never had a 
"bind" or "connect" call. I've combed through our plugin code, and am confident 
it's not responsible for making and leaking these sockets.

I should flag two stinkers which may be responsible:

  * We have "lingering_timeout" set to an hour, a hack to allow long poll / COMET
    requests to not be torn down before responding. Stopping load and waiting for
    an hour does drop some of these leaked fds, but not all. After leaking 17k fds,
    I stopped my load test and saw it drop to 7k fds which appeared to remain
    indefinitely. Is this a terrible idea?

  * Within our plugin, we are incrementing the request count field for the same
    purpose. I'm not really sure why we need both of these; maybe I'm missing
    something, but I can't get COMET polls to work without it. I believe that was
    inspired by Nchan, which does something similar. Should I be able to avoid
    requests getting torn down via this method without lingering_timeout?

What could be responsible for these leaked file descriptors and worker 
connections? I'm unexperienced with nginx so any pointers of where to look are 
greatly appreciated.

Many thanks,
Alex


_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


RE: I need help with our NGINX set up

2024-04-19 Thread Reinis Rozitis via nginx
> It only accepts maximum of 128 kb of data, but the client_max_body_size 
> 500M;. Is there a way to locate the cause of error.

Can you actually show what the "error" looks like?

The default value of client_max_body_size is 1M so the 128Kb limit most likely 
comes from the backend  application or server which handles the POST request 
(as an example - PHP has its own post_max_size /  upload_max_filesize  
settings).
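
For reference, client_max_body_size can be set at http, server, and location
level, and the most specific value wins for a given request; a sketch with
illustrative values:

http {
    client_max_body_size 500m;

    server {
        location /upload/ {
            # this value, not the http-level one, applies here
            client_max_body_size 2g;
        }
    }
}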



p.s. while it's unlikely (as you specify the settings in particular location 
blocks) since you use wildcard includes it is always good to check with 'nginx 
-T' how the final configuration looks like. Maybe the request isn't handled in 
server/location block where you expect it ..

rr
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-19 Thread Kirill A. Korinsky
On Fri, 19 Apr 2024 10:02:00 +0200,
Sébastien Rebecchi wrote:
> 
> As I understand it, we should replace proxy_pass with pass when the upstream
> server is localhost, but pass does not work with remote upstreams.
> Is that right?
>

It depends on your use case I guess.

Frankly speaking, I don't see any reason to use it rather than accepting the
connection on the target server directly, with one exception: you need some
module which exists only for ngx_stream_...

-- 
wbr, Kirill
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-19 Thread Sébastien Rebecchi
Hello

As I understand it, we should replace proxy_pass with pass when the upstream
server is localhost, but pass does not work with remote upstreams.
Is that right?

Sébastien

On Fri, 19 Apr 2024 at 09:38, Kirill A. Korinsky wrote:

> On Fri, 19 Apr 2024 03:14:44 +0200,
> Fabiano Furtado Pessoa Coelho wrote:
> >
> > Please, can you spot these overheads in proxying?
> >
>
> Establishing and accepting a brand new connection, writing and reading of
> requests. Maybe buffering. A lot of useless context switching between
> user and kernel spaces. With possibility to enjoy not enough free ports.
>
> Shall I continue?
>
> --
> wbr, Kirill
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-19 Thread Kirill A. Korinsky
On Fri, 19 Apr 2024 03:14:44 +0200,
Fabiano Furtado Pessoa Coelho wrote:
> 
> Please, can you spot these overheads in proxying?
> 

Establishing and accepting a brand new connection, writing and reading of
requests. Maybe buffering. A lot of useless context switching between
user and kernel spaces. With possibility to enjoy not enough free ports.

Shall I continue?

-- 
wbr, Kirill
_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


I need help with our NGINX set up

2024-04-19 Thread zen zenitram
Good day!

We have an Institutional Repository server that uses NGINX as a load
balancer, but we encountered a problem when trying to upload documents to
the repository. It only accepts a maximum of 128 kb of data, even though
client_max_body_size is set to 500M. Is there a way to locate the cause of
the error?

Here are our NGINX configuration files


*/etc/nginx/nginx.conf*


worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
   worker_connections 768;
   # multi_accept on;
}

http {


   # Basic Settings

   sendfile on;
   tcp_nopush on;
   types_hash_max_size 2048;
   client_max_body_size 500M;
   # server_tokens off;

   # server_names_hash_bucket_size 64;
   # server_name_in_redirect off;

   include /etc/nginx/mime.types;
   default_type application/octet-stream;

   ##
   # SSL Settings
   ##

   ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref:
POODLE
   ssl_prefer_server_ciphers on;

   ##
   # Logging Settings
   ##

   access_log /var/log/nginx/access.log;
   error_log /var/log/nginx/error.log;

   ##
   # Gzip Settings
   ##

   gzip_vary on;

   # gzip_vary on;
   # gzip_proxied any;
   # gzip_comp_level 6;
   # gzip_buffers 16 8k;
   # gzip_http_version 1.1;
   # gzip_types text/plain text/css application/json
application/javascript text/xml application/xml application/xml+rss
text/javascript;

   ##
   # Virtual Host Configs
   ##

   include /etc/nginx/conf.d/*.conf;
   include /etc/nginx/sites-enabled/*;
}





*/etc/nginx/site-available/test.edu.ph*

server {
   server_name test.edu.ph;

   location / {
   proxy_pass https://192.168.8.243;
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   client_max_body_size 500M;
   }

   listen 443 ssl; # managed by Certbot
   ssl_certificate /etc/letsencrypt/live/test.edu.ph/fullchain.pem; #
managed by Certbot
   ssl_certificate_key /etc/letsencrypt/live/test.edu.ph/privkey.pem; #
managed by Certbot
   include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
   ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
   client_max_body_size 500M;

}

server {
   if ($host = test.edu.ph) {
   return 301 https://$host$request_uri;
   } # managed by Certbot


   listen 80;
   server_name test.edu.ph;
   return 404; # managed by Certbot
   client_max_body_size 500M;
}

Need help with this one. Thank you!
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-18 Thread Fabiano Furtado Pessoa Coelho
Hi...

On Wed, Apr 17, 2024 at 3:27 PM Roman Arutyunyan wrote:
>
> Hello,
>
> On 17 Apr 2024, at 6:32 PM, Reinis Rozitis via nginx  wrote:
>
> *) Feature: the ngx_stream_pass_module.
>
>
> Hello,
> what is the difference between pass from ngx_stream_pass_module and
> proxy_pass from ngx_stream_proxy_module?
>
> As in what entails "directly" in "allows passing the accepted connection
> directly to any configured listening socket"?
>
>
> In case of "pass" there's no proxying, hence zero overhead.
> The connection is passed to the new listening socket like it was accepted by 
> it.

Please, can you spot these overheads in proxying?

Thanks.
_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-17 Thread Roman Arutyunyan
Hello,

> On 17 Apr 2024, at 6:32 PM, Reinis Rozitis via nginx  wrote:
> 
>> *) Feature: the ngx_stream_pass_module.
> 
> Hello,
> what is the difference between pass from ngx_stream_pass_module and
> proxy_pass from ngx_stream_proxy_module?
> 
> As in what entails "directly" in "allows passing the accepted connection
> directly to any configured listening socket"?

In case of "pass" there's no proxying, hence zero overhead.
The connection is passed to the new listening socket like it was accepted by it.
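
A minimal sketch of what "pass" looks like (ports are illustrative; the
target must be another listening socket of the same nginx instance):

stream {
    server {
        listen 12345;
        # hand the accepted connection over to the local HTTP listener
        pass 127.0.0.1:8080;
    }
}

http {
    server {
        listen 8080;
        ...
    }
}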


Roman Arutyunyan
a...@nginx.com




___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Nginx-1.25.3 Proxy_temp Creating Files root partition

2024-04-17 Thread Mayiani, Martin Martine - mayianmm via nginx
Hi,

So for some odd reason Nginx is creating temp files in the root partition,
filling up the disk, and deleting them at a slow rate. Aren't these files
supposed to be in /var/cache/nginx/proxy_temp? Currently they're located at
/etc/nginx/proxy_temp. How do I change that, and how do I stop proxy_temp
from creating many temp files and filling up my disk?
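
The temporary file location is set at build time (--http-proxy-temp-path) and
defaults to "proxy_temp" under the configured prefix, which is why a build
with an /etc/nginx prefix ends up writing to /etc/nginx/proxy_temp. It can be
overridden in the http block; a sketch with illustrative values:

proxy_temp_path          /var/cache/nginx/proxy_temp 1 2;
proxy_max_temp_file_size 1024m;

Setting proxy_max_temp_file_size to 0 disables buffering of responses to
temporary files altogether.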

Thanks

Martin


___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


RE: nginx-1.25.5

2024-04-17 Thread Reinis Rozitis via nginx
>    *) Feature: the ngx_stream_pass_module.

Hello,
what is the difference between pass from ngx_stream_pass_module and
proxy_pass from ngx_stream_proxy_module?

As in what entails "directly" in "allows passing the accepted connection
directly to any configured listening socket"?


wbr

rr

_______
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-1.25.5

2024-04-17 Thread juef via nginx
Hi,

(Tue, 16 Apr 20:40) Roman Arutyunyan:
> Changes with nginx 1.25.5    16 Apr 2024
> 
> *) Feature: virtual servers in the stream module.
> 
> *) Feature: the ngx_stream_pass_module.
> 
> *) Feature: the "deferred", "accept_filter", and "setfib" parameters of
>the "listen" directive in the stream module.
> 
> *) Feature: cache line size detection for some architectures.
>Thanks to Piotr Sikora.
> 
> *) Feature: support for Homebrew on Apple Silicon.
>Thanks to Piotr Sikora.
> 
> *) Bugfix: Windows cross-compilation bugfixes and improvements.
>Thanks to Piotr Sikora.
> 
> *) Bugfix: unexpected connection closure while using 0-RTT in QUIC.
>Thanks to Vladimir Khomutov.

I'm subscribed to the Mercurial Atom feed as well.

The links there are incorrect: they contain a redundant port definition,
and because of that there is an SSL error: packet length too long.

i.e. https://hg.nginx.org:80/nginx/rev/8618e4d900cc
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Nginx 1.26

2024-04-16 Thread Vishwas Bm
Hi,

When will nginx 1.26.0 be available?
Any specific timeline for this?


Regards,
Vishwas
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[nginx-announce] njs-0.8.4

2024-04-16 Thread Dmitry Volyntsev

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs).

This release introduced the initial QuickJS engine support in CLI
as well as regular bugfixes.

Notable new features:
- QuickJS in njs CLI:
: $ ./configure --cc-opt="-I/path/to/quickjs -L/path/to/quickjs" && make njs
: $ ./build/njs -n QuickJS
:
: >> new Map()
: [object Map]

Learn more about njs:

- Overview and introduction:
  https://nginx.org/en/docs/njs/
- NGINX JavaScript in Your Web Server Configuration:
  https://youtu.be/Jc_L6UffFOs
- Extending NGINX with Custom Code:
  https://youtu.be/0CVhq4AUU7M
- Using node modules with njs:
  https://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files:
  https://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:

- Github:
  https://github.com/nginx/njs/issues
- Mailing list:
  https://mailman.nginx.org/mailman/listinfo/nginx-devel

Additional examples and howtos can be found here:

- Github:
  https://github.com/nginx/njs-examples

Changes with njs 0.8.4   16 Apr 2024

    nginx modules:

    *) Feature: allowing to set Server header for outgoing headers.

    *) Improvement: validating URI and args arguments in r.subrequest().

    *) Improvement: checking for duplicate js_set variables.

    *) Bugfix: fixed clear() method of a shared dictionary without
   timeout introduced in 0.8.3.

    *) Bugfix: fixed r.send() with Buffer argument.

    Core:

    *) Feature: added QuickJS engine support in CLI.

    *) Bugfix: fixed atob() with non-padded base64 strings.
___
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


[nginx-announce] nginx-1.25.5

2024-04-16 Thread Roman Arutyunyan
Changes with nginx 1.25.5    16 Apr 2024

*) Feature: virtual servers in the stream module.

*) Feature: the ngx_stream_pass_module.

*) Feature: the "deferred", "accept_filter", and "setfib" parameters of
   the "listen" directive in the stream module.

*) Feature: cache line size detection for some architectures.
   Thanks to Piotr Sikora.

*) Feature: support for Homebrew on Apple Silicon.
   Thanks to Piotr Sikora.

*) Bugfix: Windows cross-compilation bugfixes and improvements.
   Thanks to Piotr Sikora.

*) Bugfix: unexpected connection closure while using 0-RTT in QUIC.
   Thanks to Vladimir Khomutov.
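
As a side note, a minimal sketch of the new stream virtual servers (names and
upstreams are placeholders; the server is selected by the name the client
sends via SNI, and certificate directives are omitted for brevity):

stream {
    server {
        listen      12345 ssl;
        server_name example.com;
        proxy_pass  backend1;
    }

    server {
        listen      12345 ssl;
        server_name example.org;
        proxy_pass  backend2;
    }
}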



Roman Arutyunyan
a...@nginx.com




_______
nginx-announce mailing list
nginx-announce@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-announce


[nginx] release-1.25.5 tag

2024-04-16 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/49dce50fad40
branches:  
changeset: 9239:49dce50fad40
user:  Roman Arutyunyan 
date:  Tue Apr 16 18:29:59 2024 +0400
description:
release-1.25.5 tag

diffstat:

 .hgtags |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r 8618e4d900cc -r 49dce50fad40 .hgtags
--- a/.hgtags   Tue Apr 16 18:27:50 2024 +0400
+++ b/.hgtags   Tue Apr 16 18:29:59 2024 +0400
@@ -477,3 +477,4 @@ f8134640e8615448205785cf00b0bc810489b495
 1d839f05409d1a50d0f15a2bf36547001f99ae40 release-1.25.2
 294a3d07234f8f65d7b0e0b0e2c5b05c12c5da0a release-1.25.3
 173a0a7dbce569adbb70257c6ec4f0f6bc585009 release-1.25.4
+8618e4d900cc71082fbe7dc72af087937d64faf5 release-1.25.5
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] nginx-1.25.5-RELEASE

2024-04-16 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/8618e4d900cc
branches:  
changeset: 9238:8618e4d900cc
user:  Roman Arutyunyan 
date:  Tue Apr 16 18:27:50 2024 +0400
description:
nginx-1.25.5-RELEASE

diffstat:

 docs/xml/nginx/changes.xml |  77 ++
 1 files changed, 77 insertions(+), 0 deletions(-)

diffs (87 lines):

diff -r 9f84f2e49c62 -r 8618e4d900cc docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml	Thu Apr 11 11:37:30 2024 +0400
+++ b/docs/xml/nginx/changes.xml	Tue Apr 16 18:27:50 2024 +0400
@@ -5,6 +5,83 @@
 
 
 
+<changes ver="1.25.5" date="2024-04-16">
+
+<change type="feature">
+<para lang="ru">
+виртуальные сервера в модуле stream.
+</para>
+<para lang="en">
+virtual servers in the stream module.
+</para>
+</change>
+
+<change type="feature">
+<para lang="ru">
+модуль ngx_stream_pass_module.
+</para>
+<para lang="en">
+the ngx_stream_pass_module.
+</para>
+</change>
+
+<change type="feature">
+<para lang="ru">
+параметры deferred, accept_filter и setfib директивы listen в модуле stream.
+</para>
+<para lang="en">
+the "deferred", "accept_filter", and "setfib" parameters of the "listen"
+directive in the stream module.
+</para>
+</change>
+
+<change type="feature">
+<para lang="ru">
+определение размера строки кеша процессора для некоторых архитектур.
+Спасибо Piotr Sikora.
+</para>
+<para lang="en">
+cache line size detection for some architectures.
+Thanks to Piotr Sikora.
+</para>
+</change>
+
+<change type="feature">
+<para lang="ru">
+поддержка Homebrew на Apple Silicon.
+Спасибо Piotr Sikora.
+</para>
+<para lang="en">
+support for Homebrew on Apple Silicon.
+Thanks to Piotr Sikora.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+улучшения и исправления кросс-компиляции для Windows.
+Спасибо Piotr Sikora.
+</para>
+<para lang="en">
+Windows cross-compilation bugfixes and improvements.
+Thanks to Piotr Sikora.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+неожиданное закрытие соединения при использовании 0-RTT в QUIC.
+Спасибо Владимиру Хомутову.
+</para>
+<para lang="en">
+unexpected connection closure while using 0-RTT in QUIC.
+Thanks to Vladimir Khomutov.
+</para>
+</change>
+
+</changes>
+
+
 
 
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: NGINX upload limit

2024-04-15 Thread zen zenitram
Good day!

Here is the NGINX configuration. We have tried everything, but so far the
maximum upload limit is still stuck at 128 kb.


*/etc/nginx/nginx.conf*


worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
   worker_connections 768;
   # multi_accept on;
}

http {


   # Basic Settings

   sendfile on;
   tcp_nopush on;
   types_hash_max_size 2048;
   client_max_body_size 500M;
   # server_tokens off;

   # server_names_hash_bucket_size 64;
   # server_name_in_redirect off;

   include /etc/nginx/mime.types;
   default_type application/octet-stream;

   ##
   # SSL Settings
   ##

   ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref:
POODLE
   ssl_prefer_server_ciphers on;

   ##
   # Logging Settings
   ##

   access_log /var/log/nginx/access.log;
   error_log /var/log/nginx/error.log;

   ##
   # Gzip Settings
   ##

   gzip_vary on;

   # gzip_vary on;
   # gzip_proxied any;
   # gzip_comp_level 6;
   # gzip_buffers 16 8k;
   # gzip_http_version 1.1;
   # gzip_types text/plain text/css application/json
application/javascript text/xml application/xml application/xml+rss
text/javascript;

   ##
   # Virtual Host Configs
   ##

   include /etc/nginx/conf.d/*.conf;
   include /etc/nginx/sites-enabled/*;
}





*/etc/nginx/site-available/test.edu.ph*

server {
   server_name test.edu.ph;

   location / {
   proxy_pass https://192.168.8.243;
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   client_max_body_size 500M;
   }

   listen 443 ssl; # managed by Certbot
   ssl_certificate /etc/letsencrypt/live/test.edu.ph/fullchain.pem; #
managed by Certbot
   ssl_certificate_key /etc/letsencrypt/live/test.edu.ph/privkey.pem; #
managed by Certbot
   include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
   ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
   client_max_body_size 500M;

}

server {
   if ($host = test.edu.ph) {
   return 301 https://$host$request_uri;
   } # managed by Certbot


   listen 80;
   server_name test.edu.ph;
   return 404; # managed by Certbot
   client_max_body_size 500M;
}






Can anyone help us solve our maximum upload size limit problem?

Our server is set to https only access.


Thank you!



On Fri, Mar 1, 2024 at 11:27 PM Sergey A. Osokin  wrote:

> Hi there,
>
> On Fri, Mar 01, 2024 at 04:45:07PM +0800, zen zenitram wrote:
> >
> > We created an institutional repository with eprints and using NGINX as
> load
> > balancer, but we encountered problem in uploading file to our repository.
> > It only alccepts 128 kb file upload, the client_max_body_size is set to 2
> > gb.
> >
> > but still it only accepts 128 kb max upload size.
> > How to solve this problem?
>
> I'd recommend to share the nginx configuration file in the maillist.
> Don't forget to remove any sensitive information or create a minimal
> nginx configuration reproduces the case.
>
> Thank you.
>
> --
> Sergey A. Osokin
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx web server configuration file for Suprema BioStar 2 Door Access System

2024-04-15 Thread Turritopsis Dohrnii Teo En Ming via nginx-devel
Noted with thanks.

Regards,

Mr. Turritopsis Dohrnii Teo En Ming
Targeted Individual in Singapore






On Saturday, March 9th, 2024 at 3:57 PM, Muhammad Nuzaihan 
 wrote:

> Hello,
> 
> I don't think nginx uses Java key store and that's specific only for
> Java applications. you should ask your manufacturer the details on how
> your door works.
> 
> Also this (nginx-devel@nginx.org) mailing list is specifically for
> developers to discuss about code/bugfix/features and not for problems
> from end-users.
> 
> Regards,
> A Singaporean living in Malaysia
> 
> On Sat, Mar 9, 2024 at 3:18 PM Turritopsis Dohrnii Teo En Ming via
> nginx-devel nginx-devel@nginx.org wrote:
> 
> > Subject: nginx web server configuration file for Suprema BioStar 2 Door 
> > Access System
> > 
> > Good day from Singapore,
> > 
> > On 7 Mar 2024 Thursday, I was installing NEW self-signed SSL certificate 
> > for Suprema BioStar 2 door access system version 2.7.12.39 for a law firm 
> > in Singapore because the common name (CN) in the existing SSL certificate 
> > was pointing to the WRONG private IPv4 address 192.168.0.149.
> > 
> > I have referred to the following Suprema technical support guide to install 
> > new self-signed SSL certificate for the door access system.
> > 
> > Article: [BioStar 2] How to Apply a Private Certificate for HTTPS
> > Link: 
> > https://support.supremainc.com/en/support/solutions/articles/2405211--biostar-2-how-to-apply-a-private-certificate-for-https
> > 
> > The server certificate/public key (biostar_cert.crt), private key 
> > (biostar_cert.key), PKCS12 file (biostar_cert.p12) and Java Keystore 
> > (keystore.jks) are all located inside the folder C:\Program Files\BioStar 
> > 2(x64)\nginx\conf
> > 
> > Looking at the above directory pathname, it is apparent that the South 
> > Korean Suprema BioStar 2 door access system is using the open source nginx 
> > web server.
> > 
> > But why are ssl_certificate and ssl_certificate_key directives NOT 
> > configured for the HTTPS section in the nginx configuration file? The 
> > entire HTTPS section was also commented out.
> > 
> > I am baffled.
> > 
> > Why is there a Java Keystore (keystore.jks)? Is nginx web server being used 
> > in conjunction with some type of open source Java web server?
> > 
> > Looking forward to your reply.
> > 
> > Thank you.
> > 
> > I shall reproduce the nginx web server configuration file for the Suprema 
> > BioStar 2 door access system below for your reference.
> > 
> > nginx.conf is inside C:\Program Files\BioStar 2(x64)\nginx\conf
> > 
> > 
> > 
> > #user nobody;
> > worker_processes 1;
> > 
> > #error_log logs/error.log;
> > #error_log logs/error.log notice;
> > #error_log logs/error.log info;
> > 
> > #pid logs/nginx.pid;
> > 
> > events {
> > worker_connections 1024;
> > }
> > 
> > http {
> > include mime.types;
> > default_type application/octet-stream;
> > 
> > #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
> > # '$status $body_bytes_sent "$http_referer" '
> > # '"$http_user_agent" "$http_x_forwarded_for"';
> > 
> > #access_log logs/access.log main;
> > 
> > sendfile on;
> > #tcp_nopush on;
> > 
> > #keepalive_timeout 0;
> > keepalive_timeout 65;
> > 
> > #gzip on;
> > 
> > server {
> > listen 80;
> > server_name localhost;
> > 
> > #charset koi8-r;
> > 
> > #access_log logs/host.access.log main;
> > 
> > location / {
> > root html;
> > index index.html index.htm;
> > }
> > 
> > #error_page 404 /404.html;
> > 
> > # redirect server error pages to the static page /50x.html
> > #
> > error_page 500 502 503 504 /50x.html;
> > location = /50x.html {
> > root html;
> > }
> > 
> > # proxy the PHP scripts to Apache listening on 127.0.0.1:80
> > #
> > #location ~ \.php$ {
> > # proxy_pass http://127.0.0.1;
> > #}
> > 
> > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
> > #
> > #location ~ \.php$ {
> > # root html;
> > # fastcgi_pass 127.0.0.1:9000;
> > # fastcgi_index index.php;
> > # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
> > # include fastcgi_params;
> > #}
> > 
> > # Swagger document location
> > location /biostar {
> > root html;
> > }
> > 
> > #
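
For context, when the commented-out HTTPS section of a stock nginx.conf is
enabled, it typically looks like the sketch below. The certificate paths are
the files from the BioStar directory mentioned above; this is an assumption
about how the vendor intends them to be wired in, not the product's actual
configuration:

    server {
        listen       443 ssl;
        server_name  localhost;

        # certificate and key from C:\Program Files\BioStar 2(x64)\nginx\conf
        ssl_certificate      biostar_cert.crt;
        ssl_certificate_key  biostar_cert.key;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }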

[nginx] Stream pass: limited the number of passes per connection.

2024-04-11 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/9f84f2e49c62
branches:  
changeset: 9237:9f84f2e49c62
user:  Roman Arutyunyan 
date:  Thu Apr 11 11:37:30 2024 +0400
description:
Stream pass: limited the number of passes per connection.

Previously a cycle in pass configuration resulted in stack overflow.
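
For illustration, a minimal stream configuration (hypothetical addresses) that
forms such a cycle: each listener passes accepted connections to the other, so
the pass handler used to recurse until the stack overflowed. With this change
the session is dropped once NGX_STREAM_PASS_MAX_PASSES (10) is exceeded:

    stream {
        server {
            listen  127.0.0.1:9000;
            pass    127.0.0.1:9001;    # hands the connection to the second listener
        }

        server {
            listen  127.0.0.1:9001;
            pass    127.0.0.1:9000;    # ...which passes it straight back: a cycle
        }
    }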

diffstat:

 src/stream/ngx_stream_pass_module.c |  51 +
 1 files changed, 51 insertions(+), 0 deletions(-)

diffs (82 lines):

diff -r 155c9093de9d -r 9f84f2e49c62 src/stream/ngx_stream_pass_module.c
--- a/src/stream/ngx_stream_pass_module.c   Wed Apr 10 09:38:10 2024 +0300
+++ b/src/stream/ngx_stream_pass_module.c   Thu Apr 11 11:37:30 2024 +0400
@@ -10,6 +10,9 @@
 #include 
 
 
+#define NGX_STREAM_PASS_MAX_PASSES  10
+
+
 typedef struct {
 ngx_addr_t  *addr;
 ngx_stream_complex_value_t  *addr_value;
@@ -17,6 +20,8 @@ typedef struct {
 
 
 static void ngx_stream_pass_handler(ngx_stream_session_t *s);
+static ngx_int_t ngx_stream_pass_check_cycle(ngx_connection_t *c);
+static void ngx_stream_pass_cleanup(void *data);
 static ngx_int_t ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr);
 static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf);
 static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
@@ -125,6 +130,10 @@ ngx_stream_pass_handler(ngx_stream_sessi
 ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0,
"stream pass addr: \"%V\"", >name);
 
+if (ngx_stream_pass_check_cycle(c) != NGX_OK) {
+goto failed;
+}
+
 ls = ngx_cycle->listening.elts;
 
 for (i = 0; i < ngx_cycle->listening.nelts; i++) {
@@ -164,6 +173,48 @@ failed:
 
 
 static ngx_int_t
+ngx_stream_pass_check_cycle(ngx_connection_t *c)
+{
+ngx_uint_t  *num;
+ngx_pool_cleanup_t  *cln;
+
+for (cln = c->pool->cleanup; cln; cln = cln->next) {
+if (cln->handler != ngx_stream_pass_cleanup) {
+continue;
+}
+
+num = cln->data;
+
+if (++(*num) > NGX_STREAM_PASS_MAX_PASSES) {
+ngx_log_error(NGX_LOG_ERR, c->log, 0, "stream pass cycle");
+return NGX_ERROR;
+}
+
+return NGX_OK;
+}
+
+cln = ngx_pool_cleanup_add(c->pool, sizeof(ngx_uint_t));
+if (cln == NULL) {
+return NGX_ERROR;
+}
+
+cln->handler = ngx_stream_pass_cleanup;
+
+num = cln->data;
+*num = 1;
+
+return NGX_OK;
+}
+
+
+static void
+ngx_stream_pass_cleanup(void *data)
+{
+return;
+}
+
+
+static ngx_int_t
 ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr)
 {
 if (!ls->wildcard) {
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-10 Thread Roman Arutyunyan
Hi,

> On 10 Apr 2024, at 10:57 AM, Vladimir Homutov  wrote:
> 
> On Tue, Apr 09, 2024 at 03:02:21PM +0400, Roman Arutyunyan wrote:
>> Hello Vladimir,
>> 
>> On Mon, Apr 08, 2024 at 03:03:27PM +0300, Vladimir Homutov via nginx-devel 
>> wrote:
>>> On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
>>>> details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
>>>> branches:
>>>> changeset: 9158:ad3d34ddfdcc
>>>> user:  Roman Arutyunyan 
>>>> date:  Wed Sep 13 17:59:37 2023 +0400
>>>> description:
>>>> QUIC: "handshake_timeout" configuration parameter.
>>>> 
>>>> Previously QUIC did not have such parameter and handshake duration was
>>>> controlled by HTTP/3.  However that required creating and storing HTTP/3
>>>> session on first client datagram.  Apparently there's no convenient way to
>>>> store the session object until QUIC handshake is complete.  In the followup
>>>> patches session creation will be postponed to init() callback.
>>>> 
>>> 
>>> [...]
>>> 
>>>> diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
>>>> --- a/src/event/quic/ngx_event_quic.c  Fri Sep 01 20:31:46 2023 +0400
>>>> +++ b/src/event/quic/ngx_event_quic.c  Wed Sep 13 17:59:37 2023 +0400
>>>> @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
>>>> qc = ngx_quic_get_connection(c);
>>>> 
>>>> ngx_add_timer(c->read, qc->tp.max_idle_timeout);
>>>> +ngx_add_timer(>close, qc->conf->handshake_timeout);
>>>> +
>>> 
>>> It looks like I've hit an issue with early data in such case.
>>> See the attached patch with details.
>> 
>> Indeed, there's an issue there.
>> 
>>> While there, I suggest a little debug improvement to better track
>>> streams and their parent connections.
>>> 
>>> 
>> 
>>> # HG changeset patch
>>> # User Vladimir Khomutov 
>>> # Date 1712576340 -10800
>>> #  Mon Apr 08 14:39:00 2024 +0300
>>> # Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
>>> # Parent  99e7050ac886f7c70a4048691e46846b930b1e28
>>> QUIC: fixed close timer processing with early data.
>>> 
>>> The ngx_quic_run() function uses qc->close timer to limit the handshake
>>> duration.  Normally it is removed by ngx_quic_do_init_streams() which is
>>> called once when we are done with initial SSL processing.
>>> 
>>> The problem happens when the client sends early data and streams are
>>> initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
>>> The order of set/remove timer calls is now reversed; the close timer is
>>> set up and the timer fires when assigned, starting the unexpected connection
>>> close process.
>>> 
>>> The patch moves timer cancelling right before the place where the stream
>>> initialization flag is tested, thus making it work with early data.
>>> 
>>> The issue was introduced in ad3d34ddfdcc.
>>> 
>>> diff --git a/src/event/quic/ngx_event_quic_streams.c 
>>> b/src/event/quic/ngx_event_quic_streams.c
>>> --- a/src/event/quic/ngx_event_quic_streams.c
>>> +++ b/src/event/quic/ngx_event_quic_streams.c
>>> @@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
>>> 
>>> qc = ngx_quic_get_connection(c);
>>> 
>>> +if (!qc->closing && qc->close.timer_set) {
>>> +ngx_del_timer(>close);
>>> +}
>>> +
>>> if (qc->streams.initialized) {
>>> return NGX_OK;
>>> }
>>> @@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
>>> 
>>> qc->streams.initialized = 1;
>>> 
>>> -if (!qc->closing && qc->close.timer_set) {
>>> -ngx_del_timer(>close);
>>> -}
>>> -
>>> return NGX_OK;
>>> }
>> 
>> This assumes that ngx_quic_init_streams() is always called on handshake end,
>> even if not needed.  This is true now, but it's not something we can rely
>> on.
>> 
>> Also, we probably don't need to limit handshake duration after streams are
>> initialized.  Application level will set the required keepalive timeout for
>> this.  Also, we need to include OCSP validation time in handshake timeout,
>> which you removed.
>> 
>> I assume a simpler solution would be not to set the timer in ngx_quic_run()
>> if streams are already initialized.
> 
> Agreed, see the updated patch:
> 
> 
> 

Thanks, committed!


Roman Arutyunyan
a...@nginx.com




___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] QUIC: fixed close timer processing with early data.

2024-04-10 Thread Roman Arutyunyan
details:   https://hg.nginx.org/nginx/rev/155c9093de9d
branches:  
changeset: 9236:155c9093de9d
user:  Vladimir Khomutov 
date:  Wed Apr 10 09:38:10 2024 +0300
description:
QUIC: fixed close timer processing with early data.

The ngx_quic_run() function uses qc->close timer to limit the handshake
duration.  Normally it is removed by ngx_quic_do_init_streams() which is
called once when we are done with initial SSL processing.

The problem happens when the client sends early data and streams are
initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
The order of set/remove timer calls is now reversed; the close timer is
set up and the timer fires when assigned, starting the unexpected connection
close process.

The fix is to skip setting the timer if streams were initialized during
handling of the initial datagram.  The idle timer for quic is set anyway,
and stream-related timeouts are managed by application layer.

diffstat:

 src/event/quic/ngx_event_quic.c |  5 -
 1 files changed, 4 insertions(+), 1 deletions(-)

diffs (15 lines):

diff -r 99e7050ac886 -r 155c9093de9d src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c   Mon Feb 26 20:00:48 2024 +
+++ b/src/event/quic/ngx_event_quic.c   Wed Apr 10 09:38:10 2024 +0300
@@ -211,7 +211,10 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
 qc = ngx_quic_get_connection(c);
 
 ngx_add_timer(c->read, qc->tp.max_idle_timeout);
-ngx_add_timer(>close, qc->conf->handshake_timeout);
+
+if (!qc->streams.initialized) {
+ngx_add_timer(>close, qc->conf->handshake_timeout);
+}
 
 ngx_quic_connstate_dbg(c);
 
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-10 Thread Vladimir Homutov via nginx-devel
On Tue, Apr 09, 2024 at 03:02:21PM +0400, Roman Arutyunyan wrote:
> Hello Vladimir,
>
> On Mon, Apr 08, 2024 at 03:03:27PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
> > On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
> > > details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
> > > branches:
> > > changeset: 9158:ad3d34ddfdcc
> > > user:  Roman Arutyunyan 
> > > date:  Wed Sep 13 17:59:37 2023 +0400
> > > description:
> > > QUIC: "handshake_timeout" configuration parameter.
> > >
> > > Previously QUIC did not have such parameter and handshake duration was
> > > controlled by HTTP/3.  However that required creating and storing HTTP/3
> > > session on first client datagram.  Apparently there's no convenient way to
> > > store the session object until QUIC handshake is complete.  In the 
> > > followup
> > > patches session creation will be postponed to init() callback.
> > >
> >
> > [...]
> >
> > > diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
> > > --- a/src/event/quic/ngx_event_quic.c Fri Sep 01 20:31:46 2023 +0400
> > > +++ b/src/event/quic/ngx_event_quic.c Wed Sep 13 17:59:37 2023 +0400
> > > @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
> > >  qc = ngx_quic_get_connection(c);
> > >
> > >  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
> > > +ngx_add_timer(>close, qc->conf->handshake_timeout);
> > > +
> >
> > It looks like I've hit an issue with early data in such case.
> > See the attached patch with details.
>
> Indeed, there's an issue there.
>
> > While there, I suggest a little debug improvement to better track
> > streams and their parent connections.
> >
> >
>
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1712576340 -10800
> > #  Mon Apr 08 14:39:00 2024 +0300
> > # Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
> > # Parent  99e7050ac886f7c70a4048691e46846b930b1e28
> > QUIC: fixed close timer processing with early data.
> >
> > The ngx_quic_run() function uses qc->close timer to limit the handshake
> > duration.  Normally it is removed by ngx_quic_do_init_streams() which is
> > called once when we are done with initial SSL processing.
> >
> > The problem happens when the client sends early data and streams are
> > initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
> > The order of set/remove timer calls is now reversed; the close timer is
> > set up and the timer fires when assigned, starting the unexpected connection
> > close process.
> >
> > The patch moves timer cancelling right before the place where the stream
> > initialization flag is tested, thus making it work with early data.
> >
> > The issue was introduced in ad3d34ddfdcc.
> >
> > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > b/src/event/quic/ngx_event_quic_streams.c
> > --- a/src/event/quic/ngx_event_quic_streams.c
> > +++ b/src/event/quic/ngx_event_quic_streams.c
> > @@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
> >
> >  qc = ngx_quic_get_connection(c);
> >
> > +if (!qc->closing && qc->close.timer_set) {
> > +ngx_del_timer(>close);
> > +}
> > +
> >  if (qc->streams.initialized) {
> >  return NGX_OK;
> >  }
> > @@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
> >
> >  qc->streams.initialized = 1;
> >
> > -if (!qc->closing && qc->close.timer_set) {
> > -ngx_del_timer(>close);
> > -}
> > -
> >  return NGX_OK;
> >  }
>
> This assumes that ngx_quic_init_streams() is always called on handshake end,
> even if not needed.  This is true now, but it's not something we can rely
> on.
>
> Also, we probably don't need to limit handshake duration after streams are
> initialized.  Application level will set the required keepalive timeout for
> this.  Also, we need to include OCSP validation time in handshake timeout,
> which you removed.
>
> I assume a simpler solution would be not to set the timer in ngx_quic_run()
> if streams are already initialized.

Agreed, see the updated patch:


# HG changeset patch
# User Vladimir Khomutov 
# Date 1712731090 -10800
#  Wed Apr 10 09:38:10 2024 +0300
# Node ID 155c9093de9db02e3c0a511a45930d39ff51c709
# Parent  99e7050ac886f7c70a4048691e46846b930b1e28
QUIC: fixed close timer processing with early data.

Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-09 Thread Roman Arutyunyan
Hello Vladimir,

On Mon, Apr 08, 2024 at 03:03:27PM +0300, Vladimir Homutov via nginx-devel 
wrote:
> On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
> > details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
> > branches:
> > changeset: 9158:ad3d34ddfdcc
> > user:  Roman Arutyunyan 
> > date:  Wed Sep 13 17:59:37 2023 +0400
> > description:
> > QUIC: "handshake_timeout" configuration parameter.
> >
> > Previously QUIC did not have such parameter and handshake duration was
> > controlled by HTTP/3.  However that required creating and storing HTTP/3
> > session on first client datagram.  Apparently there's no convenient way to
> > store the session object until QUIC handshake is complete.  In the followup
> > patches session creation will be postponed to init() callback.
> >
> 
> [...]
> 
> > diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
> > --- a/src/event/quic/ngx_event_quic.c   Fri Sep 01 20:31:46 2023 +0400
> > +++ b/src/event/quic/ngx_event_quic.c   Wed Sep 13 17:59:37 2023 +0400
> > @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
> >  qc = ngx_quic_get_connection(c);
> >
> >  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
> > +ngx_add_timer(>close, qc->conf->handshake_timeout);
> > +
> 
> It looks like I've hit an issue with early data in such case.
> See the attached patch with details.

Indeed, there's an issue there.

> While there, I suggest a little debug improvement to better track
> streams and their parent connections.
> 
> 

> # HG changeset patch
> # User Vladimir Khomutov 
> # Date 1712576340 -10800
> #  Mon Apr 08 14:39:00 2024 +0300
> # Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
> # Parent  99e7050ac886f7c70a4048691e46846b930b1e28
> QUIC: fixed close timer processing with early data.
> 
> The ngx_quic_run() function uses qc->close timer to limit the handshake
> duration.  Normally it is removed by ngx_quic_do_init_streams() which is
> called once when we are done with initial SSL processing.
> 
> The problem happens when the client sends early data and streams are
> initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
> The order of set/remove timer calls is now reversed; the close timer is
> set up and the timer fires when assigned, starting the unexpected connection
> close process.
> 
> The patch moves timer cancelling right before the place where the stream
> initialization flag is tested, thus making it work with early data.
> 
> The issue was introduced in ad3d34ddfdcc.
> 
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
>  
>  qc = ngx_quic_get_connection(c);
>  
> +if (!qc->closing && qc->close.timer_set) {
> +ngx_del_timer(>close);
> +}
> +
>  if (qc->streams.initialized) {
>  return NGX_OK;
>  }
> @@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
>  
>  qc->streams.initialized = 1;
>  
> -if (!qc->closing && qc->close.timer_set) {
> -ngx_del_timer(>close);
> -}
> -
>  return NGX_OK;
>  }

This assumes that ngx_quic_init_streams() is always called on handshake end,
even if not needed.  This is true now, but it's not something we can rely on.

Also, we probably don't need to limit handshake duration after streams are
initialized.  Application level will set the required keepalive timeout for
this.  Also, we need to include OCSP validation time in handshake timeout,
which you removed.

I assume a simpler solution would be not to set the timer in ngx_quic_run()
if streams are already initialized.

> # HG changeset patch
> # User Vladimir Khomutov 
> # Date 1712575741 -10800
> #  Mon Apr 08 14:29:01 2024 +0300
> # Node ID d9b80de50040bb8ac2a7e193971d1dfeb579cfc9
> # Parent  6e79f4ec40ed1c1ffec6a46b453051c01e556610
> QUIC: added debug logging of stream creation.
> 
> Currently, it is hard to associate a stream connection number with its parent
> connection.  The typical case is to identify the QUIC connection number given
> some user-visible URI (which occurs in a request stream).
> 
> The patch adds a debug log message which reports stream creation in the
> stream log and also shows the parent connection number.
> 
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_st

Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-08 Thread Vladimir Homutov via nginx-devel
On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
> details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
> branches:
> changeset: 9158:ad3d34ddfdcc
> user:  Roman Arutyunyan 
> date:  Wed Sep 13 17:59:37 2023 +0400
> description:
> QUIC: "handshake_timeout" configuration parameter.
>
> Previously QUIC did not have such parameter and handshake duration was
> controlled by HTTP/3.  However that required creating and storing HTTP/3
> session on first client datagram.  Apparently there's no convenient way to
> store the session object until QUIC handshake is complete.  In the followup
> patches session creation will be postponed to init() callback.
>

[...]

> diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
> --- a/src/event/quic/ngx_event_quic.c Fri Sep 01 20:31:46 2023 +0400
> +++ b/src/event/quic/ngx_event_quic.c Wed Sep 13 17:59:37 2023 +0400
> @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
>  qc = ngx_quic_get_connection(c);
>
>  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
> +ngx_add_timer(>close, qc->conf->handshake_timeout);
> +

It looks like I've hit an issue with early data in such case.
See the attached patch with details.

While there, I suggest a little debug improvement to better track
streams and their parent connections.


# HG changeset patch
# User Vladimir Khomutov 
# Date 1712576340 -10800
#  Mon Apr 08 14:39:00 2024 +0300
# Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
# Parent  99e7050ac886f7c70a4048691e46846b930b1e28
QUIC: fixed close timer processing with early data.

The ngx_quic_run() function uses qc->close timer to limit the handshake
duration.  Normally it is removed by ngx_quic_do_init_streams() which is
called once when we are done with initial SSL processing.

The problem happens when the client sends early data and streams are
initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
The order of set/remove timer calls is now reversed; the close timer is
set up and the timer fires when assigned, starting the unexpected connection
close process.

The patch moves timer cancelling right before the place where the stream
initialization flag is tested, thus making it work with early data.

The issue was introduced in ad3d34ddfdcc.

diff --git a/src/event/quic/ngx_event_quic_streams.c 
b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
 
 qc = ngx_quic_get_connection(c);
 
+if (!qc->closing && qc->close.timer_set) {
+ngx_del_timer(>close);
+}
+
 if (qc->streams.initialized) {
 return NGX_OK;
 }
@@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
 
 qc->streams.initialized = 1;
 
-if (!qc->closing && qc->close.timer_set) {
-ngx_del_timer(>close);
-}
-
 return NGX_OK;
 }
 
# HG changeset patch
# User Vladimir Khomutov 
# Date 1712575741 -10800
#  Mon Apr 08 14:29:01 2024 +0300
# Node ID d9b80de50040bb8ac2a7e193971d1dfeb579cfc9
# Parent  6e79f4ec40ed1c1ffec6a46b453051c01e556610
QUIC: added debug logging of stream creation.

Currently, it is hard to associate a stream connection number with its parent
connection.  The typical case is to identify the QUIC connection number given
some user-visible URI (which occurs in a request stream).

The patch adds a debug log message which reports stream creation in the
stream log and also shows the parent connection number.
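
With the patch applied, the emitted line would look roughly as follows (pid,
connection and stream numbers are made up for illustration):

    2024/04/08 14:30:00 [debug] 12345#0: *7 quic stream id:0x8 created in connection *3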

diff --git a/src/event/quic/ngx_event_quic_streams.c 
b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -805,6 +805,10 @@ ngx_quic_create_stream(ngx_connection_t 
 
 ngx_rbtree_insert(>streams.tree, >node);
 
+ngx_log_debug2(NGX_LOG_DEBUG_EVENT, sc->log, 0,
+   "quic stream id:0x%xL created in connection *%uA", id,
+   c->log->connection);
+
 return qs;
 }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Nginx ignores proxy_no_cache

2024-04-07 Thread Maxim Dounin
Hello!

On Sun, Apr 07, 2024 at 01:36:21PM +0200, Kirill A. Korinsky wrote:

> Greetings,
> 
> Let's assume that I would like to control caching behavior on the LB from
> the backend and force it to cache only responses that have an X-No-Cache
> header with value NO.
> 
> Nginx should cache a response with any status code, if it has such a header.
> 
> This works well until the backend is unavailable and nginx returns a
> hardcoded 502 that doesn't have a control header, but such a response is
> cached anyway.
> 
> Here is the config that allows to reproduce the issue:
> 
>   http {
>   default_type  application/octet-stream;
> 
>   proxy_cache_path/tmp/nginx_cache keys_zone=the_zone:1m;
>   proxy_cache the_zone;
>   proxy_cache_valid   any 15m;
>   proxy_cache_methods GET HEAD POST;
> 
>   add_header  X-Cache-Status $upstream_cache_status always;
> 
>   map $upstream_http_x_no_cache $no_cache {
>   default 1;
>   "NO"0;
>   }
> 
>   proxy_no_cache  $no_cache;
> 
>   upstream echo {
>   server 127.127.127.127:80;
>   }
> 
>   server {
>   listen   1234;
>   server_name  localhost;
> 
>   location / {
>   proxy_pass http://echo;
>   }
>   }
>   }
> 
> when I run:
> 
>   curl -D - http://127.0.0.1:1234/
> 
> it returns MISS on the first request, and HIT on the second one.
> 
> Here I expect both requests to return MISS.

Thanks for the report.

Indeed, proxy_no_cache is only checked for proper upstream 
responses, but not when caching errors, including internally 
generated 502/504 in ngx_http_upstream_finalize_request(), and 
intercepted errors in ngx_http_upstream_intercept_errors().

A quick look suggests there will also be issues with caching errors 
after proxy_cache_bypass (errors won't be cached even if they 
should), as well as issues with proxy_cache_max_range_offset after 
proxy_cache_bypass (it will be ignored).

This needs cleanup / fixes, added to my TODO list.
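
Until then, one configuration-side mitigation (a sketch, with the trade-off
that genuine backend 502/504 responses carrying "X-No-Cache: NO" would no
longer be cached either) is to avoid "any" in proxy_cache_valid, so the
internally generated errors are never considered cacheable:

    proxy_cache_valid  200 301 302 404 15m;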

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Nginx ignores proxy_no_cache

2024-04-07 Thread Kirill A . Korinsky
Greetings,

Let's assume that I would like to control caching behavior on the LB from the
backend and force it to cache only responses that have an X-No-Cache header
with value NO.

Nginx should cache a response with any status code, if it has such a header.

This works well until the backend is unavailable and nginx returns a
hardcoded 502 that doesn't have a control header, but such a response is
cached anyway.

Here is the config that allows to reproduce the issue:

  http {
  default_type  application/octet-stream;

  proxy_cache_path/tmp/nginx_cache keys_zone=the_zone:1m;
  proxy_cache the_zone;
  proxy_cache_valid   any 15m;
  proxy_cache_methods GET HEAD POST;

  add_header  X-Cache-Status $upstream_cache_status always;

  map $upstream_http_x_no_cache $no_cache {
  default 1;
  "NO"0;
  }

  proxy_no_cache  $no_cache;

  upstream echo {
  server 127.127.127.127:80;
  }

  server {
  listen   1234;
  server_name  localhost;

  location / {
  proxy_pass http://echo;
  }
  }
  }

when I run:

  curl -D - http://127.0.0.1:1234/

it returns MISS on the first request, and HIT on the second one.

Here I expect both requests to return MISS.
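
For clarity, the second response then looks roughly like this (headers
trimmed; the 502 was generated by nginx itself and carried no X-No-Cache
header, yet is served from the cache):

    HTTP/1.1 502 Bad Gateway
    Server: nginx
    Content-Type: text/html
    X-Cache-Status: HIT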

-- 
wbr, Kirill
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[njs] Tests: adapt stream_js.t to nginx changes.

2024-04-02 Thread Dmitry Volyntsev
details:   https://hg.nginx.org/njs/rev/17af51d42ad9
branches:  
changeset: 2307:17af51d42ad9
user:  Dmitry Volyntsev 
date:  Mon Apr 01 23:13:25 2024 -0700
description:
Tests: adapt stream_js.t to nginx changes.

Make the test more robust against changes in nginx, specifically
cf890df37bb6 (Stream: socket peek in preread phase).

The filter callbacks may be called multiple times by nginx and the exact
number is not specified. The new test avoids relying on the exact number
of calls from nginx.
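
The key change is the step() helper visible in the diff below: it appends a
phase digit only when it differs from the last one recorded, so for example
step(2); step(2); step(3) yields "23" no matter how many times nginx invokes
the callback.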

diffstat:

 nginx/t/stream_js.t |  15 ---
 1 files changed, 8 insertions(+), 7 deletions(-)

diffs (57 lines):

diff -r 454d9c032c60 -r 17af51d42ad9 nginx/t/stream_js.t
--- a/nginx/t/stream_js.t   Mon Apr 01 23:13:24 2024 -0700
+++ b/nginx/t/stream_js.t   Mon Apr 01 23:13:25 2024 -0700
@@ -227,9 +227,10 @@ EOF
 }
 
 var res = '';
+var step = (v) => { if (!res || res[res.length - 1] != v) res += v };
 
 function access_step(s) {
-res += '1';
+step(1);
 
 setTimeout(function() {
 if (s.remoteAddress.match('127.0.0.1')) {
@@ -240,8 +241,8 @@ EOF
 
 function preread_step(s) {
 s.on('upload', function (data) {
-res += '2';
-if (res.length >= 3) {
+step(2);
+if (data.length > 0) {
 s.done();
 }
 });
@@ -249,18 +250,18 @@ EOF
 
 function filter_step(s) {
 s.on('upload', function(data, flags) {
+step(3);
 s.send(data);
-res += '3';
 });
 
 s.on('download', function(data, flags) {
 
 if (!flags.last) {
-res += '4';
+step(4);
 s.send(data);
 
 } else {
-res += '5';
+step(5);
 s.send(res, {last:1});
 s.off('download');
 }
@@ -409,7 +410,7 @@ is(stream('127.0.0.1:' . port(8082))->re
 is(stream('127.0.0.1:' . port(8083))->read(), '', 'stream js unknown 
function');
 is(stream('127.0.0.1:' . port(8084))->read(), 'sess_unk=undefined', 's.unk');
 
-is(stream('127.0.0.1:' . port(8086))->io('0'), '0122345',
+is(stream('127.0.0.1:' . port(8086))->io('0'), '012345',
'async handlers order');
 is(stream('127.0.0.1:' . port(8087))->io('#'), 'OK', 'access_undecided');
 is(stream('127.0.0.1:' . port(8088))->io('#'), 'OK', 'access_allow');
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[njs] Tests: adapt stream_js_preload_object.t to nginx changes.

2024-04-02 Thread Dmitry Volyntsev
details:   https://hg.nginx.org/njs/rev/454d9c032c60
branches:  
changeset: 2306:454d9c032c60
user:  Dmitry Volyntsev 
date:  Mon Apr 01 23:13:24 2024 -0700
description:
Tests: adapt stream_js_preload_object.t to nginx changes.

Make the test more robust against changes in nginx, specifically
cf890df37bb6 (Stream: socket peek in preread phase).

The filter callbacks may be called multiple times by nginx and the exact
number is not specified. The new test avoids relying on the exact number
of calls from nginx.
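
Here the fix takes a slightly different form than in stream_js.t: each phase
stores its marker into a separate variable (pup, fup, fdown, plus acc from the
access handler), and the result is assembled only once, on the last download
event, so the number of callback invocations no longer matters.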

diffstat:

 nginx/t/stream_js_preload_object.t |  21 ++---
 1 files changed, 10 insertions(+), 11 deletions(-)

diffs (56 lines):

diff -r 498b2387ef04 -r 454d9c032c60 nginx/t/stream_js_preload_object.t
--- a/nginx/t/stream_js_preload_object.tMon Apr 01 23:13:23 2024 -0700
+++ b/nginx/t/stream_js_preload_object.tMon Apr 01 23:13:24 2024 -0700
@@ -66,16 +66,17 @@ EOF
 
 $t->write_file('lib.js', <= 3) {
+pup = g1.b[1];
+if (data.length > 0) {
 s.done();
 }
 });
@@ -83,18 +84,16 @@ EOF
 
 function filter(s) {
 s.on('upload', function(data, flags) {
+fup = g1.c.prop[0].a;
 s.send(data);
-res += g1.c.prop[0].a;
 });
 
 s.on('download', function(data, flags) {
-if (!flags.last) {
-res += g1.b[3];
-s.send(data);
+fdown = g1.b[3];
+s.send(data);
 
-} else {
-res += g1.b[4];
-s.send(res, {last:1});
+if (flags.last) {
+s.send(`\${acc}\${pup}\${fup}\${fdown}`, flags);
 s.off('download');
 }
 });
@@ -117,6 +116,6 @@ EOF
 ###
 
 is(stream('127.0.0.1:' . port(8081))->read(), 'element', 'foo.bar.p');
-is(stream('127.0.0.1:' . port(8082))->io('0'), 'x122345', 'lib.access');
+is(stream('127.0.0.1:' . port(8082))->io('0'), 'x1234', 'filter chain');
 
 ###
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Nginx prematurely closing connections when reloaded

2024-03-29 Thread Sébastien Rebecchi
Hi Igor,

I did not have an error_log directive at the main context, so it took the
default configuration, which seems to be why I got only 1 log file. I added
the directive and now I have more logs when I do nginx -s reload:
2024/03/29 09:04:20 [notice] 1064394#0: signal process started
2024/03/29 09:04:20 [notice] 3718160#0: signal 1 (SIGHUP) received from
1064394, reconfiguring
2024/03/29 09:04:20 [notice] 3718160#0: reconfiguring
2024/03/29 09:04:20 [notice] 3718160#0: using the "epoll" event method
2024/03/29 09:04:20 [notice] 3718160#0: start worker processes
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064395
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064396
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064397
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064398
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064399
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064400
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064401
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064402
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064403
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064404
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064405
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064406
2024/03/29 09:04:20 [notice] 1063598#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063599#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063600#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063601#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063602#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063603#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063604#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063607#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063608#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063597#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063605#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063609#0: gracefully shutting down
2024/03/29 09:04:23 [notice] 3718160#0: signal 17 (SIGCHLD) received from
3989432
2024/03/29 09:04:23 [notice] 3718160#0: worker process 3989432 exited with
code 0
2024/03/29 09:04:23 [notice] 3718160#0: signal 29 (SIGIO) received
2024/03/29 09:04:26 [notice] 1060347#0: exiting
2024/03/29 09:04:26 [notice] 1060347#0: exit
2024/03/29 09:04:26 [notice] 3718160#0: signal 17 (SIGCHLD) received from
1060347
2024/03/29 09:04:26 [notice] 3718160#0: worker process 1060347 exited with
code 0
2024/03/29 09:04:26 [notice] 3718160#0: signal 29 (SIGIO) received
2024/03/29 09:04:29 [notice] 3718160#0: signal 17 (SIGCHLD) received from
3989423
2024/03/29 09:04:29 [notice] 3718160#0: worker process 3989423 exited with
code 0
2024/03/29 09:04:29 [notice] 3718160#0: signal 29 (SIGIO) received
... etc ...

Could the problem I encounter be linked to that discussion?
https://mailman.nginx.org/pipermail/nginx-devel/2024-January/YSJATQMPXDIBETCDS46OTKUZNOJK6Q22.html
It seems to be a race condition between a client that has started to send a
new request at the same time that the server has decided to close the idle
connection. Is there a plan to add a bugfix in nginx to handle that
properly?
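
For what it's worth, a common way to narrow that race window is to make the
proxying side drop idle upstream connections first. A sketch, assuming the
backends keep nginx's default keepalive_timeout of 75s (the directive inside
an upstream block requires nginx 1.15.3 or later):

    upstream data_api {
        random;

        server IP_1:80 max_fails=3 fail_timeout=30s;
        # ... remaining servers ...

        keepalive 20;
        # close idle connections before the backend's own keepalive_timeout
        # can close them while a new request is in flight
        keepalive_timeout 60s;
    }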

Thanks,

Sébastien

On Fri, 29 Mar 2024 at 00:04, Igor Ippolitov wrote:

> Sébastien,
>
> Is it possible that messages go to another log file? These messages go to
> the main error log file, defined in the root context.
> Another common pitfall is a log level above notice. Try setting the error
> log to a more verbose level, maybe?
>
> Regards,
> Igor.
>
>
> On 28/03/2024 18:27, Sébastien Rebecchi wrote:
>
> Hi Igor,
>
> Thanks for the answer.
>
> I really got that message 'signal process started' every time I do 'nginx
> -s reload' and this is the only log line I have, I don't have the other
> lines you mentioned. Is there anything to do to enable those logs?
>
> Sébastien
>
> On Thu, 28 Mar 2024 at 16:40, Igor Ippolitov wrote:
>
>> Sébastien,
>>
>> The message about the signal process is only the beginning of the process.
>> You are interested in messages like the following:
>>
>> 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from
>> 69064, reconfiguring
>> 2024/03/26 13:36:36 [notice] 723#723: reconfiguring
>> 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method
>> 2024/03/26 13:36:36 [notice] 723#723: start worker processes
>> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065
>> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066
>> 2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067
>> 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down
>> 2024/03/26 13:3

Re: Nginx prematurely closing connections when reloaded

2024-03-28 Thread Igor Ippolitov

Sébastien,

Is it possible that messages go to another log file? These messages go 
to the main error log file, defined in the root context.
Another common pitfall is a log level above notice. Try setting the error 
log to a more verbose level, maybe?


Regards,
Igor.


On 28/03/2024 18:27, Sébastien Rebecchi wrote:

Hi Igor,

Thanks for the answer.

I really got that message 'signal process started' every time I do 
'nginx -s reload' and this is the only log line I have, I don't have 
the other lines you mentioned. Is there anything to do to enable those 
logs?


Sébastien

On Thu, 28 Mar 2024 at 16:40, Igor Ippolitov wrote:


Sébastien,

The message about the signal process is only the beginning of the
process.
You are interested in messages like the following:


2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received
from 69064, reconfiguring
2024/03/26 13:36:36 [notice] 723#723: reconfiguring
2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method
2024/03/26 13:36:36 [notice] 723#723: start worker processes
2024/03/26 13:36:36 [notice] 723#723: start worker process 69065
2024/03/26 13:36:36 [notice] 723#723: start worker process 69066
2024/03/26 13:36:36 [notice] 723#723: start cache manager process
69067
2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down
2024/03/26 13:36:36 [notice] 61905#61905: exiting
2024/03/26 13:36:36 [notice] 61903#61903: exiting
2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down
2024/03/26 13:36:36 [notice] 61904#61904: exiting
2024/03/26 13:36:36 [notice] 61903#61903: exit


Note the 'gracefully shutting down' and 'exiting' messages from
workers. Also the 'start' and 'reconfiguring' messages from the
master process.
There should be a similar sequence somewhere in your logs.
Having these logs may help explain what happens on a reload.

Kind regards,
Igor.

On 26/03/2024 12:41, Sébastien Rebecchi wrote:

Hi Igor

There is no special logs on the IP_1 (the reloaded one) side,
only 1 log line, which is expected:
--- BEGIN ---
2024/03/26 13:37:55 [notice] 3928855#0: signal process started
--- END ---

I did not configure worker_shutdown_timeout, it is unlimited.

Sébastien.

On Mon, 25 Mar 2024 at 17:59, Igor Ippolitov wrote:

Sébastien,

    Nginx should keep active connections open and wait for a
request to complete before closing.
A reload starts a new set of workers while old workers wait
for old connections to shut down.
The only exception I'm aware of is having
worker_shutdown_timeout configured: in this case a worker
will wait till this timeout and forcibly close a connection.
By default there is no timeout.

It would be curious to see error log of nginx at IP_1 (the
reloaded one) while the reload happens. It may explain the
reason for connection resets.

Kind regards,
Igor.

On 25/03/2024 12:31, Sébastien Rebecchi wrote:


Hello


I have an issue with nginx closing prematurely connections
when reload is performed.


I have some nginx servers configured to proxy_pass requests
to an upstream group. This group itself is composed of
several servers which are nginx themselves, and is
configured to use keepalive connections.

When I trigger a reload (-s reload) on an nginx of one of
the servers which is target of the upstream, I see in error
logs of all servers in front that connection was reset by
    the nginx which was reloaded.


Here configuration of upstream group (IPs are hidden
replaced by IP_X):

--- BEGIN ---

upstream data_api {

random;


server IP_1:80 max_fails=3 fail_timeout=30s;

server IP_2:80 max_fails=3 fail_timeout=30s;

server IP_3:80 max_fails=3 fail_timeout=30s;

server IP_4:80 max_fails=3 fail_timeout=30s;

server IP_5:80 max_fails=3 fail_timeout=30s;

server IP_6:80 max_fails=3 fail_timeout=30s;

server IP_7:80 max_fails=3 fail_timeout=30s;

server IP_8:80 max_fails=3 fail_timeout=30s;

server IP_9:80 max_fails=3 fail_timeout=30s;

server IP_10:80 max_fails=3 fail_timeout=30s;


keepalive 20;

}

--- END ---


Here configuration of the location using this upstream:

--- BEGIN ---

location / {

proxy_pass http://data_api;


proxy_http_version 1.1;

proxy_set_header Connection "";


proxy_set_header Host $host;

proxy_set_header X-Real-IP $real_ip;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;


proxy_connect_timeout 2s;

proxy_send_timeout 6s;

proxy_read_timeout 10s;


proxy_next_upstream error timeout http_502 http_504;

Re: Nginx prematurely closing connections when reloaded

2024-03-28 Thread Sébastien Rebecchi
Hi Igor,

Thanks for the answer.

I really got that message 'signal process started' every time I do 'nginx
-s reload' and this is the only log line I have, I don't have the other
lines you mentioned. Is there anything to do to enable those logs?

Sébastien

On Thu, 28 Mar 2024 at 16:40, Igor Ippolitov wrote:

> Sébastien,
>
> The message about the signal process is only the beginning of the process.
> You are interested in messages like the following:
>
> 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from
> 69064, reconfiguring
> 2024/03/26 13:36:36 [notice] 723#723: reconfiguring
> 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method
> 2024/03/26 13:36:36 [notice] 723#723: start worker processes
> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065
> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066
> 2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067
> 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down
> 2024/03/26 13:36:36 [notice] 61905#61905: exiting
> 2024/03/26 13:36:36 [notice] 61903#61903: exiting
> 2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down
> 2024/03/26 13:36:36 [notice] 61904#61904: exiting
> 2024/03/26 13:36:36 [notice] 61903#61903: exit
>
>
> Note the 'gracefully shutting down' and 'exiting' messages from workers.
> Also the 'start' and 'reconfiguring' messages from the master process.
> There should be a similar sequence somewhere in your logs.
> Having these logs may help explain what happens on a reload.
>
> Kind regards,
> Igor.
>
> On 26/03/2024 12:41, Sébastien Rebecchi wrote:
>
> Hi Igor
>
> There is no special logs on the IP_1 (the reloaded one) side, only 1 log
> line, which is expected:
> --- BEGIN ---
> 2024/03/26 13:37:55 [notice] 3928855#0: signal process started
> --- END ---
>
> I did not configure worker_shutdown_timeout, it is unlimited.
>
> Sébastien.
>
> On Mon, 25 Mar 2024 at 17:59, Igor Ippolitov wrote:
>
>> Sébastien,
>>
>> Nginx should keep active connections open and wait for a request to
>> complete before closing.
>> A reload starts a new set of workers while old workers wait for old
>> connections to shut down.
>> The only exception I'm aware of is having worker_shutdown_timeout
>> configured: in this case a worker will wait till this timeout and forcibly
>> close a connection. By default there is no timeout.
>>
>> It would be curious to see error log of nginx at IP_1 (the reloaded one)
>> while the reload happens. It may explain the reason for connection resets.
>>
>> Kind regards,
>> Igor.
>>
>> On 25/03/2024 12:31, Sébastien Rebecchi wrote:
>>
>> Hello
>>
>>
>> I have an issue with nginx closing prematurely connections when reload
>> is performed.
>>
>>
>> I have some nginx servers configured to proxy_pass requests to an
>> upstream group. This group itself is composed of several servers which are
>> nginx themselves, and is configured to use keepalive connections.
>>
>> When I trigger a reload (-s reload) on an nginx of one of the servers
>> which is target of the upstream, I see in error logs of all servers in
>> front that connection was reset by the nginx which was reloaded.
>>
>>
>> Here configuration of upstream group (IPs are hidden replaced by IP_X):
>>
>> --- BEGIN ---
>>
>> upstream data_api {
>>
>> random;
>>
>>
>> server IP_1:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_2:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_3:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_4:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_5:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_6:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_7:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_8:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_9:80 max_fails=3 fail_timeout=30s;
>>
>> server IP_10:80 max_fails=3 fail_timeout=30s;
>>
>>
>> keepalive 20;
>>
>> }
>>
>> --- END ---
>>
>>
>> Here configuration of the location using this upstream:
>>
>> --- BEGIN ---
>>
>> location / {
>>
>> proxy_pass http://data_api;
>>
>>
>> proxy_http_version 1.1;
>>
>> proxy_set_header Connection "";
>>
>>
>> proxy_set_header Host $host;
>>
>> proxy_set_header X-Real-IP $real_ip;
>>
>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>>
>>
>>

Re: Nginx prematurely closing connections when reloaded

2024-03-28 Thread Igor Ippolitov

Sébastien,

The message about the signal process is only the beginning of the process.
You are interested in messages like the following:

2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from 
69064, reconfiguring

2024/03/26 13:36:36 [notice] 723#723: reconfiguring
2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method
2024/03/26 13:36:36 [notice] 723#723: start worker processes
2024/03/26 13:36:36 [notice] 723#723: start worker process 69065
2024/03/26 13:36:36 [notice] 723#723: start worker process 69066
2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067
2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down
2024/03/26 13:36:36 [notice] 61905#61905: exiting
2024/03/26 13:36:36 [notice] 61903#61903: exiting
2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down
2024/03/26 13:36:36 [notice] 61904#61904: exiting
2024/03/26 13:36:36 [notice] 61903#61903: exit


Note the 'gracefully shutting down' and 'exiting' messages from workers. 
Also the 'start' and 'reconfiguring' messages from the master process.

There should be a similar sequence somewhere in your logs.
Having these logs may help explain what happens on a reload.

Kind regards,
Igor.

On 26/03/2024 12:41, Sébastien Rebecchi wrote:

Hi Igor

There is no special logs on the IP_1 (the reloaded one) side, only 1 
log line, which is expected:

--- BEGIN ---
2024/03/26 13:37:55 [notice] 3928855#0: signal process started
--- END ---

I did not configure worker_shutdown_timeout, it is unlimited.

Sébastien.

On Mon, 25 Mar 2024 at 17:59, Igor Ippolitov wrote:


Sébastien,

Nginx should keep active connections open and wait for a request
to complete before closing.
A reload starts a new set of workers while old workers wait for
old connections to shut down.
The only exception I'm aware of is having worker_shutdown_timeout
configured: in this case a worker will wait till this timeout and
forcibly close a connection. By default there is no timeout.
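
For completeness, a sketch of that exception (the value is illustrative):
with the directive below in the main configuration context, old workers
would forcibly close any remaining connections 30 seconds after a reload
instead of waiting indefinitely:

    worker_shutdown_timeout 30s;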

It would be curious to see error log of nginx at IP_1 (the
reloaded one) while the reload happens. It may explain the reason
for connection resets.

Kind regards,
Igor.

On 25/03/2024 12:31, Sébastien Rebecchi wrote:


Hello


I have an issue with nginx closing prematurely connections when
reload is performed.


I have some nginx servers configured to proxy_pass requests to an
upstream group. This group itself is composed of several servers
which are nginx themselves, and is configured to use keepalive
connections.

When I trigger a reload (-s reload) on an nginx of one of the
servers which is target of the upstream, I see in error logs of
all servers in front that connection was reset by the nginx which
was reloaded.


Here configuration of upstream group (IPs are hidden replaced by
IP_X):

--- BEGIN ---

upstream data_api {

random;


server IP_1:80 max_fails=3 fail_timeout=30s;

server IP_2:80 max_fails=3 fail_timeout=30s;

server IP_3:80 max_fails=3 fail_timeout=30s;

server IP_4:80 max_fails=3 fail_timeout=30s;

server IP_5:80 max_fails=3 fail_timeout=30s;

server IP_6:80 max_fails=3 fail_timeout=30s;

server IP_7:80 max_fails=3 fail_timeout=30s;

server IP_8:80 max_fails=3 fail_timeout=30s;

server IP_9:80 max_fails=3 fail_timeout=30s;

server IP_10:80 max_fails=3 fail_timeout=30s;


keepalive 20;

}

--- END ---


Here configuration of the location using this upstream:

--- BEGIN ---

location / {

proxy_pass http://data_api;


proxy_http_version 1.1;

proxy_set_header Connection "";


proxy_set_header Host $host;

proxy_set_header X-Real-IP $real_ip;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;


proxy_connect_timeout 2s;

proxy_send_timeout 6s;

proxy_read_timeout 10s;


proxy_next_upstream error timeout http_502 http_504;

}

--- END ---


And here the kind of error messages I get when I reload nginx of
"IP_1":

--- BEGIN ---

2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed
(104: Connection reset by peer) while reading response header
from upstream, client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN,
request: "POST /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream:
"http://IP_1:80/REQUEST_LOCATION_HIDDEN
<http://IP_1:80/REQUEST_LOCATION_HIDDEN>", host: "HOST_HIDDEN",
referrer: "REFERRER_HIDDEN"

--- END ---


I thought -s reload was doing a graceful shutdown of connections.
Is it due to the fact that nginx cannot handle that when using
keepalive connections? Is it a bug?

I am using nginx 1.24.0 everywhere, no particular


Thank you for any help.


    Sébastien


    ___
nginx mai

[nginx] Configure: allow cross-compiling to Windows using Clang.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/99e7050ac886
branches:  
changeset: 9235:99e7050ac886
user:  Piotr Sikora 
date:  Mon Feb 26 20:00:48 2024 +
description:
Configure: allow cross-compiling to Windows using Clang.

Signed-off-by: Piotr Sikora 
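
A configure invocation exercising this would look roughly like the following;
the compiler name and target triple depend on the local toolchain and are
assumptions here:

    ./auto/configure --crossbuild=win32 \
        --with-cc=clang \
        --with-cc-opt="--target=x86_64-w64-mingw32"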

diffstat:

 auto/os/win32 |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r c2e753d214b0 -r 99e7050ac886 auto/os/win32
--- a/auto/os/win32 Mon Feb 26 20:00:46 2024 +
+++ b/auto/os/win32 Mon Feb 26 20:00:48 2024 +
@@ -18,7 +18,7 @@ ngx_binext=".exe"
 
 case "$NGX_CC_NAME" in
 
-gcc)
+clang | gcc)
 CORE_LIBS="$CORE_LIBS -ladvapi32 -lws2_32"
 MAIN_LINK="$MAIN_LINK -Wl,--export-all-symbols"
 MAIN_LINK="$MAIN_LINK -Wl,--out-implib=$NGX_OBJS/libnginx.a"
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Configure: fixed "make install" when cross-compiling to Windows.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/c2e753d214b0
branches:  
changeset: 9234:c2e753d214b0
user:  Piotr Sikora 
date:  Mon Feb 26 20:00:46 2024 +
description:
Configure: fixed "make install" when cross-compiling to Windows.

Signed-off-by: Piotr Sikora 

diffstat:

 auto/install |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 398495d816f0 -r c2e753d214b0 auto/install
--- a/auto/install  Mon Feb 26 20:00:43 2024 +
+++ b/auto/install  Mon Feb 26 20:00:46 2024 +
@@ -112,7 +112,7 @@ install:build $NGX_INSTALL_PERL_MODULES
test ! -f '\$(DESTDIR)$NGX_SBIN_PATH' \\
|| mv '\$(DESTDIR)$NGX_SBIN_PATH' \\
'\$(DESTDIR)$NGX_SBIN_PATH.old'
-   cp $NGX_OBJS/nginx '\$(DESTDIR)$NGX_SBIN_PATH'
+   cp $NGX_OBJS/nginx$ngx_binext '\$(DESTDIR)$NGX_SBIN_PATH'
 
test -d '\$(DESTDIR)$NGX_CONF_PREFIX' \\
|| mkdir -p '\$(DESTDIR)$NGX_CONF_PREFIX'
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Configure: added support for Homebrew on Apple Silicon.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/398495d816f0
branches:  
changeset: 9233:398495d816f0
user:  Piotr Sikora 
date:  Mon Feb 26 20:00:43 2024 +
description:
Configure: added support for Homebrew on Apple Silicon.

Signed-off-by: Piotr Sikora 
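
With this change the library checks probe /opt/homebrew automatically. Before
it, a build on Apple Silicon typically had to point configure at the Homebrew
prefix by hand, along the lines of this (illustrative) invocation:

    ./auto/configure \
        --with-cc-opt="-I/opt/homebrew/include" \
        --with-ld-opt="-L/opt/homebrew/lib"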

diffstat:

 auto/lib/geoip/conf|  17 +
 auto/lib/google-perftools/conf |  16 
 auto/lib/libgd/conf|  17 +
 auto/lib/openssl/conf  |  18 ++
 auto/lib/pcre/conf |  16 
 5 files changed, 84 insertions(+), 0 deletions(-)

diffs (134 lines):

diff -r 427aa785edf8 -r 398495d816f0 auto/lib/geoip/conf
--- a/auto/lib/geoip/conf   Wed Mar 27 19:36:51 2024 +0400
+++ b/auto/lib/geoip/conf   Mon Feb 26 20:00:43 2024 +
@@ -64,6 +64,23 @@ if [ $ngx_found = no ]; then
 fi
 
 
+if [ $ngx_found = no ]; then
+
+# Homebrew on Apple Silicon
+
+ngx_feature="GeoIP library in /opt/homebrew/"
+ngx_feature_path="/opt/homebrew/include"
+
+if [ $NGX_RPATH = YES ]; then
+ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lGeoIP"
+else
+ngx_feature_libs="-L/opt/homebrew/lib -lGeoIP"
+fi
+
+. auto/feature
+fi
+
+
 if [ $ngx_found = yes ]; then
 
 CORE_INCS="$CORE_INCS $ngx_feature_path"
diff -r 427aa785edf8 -r 398495d816f0 auto/lib/google-perftools/conf
--- a/auto/lib/google-perftools/confWed Mar 27 19:36:51 2024 +0400
+++ b/auto/lib/google-perftools/confMon Feb 26 20:00:43 2024 +
@@ -46,6 +46,22 @@ if [ $ngx_found = no ]; then
 fi
 
 
+if [ $ngx_found = no ]; then
+
+# Homebrew on Apple Silicon
+
+ngx_feature="Google perftools in /opt/homebrew/"
+
+if [ $NGX_RPATH = YES ]; then
+ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lprofiler"
+else
+ngx_feature_libs="-L/opt/homebrew/lib -lprofiler"
+fi
+
+. auto/feature
+fi
+
+
 if [ $ngx_found = yes ]; then
 CORE_LIBS="$CORE_LIBS $ngx_feature_libs"
 
diff -r 427aa785edf8 -r 398495d816f0 auto/lib/libgd/conf
--- a/auto/lib/libgd/conf   Wed Mar 27 19:36:51 2024 +0400
+++ b/auto/lib/libgd/conf   Mon Feb 26 20:00:43 2024 +
@@ -65,6 +65,23 @@ if [ $ngx_found = no ]; then
 fi
 
 
+if [ $ngx_found = no ]; then
+
+# Homebrew on Apple Silicon
+
+ngx_feature="GD library in /opt/homebrew/"
+ngx_feature_path="/opt/homebrew/include"
+
+if [ $NGX_RPATH = YES ]; then
+ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lgd"
+else
+ngx_feature_libs="-L/opt/homebrew/lib -lgd"
+fi
+
+. auto/feature
+fi
+
+
 if [ $ngx_found = yes ]; then
 
 CORE_INCS="$CORE_INCS $ngx_feature_path"
diff -r 427aa785edf8 -r 398495d816f0 auto/lib/openssl/conf
--- a/auto/lib/openssl/conf Wed Mar 27 19:36:51 2024 +0400
+++ b/auto/lib/openssl/conf Mon Feb 26 20:00:43 2024 +
@@ -122,6 +122,24 @@ else
 . auto/feature
 fi
 
+if [ $ngx_found = no ]; then
+
+# Homebrew on Apple Silicon
+
+ngx_feature="OpenSSL library in /opt/homebrew/"
+ngx_feature_path="/opt/homebrew/include"
+
+if [ $NGX_RPATH = YES ]; then
+ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib 
-lssl -lcrypto"
+else
+ngx_feature_libs="-L/opt/homebrew/lib -lssl -lcrypto"
+fi
+
+ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD"
+
+. auto/feature
+fi
+
 if [ $ngx_found = yes ]; then
 have=NGX_SSL . auto/have
 CORE_INCS="$CORE_INCS $ngx_feature_path"
diff -r 427aa785edf8 -r 398495d816f0 auto/lib/pcre/conf
--- a/auto/lib/pcre/confWed Mar 27 19:36:51 2024 +0400
+++ b/auto/lib/pcre/confMon Feb 26 20:00:43 2024 +
@@ -182,6 +182,22 @@ else
 . auto/feature
 fi
 
+if [ $ngx_found = no ]; then
+
+# Homebrew on Apple Silicon
+
+ngx_feature="PCRE library in /opt/homebrew/"
+ngx_feature_path="/opt/homebrew/include"
+
+if [ $NGX_RPATH = YES ]; then
+ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib 
-lpcre"
+else
+ngx_feature_libs="-L/opt/homebrew/lib -lpcre"
+fi
+
+    . auto/feature
+fi
+
 if [ $ngx_found = yes ]; then
     CORE_INCS="$CORE_INCS $ngx_feature_path"
 CORE_LIBS="$CORE_LIBS $ngx_feature_libs"
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Configure: set cache line size for more architectures.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/427aa785edf8
branches:  
changeset: 9232:427aa785edf8
user:  Sergey Kandaurov 
date:  Wed Mar 27 19:36:51 2024 +0400
description:
Configure: set cache line size for more architectures.

Based on a patch by Piotr Sikora.

diffstat:

 auto/os/conf |  15 +++
 1 files changed, 15 insertions(+), 0 deletions(-)

diffs (25 lines):

diff -r 61cd12c25878 -r 427aa785edf8 auto/os/conf
--- a/auto/os/conf  Mon Feb 26 20:00:40 2024 +
+++ b/auto/os/conf  Wed Mar 27 19:36:51 2024 +0400
@@ -115,6 +115,21 @@ case "$NGX_MACHINE" in
 NGX_MACH_CACHE_LINE=64
 ;;
 
+ppc64* | powerpc64*)
+have=NGX_ALIGNMENT value=16 . auto/define
+NGX_MACH_CACHE_LINE=128
+;;
+
+riscv64)
+have=NGX_ALIGNMENT value=16 . auto/define
+NGX_MACH_CACHE_LINE=64
+;;
+
+s390x)
+have=NGX_ALIGNMENT value=16 . auto/define
+NGX_MACH_CACHE_LINE=256
+;;
+
 *)
 have=NGX_ALIGNMENT value=16 . auto/define
 NGX_MACH_CACHE_LINE=32
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Detect cache line size at runtime on macOS.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/61cd12c25878
branches:  
changeset: 9231:61cd12c25878
user:  Piotr Sikora 
date:  Mon Feb 26 20:00:40 2024 +
description:
Detect cache line size at runtime on macOS.

Notably, Apple Silicon CPUs have 128 byte cache line size,
which is twice the default configured for generic aarch64.
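
The same detection can be reproduced outside nginx with a standalone sketch
(not nginx code) reading the hw.cachelinesize key the patch adds to the
sysctl table:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int
    main(void)
    {
        int64_t  cls = 0;
        size_t   len = sizeof(cls);

        /* the key the patch reads; reports 128 on Apple Silicon */
        if (sysctlbyname("hw.cachelinesize", &cls, &len, NULL, 0) == 0) {
            printf("cache line size: %lld bytes\n", (long long) cls);
        }

        return 0;
    }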

Signed-off-by: Piotr Sikora 

diffstat:

 src/os/unix/ngx_darwin_init.c |  16 +++-
 src/os/unix/ngx_posix_init.c  |   5 -
 2 files changed, 15 insertions(+), 6 deletions(-)

diffs (55 lines):

diff -r fb989e24c60a -r 61cd12c25878 src/os/unix/ngx_darwin_init.c
--- a/src/os/unix/ngx_darwin_init.c Mon Feb 26 20:00:38 2024 +
+++ b/src/os/unix/ngx_darwin_init.c Mon Feb 26 20:00:40 2024 +
@@ -9,11 +9,12 @@
 #include 
 
 
-charngx_darwin_kern_ostype[16];
-charngx_darwin_kern_osrelease[128];
-int ngx_darwin_hw_ncpu;
-int ngx_darwin_kern_ipc_somaxconn;
-u_long  ngx_darwin_net_inet_tcp_sendspace;
+char ngx_darwin_kern_ostype[16];
+char ngx_darwin_kern_osrelease[128];
+int  ngx_darwin_hw_ncpu;
+int  ngx_darwin_kern_ipc_somaxconn;
+u_long   ngx_darwin_net_inet_tcp_sendspace;
+int64_t  ngx_darwin_hw_cachelinesize;
 
 ngx_uint_t  ngx_debug_malloc;
 
@@ -56,6 +57,10 @@ sysctl_t sysctls[] = {
   _darwin_kern_ipc_somaxconn,
   sizeof(ngx_darwin_kern_ipc_somaxconn), 0 },
 
+{ "hw.cachelinesize",
+  _darwin_hw_cachelinesize,
+  sizeof(ngx_darwin_hw_cachelinesize), 0 },
+
 { NULL, NULL, 0, 0 }
 };
 
@@ -155,6 +160,7 @@ ngx_os_specific_init(ngx_log_t *log)
 return NGX_ERROR;
 }
 
+ngx_cacheline_size = ngx_darwin_hw_cachelinesize;
 ngx_ncpu = ngx_darwin_hw_ncpu;
 
 if (ngx_darwin_kern_ipc_somaxconn > 32767) {
diff -r fb989e24c60a -r 61cd12c25878 src/os/unix/ngx_posix_init.c
--- a/src/os/unix/ngx_posix_init.c  Mon Feb 26 20:00:38 2024 +
+++ b/src/os/unix/ngx_posix_init.c  Mon Feb 26 20:00:40 2024 +
@@ -51,7 +51,10 @@ ngx_os_init(ngx_log_t *log)
 }
 
 ngx_pagesize = getpagesize();
-ngx_cacheline_size = NGX_CPU_CACHE_LINE;
+
+if (ngx_cacheline_size == 0) {
+ngx_cacheline_size = NGX_CPU_CACHE_LINE;
+}
 
 for (n = ngx_pagesize; n >>= 1; ngx_pagesize_shift++) { /* void */ }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Configure: fixed Linux crypt_r() test to add libcrypt.

2024-03-27 Thread Sergey Kandaurov
details:   https://hg.nginx.org/nginx/rev/fb989e24c60a
branches:  
changeset: 9230:fb989e24c60a
user:  Sergey Kandaurov 
date:  Mon Feb 26 20:00:38 2024 +
description:
Configure: fixed Linux crypt_r() test to add libcrypt.

Previously, the resulting binary was successfully linked
because libcrypt was added in a separate test for crypt().

Patch by Piotr Sikora.
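
A minimal use of the interface the feature test probes (glibc-specific;
_GNU_SOURCE is required for the declaration); compiling it standalone needs
the same -lcrypt that CRYPT_LIB now carries:

    #define _GNU_SOURCE
    #include <crypt.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct crypt_data   cd = { 0 };
        char               *hash;

        /* reentrant variant of crypt(); linking needs -lcrypt */
        hash = crypt_r("key", "salt", &cd);
        printf("%s\n", hash ? hash : "(failed)");
        return 0;
    }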

diffstat:

 auto/os/linux |  4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diffs (14 lines):

diff -r 000e2ded0a51 -r fb989e24c60a auto/os/linux
--- a/auto/os/linux Mon Feb 26 20:00:35 2024 +
+++ b/auto/os/linux Mon Feb 26 20:00:38 2024 +
@@ -228,6 +228,10 @@ ngx_feature_test="struct crypt_data  cd;
   crypt_r(\"key\", \"salt\", );"
 . auto/feature
 
+if [ $ngx_found = yes ]; then
+CRYPT_LIB="-lcrypt"
+fi
+
 
 ngx_include="sys/vfs.h"; . auto/include
 
_______
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel

