Re: Need help on OAuth 2.0 token with Nginx reverse proxy

2019-07-30 Thread blason
Here are the log entries I am seeing in access.log:

1.2.3.4 - - [31/Jul/2019:10:07:58 +0530] "POST /connect/token HTTP/1.1" 400
80 "https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"
1.2.3.4 - - [31/Jul/2019:10:07:58 +0530] "POST
/AdsvaluAPI/api/Authentication/UpdateLoginAttemptFailed HTTP/1.1" 201 132
"https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"
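
To narrow down whether the 400 on /connect/token comes from the backend or
from nginx itself, I am thinking of logging the upstream details as well;
a sketch of what I mean (the format name and log path are just placeholders,
$upstream_addr and $upstream_status are the standard upstream-module
variables):

# log_format goes in the http{} block
log_format token_debug '$remote_addr [$time_local] "$request" $status '
                       'upstream=$upstream_addr upstream_status=$upstream_status';

# access_log goes in the server{} block, next to the existing one
access_log /var/log/nginx/test/token_debug.log token_debug;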

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285048,285050#msg-285050



Re: Need help on OAuth 2.0 token with Nginx reverse proxy

2019-07-30 Thread blason
blason Wrote:
---
> Hi Folks,
> 
> I am trying to set up a reverse proxy on nginx in front of a backend
> server, and from the HAR file I understand it uses an OAuth 2.0 token
> obtained with a POST request.
> 
> However, I am unable to get this working and am seeking help here.
> 
> Assume my original (backend) server is
> 
> https://test.example.net:9084
> 
> For OAuth, the HAR file shows the token request going to
> https://test.example.net:99/connect/token
> 
> Here is my config
> *
> server {
>     listen 443 ssl;
>     listen 8084;
>     listen 88;
>     server_name test.example.net;
>     ssl_protocols TLSv1.1 TLSv1.2;
>     ssl_certificate     /etc/nginx/certs/star_.com.crt;
>     ssl_certificate_key /etc/nginx/certs/server.key;
>     ssl on;
>     ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
>     gzip on;
>     gzip_proxied any;
>     gzip_types text/plain text/xml text/css application/x-javascript;
>     gzip_vary on;
>     gzip_comp_level 6;
>     gzip_buffers 16 8k;
>     gzip_http_version 1.1;
>     gzip_min_length 256;
>     gzip_disable "MSIE [1-6]\.(?!.*SV1)";
>     ssl_prefer_server_ciphers on;
>     ssl_session_cache shared:SSL:10m;
>     access_log /var/log/nginx/test/access.log;
>     error_log /var/log/nginx/test/error.log;
>
>     location / {
>         proxy_pass https://test.example.net:9084;
>         proxy_redirect https://test.example.net:99/ /;
>         client_max_body_size 10m;
>         client_body_buffer_size 128k;
>         #proxy_redirect off;
>         proxy_send_timeout 90;
>         proxy_read_timeout 90;
>         proxy_buffer_size 128k;
>         proxy_buffers 4 256k;
>         proxy_busy_buffers_size 256k;
>         proxy_temp_file_write_size 256k;
>         proxy_connect_timeout 30s;
>         proxy_set_header Host $host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-Proto $scheme;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
>         add_header X-Content-Type-Options nosniff;
>         add_header X-XSS-Protection "1; mode=block";
>         add_header Referrer-Policy "no-referrer-when-downgrade";
>         add_header X-Frame-Options "SAMEORIGIN" always;
>     }
> }

Here are the request and response headers from the HAR file:

Response Headers

Date: Tue, 30 Jul 2019 07:56:26 GMT
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Connection: keep-alive
Content-Length: 919
X-XSS-Protection: 1; mode=block
Pragma: no-cache
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Frame-Options: SAMEORIGIN
Access-Control-Allow-Methods: *
Content-Type: application/json; charset=utf-8
Access-Control-Allow-Origin: *
Cache-Control: no-store, no-cache, max-age=0, private
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept

Request Headers

Accept: application/json, text/plain, */*
Referer: https://test.example.net/
Origin: https://test.example.net
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36
Content-Type: application/x-www-form-urlencoded

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285048,285049#msg-285049



Need help on OAuth 2.0 token with Nginx reverse proxy

2019-07-30 Thread blason
Hi Folks,

I am trying to set up a reverse proxy on nginx in front of a backend server,
and from the HAR file I understand it uses an OAuth 2.0 token obtained with a
POST request.

However, I am unable to get this working and am seeking help here.

Assume my original (backend) server is

https://test.example.net:9084

For OAuth, the HAR file shows the token request going to
https://test.example.net:99/connect/token

Here is my config
*
server {
    listen 443 ssl;
    listen 8084;
    listen 88;
    server_name test.example.net;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_certificate     /etc/nginx/certs/star_.com.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    gzip on;
    gzip_proxied any;
    gzip_types text/plain text/xml text/css application/x-javascript;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    access_log /var/log/nginx/test/access.log;
    error_log /var/log/nginx/test/error.log;

    location / {
        proxy_pass https://test.example.net:9084;
        proxy_redirect https://test.example.net:99/ /;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        #proxy_redirect off;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_connect_timeout 30s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Referrer-Policy "no-referrer-when-downgrade";
        add_header X-Frame-Options "SAMEORIGIN" always;
    }
}
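
From the HAR it looks as if the token request goes to port 99 rather than
9084, so I suspect I need a separate location for it. This is the direction I
have been thinking of (just an untested sketch, assuming the token endpoint
really is served by the same host on port 99):

location /connect/token {
    proxy_pass https://test.example.net:99;

    # pass the original host and client details to the backend
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
}

Is that the right direction, or should the token endpoint be handled
differently behind the proxy?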

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285048,285048#msg-285048



handling cookie

2019-07-30 Thread Kuroishi Mitsuo


Hi,

I'm developing a module for nginx that handles the Cookie header.

It's a bit awkward, though: the Cookie header sometimes contains the
same key name more than once. For example,

  Cookie: a=xxx; a=yyy

Currently I use ngx_http_parse_multi_header_lines() like below.

  ngx_str_t buf;
  ngx_str_t key = ngx_string("a");

  ngx_http_parse_multi_header_lines(&r->headers_in.cookies, &key, &buf);

But the function only seems to return the first value.

Is there any way to get the second value as well? Any idea is welcome.
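
For now I am considering just scanning the Cookie header lines myself. Below
is an untested sketch of what I mean; it assumes the current layout where
r->headers_in.cookies is an ngx_array_t of ngx_table_elt_t *, and the helper
name is mine:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/*
 * Untested sketch: collect every value of the cookie named *name*,
 * including duplicates such as "a=xxx; a=yyy", by scanning each Cookie
 * header line manually.  Values are pushed as ngx_str_t into *out*,
 * which the caller creates with
 * ngx_array_create(r->pool, 4, sizeof(ngx_str_t)).
 */
static ngx_int_t
my_cookie_all_values(ngx_http_request_t *r, ngx_str_t *name, ngx_array_t *out)
{
    u_char            *p, *last, *start, *end;
    ngx_str_t         *val;
    ngx_uint_t         i;
    ngx_table_elt_t  **h;

    h = r->headers_in.cookies.elts;

    for (i = 0; i < r->headers_in.cookies.nelts; i++) {

        p = h[i]->value.data;
        last = p + h[i]->value.len;

        while (p < last) {

            /* skip separators and spaces between "key=value" pairs */
            while (p < last && (*p == ';' || *p == ' ')) {
                p++;
            }

            start = p;

            /* find the end of this "key=value" pair */
            while (p < last && *p != ';') {
                p++;
            }

            end = p;

            if ((size_t) (end - start) > name->len
                && ngx_strncasecmp(start, name->data, name->len) == 0
                && start[name->len] == '=')
            {
                val = ngx_array_push(out);
                if (val == NULL) {
                    return NGX_ERROR;
                }

                val->data = start + name->len + 1;
                val->len = end - val->data;
            }
        }
    }

    return NGX_OK;
}

If there is an existing API that already does this, I would of course prefer
to use it.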

Thanks in advance.

-- 
Kuroishi Mitsuo


Why 301 permanent redirect with appended slash?

2019-07-30 Thread J. Lewis Muir
Hello, all!

I have a minimal nginx.conf with one server block that sets the root
directory and one location with a prefix string of "/foo/", and for a
request of "/foo", it returns a 301 permanent redirect to "/foo/".  Why?
I expected it to return 404 or similar.  I also tried a prefix string of
"/foo", but that also results in the same 301.

Here's the server block (entire nginx.conf at end of message):


server {
    listen      127.0.0.1:80;
    listen      [::1]:80;
    server_name localhost "" 127.0.0.1 [::1];
    root        /srv/www/localhost;

    location /foo/ {
    }
}


And here's the curl invocation:


$ curl -I 'http://localhost/foo'
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Tue, 30 Jul 2019 21:54:44 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost/foo/
Connection: keep-alive



I've read in

  https://nginx.org/en/docs/http/ngx_http_core_module.html#location

where it says

  If a location is defined by a prefix string that ends with the
  slash character, and requests are processed by one of proxy_pass,
  fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass,
  then the special processing is performed. In response to a request
  with URI equal to this string, but without the trailing slash, a
  permanent redirect with the code 301 will be returned to the requested
  URI with the slash appended.

But in my case, I don't believe the request is being processed by any of
those *_pass directives.
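
For reference, my understanding is that the quoted paragraph describes
configurations along these lines (a sketch, not my actual config), where the
location is handled by one of the *_pass directives:

location /foo/ {
    # per the quoted docs, a request for "/foo" (no trailing slash)
    # would get a 301 redirect to "/foo/" here
    proxy_pass http://127.0.0.1:8080;
}

My location block, by contrast, has no handler directive at all.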

Thank you!

Lewis

 Complete nginx.conf 
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen      127.0.0.1:80;
        listen      [::1]:80;
        server_name localhost "" 127.0.0.1 [::1];
        root        /srv/www/localhost;

        location /foo/ {
        }
    }
}



Implicit root location?

2019-07-30 Thread J. Lewis Muir
Hello, all!

I have a minimal nginx.conf with one server block that sets the root
directory but has *no* location directives, yet for a request of "/", it
serves "/index.html".  Why?  With no locations specified, I expected it
to return 404 or similar for any request.

Here's the server block (entire nginx.conf at end of message):


server {
    listen      127.0.0.1:80;
    listen      [::1]:80;
    server_name localhost "" 127.0.0.1 [::1];
    root        /srv/www/localhost;
}


Here's the contents of /srv/www/localhost:


$ ls -al /srv/www/localhost
total 4
drwxr-xr-x. 2 root root  24 Jul 30 15:50 .
drwxr-xr-x. 3 root root  23 Jun 26 21:34 ..
-rw-r--r--. 1 root root 140 Jun 26 22:22 index.html


And here's the curl invocation:


$ curl 'http://localhost/'
[HTML of /srv/www/localhost/index.html is returned: a small page whose title
and body text are both "localhost"]

I know that the default index directive is


index index.html;


That explains how it knows to try index.html, but what makes it try
the root when there are no location directives?  Is there an implicit
location directive?
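
For comparison, this is what I assumed would be the required minimum before
anything would be served, i.e. at least one explicit (even if empty) location:

server {
    listen      127.0.0.1:80;
    listen      [::1]:80;
    server_name localhost "" 127.0.0.1 [::1];
    root        /srv/www/localhost;

    # my assumption: an explicit location would be needed before "/" is served
    location / {
    }
}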

There is no default listed for the location directive:

  https://nginx.org/en/docs/http/ngx_http_core_module.html#location

And I couldn't find this behavior stated in "How nginx processes a
request:"

  https://nginx.org/en/docs/http/request_processing.html

Thank you!

Lewis

 Complete nginx.conf 
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen      127.0.0.1:80;
        listen      [::1]:80;
        server_name localhost "" 127.0.0.1 [::1];
        root        /srv/www/localhost;
    }
}



[njs] Refactored usage of njs_ret_t.

2019-07-30 Thread Dmitry Volyntsev
details:   https://hg.nginx.org/njs/rev/f4ac8168e856
branches:  
changeset: 1087:f4ac8168e856
user:  Dmitry Volyntsev 
date:  Tue Jul 30 21:12:08 2019 +0300
description:
Refactored usage of njs_ret_t.

Currently njs_ret_t is used in 2 different cases: as a jump offset for
bytecode and as a return value for ordinary functions. The second case
is quite similar to njs_int_t (and is often confused with it).

1) Splitting these two cases into different types to avoid
confusion with njs_int_t.

2) Renaming njs_ret_t to njs_jump_off_t to better reflect its
purpose.

NO functional changes.

diffstat:

 nginx/ngx_http_js_module.c   |  132 ++--
 nginx/ngx_stream_js_module.c |   74 
 src/njs.h|   31 +++---
 src/njs_array.c  |   92 +-
 src/njs_array.h  |8 +-
 src/njs_boolean.c|6 +-
 src/njs_boolean.h|2 +-
 src/njs_builtin.c|   14 +-
 src/njs_crypto.c |   26 ++--
 src/njs_crypto.h |4 +-
 src/njs_date.c   |   88 +-
 src/njs_date.h   |4 +-
 src/njs_error.c  |   30 +++---
 src/njs_error.h  |   20 ++--
 src/njs_extern.c |2 +-
 src/njs_fs.c |   44 
 src/njs_function.c   |   46 
 src/njs_function.h   |   26 ++--
 src/njs_generator.c  |   52 +-
 src/njs_json.c   |   48 +-
 src/njs_math.c   |   76 
 src/njs_module.c |3 +-
 src/njs_module.h |2 +-
 src/njs_number.c |   26 ++--
 src/njs_number.h |   12 +-
 src/njs_object.c |   84 +-
 src/njs_object.h |   24 ++--
 src/njs_object_property.c|   50 +-
 src/njs_parser.c |   28 +++---
 src/njs_parser.h |2 +-
 src/njs_parser_terminal.c|   12 +-
 src/njs_regexp.c |   42 
 src/njs_regexp.h |8 +-
 src/njs_shell.c  |   22 ++--
 src/njs_string.c |  198 +-
 src/njs_string.h |   32 +++---
 src/njs_timer.c  |   10 +-
 src/njs_timer.h  |6 +-
 src/njs_value.c  |4 +-
 src/njs_value.h  |   16 +-
 src/njs_variable.c   |   22 ++--
 src/njs_variable.h   |8 +-
 src/njs_vm.c |   26 ++--
 src/njs_vm.h |2 +-
 src/njs_vmcode.c |  168 ++--
 src/njs_vmcode.h |   27 +++--
 src/test/njs_unit_test.c |   30 +++---
 47 files changed, 847 insertions(+), 842 deletions(-)

diffs (truncated from 6313 to 1000 lines):

diff -r 8b01e5bbbd16 -r f4ac8168e856 nginx/ngx_http_js_module.c
--- a/nginx/ngx_http_js_module.cTue Jul 30 20:11:46 2019 +0300
+++ b/nginx/ngx_http_js_module.cTue Jul 30 21:12:08 2019 +0300
@@ -60,76 +60,76 @@ static ngx_int_t ngx_http_js_init_vm(ngx
 static void ngx_http_js_cleanup_ctx(void *data);
 static void ngx_http_js_cleanup_vm(void *data);
 
-static njs_ret_t ngx_http_js_ext_get_string(njs_vm_t *vm, njs_value_t *value,
+static njs_int_t ngx_http_js_ext_get_string(njs_vm_t *vm, njs_value_t *value,
 void *obj, uintptr_t data);
-static njs_ret_t ngx_http_js_ext_foreach_header(njs_vm_t *vm, void *obj,
+static njs_int_t ngx_http_js_ext_foreach_header(njs_vm_t *vm, void *obj,
 void *next, uintptr_t data);
-static njs_ret_t ngx_http_js_ext_next_header(njs_vm_t *vm, njs_value_t *value,
+static njs_int_t ngx_http_js_ext_next_header(njs_vm_t *vm, njs_value_t *value,
 void *obj, void *next);
 static ngx_table_elt_t *ngx_http_js_get_header(ngx_list_part_t *part,
 u_char *data, size_t len);
-static njs_ret_t ngx_http_js_ext_get_header_out(njs_vm_t *vm,
+static njs_int_t ngx_http_js_ext_get_header_out(njs_vm_t *vm,
 njs_value_t *value, void *obj, uintptr_t data);
-static njs_ret_t ngx_http_js_ext_set_header_out(njs_vm_t *vm, void *obj,
+static njs_int_t ngx_http_js_ext_set_header_out(njs_vm_t *vm, void *obj,
 uintptr_t data, njs_str_t *value);
-static njs_ret_t ngx_http_js_ext_delete_header_out(njs_vm_t *vm, void *obj,
+static njs_int_t ngx_http_js_ext_delete_header_out(njs_vm_t *vm, void *obj,
 uintptr_t data, njs_bool_t delete);
-static njs_ret_t ngx_http_js_ext_foreach_header_out(njs_vm_t *vm, void *obj,
+static njs_int_t ngx_http_js_ext_foreach_header_out(njs_vm_t *vm, void *obj,
 void *next); /*FIXME*/
-static njs_ret_t ngx_http_js_ext_get_status(njs_vm_t *vm, njs_value_t *value,
+static njs_int_t ngx_http_js_ext_get_status(njs_vm_t *vm, njs_value_t *value,
 void *obj, uintptr_t data);
-static njs_ret_t ngx_http_js_ext_set_status(njs_vm_t *vm, void *obj,
+static njs_int_t ngx_http_js_ext_set_status(njs_vm_t *vm, void *obj,
 uintptr_t data, njs_str_t *value);

[njs] Refactored file hierarchy.

2019-07-30 Thread Dmitry Volyntsev
details:   https://hg.nginx.org/njs/rev/8b01e5bbbd16
branches:  
changeset: 1086:8b01e5bbbd16
user:  Dmitry Volyntsev 
date:  Tue Jul 30 20:11:46 2019 +0300
description:
Refactored file hierarchy.

1) all source files are moved to src directory.
2) nxt files are renamed with "njs" prefix.
3) some files are renamed to avoid collisions:
nxt_array.c -> njs_arr.c
nxt_array.h -> njs_arr.h
nxt_string.h -> njs_str.h
nxt_time.c -> njs_time.c
nxt_time.h -> njs_time.h
njs_time.c -> njs_timer.c
njs_time.h -> njs_timer.h
njs_core.h -> njs_main.h
4) C tests are moved to src/test dir.
5) Other tests are moved to test dir.
6) Some structs are renamed to avoid collisions:
nxt_array_t -> njs_arr_t
nxt_string_t -> njs_str_t

appropriate functions and macros are also renamed.

7) all macros, functions and other identifiers with "NXT_" and "nxt_"
prefixes are renamed to corresponding "NJS_" or "njs_" prefix.

NO functional changes.

diffstat:

 auto/clang   |266 +-
 auto/define  |  6 +-
 auto/deps| 20 +-
 auto/expect  | 20 +-
 auto/explicit_bzero  | 20 +-
 auto/feature | 86 +-
 auto/getrandom   | 36 +-
 auto/make|256 +-
 auto/memalign| 26 +-
 auto/os  | 32 +-
 auto/pcre| 26 +-
 auto/readline| 54 +-
 auto/sources |121 +-
 auto/time| 46 +-
 configure| 34 +-
 nginx/config |  4 +-
 nginx/ngx_http_js_module.c   |220 +-
 nginx/ngx_stream_js_module.c |154 +-
 njs/njs.h|307 -
 njs/njs_array.c  |   2211 
 njs/njs_array.h  | 32 -
 njs/njs_boolean.c|166 -
 njs/njs_boolean.h| 18 -
 njs/njs_builtin.c|   1355 --
 njs/njs_builtin.h| 17 -
 njs/njs_core.h   | 56 -
 njs/njs_crypto.c |714 -
 njs/njs_crypto.h | 24 -
 njs/njs_date.c   |   2336 
 njs/njs_date.h   | 22 -
 njs/njs_disassembler.c   |473 -
 njs/njs_error.c  |942 -
 njs/njs_error.h  | 88 -
 njs/njs_event.c  | 97 -
 njs/njs_event.h  | 42 -
 njs/njs_extern.c |412 -
 njs/njs_extern.h | 54 -
 njs/njs_fs.c |   1078 --
 njs/njs_fs.h | 13 -
 njs/njs_function.c   |   1264 --
 njs/njs_function.h   |222 -
 njs/njs_generator.c  |   3336 --
 njs/njs_generator.h  | 34 -
 njs/njs_json.c   |   2586 -
 njs/njs_json.h   | 14 -
 njs/njs_lexer.c  |848 -
 njs/njs_lexer.h  |266 -
 njs/njs_lexer_keyword.c  |195 -
 njs/njs_math.c   |   1167 --
 njs/njs_math.h   | 17 -
 njs/njs_module.c |547 -
 njs/njs_module.h | 30 -
 njs/njs_number.c |900 -
 njs/njs_number.h |199 -
 njs/njs_object.c |   2293 
 njs/njs_object.h |148 -
 njs/njs_object_hash.h|288 -
 njs/njs_object_property.c|   1405 --
 njs/njs_parser.c |   2371 
 njs/njs_parser.h |277 -
 njs/njs_parser_expression.c  |   1019 --
 njs/njs_parser_terminal.c|   1316 --
 njs/njs_regexp.c |   1239 --
 njs/njs_regexp.h | 41 -
 njs/njs_regexp_pattern.h | 48 -
 njs/njs_shell.c  |   1239 --
 njs/njs_string.c |   4957 -
 njs/njs_string.h |209 -
 njs/njs_time.c   |150 -
 njs/njs_time.h   | 23 -
 njs/njs_value.c  |454 -
 njs/njs_value.h  |838 -
 njs/njs_variable.c   |673 -
 njs/njs_variable.h   | 77 -
 njs/njs_vm.c  

Re: Crash in mail module during SMTP setup

2019-07-30 Thread Maxim Dounin
Hello!

On Tue, Jul 30, 2019 at 06:32:43PM +0300, Maxim Dounin wrote:

> Hello!
> 
> On Tue, Jul 30, 2019 at 10:39:56PM +1000, Rob N ★ wrote:
> 
> > On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote:
> > > Looking at "p *c" and "p *s" might be also interesting.
> > 
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40)
> >  at src/mail/ngx_mail_smtp_handler.c:215
> > 215 ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0,
> > 
> > (gdb) p *c
> > $14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712,
> >  recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90,
> >  listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0,
> >  type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0,
> >  data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0,
> >  data = 0x54eb79  
> > "UH\211\345H\203\354@H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P@H\213u\310H\213E\320H\211\321\272\234\064z"},
> >  proxy_protocol_port = 53344,
> >  ssl = 0x484cb1 , udp = 0x2018d20,
> >  local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438,
> >  queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712,
> >  requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0,
> >  error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0,
> >  sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0,
> >  need_last_buf = 0}
> 
> It looks like "c" points to garbage.  
> 
> > 
> > (gdb) p *s
> > $15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35,
> 
> Signature should be 0x4C49414D ("MAIL") == 1279869261, so this 
> looks like garbage too.  And this explains why "c" points to 
> garbage.
> 
> >  data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 
> > smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 
> > 7100\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN 
> > LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, 
> > main_conf = 0x2015218,
> 
> Except there are some seemingly valid fields - it looks like 
> s->out is set to sscf->greeting.  So it looks like this might be 
> an already closed and partially overwritten session.
> 
> Given that "s->out = sscf->greeting;" is expected to happen after 
> client address resolution, likely this is a duplicate handler call 
> from the resolver.
> 
> I think I see the problem - when using SMTP with SSL and resolver, 
> read events might be enabled during address resolving, leading to 
> duplicate ngx_mail_ssl_handshake_handler() calls if something 
> arrives from the client, and duplicate session initialization - 
> including starting another resolving.
> 
> The following patch should resolve this:
> 
> # HG changeset patch
> # User Maxim Dounin 
> # Date 1564500680 -10800
> #  Tue Jul 30 18:31:20 2019 +0300
> # Node ID 63604bfd60a09c7c91ce62c89df468a6e54d2f1c
> # Parent  e7181cfe9212de7f67df805bb746519c059b490b
> Mail: fixed duplicate resolving.
> 
> When using SMTP with SSL and resolver, read events might be enabled
> during address resolving, leading to duplicate 
> ngx_mail_ssl_handshake_handler()
> calls if something arrives from the client, and duplicate session
> initialization - including starting another resolving.  This can lead
> to a segmentation fault if the session is closed after first resolving
> finished.  Fix is to block read events while resolving.
> 
> Reported by Robert Norris,
> http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html.
> 
> diff --git a/src/mail/ngx_mail_smtp_handler.c 
> b/src/mail/ngx_mail_smtp_handler.c
> --- a/src/mail/ngx_mail_smtp_handler.c
> +++ b/src/mail/ngx_mail_smtp_handler.c
> @@ -15,6 +15,7 @@
>  static void ngx_mail_smtp_resolve_addr_handler(ngx_resolver_ctx_t *ctx);
>  static void ngx_mail_smtp_resolve_name(ngx_event_t *rev);
>  static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx);
> +static void ngx_mail_smtp_block_reading(ngx_event_t *rev);
>  static void ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t 
> *c);
>  static void ngx_mail_smtp_invalid_pipelining(ngx_event_t *rev);
>  static ngx_int_t ngx_mail_smtp_create_buffer(ngx_mail_session_t *s,
> @@ -91,6 +92,9 @@ ngx_mail_smtp_init_session(ngx_mail_sess
>  if (ngx_resolve_addr(ctx) != NGX_OK) {
>  ngx_mail_close_connection(c);
>  }
> +
> +s->resolver_ctx = ctx;
> +c->read->handler = ngx_mail_smtp_block_reading;
>  }
>  
>  
> @@ -172,6 +176,9 @@ ngx_mail_smtp_resolve_name(ngx_event_t *
>  if (ngx_resolve_name(ctx) != NGX_OK) {
>  ngx_mail_close_connection(c);
>  }
> +
> +s->resolver_ctx = ctx;
> +c->read->handler = ngx_mail_smtp_block_reading;
>  }

Err, this should be before ngx_resolve_addr()/ngx_resolve_name().
Updated patch:

# HG changeset patch
# User Maxim Dounin 
# Date 1564502955 -10800
#  Tue Jul 30 19:09:15 

Re: Crash in mail module during SMTP setup

2019-07-30 Thread Maxim Dounin
Hello!

On Tue, Jul 30, 2019 at 10:39:56PM +1000, Rob N ★ wrote:

> On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote:
> > Looking at "p *c" and "p *s" might be also interesting.
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40)
>  at src/mail/ngx_mail_smtp_handler.c:215
> 215 ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0,
> 
> (gdb) p *c
> $14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712,
>  recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90,
>  listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0,
>  type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0,
>  data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0,
>  data = 0x54eb79  
> "UH\211\345H\203\354@H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P@H\213u\310H\213E\320H\211\321\272\234\064z"},
>  proxy_protocol_port = 53344,
>  ssl = 0x484cb1 , udp = 0x2018d20,
>  local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438,
>  queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712,
>  requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0,
>  error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0,
>  sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0,
>  need_last_buf = 0}

It looks like "c" points to garbage.  

> 
> (gdb) p *s
> $15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35,

Signature should be 0x4C49414D ("MAIL") == 1279869261, so this 
looks like garbage too.  And this explains why "c" points to 
garbage.

>  data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 
> smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 
> 7100\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN 
> LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, 
> main_conf = 0x2015218,

Except there are some seemingly valid fields - it looks like 
s->out is set to sscf->greeting.  So it looks like this might be 
an already closed and partially overwritten session.

Given that "s->out = sscf->greeting;" is expected to happen after 
client address resolution, likely this is a duplicate handler call 
from the resolver.

I think I see the problem - when using SMTP with SSL and resolver, 
read events might be enabled during address resolving, leading to 
duplicate ngx_mail_ssl_handshake_handler() calls if something 
arrives from the client, and duplicate session initialization - 
including starting another resolving.

The following patch should resolve this:

# HG changeset patch
# User Maxim Dounin 
# Date 1564500680 -10800
#  Tue Jul 30 18:31:20 2019 +0300
# Node ID 63604bfd60a09c7c91ce62c89df468a6e54d2f1c
# Parent  e7181cfe9212de7f67df805bb746519c059b490b
Mail: fixed duplicate resolving.

When using SMTP with SSL and resolver, read events might be enabled
during address resolving, leading to duplicate ngx_mail_ssl_handshake_handler()
calls if something arrives from the client, and duplicate session
initialization - including starting another resolving.  This can lead
to a segmentation fault if the session is closed after first resolving
finished.  Fix is to block read events while resolving.

Reported by Robert Norris,
http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html.

diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c
--- a/src/mail/ngx_mail_smtp_handler.c
+++ b/src/mail/ngx_mail_smtp_handler.c
@@ -15,6 +15,7 @@
 static void ngx_mail_smtp_resolve_addr_handler(ngx_resolver_ctx_t *ctx);
 static void ngx_mail_smtp_resolve_name(ngx_event_t *rev);
 static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx);
+static void ngx_mail_smtp_block_reading(ngx_event_t *rev);
 static void ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c);
 static void ngx_mail_smtp_invalid_pipelining(ngx_event_t *rev);
 static ngx_int_t ngx_mail_smtp_create_buffer(ngx_mail_session_t *s,
@@ -91,6 +92,9 @@ ngx_mail_smtp_init_session(ngx_mail_sess
 if (ngx_resolve_addr(ctx) != NGX_OK) {
 ngx_mail_close_connection(c);
 }
+
+s->resolver_ctx = ctx;
+c->read->handler = ngx_mail_smtp_block_reading;
 }
 
 
@@ -172,6 +176,9 @@ ngx_mail_smtp_resolve_name(ngx_event_t *
 if (ngx_resolve_name(ctx) != NGX_OK) {
 ngx_mail_close_connection(c);
 }
+
+s->resolver_ctx = ctx;
+c->read->handler = ngx_mail_smtp_block_reading;
 }
 
 
@@ -239,6 +246,38 @@ found:
 
 
 static void
+ngx_mail_smtp_block_reading(ngx_event_t *rev)
+{
+ngx_connection_t*c;
+ngx_mail_session_t  *s;
+ngx_resolver_ctx_t  *ctx;
+
+c = rev->data;
+s = c->data;
+
+ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0,
+   "smtp reading blocked");
+
+if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+if (s->resolver_ctx) {
+ctx = s->resolver_ctx;
+
+if 

Re: Resident memory not released

2019-07-30 Thread fredr
Maxim Dounin Wrote:
---
> 
> Whether or not allocated (and then freed) memory will be returned 
> to the OS depends mostly on your system allocator and its 
> settings.

That is very interesting! I had no idea, thanks!


Maxim Dounin Wrote:
---
> On Linux with standard glibc allocator, consider tuning 
> MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ environment 
> variables, as documented here:
> 
> http://man7.org/linux/man-pages/man3/mallopt.3.html

I've been playing around with MALLOC_MMAP_THRESHOLD_ and
MALLOC_TRIM_THRESHOLD_ without much success. I noticed that with a low value
for MALLOC_TRIM_THRESHOLD_, nginx would allocate more memory and then, on
disconnect, release about half of it. So a bit of progress, I guess.

I then tried setting MALLOC_CHECK_=1, and that seems to have magically solved
it: when disconnecting the websockets, all memory was reclaimed by the OS. But
I don't understand why; from reading the man page you linked, I thought it
would only trigger some logging of memory-related errors.

I haven't gotten it to work with the kubernetes nginx ingress yet; it seems
the environment variables aren't passed to the nginx processes for some
reason. But I'm working on that.
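
(For reference, outside of kubernetes I am simply setting the variables in the
environment of the nginx master process, which the workers inherit; the values
below are just the ones I happened to test, not recommendations, and the
binary path is only an example:)

$ MALLOC_TRIM_THRESHOLD_=131072 MALLOC_MMAP_THRESHOLD_=131072 /usr/sbin/nginx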

Thanks for your help! 
If anyone knows more about MALLOC_CHECK_, in particular whether it is safe to
set in a production environment, please let me know.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285025,285036#msg-285036



Re: Crash in mail module during SMTP setup

2019-07-30 Thread Rob N ★
On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote:
> Looking at "p *c" and "p *s" might be also interesting.

Program received signal SIGSEGV, Segmentation fault.
0x005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40)
 at src/mail/ngx_mail_smtp_handler.c:215
215 ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0,

(gdb) p *c
$14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712,
 recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90,
 listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0,
 type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0,
 data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0,
 data = 0x54eb79  
"UH\211\345H\203\354@H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P@H\213u\310H\213E\320H\211\321\272\234\064z"},
 proxy_protocol_port = 53344,
 ssl = 0x484cb1 , udp = 0x2018d20,
 local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438,
 queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712,
 requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0,
 error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0,
 sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0,
 need_last_buf = 0}

(gdb) p *s
$15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35,
 data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 
smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 
7100\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN 
LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, 
main_conf = 0x2015218,
 srv_conf = 0x202af60, resolver_ctx = 0x0, proxy = 0x0, mail_state = 0,
 protocol = 2, blocked = 0, quit = 0, quoted = 0, backslash = 0,
 no_sync_literal = 0, starttls = 0, esmtp = 0, auth_method = 0,
 auth_wait = 0, login = {len = 0, data = 0x0}, passwd = {len = 0,
 data = 0x0}, salt = {len = 0, data = 0x0}, tag = {len = 0, data = 0x0},
 tagged_line = {len = 0, data = 0x0}, text = {len = 0, data = 0x0},
 addr_text = 0x20b0768, host = {len = 20,
 data = 0xfb024a8 "aldo-gw.g-service.ru"}, smtp_helo = {len = 0,
 data = 0x0}, smtp_from = {len = 0, data = 0x0}, smtp_to = {len = 0,
 data = 0x0}, cmd = {len = 0, data = 0x0}, command = 0, args = {elts = 0x0,
 nelts = 0, size = 0, nalloc = 0, pool = 0x0}, login_attempt = 0,
 state = 0, cmd_start = 0x0, arg_start = 0x0, arg_end = 0x0,
 literal_len = 384}

> Any changes to nginx code and/or additional modules?

This small patch set (which we've had for years): 
https://github.com/fastmailops/nginx/commits/1.17.2-fastmail

Modules: lua (+luajit), headers_more, ndk, vts (though none of these do anything
with the mail module; I know, they're all in the same binary though).

> Additionally, consider configuring debug logging. Given that it's 
> slow gathering cores, normal debug logging might not be an option, 
> though configuring large enough memory buffer might work, see 
> here:

Working on this!

Rob N.

Re: zero size buf in writer in 1.17.2

2019-07-30 Thread Witold Filipczyk
On Mon, Jul 29, 2019 at 07:48:41PM +0300, Maxim Dounin wrote:
> Hello!
> 
> On Sun, Jul 28, 2019 at 04:32:18PM +0200, Witold Filipczyk wrote:
> 
> > Hi,
> > There is error in log:
> > 2019/07/28 09:46:10 [alert] 2471467#2471467: *407 zero size buf in writer 
> > t:1 r:1 f:0 7F482A259000 7F482A259000-7F482A259000 
> >  0-0 while sending response to client, client: 127.0.0.1, 
> > server: localhost, request: "GET /Skrypty-m.js HTTP/1.1", host: "localhost"
> > 
> > Reproducible at least on two machines.
> 
> [...]
> 
> > Skrypty-m.js in the attachment.
> > The error does not occur in 1.17.1 and earlier.
> 
> Thank you for the report, it seems to be a problem introduced in 
> ac5a741d39cf.  I'm able to reproduce it with the file and gzip 
> configuration provided.
> 
> The following patch should fix this:
> 
> # HG changeset patch
> # User Maxim Dounin 
> # Date 1564415524 -10800
> #  Mon Jul 29 18:52:04 2019 +0300
> # Node ID aff4d33c72d8ee1a986d3e4c8e5c0f3d1b20962f
> # Parent  e7181cfe9212de7f67df805bb746519c059b490b
> Gzip: fixed "zero size buf" alerts after ac5a741d39cf.
> 
> After ac5a741d39cf it is now possible that after zstream.avail_out
> reaches 0 and we allocate additional buffer, there will be no more data
> to put into this buffer, triggering "zero size buf" alert.
> 
> Fix is to avoid allocating additional buffer in this case, by checking
> if last deflate() call returned Z_STREAM_END.
> 
> diff --git a/src/http/modules/ngx_http_gzip_filter_module.c 
> b/src/http/modules/ngx_http_gzip_filter_module.c
> --- a/src/http/modules/ngx_http_gzip_filter_module.c
> +++ b/src/http/modules/ngx_http_gzip_filter_module.c
> @@ -778,7 +778,7 @@ ngx_http_gzip_filter_deflate(ngx_http_re
>  
>  ctx->out_buf->last = ctx->zstream.next_out;
>  
> -if (ctx->zstream.avail_out == 0) {
> +if (ctx->zstream.avail_out == 0 && rc != Z_STREAM_END) {
>  
>  /* zlib wants to output some more gzipped data */
>  

No error so far, thanks.