Re: [1.9.6] One of haproxy processes using 100% CPU

2019-04-05 Thread Olivier Houchard
Hi Maciej,

On Fri, Apr 05, 2019 at 01:33:29PM +0200, Maciej Zdeb wrote:
> I think I found something; please look at session 0x2110edc0 and its
> "calls" counter, it never stops growing:
> 
> socat /run/haproxy/haproxy2.sock - <<< "show sess all"
> 0x2110edc0: [05/Apr/2019:12:03:32.141927] id=14505 proto=tcpv4
> source=S.S.S.S:52414
>   flags=0x4504e, conn_retries=3, srv_conn=0xe2e7fb0, pend_pos=(nil)
> waiting=0
>   frontend=dev_metrics_5044_front (id=1015 mode=tcp), listener=? (id=2)
> addr=B.B.B.B:5044
>   backend=B.B.B.B:5044_back (id=1016 mode=tcp) addr=X.X.X.X:16222
>   server=slot_5_1 (id=34) addr=N.N.N.N:31728
>   task=0x21105f90 (state=0x00 nice=0 calls=-1222180819 exp= tmask=0x1
> age=1h6m)
>   si[0]=0x2110f058 (state=EST flags=0x4 endp0=CS:0x210fe2a0 exp=
> et=0x000 sub=0)
>   si[1]=0x2110f098 (state=EST flags=0x80010 endp1=CS:0x2109eb10 exp=
> et=0x200 sub=0)
>   co0=0x210f8040 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
> target=LISTENER:0xe25a290
>   flags=0x00283300 fd=2404 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
>   cs=0x210fe2a0 csf=0x0640 ctx=0x210fe2c0
>   co1=0x20d84cb0 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
> target=SERVER:0xe2e7fb0
>   flags=0x003c3310 fd=1959 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
>   cs=0x2109eb10 csf=0x1000 ctx=(nil)
>   req=0x2110edd0 (f=0x848800 an=0x0 pipe=0 tofwd=-1 total=38767)
>   an_exp= rex= wex=?
>   buf=0x2110edd8 data=0x20e8a7e0 o=15360 p=15360 req.next=0 i=0
> size=16384
>   res=0x2110ee30 (f=0x8004a028 an=0x0 pipe=0 tofwd=0 total=66)
>   an_exp= rex= wex=
>   buf=0x2110ee38 data=(nil) o=0 p=0 rsp.next=0 i=0 size=0
> 0x20eec3f0: [05/Apr/2019:13:18:24.444808] id=29808 proto=unix_stream
> source=unix:2
>   flags=0x8, conn_retries=0, srv_conn=(nil), pend_pos=(nil) waiting=0
>   frontend=GLOBAL (id=0 mode=tcp), listener=? (id=2) addr=unix:2
>   backend= (id=-1 mode=-)
>   server= (id=-1)
>   task=0x21028880 (state=0x00 nice=-64 calls=1 exp=30s tmask=0x1 age=?)
>   si[0]=0x20eec688 (state=EST flags=0x28 endp0=CS:0x20cd4600
> exp= et=0x000 sub=1)
>   si[1]=0x20eec6c8 (state=EST flags=0x204018 endp1=APPCTX:0x20cdfe60
> exp= et=0x000 sub=0)
>   co0=0x210af6a0 ctrl=unix_stream xprt=RAW mux=PASS data=STRM
> target=LISTENER:0x11b7d20
>   flags=0x00203306 fd=10 fd.state=25 fd.cache=0 updt=0 fd.tmask=0x1
>   cs=0x20cd4600 csf=0x0200 ctx=0x20cd4620
>   app1=0x20cdfe60 st0=7 st1=0 st2=3 applet= tmask=0x1, nice=-64,
> calls=2, cpu=0, lat=0
>   req=0x20eec400 (f=0xc48202 an=0x0 pipe=0 tofwd=-1 total=14)
>   an_exp= rex=30s wex=
>   buf=0x20eec408 data=0x20e2a530 o=0 p=0 req.next=0 i=0 size=16384
>   res=0x20eec460 (f=0x80008002 an=0x0 pipe=0 tofwd=-1 total=1465)
>   an_exp= rex= wex=
>   buf=0x20eec468 data=0x20d5bbd0 o=1465 p=1465 rsp.next=0 i=0 size=16384
> 

This is indeed very strange. I'm quite interested in seeing the output of
"show fd", as you mentioned you had it.

Thanks !

Olivier



Re: [1.9.6] One of haproxy processes using 100% CPU

2019-04-05 Thread Maciej Zdeb
I think I found something; please look at session 0x2110edc0 and its
"calls" counter, it never stops growing:

socat /run/haproxy/haproxy2.sock - <<< "show sess all"
0x2110edc0: [05/Apr/2019:12:03:32.141927] id=14505 proto=tcpv4
source=S.S.S.S:52414
  flags=0x4504e, conn_retries=3, srv_conn=0xe2e7fb0, pend_pos=(nil)
waiting=0
  frontend=dev_metrics_5044_front (id=1015 mode=tcp), listener=? (id=2)
addr=B.B.B.B:5044
  backend=B.B.B.B:5044_back (id=1016 mode=tcp) addr=X.X.X.X:16222
  server=slot_5_1 (id=34) addr=N.N.N.N:31728
  task=0x21105f90 (state=0x00 nice=0 calls=-1222180819 exp= tmask=0x1
age=1h6m)
  si[0]=0x2110f058 (state=EST flags=0x4 endp0=CS:0x210fe2a0 exp=
et=0x000 sub=0)
  si[1]=0x2110f098 (state=EST flags=0x80010 endp1=CS:0x2109eb10 exp=
et=0x200 sub=0)
  co0=0x210f8040 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
target=LISTENER:0xe25a290
  flags=0x00283300 fd=2404 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x210fe2a0 csf=0x0640 ctx=0x210fe2c0
  co1=0x20d84cb0 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
target=SERVER:0xe2e7fb0
  flags=0x003c3310 fd=1959 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x2109eb10 csf=0x1000 ctx=(nil)
  req=0x2110edd0 (f=0x848800 an=0x0 pipe=0 tofwd=-1 total=38767)
  an_exp= rex= wex=?
  buf=0x2110edd8 data=0x20e8a7e0 o=15360 p=15360 req.next=0 i=0
size=16384
  res=0x2110ee30 (f=0x8004a028 an=0x0 pipe=0 tofwd=0 total=66)
  an_exp= rex= wex=
  buf=0x2110ee38 data=(nil) o=0 p=0 rsp.next=0 i=0 size=0
0x20eec3f0: [05/Apr/2019:13:18:24.444808] id=29808 proto=unix_stream
source=unix:2
  flags=0x8, conn_retries=0, srv_conn=(nil), pend_pos=(nil) waiting=0
  frontend=GLOBAL (id=0 mode=tcp), listener=? (id=2) addr=unix:2
  backend= (id=-1 mode=-)
  server= (id=-1)
  task=0x21028880 (state=0x00 nice=-64 calls=1 exp=30s tmask=0x1 age=?)
  si[0]=0x20eec688 (state=EST flags=0x28 endp0=CS:0x20cd4600
exp= et=0x000 sub=1)
  si[1]=0x20eec6c8 (state=EST flags=0x204018 endp1=APPCTX:0x20cdfe60
exp= et=0x000 sub=0)
  co0=0x210af6a0 ctrl=unix_stream xprt=RAW mux=PASS data=STRM
target=LISTENER:0x11b7d20
  flags=0x00203306 fd=10 fd.state=25 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x20cd4600 csf=0x0200 ctx=0x20cd4620
  app1=0x20cdfe60 st0=7 st1=0 st2=3 applet= tmask=0x1, nice=-64,
calls=2, cpu=0, lat=0
  req=0x20eec400 (f=0xc48202 an=0x0 pipe=0 tofwd=-1 total=14)
  an_exp= rex=30s wex=
  buf=0x20eec408 data=0x20e2a530 o=0 p=0 req.next=0 i=0 size=16384
  res=0x20eec460 (f=0x80008002 an=0x0 pipe=0 tofwd=-1 total=1465)
  an_exp= rex= wex=
  buf=0x20eec468 data=0x20d5bbd0 o=1465 p=1465 rsp.next=0 i=0 size=16384
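A single session can also be dumped by its pointer, which makes it easy to watch the "calls" counter move between dumps (same socket path as above):

```shell
# Dump only the suspect session instead of the whole list
socat /run/haproxy/haproxy2.sock - <<< "show sess 0x2110edc0"
```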

netstat -untap | grep S.S.S.S



Re: [1.9.6] One of haproxy processes using 100% CPU

2019-04-05 Thread Maciej Zdeb
After the process spun up to 100% CPU, I switched traffic to another (backup)
server, then attached gdb and executed "generate-core-file". However, I don't
know how to proceed further with it (it's quite big, 3.2 GB).
When attached with gdb, "bt full" sometimes (not always) shows this (no idea
if it's relevant, I have no experience with gdb):

gdb --pid 75107
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 75107
Reading symbols from /usr/sbin/haproxy...done.
Reading symbols from /lib/x86_64-linux-gnu/libcrypt.so.1...Reading symbols
from /usr/lib/debug//lib/x86_64-linux-gnu/libcrypt-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libcrypt.so.1
Reading symbols from /lib/x86_64-linux-gnu/libz.so.1...(no debugging
symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libz.so.1
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...Reading
symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libpthread-2.19.so...done.
done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Loaded symbols for /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /lib/x86_64-linux-gnu/librt.so.1...Reading symbols
from /usr/lib/debug//lib/x86_64-linux-gnu/librt-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/librt.so.1
Reading symbols from
/usr/lib/haproxy/openssl_1.1.1b/lib/libssl.so.1.1...(no debugging symbols
found)...done.
Loaded symbols for /usr/lib/haproxy/openssl_1.1.1b/lib/libssl.so.1.1
Reading symbols from
/usr/lib/haproxy/openssl_1.1.1b/lib/libcrypto.so.1.1...(no debugging
symbols found)...done.
Loaded symbols for /usr/lib/haproxy/openssl_1.1.1b/lib/libcrypto.so.1.1
Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...Reading symbols from
/usr/lib/debug//lib/x86_64-linux-gnu/libm-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libm.so.6
Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...Reading symbols
from /usr/lib/debug//lib/x86_64-linux-gnu/libdl-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libdl.so.2
Reading symbols from /usr/lib/haproxy/pcre_8.42/lib/libpcre.so.1...(no
debugging symbols found)...done.
Loaded symbols for /usr/lib/haproxy/pcre_8.42/lib/libpcre.so.1
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...Reading symbols from
/usr/lib/debug//lib/x86_64-linux-gnu/libc-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from
/usr/lib/debug//lib/x86_64-linux-gnu/ld-2.19.so...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /usr/lib/haproxy/lua/random.so...(no debugging symbols
found)...done.
Loaded symbols for /usr/lib/haproxy/lua/random.so
Reading symbols from /lib/x86_64-linux-gnu/libnss_files.so.2...Reading
symbols from
/usr/lib/debug//lib/x86_64-linux-gnu/libnss_files-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
0x7f762bb226b3 in __epoll_wait_nocancel () at
../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt full
#0  0x7f762bb226b3 in __epoll_wait_nocancel () at
../sysdeps/unix/syscall-template.S:81
No locals.
#1  0x00421ebe in _do_poll (p=, exp=-316679748) at
src/ev_epoll.c:156
timeout = 0
status = 
fd = 
count = 
updt_idx = 
old_fd = 
#2  0x004bd9e4 in run_poll_loop () at src/haproxy.c:2675
next = 
exp = 
#3  run_thread_poll_loop (data=data@entry=0x11af740) at src/haproxy.c:2707
ptif = 0x7f50b0 
ptdf = 
start_lock = 0
#4  0x0041fb96 in main (argc=, argv=0x7ffd0e185478)
at src/haproxy.c:3343
tids = 0x11af740
threads = 0x1e2a5460
i = 
old_sig = {__val = {0, 140145545627744, 140145545626880,
140145543473713, 0, 140145545613504, 1, 0, 18446603344811130881,
140145545626880, 1, 140145545675208, 0, 140145545676064, 0, 24}}
blocked_sig = {__val = {1844674406710583, 18446744073709551615
}}
err = 
retry = 
limit = {rlim_cur = 100, rlim_max = 100}
errmsg = "\000\000\000\000\000\000\000\000n\000\000\000w", '\000'
,
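For what it's worth, a core like this can also be inspected offline in batch mode (the core file name here is hypothetical; the binary path is the one from the transcript above):

```shell
# Dump full backtraces of all threads from the saved core, non-interactively
gdb -batch -ex 'set pagination off' \
    -ex 'thread apply all bt full' \
    /usr/sbin/haproxy core.75107 > haproxy-backtraces.txt
```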

Re: Abort on exit "libgcc_s.so.1 must be installed for pthread_cancel to work"

2019-04-05 Thread William Lallemand
On Fri, Apr 05, 2019 at 12:55:11PM +0200, Emmanuel Hocdet wrote:
> 
> Hi,
> 
> While testing deinit, I came across this:
> 
> #  /srv/sources/haproxy/haproxy -f /etc/haproxy/ssl.cfg -d -x 
> /run/haproxy_ssl.sock -sf 15716
> 
> log on 15716 process:
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result FAILED
> Total: 3 (2 usable), will use epoll.
> 
> Available filters :
>   [SPOE] spoe
>   [COMP] compression
>   [CACHE] cache
>   [TRACE] trace
> Using epoll() as the polling mechanism.
> :GLOBAL.accept(0005)=000d from [unix:1] ALPN=
> :GLOBAL.srvcls[adfd:]
> :GLOBAL.clicls[adfd:]
> :GLOBAL.closed[adfd:]
> [WARNING] 094/124050 (15809) : Stopping frontend GLOBAL in 0 ms.
> [WARNING] 094/124050 (15809) : Stopping frontend f-redir in 0 ms.
> [WARNING] 094/124050 (15809) : Stopping backend redir in 0 ms.
> [WARNING] 094/124050 (15809) : Stopping backend varnish in 0 ms.
> [WARNING] 094/124050 (15809) : Proxy GLOBAL stopped (FE: 1 conns, BE: 1 
> conns).
> [WARNING] 094/124050 (15809) : Proxy f-redir stopped (FE: 0 conns, BE: 0 
> conns).
> [WARNING] 094/124050 (15809) : Proxy redir stopped (FE: 0 conns, BE: 0 conns).
> [WARNING] 094/124051 (15809) : Proxy varnish stopped (FE: 0 conns, BE: 0 
> conns).
> libgcc_s.so.1 must be installed for pthread_cancel to work
> Aborted
> 
> Linking with -lgcc_s fixes that, and haproxy returns with error code 0.
> I think it will not be very portable…
> 
> ++
> Manu
> 
> 

Hi Emmanuel,

This bug is caused by libpthread not being linked with libgcc_s, so it tries
to load libgcc_s during the call to pthread_exit(), which can't work if the
process is chroot'ed.

Your solution is the right one, but since it was not working on some
distributions, I had to use ADDLIB="-Wl,--no-as-needed -lgcc_s -Wl,--as-needed" instead.
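For reference, a hypothetical build invocation using that workaround (the TARGET and feature flags are examples, not taken from this thread):

```shell
# --no-as-needed forces libgcc_s.so.1 to be recorded as a direct dependency
# even though no symbol references it at link time; --as-needed is then
# restored for the remaining libraries.
make TARGET=linux2628 USE_OPENSSL=1 \
     ADDLIB="-Wl,--no-as-needed -lgcc_s -Wl,--as-needed"
```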

Regards,

-- 
William Lallemand



Abort on exit "libgcc_s.so.1 must be installed for pthread_cancel to work"

2019-04-05 Thread Emmanuel Hocdet

Hi,

While testing deinit, I came across this:

#  /srv/sources/haproxy/haproxy -f /etc/haproxy/ssl.cfg -d -x 
/run/haproxy_ssl.sock -sf 15716

log on 15716 process:
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
Using epoll() as the polling mechanism.
:GLOBAL.accept(0005)=000d from [unix:1] ALPN=
:GLOBAL.srvcls[adfd:]
:GLOBAL.clicls[adfd:]
:GLOBAL.closed[adfd:]
[WARNING] 094/124050 (15809) : Stopping frontend GLOBAL in 0 ms.
[WARNING] 094/124050 (15809) : Stopping frontend f-redir in 0 ms.
[WARNING] 094/124050 (15809) : Stopping backend redir in 0 ms.
[WARNING] 094/124050 (15809) : Stopping backend varnish in 0 ms.
[WARNING] 094/124050 (15809) : Proxy GLOBAL stopped (FE: 1 conns, BE: 1 conns).
[WARNING] 094/124050 (15809) : Proxy f-redir stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 094/124050 (15809) : Proxy redir stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 094/124051 (15809) : Proxy varnish stopped (FE: 0 conns, BE: 0 conns).
libgcc_s.so.1 must be installed for pthread_cancel to work
Aborted

Linking with -lgcc_s fixes that, and haproxy returns with error code 0.
I think it will not be very portable…

++
Manu




Re: [ANNOUNCE] haproxy-1.9.6

2019-04-05 Thread Emmanuel Hocdet
Hi Aleks,

Thank you for integrating BoringSSL!

> Le 29 mars 2019 à 14:51, Aleksandar Lazic  a écrit :
> 
> Am 29.03.2019 um 14:25 schrieb Willy Tarreau:
>> Hi Aleks,
>> 
>> On Fri, Mar 29, 2019 at 02:09:28PM +0100, Aleksandar Lazic wrote:
>>> With openssl, 2 tests failed, but I'm not sure whether it's the setup
>>> or a bug.
>>> https://gitlab.com/aleks001/haproxy19-centos/-/jobs/186769272
>> 
>> Thank you for the quick feedback. I remember about the first one being
>> caused by a mismatch in the exact computed response size due to headers
>> encoding causing some very faint variations, though I have no idea why
>> I don't see it here, since I should as well, I'll have to check my regtest
>> script. For the second one, it looks related to the reactivation of the
>> HEAD method in this test which was broken in previous vtest. But I'm
>> seeing in your trace that you're taking it from the git repo so that
>> can't be that. I need to dig as well.
>> 
>>> With boringssl, 3 tests failed, but I'm not sure whether it's the setup
>>> or a bug.
>>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/186780822
>> 
>> For this one I don't know, curl reports some unexpected EOFs. I don't
>> see why it would fail only with boringssl. Did it use to work in the
>> past ?
> 
> No. The tests with BoringSSL always failed in one way or another.
> 

It’s strange. After a quick test, it works in my environments.
I needed to comment out "${no-htx} option http-use-htx"
to test with varnishtest.


> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157743825 
> 
> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157730793 
> 
> 
> I'm not sure if the docker setup on gitlab is the limitation or just a bug.
> Sorry to be so unspecific.
> 
>> Thanks,
>> Willy
> 
> Regards
> Aleks

++
Manu




Re: [1.9.6] One of haproxy processes using 100% CPU

2019-04-05 Thread Maciej Zdeb
Hi again,

I verified that a local DoS did not cause the high CPU usage. I'm upgrading
from 1.8.19 to 1.9.6.

Please look at strace:

of the process with 100% CPU usage:
strace -fp 9790 -c
Process 9790 attached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 27.51    0.009501           0    942536           clock_gettime
 13.65    0.004716           0    471268           epoll_wait
 12.09    0.004176           0    305211    157644 recvfrom
 11.09    0.003831           0    109003     64867 connect
  9.04    0.003122           0    141908      6479 sendto
  6.55    0.002264           0    193372           epoll_ctl
  5.05    0.001746           0     57430           close
  4.46    0.001539           0    118067           setsockopt
  3.05    0.001053           0     94756      1860 getsockopt
  2.55    0.000882           0     50009           socket
  1.43    0.000493           0     50009           fcntl
  1.32    0.000457           0     14947      7353 accept4
  0.89    0.000306           0     11660         1 shutdown
  0.58    0.000199           0     12172      3226 read
  0.47    0.000164           0      3838       279 write
  0.21    0.000074           0      6640           bind
  0.06    0.000019           0      1860           getsockname
  0.00    0.000000           0         4           brk
  0.00    0.000000           0        24           sendmsg
------ ----------- ----------- --------- --------- ----------------
100.00    0.034542               2584714    241709 total

fragment of strace:
[...]
epoll_ctl(3, EPOLL_CTL_ADD, 2020, {EPOLLOUT, {u32=2020, u64=2020}}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87792876}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87804387}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87811541}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87823913}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87830810}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87842796}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87849448}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87859750}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87866006}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87877936}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87885329}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87897045}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87904108}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87915961}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87922580}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87933941}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87940258}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87952045}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87959070}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87971104}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87978136}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87990210}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 87998045}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88009585}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88015976}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88027627}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88034407}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88046698}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88053419}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88065381}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88071551}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88082999}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88089594}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88112393}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88118910}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88130295}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88136707}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88148429}) = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88155219}) = 0
epoll_wait(3, {}, 200, 0)   = 0
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {848, 88167019}) = 0

cookie nocache headers

2019-04-05 Thread Gerardo Esteban Malazdrewicz
Hello!

I have a small issue and a small wishlist here.

Issue:

From src/proto_http.c and src/proto_htx.c:

/* Here, we will tell an eventual cache on the client side that we don't
 * want it to cache this reply because HTTP/1.0 caches also cache cookies !
 * Some caches understand the correct form: 'no-cache="set-cookie"', but
 * others don't (eg: apache <= 1.3.26). So we use 'private' instead.
 */

Is this still true, in general? RFC 7234 is almost 5 years old.

What about 'private="set-cookie"'? Would that be more accurate?
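For reference, the three response-header forms under discussion (illustrative only):

```
Cache-Control: private
Cache-Control: no-cache="set-cookie"
Cache-Control: private="set-cookie"
```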

Wishlist:

While the header is case-insensitive, Cache-Control is more ubiquitous than
Cache-control.

Would it make sense to add a nocache config option, like

nocache content

or

nocache hdr_name content

defaulting to "Cache-control: private", matching the current behavior? If so,
I will see about making a patch.

Thanks,
  Gerardo