Re: multithreading issue in haproxy 1.8.5

2018-04-23 Thread Willy Tarreau
On Mon, Apr 23, 2018 at 09:41:18PM +0300, Slawa Olhovchenkov wrote:
> On Mon, Apr 23, 2018 at 08:32:39PM +0200, Willy Tarreau wrote:
> 
> > On Mon, Apr 23, 2018 at 06:36:29PM +0300, Slawa Olhovchenkov wrote:
> > > On Sat, Apr 21, 2018 at 04:38:48PM +0300, Slawa Olhovchenkov wrote:
> > > 
> > > > On Fri, Apr 20, 2018 at 03:55:25PM +0200, Willy Tarreau wrote:
> > > > 
> > > > > Thus for you it's better to stick to a single listener, and if you
> > > > > want to increase the fairness between the sockets, you can reduce
> > > > > tune.maxaccept in the global section like below :
> > > > > 
> > > > >   global
> > > > >  tune.maxaccept 8
> > > > > 
> > > > > The kqueue issue you report is still unclear to me however, I'm not
> > > > > much used to kqueue and always having a hard time decoding it.
> > > > 
> > > > I tried to decode the first event on the looped thread.
> > > > 
> > > > ev0 id 21 filt -2 flag 0 fflag 0 data 2400 udata 0
> > > > 
> > > > This is EVFILT_WRITE (available 2400 bytes) on socket 21.
> > > > 
> > > > This is DNS socket:
> > > > 
> > > > 12651 haproxy 21 s - rw---n--   9   0 UDP 185.38.13.221:28298 8.8.8.8:53
> > > > 
> > > > Actually, I have only one DNS request per 2 seconds.
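For reference, the `ev0 id 21 filt -2 ...` debug fields above can be decoded mechanically. A small sketch — the filter numbers are assumed from FreeBSD's <sys/event.h> (they may differ for higher-numbered filters on other BSDs/macOS):

```python
# Filter numbers as defined in FreeBSD's <sys/event.h> (assumption; verify
# against the headers of the system that produced the trace).
EVFILT_NAMES = {
    -1: "EVFILT_READ",
    -2: "EVFILT_WRITE",
    -4: "EVFILT_VNODE",
    -5: "EVFILT_PROC",
    -6: "EVFILT_SIGNAL",
    -7: "EVFILT_TIMER",
}

def decode_kevent(line):
    """Decode a debug line like 'ev0 id 21 filt -2 flag 0 fflag 0 data 2400 udata 0'."""
    fields = line.split()
    # fields alternate: name value name value ... after the leading 'ev0'
    ev = dict(zip(fields[1::2], fields[2::2]))
    return ev["id"], EVFILT_NAMES.get(int(ev["filt"]), "unknown"), int(ev["data"])

print(decode_kevent("ev0 id 21 filt -2 flag 0 fflag 0 data 2400 udata 0"))
# -> ('21', 'EVFILT_WRITE', 2400)
```

which matches the reading above: EVFILT_WRITE on fd 21, with `data` (2400) being the available write-buffer space.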
> > > 
> > > Can this (DNS use) cause 100% CPU use?
> > 
> > It should not but there could be a bug. Olivier tried to reproduce here but
> > failed to get any such problem. We'll definitely need your configuration,
> > we've been guessing too much now and we're not making any progress on this
> > issue.
> 
> I mean it needs some (timing) combination of HTTP requests and DNS request/response.
> 
> global
> nbproc 1
> nbthread 8
> cpu-map auto:1/1-8 0-7
>   log /dev/log local0
> tune.ssl.default-dh-param 2048
> tune.ssl.cachesize 100
> tune.ssl.lifetime 600
> tune.ssl.maxrecord 1460
> tune.maxaccept 1
> tune.maxpollevents 20
> maxconn 14
> stats socket /var/run/haproxy.sock level admin
> user www
> group www
> daemon
> 
> defaults
> log global
> mode http
> http-reuse always
> option http-keep-alive
> option  httplog
> option  dontlognull
> retries 3
> maxconn 14
> backlog 4096
> timeout connect 5000
> timeout client  15000
> timeout server  5
> 
> listen  stats
> bind:
> mode http
> log global
> 
> maxconn 10
> 
> timeout client  100s
> timeout server  100s
> timeout connect  100s
> timeout queue   100s
> 
> stats enable
> stats hide-version
> stats refresh 30s
> stats show-node
> stats uri  /haproxy?stats
> 
> frontend balancer
> bind *:80
> bind *:443 ssl crt 
> # remove X-Forwarded-For header
> http-request set-header X-Forwarded-Port %[dst_port]
> http-request set-header X-Forwarded-Proto https if { ssl_fc }
> http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
> reqidel ^X-Forwarded-For:.*
> option dontlog-normal
> option forwardfor
> timeout client 5ms
> timeout client 5ms
>   use_backend ssl-pool if { ssl_fc }
> default_backend default-pool  
> 
> backend default-pool
> balance roundrobin
> option httpchk GET /health-check
> timeout connect 1000ms
> timeout server 5000ms
> server  elb1 x.eu-west-1.elb.amazonaws.com:80 maxconn 7000 id 1 check resolvers mydns resolve-prefer ipv4
> server  elb2 y.eu-west-1.elb.amazonaws.com:80 maxconn 7000 id 2 check resolvers mydns resolve-prefer ipv4
> 
> backend ssl-pool
> balance roundrobin
> option httpchk GET /health-check
> timeout connect 1000ms
> timeout server 35s
> server  elb1 x.eu-west-1.elb.amazonaws.com:443 ssl verify none maxconn 7000 id 1 check resolvers mydns resolve-prefer ipv4
> server  elb2 y.eu-west-1.elb.amazonaws.com:443 ssl verify none maxconn 7000 id 2 check resolvers mydns resolve-prefer ipv4
> 
> resolvers mydns
>   nameserver dns1 8.8.8.8:53
>   resolve_retries   3
>   timeout retry 1s
>   hold other   30s
>   hold refused 30s
>   hold nx  30s
>   hold timeout 30s
>   hold valid   10s


Thank you, we'll retry with this.

Willy



1.9dev LUA core.tcp() cannot be used from different threads

2018-04-23 Thread PiBa-NL

Hi List, Thierry (LUA maintainer), Christopher (Multi-Threading),

When I'm making a TCP connection to a (mail) server from a Lua task, this 
error pops up randomly when using 'nbthread 4'. The error luckily seems 
pretty self-explanatory, but I'll leave it to the threading and Lua 
experts to come up with a fix ;) I think the script, or at least its 
socket commands, must somehow be forced to always execute on the same 
thread? Or perhaps there is another way..


Also, I do wonder how far Lua is safe to use at all in a multithreaded 
program, or whether that would become impossible to keep safe. But that's 
a bit off-topic perhaps..


Line 240: recieve = mailer.receive(mailer, "*l")
[ALERT] 110/232212 (678) : Lua task: runtime error: 
/root/haproxytest/test.lua:240: connect: cannot use socket on other thread.


Line 266:  local mailer = core.tcp()
Line 267:    ret = mailer.connect(mailer, self.mailserver, 
self.mailserverport)
[ALERT] 110/232321 (682) : Lua task: runtime error: 
/root/haproxytest/test.lua:267: connect: cannot use socket on other thread.


Let me know if there is a patch or something else I can test/check, or 
whether I should configure things differently.

Thanks in advance.

Regards,

PiBa-NL (Pieter)

 haproxy.conf & lua scripts
Basically the serverhealth_smtpmail_haproxy.conf and the files it links 
to are here:

https://github.com/PiBa-NL/MyPublicProjects/tree/master/haproxy/lua-scripts

p.s.
The 'mailer' code that was used, if anyone is interested, is written in 
some 'libraries' I've committed at the GitHub link; maybe they are of use 
to someone else as well :) Comments and fixes are welcome ;).. They are 
'first versions' but seem functional with the limited testing so far :).





1.9dev LUA register_task with a function that ends performs a core dump

2018-04-23 Thread PiBa-NL

Hi List, Thierry,

The script below makes haproxy perform a coredump when a function that 
doesn't loop forever is put into register_task.. Is it possible to add 
some safety checks around such calls?


The coredump does not seem to contain any useful info when read by gdb.. 
unknown functions at unknown addresses...


Also, I tried to register a new second task inside the c==5 check, but 
then it just seemed to hang..


Maybe not really important, as people should probably never use a 
function that can exit for a task.. but it's never nice to have 
something perform a coredump..


Regards,

PiBa-NL (Pieter)

 haproxy.conf 

global
  nbthread 1
  lua-load /root/haproxytest/print_r.lua
  lua-load /root/haproxytest/test.lua

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 60s

frontend TestSite
    bind *:80

 Lua script 

mytask = function()
    c = 0
    repeat
        core.Info("Task")
        core.sleep(1)
        c = c + 1
        if c == 3 then
            break
        end
    until false
    core.Info("Stopping task")
end
core.register_task(mytask)

 output ###

[info] 112/224221 (7881) : Task
[info] 112/224222 (7881) : Task
[info] 112/224223 (7881) : Task
[info] 112/224224 (7881) : Stopping task
Segmentation fault (core dumped)
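Until register_task handles returning functions gracefully, one defensive workaround is to make sure the registered function simply never returns — e.g. park it in a sleep loop once its work is done. An untested sketch against the haproxy Lua API (core.sleep takes seconds):

```lua
core.register_task(function()
    mytask()  -- run the finite job to completion
    -- 1.9-dev crashes when a registered task returns, so park the
    -- coroutine forever instead of letting the function end.
    while true do
        core.sleep(3600)
    end
end)
```

This only hides the crash from the task's side; the real fix would be a safety check in haproxy itself.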




1.9dev LUA shows partial results from print_r(core.get_info()) after adding headers ?

2018-04-23 Thread PiBa-NL

Hi List, Thierry,

The second print_r(core.get_info()) only shows 'some' of its results and 
the final message never shows.. Is there some memory buffer overflow bug 
in there? Possibly caused by the 'add_header' calls, as removing those 
seems to fix the behaviour of the CORE2 print_r call..


Using haproxy 1.9dev, with config below on FreeBSD.

Is there a bug in my script, or is it more likely that 'something' needs 
fixing in the Lua API / interaction?
Let me know what I can do to help track this down somehow.. I tried 
memory 'poisoning' in haproxy but that doesn't seem to affect anything 
I'm seeing..


Regards,

PiBa-NL (Pieter)


 Content of haproxy.conf 

global
  nbthread 1
  lua-load /root/haproxytest/print_r.lua
  lua-load /root/haproxytest/test.lua

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 60s

frontend TestSite
    bind *:80

    acl webrequest path -m beg /webrequest
    http-request use-service lua.testitweb-webrequest if webrequest

 Content of test.lua 

testitweb = {}
testitweb.webrequest = function(applet)
        core.Info("# CORE 1")
        print_r(core.get_info())
        core.Info("# CORE 1 ^")

        local resp = ""
        print_r(core.get_info(),false,function(x)
            resp=resp..string.gsub(x,"\n","")
        end
        )
        response = "CoreInfo:"..resp

        applet:add_header("Server", "haproxy/webstats")
        applet:add_header("Content-Length", string.len(response))
        applet:add_header("Content-Type", "text/html")
        applet:add_header("Refresh", "10")
        applet:start_response()
        applet:send(response)

        core.Info("# CORE 2")
        print_r(core.get_info())
        core.Info("# CORE 2 ^")
    end
core.register_service("testitweb-webrequest", "http", testitweb.webrequest)


 (partial) output :

First, CORE1 gets printed fully up to the last item of the get_info() 
table in memory, CumConns in this case (that gets assigned randomly upon 
each start..)


    "Uptime_sec": (number) 3
    "Pid": (number) 7848
    "CumConns": (number) 3
]
[info] 112/222621 (7848) : # CORE 1 ^
[info] 112/222621 (7848) : # CORE 2
(table) table: 0x8023ff540 [
    "CurrSslConns": (number) 0
    "Version": (string) "1.9-dev0-564d15-357"
    "SslRate": (number) 0
    "PoolAlloc_MB": (number) 0
    "Hard_maxconn": (number) 2000
    "Nbthread": (number) 1
    "CurrConns": (number) 1
    "Memmax_MB": (number) 0
    "Maxsock": (number) 4011
    "ConnRateLimit": (number) 0
    "CompressBpsIn": (number) 0
    "Process_num": (number) 1
    "node": (string) "freebsd11"
    "Idle_pct": (number) 100
    "SessRate": (number) 1
    "CompressBpsRateLim": (number) 0
    "Tasks": (number) 4
    "Release_date": [info] 112/222622 (7848) : # CORE 1
(table) table: 0x8023ffc80 [
    "CurrSslConns": (number) 0
    "Version": (string) "1.9-dev0-564d15-357"
    "SslRate": (number) 0
    "PoolAlloc_MB": (number) 0
    "Hard_maxconn": (number) 2000
    "Nbthread": (number) 1
    "CurrConns": (number) 1
    "Memmax_MB": (number) 0
    "Maxsock": (number) 4011

As you can see, the CORE2 output is truncated, and a new CORE1 continues 
printing after a new call to the webservice is made.. (there was time 
between the output stopping on screen and the next web call..)







RE: Use SNI with healthchecks

2018-04-23 Thread GALLISSOT VINCENT
Thank you very much for your answers,

I'll migrate to 1.8 asap to fix this.


Vincent



From: lu...@ltri.eu on behalf of Lukas Tribus
Sent: Monday, 23 April 2018 17:18
To: GALLISSOT VINCENT
Cc: haproxy@formilux.org
Subject: Re: Use SNI with healthchecks

Hello Vincent,


On 23 April 2018 at 16:38, GALLISSOT VINCENT  wrote:
> Does anybody know how can I use healthchecks over HTTPS with SNI support ?

You need haproxy 1.8 for this; it contains the check-sni directive,
which allows setting the SNI to a specific string for the health check:

http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-check-sni




Regards,

Lukas





Re: Use SNI with healthchecks

2018-04-23 Thread Jerome Magnin
Hi Vincent,

On Mon, Apr 23, 2018 at 02:38:32PM +, GALLISSOT VINCENT wrote:
> Hi all,
> 
> 
> I want to use SNI with httpchk on HAProxy 1.7.10 to connect to  CloudFront 
> distributions as backend servers.
> 
> I saw in this mailing-list archives that SNI is not used by default even when 
> using the ssl directive.
> 
> We don't pay for SNI on that distribution, that means CloudFront doesn't 
> provide a certificate on its default vhost.
> 
> Because of that, all healthchecks fail with "handshake failure".
> 
> 
> I temporarily by-passed the issue by adding "port 80" to allow healthchecks 
> over HTTP:
> 
> 
> option httpchk HEAD /check HTTP/1.1\r\nHost:\ 
> mydistribution.cloudfront.net
> server mydistribution mydistribution.cloudfront.net:443 check resolvers 
> mydns port 80 cookie no-sslv3 ssl verify required ca-file ca-certificates.crt
> 
> 
> Does anybody know how can I use healthchecks over HTTPS with SNI support ?
>

Prior to 1.8 if you want SNI in the health checks you have to use something
along these lines:

backend moo
mode http
option httpchk GET / HTTP/1.0
server s1 my.example.host:443 check addr 127.0.0.1 port 1234 ssl sni str("my.example.host")


listen foo
bind 127.0.0.1:1234
server s1 my.example.host:443 sni str("my.example.host") ssl

That's because the sni keyword only applies to proxied traffic, not to checks,
so you run the check through a listener that adds the SNI.

With 1.8 and later, you just use check-sni  on server lines.
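For comparison, on 1.8 the two-proxy workaround above should collapse to a single server line — a sketch only, reusing the placeholder hostname from the example:

```
backend moo
    mode http
    option httpchk GET / HTTP/1.0
    server s1 my.example.host:443 ssl check check-sni my.example.host
```

check-sni sets the SNI for the health check only; the sni keyword on the server line still governs proxied traffic.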

cheers,
Jérôme 





Re: [PATCH] MEDIUM: cli: Add multi-line mode support

2018-04-23 Thread Aurélien Nephtali
Hello Willy,

On Fri, Apr 20, 2018 at 05:12:22PM +0200, Willy Tarreau wrote:
> Hi Aurélien,
>
> It seems to me that some places in the parser look for "<<" anywhere on the
> line (mainly the strstr() which even skips trailing spaces/tabs), and some
> parts of the logic only expect it at the end.
>
> I'm personally perfectly fine with being strict at the beginning and doing
> as you documented, ie using "<<\n" as the delimiter so that users don't get
> used to start to put it anywhere. I noticed anyway that it's not recognized
> as the delimiter if not at the end, but the logic seems to check for it.

That's right. The logic in cli_io_handler() looks for the pattern at the end
of the first line received. In cli_parse_request(), before the tokenization,
it will rescan the input to look for the pattern again, but only if it knows
it is there (the flag APPCTX_CLI_ST1_PAYLOAD is set). I thought about storing
its position to avoid this second lookup, but felt using an extra field for
that would not be that good considering the parsing is not something done
millions of times per second. I don't like doing/storing things multiple
times, but here I think it's a fair trade-off.
I will add some comments since I can see how it can be disturbing to see two
different logics.

> > + p = appctx->chunk->str;
> > + end = p + appctx->chunk->len;
> > +
> > + /* look for a payload */
> > + if (appctx->st1 & APPCTX_CLI_ST1_PAYLOAD) {
> > + payload = strstr(p, PAYLOAD_PATTERN);
> > + end = payload;
> > + /* skip the pattern */
> > + payload += strlen(PAYLOAD_PATTERN);
> > + /* skip whitespaces */
> > + payload += strspn(payload, " \t");
> > + }
>
> Be extremely careful with the str* functions in haproxy. Due to being used
> to process buffers in-place, most of the time our strings are *not* zero-
> terminated, which is why they're often put in chunks made of (str,len),
> or the new immediate strings (ist) also made of (ptr,len). The internal
> API is not very rich regarding this so some functions are implemented
> like strnistr() and others using ist like istist() which does the same
> as strstr() but on ist strings. It's among the things we have to do to
> unify the internal strings API as for now many operations are simply
> open-coded. At the very least, wherever the chunk is filled, a comment
> should indicate that the trailing zero is assumed to be present in the
> rest of the code, because the risk of the code being changed without
> keeping it is high.

I took extra care to be sure I was dealing only with C-strings, because
even if it's not very fun to parse strings in C, it's even worse when
they are not zero-terminated (hello nginx) and you always end up needing
a str-like() function the internal API does not provide. I will also
add comments.
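To illustrate why plain str* functions are dangerous on haproxy's (str,len) chunks, here is a standalone sketch of a length-delimited search that never reads past the buffer end — an illustration only, not haproxy's actual strnistr()/istist():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Find pattern `pat` (a C string) inside a length-delimited, possibly
 * non-NUL-terminated buffer. strstr() would keep scanning past `len`
 * until it happens to hit a zero byte somewhere in memory. */
static const char *buf_find(const char *buf, size_t len, const char *pat)
{
    size_t plen = strlen(pat);

    if (plen == 0 || plen > len)
        return NULL;           /* empty or oversized pattern: no match */
    for (size_t i = 0; i + plen <= len; i++)
        if (memcmp(buf + i, pat, plen) == 0)
            return buf + i;
    return NULL;
}
```

The loop bound `i + plen <= len` is what keeps every memcmp() fully inside the buffer.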

> > + chunk_appendf(appctx->chunk, "%s", trash.str);
>
> Given that this will result in appctx->chunk always being equal to trash.str,
> I think you should take a look at cli_io_handler() to check if it still makes
> sense at all to use the trash there. Maybe you should simply rely on this
> allocated chunk all the time and save this copy and formatting.

Mmh, yes, I think we can read input data into appctx->chunk and drop the trash
variable.

>
> > From 63439f2b089613973f72a37dd5dda17f45f26545 Mon Sep 17 00:00:00 2001
> > From: =?UTF-8?q?Aur=C3=A9lien=20Nephtali?= 
> > Date: Wed, 18 Apr 2018 14:04:47 +0200
> > Subject: [PATCH 2/3] MINOR: map: Add payload support to "add map"
> (...)
> > diff --git a/src/map.c b/src/map.c
> > index d02a0255c..64222dc2e 100644
> > --- a/src/map.c
> > +++ b/src/map.c
> > + const char *end = payload + strlen(payload);
> > +
> > + while (payload < end) {
> (...)
> > + /* value */
> > + payload += strspn(payload, " \t");
> > + value = payload;
> > + l = strcspn(value, "\n");
> > + value[l] = 0;
> > + payload += l + 1;
>
> I think this one is not exactly good, as a string ending in "123\0" will
> cause payload to point past the \0. While there's theorically nothing
> wrong with this, and we know that on all supported architecture, ~0 is
> already not a valid pointer so this will not cause payload to become < end,
> it can at least be confusing when debugging. I'd rather do :
>
> payload += l;
> if (*payload)
> payload++;

Yes, 'payload' will go past the end of the string, and that is why I use
'end' and stop dereferencing 'payload' after the increment, but I totally
see why it can confuse people.
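One way to write that loop so the cursor never steps past the terminating NUL is to record whether a real '\n' was present before it gets overwritten in place — a standalone sketch of the pattern under discussion, not the actual map.c code:

```c
#include <assert.h>
#include <string.h>

/* Walk a zero-terminated payload line by line, terminating each line
 * in place. Returns the number of lines found. */
static int count_lines(char *payload)
{
    char *end = payload + strlen(payload);
    int n = 0;

    while (payload < end) {
        size_t l = strcspn(payload, "\n");
        int had_nl = (payload[l] == '\n'); /* check before overwriting */
        payload[l] = 0;        /* terminate the current line in place */
        n++;
        payload += l + had_nl; /* advances past '\n', never past the NUL */
    }
    return n;
}
```

With this shape, `payload` ends exactly on `end` when the string has no trailing newline, so the cursor stays comparable and debuggable.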

>
> > From 1b7f6c40b010521a8b55103c984d3c8f305e818c Mon Sep 17 00:00:00 2001
> > From: =?UTF-8?q?Aur=C3=A9lien=20Nephtali?= 
> > Date: Wed, 18 Apr 2018 14:04:58 +0200
> > Subject: [PATCH 3/3] MINOR: ssl: Add payload support to "set ssl
> (...)
> > --- a/src/ssl_sock.c
> > +++ b/src/ssl_sock.c
> > @@ -8565,16 +8565,27 @@ static int cli_parse_set_ocspresponse(char **args, 
> > char *payload, struct appctx
> >  {
> >  #if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
> >   char *err = NULL;
> > + int i, j;
> > +
> > + i