Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-14 Thread Thierry FOURNIER
On Wed, 14 Oct 2015 05:39:58 +0200
Willy Tarreau  wrote:

> Hi Pieter,
> 
> On Wed, Oct 14, 2015 at 01:43:40AM +0200, PiBa-NL wrote:
> > OK, got some good news here :).. the 1.6.0 release no longer has the error
> > I encountered.
> > 
> > The commit below fixed the issue already.
> > --
> > CLEANUP: cli: ensure we can never double-free error messages
> > http://git.haproxy.org/?p=haproxy.git;a=commit;h=6457d0fac304b7bba3e8af13501bf5ecf82bfa67
> > --
> 
> Great news!


I agree, it's great news. I was a little bit sad to see the release go out
with a bug like this one. Now I can stop my FreeBSD VM and restart
my very heavy browser ;)


> > I was still testing with 1.6-dev7; the fix above came the day after..
> > Probably you're testing with HEAD, which is why it doesn't happen for you.
> > Using snapshots or HEAD is not as easy as just following dev releases..
> > So I usually stick to those unless I have reason to believe a newer
> > version might already fix it. I should have tested again sooner, sorry..
> > (I actually did test the latest snapshot when I first reported
> > the issue..)
> 
> No problem, it's just that we weren't clear on our respective versions,
> it's neither the first nor the last time.
> 
> > Anyway, I burned more hours on both your side and mine than was
> > probably needed.
> > One more issue gone :)
> 
> Please keep in mind that I got a few segfaults with pipelined requests
> (the very large ones), so I'm pretty sure that a few bugs remain, though
> less easy to reproduce than the one you were suffering from.
> 
> > Thanks for the support!
> 
> You're welcome, thanks for your feedback as well :-)


I agree again, thanks for the tests.

Thierry



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread Willy Tarreau
Hi Pieter,

On Wed, Oct 14, 2015 at 01:43:40AM +0200, PiBa-NL wrote:
> OK, got some good news here :).. the 1.6.0 release no longer has the error
> I encountered.
> 
> The commit below fixed the issue already.
> --
> CLEANUP: cli: ensure we can never double-free error messages
> http://git.haproxy.org/?p=haproxy.git;a=commit;h=6457d0fac304b7bba3e8af13501bf5ecf82bfa67
> --

Great news!

> I was still testing with 1.6-dev7; the fix above came the day after..
> Probably you're testing with HEAD, which is why it doesn't happen for you.
> Using snapshots or HEAD is not as easy as just following dev releases..
> So I usually stick to those unless I have reason to believe a newer
> version might already fix it. I should have tested again sooner, sorry..
> (I actually did test the latest snapshot when I first reported
> the issue..)

No problem, it's just that we weren't clear on our respective versions,
it's neither the first nor the last time.

> Anyway, I burned more hours on both your side and mine than was
> probably needed.
> One more issue gone :)

Please keep in mind that I got a few segfaults with pipelined requests
(the very large ones), so I'm pretty sure that a few bugs remain, though
less easy to reproduce than the one you were suffering from.

> Thanks for the support!

You're welcome, thanks for your feedback as well :-)

Willy




Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread Thierry FOURNIER
On Tue, 13 Oct 2015 00:46:43 +0200
Willy Tarreau  wrote:

> On Tue, Oct 13, 2015 at 12:32:08AM +0200, PiBa-NL wrote:
> > >Yep so here rqh is in fact req->buf->i and as you noticed it's been
> > >decremented a second time.
> > >
> > >I'm seeing this which I find suspicious in hlua.c :
> > >
> > >   5909
> > >   5910  /* skip the requests bytes. */
> > >   5911  bo_skip(si_oc(si), strm->txn->req.eoh + 2);
> > >
> > >First, I don't understand why "eoh+2"; I suspect it's for the CRLF,
> > >in which case it's wrong since it can be a lone LF. Second, I'm not
> > >seeing sov being reset afterwards. Could you please just add this
> > >after this line:
> > >
> > >strm->txn->req.next -= strm->txn->req.sov;
> > >strm->txn->req.sov = 0;
> > This did not seem to resolve the issue.
> 
> OK thanks for testing at least!
> 
> > If you have any other idea where it might go wrong, please let me know :)
> > I'll try to dig a little further tomorrow evening.
> 
> Unfortunately, no, I don't have any other idea. At this point I think
> I'll have to discuss this with Thierry. We're in a situation
> where I know pretty well how HTTP forwarding works, while he knows
> very well how Lua works, but both of us have a very unclear idea of
> the other one's part, so that doesn't help much :-/


I'm silently following the thread :) I can't reproduce the issue; I
supposed that is because it appears on FreeBSD, but I suppose it
will appear on Linux as well.

Yesterday my laptop ran out of battery, so it rebooted. I lost my
browsers with hundreds of internet tabs, which freed some RAM, so I can
start a FreeBSD VM without swapping (I don't have a simple life).

I'm currently installing a FreeBSD distribution, hoping to reproduce the
problem.

Thierry 


> I'm still working on the management doc that I hoped to finish by today
> and which seems to be taking longer, so the release might be deferred to
> tomorrow; maybe in the meantime we'll have the opportunity to find something
> odd, but I prefer not to put my nose in there myself or it will further
> postpone the doc and the release which is waiting for it.
> 
> Thanks for your feedback!
> 
> Willy
>  
> 



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread Willy Tarreau
Hi again :-)

On Tue, Oct 13, 2015 at 06:10:33PM +0200, Willy Tarreau wrote:
> I can't reproduce it either, unfortunately. I'm seeing some other minor
> issues related to how the closed input is handled and showing that
> pipelining doesn't work (only the first request is handled), but that's
> all I'm seeing, I'm sorry.
> 
> I've tried injecting on stats in parallel to the other frontend, I've
> tried with close and keep-alive etc... I tried to change the poller
> just in case you would be facing a race condition, no way :-(
> 
> In general it's good to keep in mind that buffer_slow_realign() is
> called to realign wrapped requests, so that normally means that
> pipelining is needed. But even then, for now I can't reproduce it.

As usual, sending an e-mail scares the bug and it starts to wave the
white flag :-)

So by configuring the buffer size to 1 and sending large 8kB requests,
I'm seeing random behaviour. First, most of the time I end up with a
stuck session which never ends (no expiration timer set). And from time
to time it may crash. This time it was not in buffer_slow_realign() but
in buffer_insert_line2(), though the problem is the same:

(gdb) up
#2  0x0046e094 in http_header_add_tail2 (msg=0x7ce628, 
hdr_idx=0x7ce5c8, text=0x53b339 "Connection: close", len=17) at 
src/proto_http.c:595
595 bytes = buffer_insert_line2(msg->chn->buf, msg->chn->buf->p + 
msg->eoh, text, len);

(gdb) p msg->eoh
$6 = 8057
(gdb) p *msg->chn->buf
$7 = {p = 0x7f8e7b44bf9e "3456789.123456789\n", 'P' ..., 
size = 10008, i = 0, o = 8058, data = 0x7f8e7b44a024 "GET /1234567"}

(gdb) p msg->chn->buf->p - msg->chn->buf->data
$8 = 8058

As one may notice, since p is already 8kB from the beginning of the buffer
(hence 2kB from the end), writing at p + eoh is definitely wrong. So either
msg->eoh is wrong or buf->p is wrong.
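
To make the arithmetic concrete, a minimal sketch using the values from the
gdb dump above (the struct is a simplified stand-in for illustration, not
HAProxy's actual buffer definition):

#include <assert.h>
#include <stdio.h>

/* Simplified view of the buffer state shown in the gdb dump. */
struct buf_view {
    long p_off;  /* buf->p - buf->data: offset of the input pointer */
    long size;   /* total storage size */
};

int main(void)
{
    struct buf_view buf = { .p_off = 8058, .size = 10008 };
    long eoh = 8057;  /* msg->eoh from the dump */

    /* buffer_insert_line2() writes at p + eoh; a sane buffer needs
     * p_off + eoh <= size, but here the sum is 16115. */
    long write_off = buf.p_off + eoh;
    printf("write offset %ld vs buffer size %ld\n", write_off, buf.size);
    assert(write_off <= buf.size);  /* fails: ~6kB past the end */
    return 0;
}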

My opinion here is that buf->p is the wrong one, since we're dealing with an
8kB request, so it should definitely have been realigned. Or maybe it was
stripped and removed from the request buffer with HTTP processing still
enabled.

All this part is still totally unclear to me, I'm afraid. I suggest that we
don't rush too fast on Lua services and try to fix that during the stable
cycle. I don't want to postpone the release any further for something that
was added very recently and that is not causing any regression to existing
configs.

Best regards,
willy




Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread Willy Tarreau
Hi guys,

On Tue, Oct 13, 2015 at 10:07:33AM +0200, Thierry FOURNIER wrote:
> I'm silently following the thread :) I can't reproduce the issue; I
> supposed that is because it appears on FreeBSD, but I suppose it
> will appear on Linux as well.
> 
> Yesterday my laptop ran out of battery, so it rebooted. I lost my
> browsers with hundreds of internet tabs, which freed some RAM, so I can
> start a FreeBSD VM without swapping (I don't have a simple life).
> 
> I'm currently installing a FreeBSD distribution, hoping to reproduce the
> problem.

I can't reproduce it either, unfortunately. I'm seeing some other minor
issues related to how the closed input is handled and showing that
pipelining doesn't work (only the first request is handled), but that's
all I'm seeing, I'm sorry.

I've tried injecting on stats in parallel to the other frontend, I've
tried with close and keep-alive etc... I tried to change the poller
just in case you would be facing a race condition, no way :-(

In general it's good to keep in mind that buffer_slow_realign() is
called to realign wrapped requests, so that normally means that
pipelining is needed. But even then, for now I can't reproduce it.

Willy




Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread PiBa-NL

Hi Willy, Thierry, others,

On 13-10-2015 at 18:29, Willy Tarreau wrote:

Hi again :-)

On Tue, Oct 13, 2015 at 06:10:33PM +0200, Willy Tarreau wrote:

I can't reproduce it either, unfortunately. I'm seeing some other minor
issues related to how the closed input is handled and showing that
pipelining doesn't work (only the first request is handled), but that's
all I'm seeing, I'm sorry.

I've tried injecting on stats in parallel to the other frontend, I've
tried with close and keep-alive etc... I tried to change the poller
just in case you would be facing a race condition, no way :-(

In general it's good to keep in mind that buffer_slow_realign() is
called to realign wrapped requests, so that normally means that
pipelining is needed. But even then, for now I can't reproduce it.

As usual, sending an e-mail scares the bug and it starts to wave the
white flag :-)

So by configuring the buffer size to 1 and sending large 8kB requests,
I'm seeing random behaviour. First, most of the time I end up with a
stuck session which never ends (no expiration timer set). And from time
to time it may crash. This time it was not in buffer_slow_realign() but
in buffer_insert_line2(), though the problem is the same:

(gdb) up
#2  0x0046e094 in http_header_add_tail2 (msg=0x7ce628, hdr_idx=0x7ce5c8, 
text=0x53b339 "Connection: close", len=17) at src/proto_http.c:595
595 bytes = buffer_insert_line2(msg->chn->buf, msg->chn->buf->p + 
msg->eoh, text, len);

(gdb) p msg->eoh
$6 = 8057
(gdb) p *msg->chn->buf
$7 = {p = 0x7f8e7b44bf9e "3456789.123456789\n", 'P' ..., size = 10008, 
i = 0, o = 8058, data = 0x7f8e7b44a024 "GET /1234567"}

(gdb) p msg->chn->buf->p - msg->chn->buf->data
$8 = 8058

As one may notice, since p is already 8kB from the beginning of the buffer
(hence 2kB from the end), writing at p + eoh is definitely wrong. So either
msg->eoh is wrong or buf->p is wrong.

My opinion here is that buf->p is the wrong one, since we're dealing with an
8kB request, so it should definitely have been realigned. Or maybe it was
stripped and removed from the request buffer with HTTP processing still
enabled.

All this part is still totally unclear to me, I'm afraid. I suggest that we
don't rush too fast on Lua services and try to fix that during the stable
cycle. I don't want to postpone the release any further for something that
was added very recently and that is not causing any regression to existing
configs.

Best regards,
willy

OK, got some good news here :).. the 1.6.0 release no longer has the error I
encountered.


The commit below fixed the issue already.
--
CLEANUP: cli: ensure we can never double-free error messages
http://git.haproxy.org/?p=haproxy.git;a=commit;h=6457d0fac304b7bba3e8af13501bf5ecf82bfa67
--

I was still testing with 1.6-dev7; the fix above came the day after..
Probably you're testing with HEAD, which is why it doesn't happen for you.
Using snapshots or HEAD is not as easy as just following dev releases..
So I usually stick to those unless I have reason to believe a newer
version might already fix it. I should have tested again sooner, sorry..
(I actually did test the latest snapshot when I first reported
the issue..)


Anyway, I burned more hours on both your side and mine than was
probably needed.

One more issue gone :)

Thanks for the support!

PiBa-NL



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread PiBa-NL

Hi Willy,

On 12-10-2015 at 7:28, Willy Tarreau wrote:

Hi Pieter,

On Mon, Oct 12, 2015 at 01:22:48AM +0200, PiBa-NL wrote:

#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0

I'm puzzled by this above, no block should have a negative size.


#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
   at src/proto_http.c:2686
   cur_idx = -6336
   sess = (struct session *) 0x80241e400
   txn = (struct http_txn *) 0x802bb2140
   msg = (struct http_msg *) 0x802bb21a0
   ctx = {line = 0x2711079 <Address 0x2711079 out of bounds>, idx = 3, val = 0, vlen = 7, tws = 0, del = 33, prev = 0}

And this above, similarly cur_idx shouldn't be negative.


It seems that buffer_slow_realign() isn't used regularly during normal
haproxy operation, and it crashes the first time that specific function
gets called.
Reproduction is pretty consistent with a Chrome browser refreshing stats
every second.
Then starting: wrk -c 200 -t 2 -d 10 http://127.0.0.1:801/
I tried adding some Alert() calls in the code to see what parameters
are set at each step, but I don't understand the exact internals of
that code..

This negative bh=-7800 is not supposed to be there, I think? It is from
one of the DPRINTF statements; how are those supposed to generate output?..
[891069718] http_wait_for_request: stream=0x80247d600 b=0x80247d610,
exp(r,w)=0,0 bf=00c08200 bh=-7800 analysers=34

Anything else I can check or provide to help get this fixed?

Best regards,
PiBa-NL

Just a little 'bump' to this issue..

Anyone know when/how this buffer_slow_realign() is supposed to work?

Yes, it's supposed to be used only when a request or response is wrapped
in the request or response buffer. It uses memcpy(), hence the "slow"
aspect of the realign.


I suspect it either contains a bug, or is called with bogus parameters..

It's very sensitive to the consistency of the buffer being realigned. So
errors such as buf->i + buf->o > buf->size, or buf->p > buf->data + buf->size,
or buf->p < buf->data, etc., can lead to crashes. But these must never happen
at all, otherwise it proves that there's a bug somewhere else.

Here since block1 is -3306 and block2 = 0, I suspect that they were assigned
at line 159 from buf->i, which definitely means that the buffer was already
corrupted.


How can we/I determine which it is?

The difficulty consists in finding what can lead to a corrupted buffer :-/
In the past we had such issues when trying to forward more data than was
available in the buffer, due to option send-name-header. I wouldn't be
surprised if it can happen here in corner cases when building a message
from Lua, if the various message pointers are not all perfectly correct.


Even though with a small change in the config (adding a backend) I can't
reproduce it, that doesn't mean there isn't a problem with the function..
as the whole function doesn't seem to get called in that circumstance..

It could be related to an uninitialized variable somewhere as well. You
can try to start haproxy with "-dM" to see if it makes the issues 100%

-dM doesn't seem to make much difference, if any, in this case..

reproducible or not. This poisons all buffers (fills them with a constant
byte 0x50 after malloc) so that we don't rely on an uninitialized zero byte
somewhere.

Regards,
Willy

Been running some more tests with the information that req->buf->i 
should be >= 0.


What I find is that after 1 request I already see rqh=-103; it seems
like the initial request size, which in this case is 103 bytes, is
subtracted twice? It does not immediately crash, but if this is already
a sign of 'corruption' then the cause should be a little easier to find..


@Willy, can you confirm this indicates the problem could be heading
towards a crash? Even though in the last line it returns to 0..


See the full output below; I replaced the DPRINTF calls already present in
stream.c with Alert()..


Thanks in advance,
PiBa-NL

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -f 
/var/haproxy.config -d

[ALERT] 277/063055 (61489) : SSLv3 support requested but unavailable.
Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
:http_frt.accept(0006)=0008 from [127.0.0.1:62358]
[ALERT] 277/063058 (61489) : [910446771] process_stream:1655: 
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610, 
rp=0x80247d650, exp(r,w)=0,0 rqf=00d08000 rpf=8000 rqh=0 rqt=0 rph=0 
rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446771] http_wait_for_request: 
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00d08000 bh=0 analysers=34
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655: 
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610, 
rp=0x80247d650, 

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread Willy Tarreau
Hi Pieter,

On Mon, Oct 12, 2015 at 10:29:05PM +0200, PiBa-NL wrote:
> Been running some more tests with the information that req->buf->i 
> should be >= 0.
> 
> What I find is that after 1 request I already see rqh=-103; it seems 
> like the initial request size, which in this case is 103 bytes, is 
> subtracted twice? It does not immediately crash, but if this is already 
> a sign of 'corruption' then the cause should be a little easier to 
> find..

Oh yes definitely, good catch!

> @Willy, can you confirm this indicates the problem could be heading
> towards a crash? Even though in the last line it returns to 0..

Absolutely. Every time we're trying to track such a painful bug, I end
up looking for initial symptoms, like this one. In general the first
corruption is minor and undetected, but once it has seeded its disease,
the problem will definitely occur.

> See the full output below; I replaced the DPRINTF calls already present in
> stream.c with Alert()..
> 
> Thanks in advance,
> PiBa-NL
> 
> root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -f 
> /var/haproxy.config -d
> [ALERT] 277/063055 (61489) : SSLv3 support requested but unavailable.
> Available polling systems :
>  kqueue : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result FAILED
> Total: 3 (2 usable), will use kqueue.
> Using kqueue() as the polling mechanism.
> :http_frt.accept(0006)=0008 from [127.0.0.1:62358]
> [ALERT] 277/063058 (61489) : [910446771] process_stream:1655: 
> task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610, 
> rp=0x80247d650, exp(r,w)=0,0 rqf=00d08000 rpf=8000 rqh=0 rqt=0 rph=0 
> rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
> [ALERT] 277/063058 (61489) : [910446771] http_wait_for_request: 
> stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00d08000 bh=0 
> analysers=34
> [ALERT] 277/063058 (61489) : [910446772] process_stream:1655: 
> task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610, 
> rp=0x80247d650, exp(r,w)=0,0 rqf=0002 rpf=8000 rqh=103 rqt=0 
> rph=0 rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
> [ALERT] 277/063058 (61489) : [910446772] http_wait_for_request: 
> stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103 
> analysers=34
> :http_frt.clireq[0008:]: GET / HTTP/1.1
> :http_frt.clihdr[0008:]: Host: 127.0.0.1:801
> :http_frt.clihdr[0008:]: Accept: */*
> :http_frt.clihdr[0008:]: User-Agent: fetch libfetch/2.0
> :http_frt.clihdr[0008:]: Connection: close
> [ALERT] 277/063058 (61489) : [910446772] process_switching_rules: 
> stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103 
> analysers=00
> [ALERT] 277/063058 (61489) : [910446772] sess_prepare_conn_req: 
> sess=0x80247d600 rq=0x80247d610, rp=0x80247d650, exp(r,w)=0,0 
> rqf=00808002 rpf=8000 rqh=0 rqt=103 rph=0 rpt=0 cs=7 ss=1
> [ALERT] 277/063058 (61489) : [910446772] process_stream:1655: 
> task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610, 
> rp=0x80247d650, exp(r,w)=910456772,0 rqf=00808200 rpf=8023 rqh=0 
> rqt=0 rph=97 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0
> :http_frt.srvrep[0008:]: HTTP/1.1 200 OK
> :http_frt.srvhdr[0008:]: content-length: 13
> :http_frt.srvhdr[0008:]: content-type: text/plain
> :http_frt.srvhdr[0008:]: Connection: close
> [ALERT] 277/063058 (61489) : [910446773] process_stream:1655: 
> task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610, 
> rp=0x80247d650, exp(r,w)=910456773,0 rqf=00808200 rpf=8004a221 rqh=-103 
> rqt=0 rph=0 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0

Yep so here rqh is in fact req->buf->i and as you noticed it's been
decremented a second time.

I'm seeing this which I find suspicious in hlua.c :

  5909
  5910  /* skip the requests bytes. */
  5911  bo_skip(si_oc(si), strm->txn->req.eoh + 2);

First, I don't understand why "eoh+2"; I suspect it's for the CRLF,
in which case it's wrong since it can be a lone LF. Second, I'm not
seeing sov being reset afterwards. Could you please just add this
after this line:

   strm->txn->req.next -= strm->txn->req.sov;
   strm->txn->req.sov = 0;

That's equivalent to what we're doing when dealing with a redirect (http.c:4258
if you're curious) since we also have to "eat" the request. There may be a few
other corner cases; the use-service mechanism is fairly new and puts its feet
in a place where things used to work just because they were trained to... But
it's a terribly powerful thing to have, so we must fix it even if it needs a
few -stable cycles.
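
In context, a sketch of where those two suggested lines would land in hlua.c
(the existing line and the addition are taken verbatim from this thread;
abbreviated and illustrative, and as the follow-ups note, it did not end up
resolving the crash):

/* hlua.c, around line 5910 in 1.6-dev (abbreviated) */

/* skip the requests bytes. */
bo_skip(si_oc(si), strm->txn->req.eoh + 2);

/* suggested addition: rewind the parsing pointers so the skipped
 * request bytes are not accounted for (and subtracted) again */
strm->txn->req.next -= strm->txn->req.sov;
strm->txn->req.sov = 0;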

Thanks!
Willy




Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread Willy Tarreau
On Tue, Oct 13, 2015 at 12:32:08AM +0200, PiBa-NL wrote:
> >Yep so here rqh is in fact req->buf->i and as you noticed it's been
> >decremented a second time.
> >
> >I'm seeing this which I find suspicious in hlua.c :
> >
> >   5909
> >   5910  /* skip the requests bytes. */
> >   5911  bo_skip(si_oc(si), strm->txn->req.eoh + 2);
> >
> >First, I don't understand why "eoh+2"; I suspect it's for the CRLF,
> >in which case it's wrong since it can be a lone LF. Second, I'm not
> >seeing sov being reset afterwards. Could you please just add this
> >after this line:
> >
> >strm->txn->req.next -= strm->txn->req.sov;
> >strm->txn->req.sov = 0;
> This did not seem to resolve the issue.

OK thanks for testing at least!

> If you have any other idea where it might go wrong, please let me know :)
> I'll try to dig a little further tomorrow evening.

Unfortunately, no, I don't have any other idea. At this point I think
I'll have to discuss this with Thierry. We're in a situation
where I know pretty well how HTTP forwarding works, while he knows
very well how Lua works, but both of us have a very unclear idea of
the other one's part, so that doesn't help much :-/

I'm still working on the management doc that I hoped to finish by today
and which seems to be taking longer, so the release might be deferred to
tomorrow; maybe in the meantime we'll have the opportunity to find something
odd, but I prefer not to put my nose in there myself or it will further
postpone the doc and the release which is waiting for it.

Thanks for your feedback!

Willy
 



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread PiBa-NL

Hi Willy,

On 12-10-2015 at 23:06, Willy Tarreau wrote:

Hi Pieter,

On Mon, Oct 12, 2015 at 10:29:05PM +0200, PiBa-NL wrote:

Been running some more tests with the information that req->buf->i
should be >= 0.

What I find is that after 1 request I already see rqh=-103; it seems
like the initial request size, which in this case is 103 bytes, is
subtracted twice? It does not immediately crash, but if this is already
a sign of 'corruption' then the cause should be a little easier to
find..

Oh yes definitely, good catch!


@Willy, can you confirm this indicates the problem could be heading
towards a crash? Even though in the last line it returns to 0..

Absolutely. Every time we're trying to track such a painful bug, I end
up looking for initial symptoms, like this one. In general the first
corruption is minor and undetected, but once it has seeded its disease,
the problem will definitely occur.


See the full output below; I replaced the DPRINTF calls already present in
stream.c with Alert()..

Thanks in advance,
PiBa-NL

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -f
/var/haproxy.config -d
[ALERT] 277/063055 (61489) : SSLv3 support requested but unavailable.
Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
:http_frt.accept(0006)=0008 from [127.0.0.1:62358]
[ALERT] 277/063058 (61489) : [910446771] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=0,0 rqf=00d08000 rpf=8000 rqh=0 rqt=0 rph=0
rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446771] http_wait_for_request:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00d08000 bh=0
analysers=34
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=0,0 rqf=0002 rpf=8000 rqh=103 rqt=0
rph=0 rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446772] http_wait_for_request:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103
analysers=34
:http_frt.clireq[0008:]: GET / HTTP/1.1
:http_frt.clihdr[0008:]: Host: 127.0.0.1:801
:http_frt.clihdr[0008:]: Accept: */*
:http_frt.clihdr[0008:]: User-Agent: fetch libfetch/2.0
:http_frt.clihdr[0008:]: Connection: close
[ALERT] 277/063058 (61489) : [910446772] process_switching_rules:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103
analysers=00
[ALERT] 277/063058 (61489) : [910446772] sess_prepare_conn_req:
sess=0x80247d600 rq=0x80247d610, rp=0x80247d650, exp(r,w)=0,0
rqf=00808002 rpf=8000 rqh=0 rqt=103 rph=0 rpt=0 cs=7 ss=1
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=910456772,0 rqf=00808200 rpf=8023 rqh=0
rqt=0 rph=97 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0
:http_frt.srvrep[0008:]: HTTP/1.1 200 OK
:http_frt.srvhdr[0008:]: content-length: 13
:http_frt.srvhdr[0008:]: content-type: text/plain
:http_frt.srvhdr[0008:]: Connection: close
[ALERT] 277/063058 (61489) : [910446773] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=910456773,0 rqf=00808200 rpf=8004a221 rqh=-103
rqt=0 rph=0 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0

Yep so here rqh is in fact req->buf->i and as you noticed it's been
decremented a second time.

I'm seeing this which I find suspicious in hlua.c :

   5909
   5910  /* skip the requests bytes. */
   5911  bo_skip(si_oc(si), strm->txn->req.eoh + 2);

First, I don't understand why "eoh+2"; I suspect it's for the CRLF,
in which case it's wrong since it can be a lone LF. Second, I'm not
seeing sov being reset afterwards. Could you please just add this
after this line:

strm->txn->req.next -= strm->txn->req.sov;
strm->txn->req.sov = 0;

This did not seem to resolve the issue.


That's equivalent to what we're doing when dealing with a redirect (http.c:4258
if you're curious) since we also have to "eat" the request. There may be a few
other corner cases; the use-service mechanism is fairly new and puts its feet
in a place where things used to work just because they were trained to... But
it's a terribly powerful thing to have, so we must fix it even if it needs a
few -stable cycles.

Thanks!
Willy


If you have any other idea where it might go wrong, please let me know :)
I'll try to dig a little further tomorrow evening.

Regards,
PiBa-NL



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-11 Thread PiBa-NL

Hi All,

On 7-10-2015 at 0:31, PiBa-NL wrote:

Hi Thierry,
On 6-10-2015 at 9:47, Thierry FOURNIER wrote:

On Mon, 5 Oct 2015 21:04:08 +0200
PiBa-NL  wrote:


Hi Thierry,

Hi Pieter,



With or without "option http-server-close" does not seem to make any
difference.


Sure, it was only an answer to Cyril's keep-alive problem. I ran into
the keep-alive problem again :(

The HAProxy applet (services) can't directly use keep-alive. The
service sends its response with an "internal" Connection: close. If you
activate debugging, you will see the header "connection: close".

You must configure HAProxy to use keep-alive between the frontend and
the client.
OK, well, without further specific configuration it is keeping
connections alive, but as that is the default, that's OK.


Adding an empty backend does seem to resolve the problem; the stats also
show the backend handling connections and tracking its 2xx HTTP result
session totals when configured like this:

frontend http_frt
mode http
bind :801
http-request use-service lua.hello-world
default_backend http-lua-service
backend http-lua-service
mode http


I can't reproduce the problem with the latest dev version. But I
recognize the backtrace; I already encountered the same one. I believed
it was fixed in dev6 :(

Using dev7 I can still reproduce it..

I tried to bench with my HTTP injector, and I tried ab with and
without keep-alive. I also tried to stress the admin page, and I can't
reproduce the problem.

Argh, I see a major difference: you use FreeBSD. I don't have the
environment for testing it. I must install a VM.



On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. In the meantime, I propose that you set
"option http-server-close". Actually, the "services" don't support
keep-alive themselves, but HAProxy does this job.

The "option http-server-close" expects a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and haproxy.

This method has a limitation: if some servers are declared in the
backend, "option http-server-close" forbids keep-alive between
haproxy and the server.

Can you test with this option?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté  wrote:


Hi,

On 01/10/2015 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a Chrome browser
while siege is running seems to crash it sooner..

Is the information below enough to find the cause? Anything else I should try?
This is embarrassing because with your configuration, I currently can't
reproduce a segfault, but I can reproduce another issue with HTTP
keep-alive requests!

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just compiled it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
   stats socket /tmp/hap.socket level admin
   maxconn 6
   lua-load /haproxy/brute/hello.lua

defaults
   timeout client 1
   timeout connect 1
   timeout server 1

frontend HAProxyLocalStats
   bind :2300 name localstats
   mode http
   stats enable
   stats refresh 1000
   stats admin if TRUE
   stats uri /
frontend http_frt
 bind :801
 mode http
 http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
expected behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)
  applet:add_header("content-type", "text/plain")
  applet:start_response()
  applet:send(response)
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
   at src/proto_http.c:2686
   cur_idx = -6336
   sess = (struct session *) 0x80241e400
   txn = (struct http_txn *) 0x802bb2140
   

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-11 Thread Willy Tarreau
Hi Pieter,

On Mon, Oct 12, 2015 at 01:22:48AM +0200, PiBa-NL wrote:
> >>#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
> >>src/buffer.c:166
> >>   block1 = -3306
> >>   block2 = 0

I'm puzzled by this above, no block should have a negative size.

> >>#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
> >>req=0x80247d610, an_bit=4)
> >>   at src/proto_http.c:2686
> >>   cur_idx = -6336
> >>   sess = (struct session *) 0x80241e400
> >>   txn = (struct http_txn *) 0x802bb2140
> >>   msg = (struct http_msg *) 0x802bb21a0
> >>   ctx = {line = 0x2711079 <Address 0x2711079 out of bounds>, idx = 3, val = 0, vlen = 7, tws = 0, del = 33, prev = 0}

And this above, similarly cur_idx shouldn't be negative.

> >It seems that buffer_slow_realign() isn't used regularly during normal 
> >haproxy operation, and it crashes the first time that specific function 
> >gets called.
> >Reproduction is pretty consistent with a Chrome browser refreshing stats 
> >every second.
> >Then starting: wrk -c 200 -t 2 -d 10 http://127.0.0.1:801/
> >I tried adding some Alert() calls in the code to see what parameters 
> >are set at each step, but I don't understand the exact internals of 
> >that code..
> >
> >This negative bh=-7800 is not supposed to be there, I think? It is from 
> >one of the DPRINTF statements; how are those supposed to generate output?..
> >[891069718] http_wait_for_request: stream=0x80247d600 b=0x80247d610, 
> >exp(r,w)=0,0 bf=00c08200 bh=-7800 analysers=34
> >
> >Anything else I can check or provide to help get this fixed?
> >
> >Best regards,
> >PiBa-NL
> Just a little 'bump' to this issue..
> 
> Anyone know when/how this buffer_slow_realign() is supposed to work?

Yes, it's supposed to be used only when a request or response is wrapped
in the request or response buffer. It uses memcpy(), hence the "slow"
aspect of the realign.

> I suspect it either contains a bug, or is called with bogus parameters..

It's very sensitive to the consistency of the buffer being realigned. So
errors such as buf->i + buf->o > buf->size, or buf->p > buf->data + buf->size,
or buf->p < buf->data, etc., can lead to crashes. But these must never happen
at all, otherwise it proves that there's a bug somewhere else.

Here since block1 is -3306 and block2 = 0, I suspect that they were assigned
at line 159 from buf->i, which definitely means that the buffer was already
corrupted.
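
As a minimal sketch, the consistency conditions listed above can be written
as a standalone checker (the struct is simplified and illustrative, not
HAProxy's actual definition; field meanings follow the thread: data is the
storage start, p the input pointer, i the input count, o the output count):

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for HAProxy's struct buffer (illustrative only). */
struct buf_view {
    char  *data;  /* start of the storage area */
    char  *p;     /* pointer to the input data */
    size_t size;  /* total storage size */
    long   i;     /* input bytes following p */
    long   o;     /* output bytes preceding p */
};

/* True when the invariants quoted above hold; buffer_slow_realign()
 * can crash whenever one of them is violated. */
static bool buf_is_consistent(const struct buf_view *b)
{
    if (b->i < 0 || b->o < 0)                       /* negative counts: corruption */
        return false;
    if ((size_t)(b->i + b->o) > b->size)            /* more data than room */
        return false;
    if (b->p < b->data || b->p > b->data + b->size) /* p outside the storage */
        return false;
    return true;
}

int main(void)
{
    char storage[16384];
    /* The crash case: buf->i already negative (block1 = -3306 came from it). */
    struct buf_view bad = { storage, storage, sizeof(storage), -3306, 0 };
    printf("consistent: %s\n", buf_is_consistent(&bad) ? "yes" : "no");
    return 0;
}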

> How can we/I determine which it is?

The difficulty consists in finding what can lead to a corrupted buffer :-/
In the past we had such issues when trying to forward more data than was
available in the buffer, due to option send-name-header. I wouldn't be
surprised if it can happen here in corner cases when building a message
from Lua, if the various message pointers are not all perfectly correct.

> Even though with a small change in the config (adding a backend) I can't 
> reproduce it, that doesn't mean there isn't a problem with the function.. 
> as the whole function doesn't seem to get called in that circumstance..

It could be related to an uninitialized variable somewhere as well. You
can try to start haproxy with "-dM" to see if it makes the issues 100%
reproducible or not. This poisons all buffers (fills them with a constant
byte 0x50 after malloc) so that we don't rely on an uninitialized zero byte
somewhere.
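
Conceptually, the -dM poisoning amounts to something like this (a sketch,
not HAProxy's actual allocator):

#include <stdlib.h>
#include <string.h>

/* Fill every fresh allocation with a constant byte so code that silently
 * relies on malloc() returning zeroed memory misbehaves deterministically. */
static void *poisoned_malloc(size_t size)
{
    void *ptr = malloc(size);
    if (ptr)
        memset(ptr, 0x50, size); /* 0x50 is 'P', the byte mentioned above */
    return ptr;
}

int main(void)
{
    char *buf = poisoned_malloc(64); /* buf[0..63] are all 0x50, never 0 */
    free(buf);
    return 0;
}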

Regards,
Willy




Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-06 Thread PiBa-NL

Hi Thierry,
On 6-10-2015 at 9:47, Thierry FOURNIER wrote:

On Mon, 5 Oct 2015 21:04:08 +0200
PiBa-NL  wrote:


Hi Thierry,

Hi Pieter,



With or without "option http-server-close" does not seem to make any
difference.


Sure, it was only an answer to Cyril's keep-alive problem. I ran into
the keep-alive problem again :(

The HAProxy applet (services) can't directly use keep-alive. The
service sends its response with an "internal" Connection: close. If you
activate debugging, you will see the header "connection: close".

You must configure HAProxy to use keep-alive between the frontend and
the client.
OK, well, without further specific configuration it is keeping
connections alive, but as that is the default, that's OK.



Adding an empty backend does seem to resolve the problem; the stats also show
the backend handling connections and tracking its 2xx HTTP result
session totals when configured like this:

frontend http_frt
mode http
bind :801
http-request use-service lua.hello-world
default_backend http-lua-service
backend http-lua-service
mode http


I can't reproduce the problem with the latest dev version. But I
recognize the backtrace; I already encountered the same one. I believed
it was fixed in dev6 :(

Using dev7 I can still reproduce it..

I tried to bench with my HTTP injector, and I tried ab with and
without keep-alive. I also tried to stress the admin page, and I can't
reproduce the problem.

Argh, I see a major difference: you use FreeBSD. I don't have the
environment for testing it. I must install a VM.



On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. In the meantime, I propose that you set
"option http-server-close". Actually, the "services" don't support
keep-alive themselves, but HAProxy does this job.

The "option http-server-close" expects a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and haproxy.

This method has a limitation: if some servers are declared in the
backend, "option http-server-close" forbids keep-alive between
haproxy and the server.

Can you test with this option?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté  wrote:


Hi,

On 01/10/2015 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a Chrome browser
while siege is running seems to crash it sooner..

Is the information below enough to find the cause? Anything else I should try?

This is embarrassing because with your configuration, I currently can't
reproduce a segfault, but I can reproduce another issue with HTTP
keep-alive requests!

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just compiled it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
   stats socket /tmp/hap.socket level admin
   maxconn 6
   lua-load /haproxy/brute/hello.lua

defaults
   timeout client 1
   timeout connect 1
   timeout server 1

frontend HAProxyLocalStats
   bind :2300 name localstats
   mode http
   stats enable
   stats refresh 1000
   stats admin if TRUE
   stats uri /
frontend http_frt
 bind :801
 mode http
 http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
expected behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)
  applet:add_header("content-type", "text/plain")
  applet:start_response()
  applet:send(response)
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
   at src/proto_http.c:2686
   cur_idx = -6336
   sess = (struct session *) 0x80241e400
   txn = (struct http_txn *) 0x802bb2140
   msg = (struct http_msg *) 0x802bb21a0
 

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-06 Thread Thierry FOURNIER
On Mon, 5 Oct 2015 21:04:08 +0200
PiBa-NL  wrote:

> Hi Thierry,

Hi Pieter,


> With or without "option http-server-close" does not seem to make any 
> difference.


Sure, it was only an answer to Cyril's keep-alive problem. I ran into
the keep-alive problem again :(

The HAProxy applet (services) can't directly use keep-alive. The
service sends its response with an "internal" Connection: close. If you
activate debugging, you will see the header "connection: close".

You must configure HAProxy to use keep-alive between the frontend and
the client.


> Adding an empty backend does seem to resolve the problem; the stats also show 
> the backend handling connections and tracking its 2xx HTTP result 
> session totals when configured like this:
> 
> frontend http_frt
>mode http
>bind :801
>http-request use-service lua.hello-world
>default_backend http-lua-service
> backend http-lua-service
>mode http


I can't reproduce the problem with the latest dev version. But I
recognize the backtrace; I already encountered the same one. I believed
it was fixed in dev6 :(

I tried to bench with my HTTP injector, and I tried ab with and
without keep-alive. I also tried to stress the admin page, and I can't
reproduce the problem.

Argh, I see a major difference: you use FreeBSD. I don't have the
environment for testing it. I must install a VM.


> On 5-10-2015 at 16:06, Thierry FOURNIER wrote:
> > Hi,
> >
> > I will process this email later. In the meantime, I propose that you set
> > "option http-server-close". Actually, the "services" don't support
> > keep-alive themselves, but HAProxy does this job.
> >
> > The "option http-server-close" expects a server-close from the service
> > stream. The front of HAProxy maintains the keep-alive between the
> > client and haproxy.
> >
> > This method has a limitation: if some servers are declared in the
> > backend, "option http-server-close" forbids keep-alive between
> > haproxy and the server.
> >
> > Can you test with this option?
> >
> > Thierry
> >
> >
> >
> > On Thu, 1 Oct 2015 23:00:45 +0200
> > Cyril Bonté  wrote:
> >
> >> Hi,
> >>
> >> On 01/10/2015 20:52, PiBa-NL wrote:
> >>> Hi List,
> >>>
> >>> With the config below, while running 'siege' I get a core dump within a
> >>> few hundred requests.. Viewing the stats page from a Chrome browser
> >>> while siege is running seems to crash it sooner..
> >>>
> >>> Is the information below enough to find the cause? Anything else I should try?
> >> This is embarrassing because with your configuration, I currently can't
> >> reproduce a segfault, but I can reproduce another issue with HTTP
> >> keep-alive requests!
> >>
> >> (details below)
> >>
> >>> Using the haproxy snapshot from: 1.6-dev6 ss-20150930
> >>> Or perhaps I just compiled it wrong?..
> >>> make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes
> >>>
> >>> global
> >>>   stats socket /tmp/hap.socket level admin
> >>>   maxconn 6
> >>>   lua-load /haproxy/brute/hello.lua
> >>>
> >>> defaults
> >>>   timeout client 1
> >>>   timeout connect 1
> >>>   timeout server 1
> >>>
> >>> frontend HAProxyLocalStats
> >>>   bind :2300 name localstats
> >>>   mode http
> >>>   stats enable
> >>>   stats refresh 1000
> >>>   stats admin if TRUE
> >>>   stats uri /
> >>> frontend http_frt
> >>> bind :801
> >>> mode http
> >>> http-request use-service lua.hello-world
> >> Here, if I use :
> >> $ ab -n100 -c1 -k http://127.0.0.1:801/
> >> There will be a 1ms delay after the first request.
> >>
> >> Or with another test case :
> >> echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
> >> HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
> >> HTTP/1.1 200 OK
> >> content-type: text/plain
> >> Transfer-encoding: chunked
> >>
> >> d
> >> Hello World !
> >> 0
> >>
> >> => only 1 response
> >>
> >> Now, if I change "frontend http_frt" to "listen http_frt", I get the
> >> expected behaviour.
> >>
> >> The second test case with "listen" :
> >> echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
> >> HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
> >> HTTP/1.1 200 OK
> >> content-type: text/plain
> >> Transfer-encoding: chunked
> >>
> >> d
> >> Hello World !
> >> 0
> >>
> >> HTTP/1.1 200 OK
> >> content-type: text/plain
> >> Transfer-encoding: chunked
> >>
> >> d
> >> Hello World !
> >> 0
> >>
> >> => the 2 responses are returned
> >>
> >>> core.register_service("hello-world", "http", function(applet)
> >>>  local response = "Hello World !"
> >>>  applet:set_status(200)
> >>>  applet:add_header("content-type", "text/plain")
> >>>  applet:start_response()
> >>>  applet:send(response)
> >>> end )
> >>>
> >>> (gdb) bt full
> >>> #0  0x000801a2da75 in memcpy () from /lib/libc.so.7
> >>> No symbol table info available.
> >>> #1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
> >>> 

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-05 Thread PiBa-NL

Hi Thierry,

With or without "option http-server-close" does not seem to make any 
difference.


Adding an empty backend does seem to resolve the problem; the stats also show
the backend handling connections and tracking its 2xx HTTP result
session totals when configured like this:


frontend http_frt
  mode http
  bind :801
  http-request use-service lua.hello-world
  default_backend http-lua-service
backend http-lua-service
  mode http

On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. In the meantime, I propose that you set
"option http-server-close". Actually, the "services" don't support
keep-alive themselves, but HAProxy does this job.

The "option http-server-close" expects a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and haproxy.

This method has a limitation: if some servers are declared in the
backend, "option http-server-close" forbids keep-alive between
haproxy and the server.

Can you test with this option?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté  wrote:


Hi,

On 01/10/2015 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a Chrome browser
while siege is running seems to crash it sooner..

Is the information below enough to find the cause? Anything else I should try?

This is embarrassing because with your configuration, I currently can't
reproduce a segfault, but I can reproduce another issue with HTTP
keep-alive requests!

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just compiled it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
  stats socket /tmp/hap.socket level admin
  maxconn 6
  lua-load /haproxy/brute/hello.lua

defaults
  timeout client 1
  timeout connect 1
  timeout server 1

frontend HAProxyLocalStats
  bind :2300 name localstats
  mode http
  stats enable
  stats refresh 1000
  stats admin if TRUE
  stats uri /
frontend http_frt
bind :801
mode http
http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
expected behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


core.register_service("hello-world", "http", function(applet)
 local response = "Hello World !"
 applet:set_status(200)
 applet:add_header("content-type", "text/plain")
 applet:start_response()
 applet:send(response)
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
  block1 = -3306
  block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
  at src/proto_http.c:2686
  cur_idx = -6336
  sess = (struct session *) 0x80241e400
  txn = (struct http_txn *) 0x802bb2140
  msg = (struct http_msg *) 0x802bb21a0
  ctx = {line = 0x2711079 <Address 0x2711079 out of bounds>, idx = 3,
val = 0, vlen = 7, tws = 0, del = 33, prev = 0}
#3  0x004d55b1 in process_stream (t=0x80244b390) at
src/stream.c:1759
  max_loops = 199
  ana_list = 52
  ana_back = 52
  flags = 4227584
  srv = (struct server *) 0x0
  s = (struct stream *) 0x80247d600
  sess = (struct session *) 0x80241e400
  rqf_last = 8397312
  rpf_last = 2248179715
  rq_prod_last = 7
  rq_cons_last = 9
  rp_cons_last = 7
  rp_prod_last = 0
  req_ana_back = 8192
  req = (struct channel *) 0x80247d610
  res = (struct channel *) 0x80247d650
  si_f = (struct stream_interface *) 0x80247d7f8
  si_b = (struct stream_interface *) 0x80247d818
#4  0x0041fe78 in process_runnable_tasks () at src/task.c:238
  t = (struct task *) 0x80244b390
  max_processed = 0
#5  0x0040cc4e in run_poll_loop () at src/haproxy.c:1539
  next = 549107027
#6  0x0040daee in main (argc=4, argv=0x7fffeaf0) at
src/haproxy.c:1892
  err = 0
  retry = 200
  

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-05 Thread Thierry FOURNIER
Hi,

I will process this email later. In the meantime, I propose that you set
"option http-server-close". Actually, the "services" don't support
keep-alive themselves, but HAProxy does this job.

The "option http-server-close" expects a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and haproxy.

This method has a limitation: if some servers are declared in the
backend, "option http-server-close" forbids keep-alive between
haproxy and the server.

Can you test with this option?

Thierry
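
For reference, a minimal sketch of the configuration being proposed here,
reusing the frontend and service names from the report quoted below
(illustrative; the option can equally be placed in a defaults section):

frontend http_frt
    bind :801
    mode http
    option http-server-close
    http-request use-service lua.hello-world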



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté  wrote:

> Hi,
> 
> On 01/10/2015 20:52, PiBa-NL wrote:
> > Hi List,
> >
> > With the config below, while running 'siege' I get a core dump within a
> > few hundred requests.. Viewing the stats page from a Chrome browser
> > while siege is running seems to crash it sooner..
> >
> > Is the information below enough to find the cause? Anything else I should try?
> 
> This is embarrassing because with your configuration, I currently can't 
> reproduce a segfault, but I can reproduce another issue with HTTP 
> keep-alive requests!
> 
> (details below)
> 
> > Using the haproxy snapshot from: 1.6-dev6 ss-20150930
> > Or perhaps I just compiled it wrong?..
> > make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes
> >
> > global
> >  stats socket /tmp/hap.socket level admin
> >  maxconn 6
> >  lua-load /haproxy/brute/hello.lua
> >
> > defaults
> >  timeout client 1
> >  timeout connect 1
> >  timeout server 1
> >
> > frontend HAProxyLocalStats
> >  bind :2300 name localstats
> >  mode http
> >  stats enable
> >  stats refresh 1000
> >  stats admin if TRUE
> >  stats uri /
> > frontend http_frt
> >bind :801
> >mode http
> >http-request use-service lua.hello-world
> 
> Here, if I use :
> $ ab -n100 -c1 -k http://127.0.0.1:801/
> There will be a 1ms delay after the first request.
> 
> Or with another test case :
> echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET / 
> HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
> HTTP/1.1 200 OK
> content-type: text/plain
> Transfer-encoding: chunked
> 
> d
> Hello World !
> 0
> 
> => only 1 response
> 
> Now, if I change "frontend http_frt" to "listen http_frt", I get the 
> expected behaviour.
> 
> The second test case with "listen" :
> echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET / 
> HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
> HTTP/1.1 200 OK
> content-type: text/plain
> Transfer-encoding: chunked
> 
> d
> Hello World !
> 0
> 
> HTTP/1.1 200 OK
> content-type: text/plain
> Transfer-encoding: chunked
> 
> d
> Hello World !
> 0
> 
> => the 2 responses are returned
> 
> >
> > core.register_service("hello-world", "http", function(applet)
> > local response = "Hello World !"
> > applet:set_status(200)
> > applet:add_header("content-type", "text/plain")
> > applet:start_response()
> > applet:send(response)
> > end )
> >
> > (gdb) bt full
> > #0  0x000801a2da75 in memcpy () from /lib/libc.so.7
> > No symbol table info available.
> > #1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
> > src/buffer.c:166
> >  block1 = -3306
> >  block2 = 0
> > #2  0x00480c42 in http_wait_for_request (s=0x80247d600,
> > req=0x80247d610, an_bit=4)
> >  at src/proto_http.c:2686
> >  cur_idx = -6336
> >  sess = (struct session *) 0x80241e400
> >  txn = (struct http_txn *) 0x802bb2140
> >  msg = (struct http_msg *) 0x802bb21a0
> > ctx = {line = 0x2711079 <Address 0x2711079 out of bounds>, idx = 3,
> > val = 0, vlen = 7, tws = 0, del = 33, prev = 0}
> > #3  0x004d55b1 in process_stream (t=0x80244b390) at
> > src/stream.c:1759
> >  max_loops = 199
> >  ana_list = 52
> >  ana_back = 52
> >  flags = 4227584
> >  srv = (struct server *) 0x0
> >  s = (struct stream *) 0x80247d600
> >  sess = (struct session *) 0x80241e400
> >  rqf_last = 8397312
> >  rpf_last = 2248179715
> >  rq_prod_last = 7
> >  rq_cons_last = 9
> >  rp_cons_last = 7
> >  rp_prod_last = 0
> >  req_ana_back = 8192
> >  req = (struct channel *) 0x80247d610
> >  res = (struct channel *) 0x80247d650
> >  si_f = (struct stream_interface *) 0x80247d7f8
> >  si_b = (struct stream_interface *) 0x80247d818
> > #4  0x0041fe78 in process_runnable_tasks () at src/task.c:238
> >  t = (struct task *) 0x80244b390
> >  max_processed = 0
> > #5  0x0040cc4e in run_poll_loop () at src/haproxy.c:1539
> >  next = 549107027
> > #6  0x0040daee in main (argc=4, argv=0x7fffeaf0) at
> > src/haproxy.c:1892
> >  err = 0
> >  retry = 200
> >  limit = {rlim_cur = 120032, rlim_max = 120032}
> >  errmsg =
> > 

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-01 Thread Cyril Bonté

Hi,

On 01/10/2015 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a Chrome browser
while siege is running seems to crash it sooner..

Is the information below enough to find the cause? Anything else I should try?


This is embarrassing because with your configuration, I currently can't
reproduce a segfault, but I can reproduce another issue with HTTP
keep-alive requests!


(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just compiled it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
 stats socket /tmp/hap.socket level admin
 maxconn 6
 lua-load /haproxy/brute/hello.lua

defaults
 timeout client 1
 timeout connect 1
 timeout server 1

frontend HAProxyLocalStats
 bind :2300 name localstats
 mode http
 stats enable
 stats refresh 1000
 stats admin if TRUE
 stats uri /
frontend http_frt
   bind :801
   mode http
   http-request use-service lua.hello-world


Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case:
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET / 
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
expected behaviour.
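
To be explicit, only the proxy keyword changes; the section then reads:

listen http_frt
   bind :801
   mode http
   http-request use-service lua.hello-world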


The second test case with "listen":
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET / 
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned



core.register_service("hello-world", "http", function(applet)
local response = "Hello World !"
applet:set_status(200)
applet:add_header("content-type", "text/plain")
applet:start_response()
applet:send(response)
end )
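
As an aside, numbering the responses makes the pipelining tests above
easier to read, since you can see exactly which request each reply
belongs to. A hypothetical variant of the service (a sketch; it only
reuses the applet calls already shown):

-- hello.lua variant (sketch): a per-process counter so each response
-- in a pipelined/keep-alive test is distinguishable
local counter = 0
core.register_service("hello-world", "http", function(applet)
    counter = counter + 1
    local response = "Hello World ! #" .. counter
    applet:set_status(200)
    applet:add_header("content-type", "text/plain")
    applet:start_response()
    applet:send(response)
end)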

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
 block1 = -3306
 block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
 at src/proto_http.c:2686
 cur_idx = -6336
 sess = (struct session *) 0x80241e400
 txn = (struct http_txn *) 0x802bb2140
 msg = (struct http_msg *) 0x802bb21a0
 ctx = {line = 0x2711079 , idx
= 3, val = 0, vlen = 7, tws = 0,
   del = 33, prev = 0}
#3  0x004d55b1 in process_stream (t=0x80244b390) at
src/stream.c:1759
 max_loops = 199
 ana_list = 52
 ana_back = 52
 flags = 4227584
 srv = (struct server *) 0x0
 s = (struct stream *) 0x80247d600
 sess = (struct session *) 0x80241e400
 rqf_last = 8397312
 rpf_last = 2248179715
 rq_prod_last = 7
 rq_cons_last = 9
 rp_cons_last = 7
 rp_prod_last = 0
 req_ana_back = 8192
 req = (struct channel *) 0x80247d610
 res = (struct channel *) 0x80247d650
 si_f = (struct stream_interface *) 0x80247d7f8
 si_b = (struct stream_interface *) 0x80247d818
#4  0x0041fe78 in process_runnable_tasks () at src/task.c:238
 t = (struct task *) 0x80244b390
 max_processed = 0
#5  0x0040cc4e in run_poll_loop () at src/haproxy.c:1539
 next = 549107027
#6  0x0040daee in main (argc=4, argv=0x7fffeaf0) at
src/haproxy.c:1892
 err = 0
 retry = 200
 limit = {rlim_cur = 120032, rlim_max = 120032}
 errmsg =
"\000êÿÿÿ\177\000\000\030ëÿÿÿ\177\000\000ðêÿÿÿ\177\000\000\004\000\000\000\000\000\000\000Ðêÿÿÿ\177\000\000]A}\000\b\000\000\000pêÿÿÿ\177\000\000\000\000\000\000\000\000\000\000èêÿÿÿ\177\000\000\030ëÿÿÿ\177\000\000ðêÿÿÿ\177\000\000\004\000\000\000\000\000\000\000\220êÿÿ"

 pidfd = -1
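
Note block1 = -3306 in frame #1: memcpy() takes a size_t length, so a
negative int wraps to a huge unsigned value and the copy runs far past
the buffer, which matches the crash in frame #0. A minimal standalone
sketch of that failure mode (illustration only, not haproxy's actual
code; in the real trace the length went negative upstream):

#include <string.h>

/* Sketch: once a block length that should never be negative reaches
 * memcpy(), the int -> size_t conversion turns -3306 into roughly
 * 1.8e19 on 64-bit, and the copy faults. */
static void realign_sketch(char *to, const char *from, int block1)
{
    if (block1 <= 0)            /* guard added for this sketch only */
        return;
    memcpy(to, from, (size_t)block1);
}

int main(void)
{
    char from[32] = "GET / HTTP/1.1";
    char to[32];
    realign_sketch(to, from, -3306); /* would segfault without the guard */
    return 0;
}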

# haproxy -vv
[ALERT] 273/021153 (10691) : SSLv3 support requested but unavailable.
HA-Proxy version 1.6-dev6-10770fa 2015/09/29
Copyright 2000-2015 Willy Tarreau 

Build options :
   TARGET  = freebsd
   CPU = generic
   CC  = cc
   CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing
-DFREEBSD_PORTS
   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1
USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
Running on OpenSSL version : OpenSSL 

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-01 Thread PiBa-NL

Hi,
A small update on the repro:

On 01-10-2015 at 23:00, Cyril Bonté wrote:

Hi,

On 01/10/2015 at 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests. Viewing the stats page from a Chrome browser
while siege is running seems to make it crash sooner.
Viewing the stats page is actually required to break it, and "a few
hundred requests" was probably an optimistic estimate.
Using ab while the stats page refreshes every second in Chrome (despite
"stats refresh 1000", whose value is expressed in seconds if I am not
mistaken), haproxy sometimes crashes before 10.000 requests and
sometimes only after 25.000 have been handled.
That is with ab running with the following parameters; the 100
concurrent requests seem to be a requirement:

ab -r -c 100 -n 5 http://192.168.0.112:801/

If the stats page is not viewed in the meantime, haproxy will happily serve 100.000+ requests.
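
Something like this should reproduce it without a browser too (an
untested sketch, using the same addresses as above): poll the stats
page once a second while ab runs.

while true; do
    echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n" | \
        nc 192.168.0.112 2300 > /dev/null
    sleep 1
done &
ab -r -c 100 -n 5 http://192.168.0.112:801/
kill $!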


[... remainder of the quoted exchange trimmed; identical to Cyril Bonté's 2015-10-01 message above ...]

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-01 Thread PiBa-NL

Hi again,
Attached is a file with the console 'debug' output and its backtrace
(it looks more or less the same as before). This time it crashed after
only a 'few' 108 requests; maybe that helps?
This happened while running ab with keep-alive:
ab -k -r -c 100 -n 5 http://192.168.0.112:801/


On 01-10-2015 at 23:49, PiBa-NL wrote:

[... quoted text trimmed; identical to the two messages above ...]