Re: Simply adding a filter causes read error

2018-12-12 Thread flamesea12
Hi,

It's 100% reproducible on my CentOS 7 PC.

### Good Config start #
global
    maxconn 100
    daemon
    nbproc 2

defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s

frontend web
    bind *:8000

    mode http
    default_backend app
backend app
    mode http
    server nginx01 10.0.3.15:8080
### Good Config end #


And bad config


### Bad Config start #
global
    maxconn 100
    daemon
    nbproc 2

defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s

frontend web
    bind *:8000

    mode http
    default_backend app
backend app
    mode http
    filter compression
    server nginx01 10.0.3.15:8080
### Bad Config end #
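The only delta between the two configs is the single `filter compression` line in the backend. This can be checked mechanically; a small sketch (backend sections abbreviated, not part of the original report) diffs the two:

```python
# Diff the backend sections of the "good" and "bad" configs; the only
# added line should be the filter directive.
import difflib

good = ["backend app", "    mode http", "    server nginx01 10.0.3.15:8080"]
bad = ["backend app", "    mode http", "    filter compression",
       "    server nginx01 10.0.3.15:8080"]

# Keep only lines that were added (skip the '+++' file header).
delta = [l for l in difflib.unified_diff(good, bad, lineterm="")
         if l.startswith("+") and not l.startswith("+++")]
print(delta)  # ['+    filter compression']
```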


### Lua script used in wrk, a.lua: ###

local count = 0

request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end
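For readers unfamiliar with wrk's Lua API: the script above simply appends an incrementing counter to each request path, so every request has a unique cache-busting query string. A Python sketch of the same logic (the function name here is illustrative, not part of wrk):

```python
# Equivalent of a.lua's request() callback: each call produces a GET
# request with a unique, ever-increasing ?count= query string.
count = 0

def next_request():
    global count
    url = "/?count=%d" % count
    count += 1
    return ("GET", url)

print(next_request())  # ('GET', '/?count=0')
print(next_request())  # ('GET', '/?count=1')
```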


### Test 1 #

wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB
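As a quick cross-check of the Test 1 summary: dividing total requests by the measured duration roughly reproduces the reported Requests/sec (wrk computes it from exact per-thread timings, so it differs slightly), and the 20 read errors are a tiny fraction of the 157025 requests:

```python
# Sanity-check the wrk summary numbers from Test 1.
requests, duration_s, read_errors = 157025, 10.11, 20

rps = requests / duration_s
error_pct = 100.0 * read_errors / requests

print(round(rps, 2))        # ~15531.65, near the reported 15530.67
print(round(error_pct, 3))  # ~0.013 (% of requests hitting a read error)
```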


### Test 2 ###

Change

 filter compression

to

 filter trace

and update flt_trace.c by adding an early `return 0;` in `trace_attach`,
to avoid the performance hit from its many prints:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        return 0; /* skip this filter; the rest of the function is now unreachable */


Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB
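The two failing tests show nearly identical read-error rates, even though the trace filter was patched to return immediately. This supports the reporter's point that attaching a filter at all, not the compression work itself, correlates with the errors. A small check of the arithmetic:

```python
# Read-error rates for Test 1 (filter compression) and Test 2
# (no-op filter trace): (total requests, read errors) from the wrk output.
tests = {
    "filter compression": (157025, 20),
    "filter trace, no-op": (155800, 21),
}

rates = {name: 100.0 * errs / reqs for name, (reqs, errs) in tests.items()}
for name, rate in rates.items():
    print(name, round(rate, 3))  # both round to 0.013 (%)
```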


- Original Message -
> From: Willy Tarreau 
> To: flamese...@yahoo.co.jp
> Cc: Aleksandar Lazic ; "haproxy@formilux.org" 
> 
> Date: 2018/12/13, Thu 16:16
> Subject: Re: Simply adding a filter causes read error
> 
> Hi,
> 
> On Thu, Dec 13, 2018 at 03:48:57PM +0900, flamese...@yahoo.co.jp wrote:
>>  Hi again
>>  I tested against v1.8.15; the error persists.
> 
> It's very unclear what type of problem you're experiencing. Do you have
> a working and a non-working config as a starting point, and a way to
> reproduce the problem ? Also, are you seeing errors or anything special
> in your logs when you are facing the problem ?
> 
> Thanks,
> Willy
> 




Re: Simply adding a filter causes read error

2018-12-12 Thread Willy Tarreau
Hi,

On Thu, Dec 13, 2018 at 03:48:57PM +0900, flamese...@yahoo.co.jp wrote:
> Hi again
> I tested against v1.8.15; the error persists.

It's very unclear what type of problem you're experiencing. Do you have
a working and a non-working config as a starting point, and a way to
reproduce the problem ? Also, are you seeing errors or anything special
in your logs when you are facing the problem ?

Thanks,
Willy



Re: Simply adding a filter causes read error

2018-12-12 Thread flamesea12
Hi again
I tested against v1.8.15; the error persists.

 - Original Message -
 From: "flamese...@yahoo.co.jp" 
 To: Aleksandar Lazic ; "haproxy@formilux.org" 
 
 Date: 2018/12/7, Fri 22:59
 Subject: Re: Simply adding a filter causes read error
   
Hi
Thanks for the reply.
I have a test env with 3 identical servers (8-core CPU, 32 GB memory): one
for wrk, one for nginx, and one for haproxy.
The network looks like wrk => haproxy => nginx. I have tuned OS settings such
as open-file limits, etc.
The test HTML file is the default nginx index.html. There are no errors when
testing wrk => nginx or wrk => haproxy (no filter) => nginx.
Errors begin to appear as soon as I add a filter.
I considered whether compression itself was hurting performance, but that
cannot be the cause, because the request headers sent by wrk do not accept
compression.
I've even changed the following code:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        return 0; /* ignore this filter to avoid the performance hit from its many prints */

and tested with

    filter trace

This way there should be no performance impact, since the filter is
ignored at the very beginning.
But there are still read errors.
Please let me know if you need more information.

Thanks,
Thanks,

 - Original Message -
 From: Aleksandar Lazic 
 To: flamese...@yahoo.co.jp; "haproxy@formilux.org"  
 Date: 2018/12/7, Fri 22:12
 Subject: Re: Simply adding a filter causes read error
   
Hi.

Am 07.12.2018 um 08:37 schrieb flamese...@yahoo.co.jp:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration, the read errors still appear.

Can you please show us some logs from when the error appears?
Can you also tell us something about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are hitting some system limits, as compression requires more OS/hardware
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000 
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
>    - Original Message -
>    *From:* "flamese...@yahoo.co.jp" 
>    *To:* Aleksandar Lazic ; "haproxy@formilux.org"
>    
>    *Date:* 2018/12/7, Fri 09:06
>    *Subject:* Re: Simply adding a filter causes read error
> 
>    Hi,
> 
>    Thanks for the reply; I thought the mail format was corrupted.
> 
>    I tried option http-pretend-keepalive; the read error seems gone, but
>    timeout errors appeared (maybe because of wrk's 1000 connections).
> 
>    Thanks
> 
>        ----- Original Message -----
>        *From:* Aleksandar Lazic 
>        *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
>
>        *Date:* 2018/12/6, Thu 23:53
>        *Subject:* Re: Simply adding a filter causes read error
> 
>        Hi.
> 
>        Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp
>        <mailto:flamese...@yahoo.co.jp>:
>        > Hi,
>        >
>        > I have a haproxy(v1.8.14) in front of several nginx backends,
>        everything works
>        > fine until I add compression in haproxy.
> 
>        There is a similar thread about this topic.
> 
>        https://www.mail-archive.com/haproxy@formilux.org/msg31897.html 
> 
>        Can you try to add this option in your config and see if the problem is
>        gone.
> 
>        option http-pretend-keepalive
> 
>        Regards
>        Aleks
> 
>        > My config looks like this:
>        >
>        > ### Config start #
>        > global
>        >     maxconn         100
>        >     daemon
>        >     nbproc 2
>        >
>        > defaults
>        >     retries 3
>        >     option redispatch
>        >     timeout client  60s
>        >     timeout connect 60s
>        >     timeout server  60s
>        >     timeout http-request 60s
>        >     timeout http-keep-alive 60s
>        >
>        > frontend web
>        >     bind *:8000
>        >
>        >     mode http
>        >     default_backend app
>        > backend app
>        >     mode http
>        >     #filter compression
>        >     #filter trace 
>        >     server nginx01 10.0.3.15:8080
>        

Re: Simply adding a filter causes read error

2018-12-07 Thread flamesea12
Hi
Thanks for the reply.
I have a test env with 3 identical servers (8-core CPU, 32 GB memory): one
for wrk, one for nginx, and one for haproxy.
The network looks like wrk => haproxy => nginx. I have tuned OS settings such
as open-file limits, etc.
The test HTML file is the default nginx index.html. There are no errors when
testing wrk => nginx or wrk => haproxy (no filter) => nginx.
Errors begin to appear as soon as I add a filter.
I considered whether compression itself was hurting performance, but that
cannot be the cause, because the request headers sent by wrk do not accept
compression.
I've even changed the following code:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        return 0; /* ignore this filter to avoid the performance hit from its many prints */

and tested with

    filter trace

This way there should be no performance impact, since the filter is
ignored at the very beginning.
But there are still read errors.
Please let me know if you need more information.

Thanks,

 - Original Message -
 From: Aleksandar Lazic 
 To: flamese...@yahoo.co.jp; "haproxy@formilux.org"  
 Date: 2018/12/7, Fri 22:12
 Subject: Re: Simply adding a filter causes read error
   
Hi.

Am 07.12.2018 um 08:37 schrieb flamese...@yahoo.co.jp:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration, the read errors still appear.

Can you please show us some logs from when the error appears?
Can you also tell us something about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are hitting some system limits, as compression requires more OS/hardware
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000 
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
>    - Original Message -
>    *From:* "flamese...@yahoo.co.jp" 
>    *To:* Aleksandar Lazic ; "haproxy@formilux.org"
>    
>    *Date:* 2018/12/7, Fri 09:06
>    *Subject:* Re: Simply adding a filter causes read error
> 
>    Hi,
> 
>    Thanks for the reply; I thought the mail format was corrupted.
> 
>    I tried option http-pretend-keepalive; the read error seems gone, but
>    timeout errors appeared (maybe because of wrk's 1000 connections).
> 
>    Thanks
> 
>        - Original Message -
>        *From:* Aleksandar Lazic 
>        *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
>
>        *Date:* 2018/12/6, Thu 23:53
>        *Subject:* Re: Simply adding a filter causes read error
> 
>        Hi.
> 
>        Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp
>        <mailto:flamese...@yahoo.co.jp>:
>        > Hi,
>        >
>        > I have a haproxy(v1.8.14) in front of several nginx backends,
>        everything works
>        > fine until I add compression in haproxy.
> 
>        There is a similar thread about this topic.
> 
>        https://www.mail-archive.com/haproxy@formilux.org/msg31897.html 
> 
>        Can you try to add this option in your config and see if the problem is
>        gone.
> 
>        option http-pretend-keepalive
> 
>        Regards
>        Aleks
> 
>        > My config looks like this:
>        >
>        > ### Config start #
>        > global
>        >     maxconn         100
>        >     daemon
>        >     nbproc 2
>        >
>        > defaults
>        >     retries 3
>        >     option redispatch
>        >     timeout client  60s
>        >     timeout connect 60s
>        >     timeout server  60s
>        >     timeout http-request 60s
>        >     timeout http-keep-alive 60s
>        >
>        > frontend web
>        >     bind *:8000
>        >
>        >     mode http
>        >     default_backend app
>        > backend app
>        >     mode http
>        >     #filter compression
>        >     #filter trace 
>        >     server nginx01 10.0.3.15:8080
>        > ### Config end #
>        >
>        >
>        > Lua script used in wrk:
>        > a.lua:
>        >
>        > local count = 0
>        >
>        > request = function()
>        >     local url = "/?c

Re: Simply adding a filter causes read error

2018-12-07 Thread Aleksandar Lazic
Hi.

Am 07.12.2018 um 08:37 schrieb flamese...@yahoo.co.jp:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration, the read errors still appear.

Can you please show us some logs from when the error appears?
Can you also tell us something about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are hitting some system limits, as compression requires more OS/hardware
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
> - Original Message -
> *From:* "flamese...@yahoo.co.jp" 
> *To:* Aleksandar Lazic ; "haproxy@formilux.org"
>     
>     *Date:* 2018/12/7, Fri 09:06
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi,
> 
> Thanks for the reply; I thought the mail format was corrupted.
> 
> I tried option http-pretend-keepalive; the read error seems gone, but
> timeout errors appeared (maybe because of wrk's 1000 connections).
> 
> Thanks
> 
> - Original Message -
> *From:* Aleksandar Lazic 
>     *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
> 
> *Date:* 2018/12/6, Thu 23:53
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi.
> 
> Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp
> <mailto:flamese...@yahoo.co.jp>:
> > Hi,
> >
> > I have a haproxy(v1.8.14) in front of several nginx backends,
> everything works
> > fine until I add compression in haproxy.
> 
> There is a similar thread about this topic.
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg31897.html
> 
> Can you try to add this option in your config and see if the problem 
> is
> gone.
> 
> option http-pretend-keepalive
> 
> Regards
> Aleks
> 
> > My config looks like this:
> >
> > ### Config start #
> > global
> >     maxconn         100
> >     daemon
> >     nbproc 2
> >
> > defaults
> >     retries 3
> >     option redispatch
> >     timeout client  60s
> >     timeout connect 60s
> >     timeout server  60s
> >     timeout http-request 60s
> >     timeout http-keep-alive 60s
> >
> > frontend web
> >     bind *:8000
> >
> >     mode http
> >     default_backend app
> > backend app
> >     mode http
> >     #filter compression
> >     #filter trace 
> >     server nginx01 10.0.3.15:8080
> > ### Config end #
> >
> >
> > Lua script used in wrk:
> > a.lua:
> >
> > local count = 0
> >
> > request = function()
> >     local url = "/?count=" .. count
> >     count = count + 1
> >     return wrk.format(
> >     'GET',
> >     url
> >     )
> > end
> >
> >
> > 01. wrk test against nginx: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8080 <http://10.0.3.15:8080/>
> > Running 10s test @ http://10.0.3.15:8080 <http://10.0.3.15:8080/>
> >   2 threads and 1000 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency    34.83ms   17.50ms 260.52ms   76.48%
> >     Req/Sec    12.85k     2.12k   17.20k    62.63%
> >   255603 requests in 10.03s, 1.23GB read
> > Requests/sec:  25476.45
> > Transfer/sec:    125.49MB
> >
> >
> > 02. Wrk test against haproxy, no filters: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8000 <http://10.0.3.15:8000/>
> > Running 10s test @ http://10.0.3.15:8000 <http://10.0.3.15:8000/>

Re: Simply adding a filter causes read error

2018-12-06 Thread flamesea12
Hi
I tested more, and found that even with option http-pretend-keepalive enabled,
if I increase the test duration, the read errors still appear.

Running 3m test @ http://10.0.3.15:8000
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.84ms   56.36ms   1.34s    92.83%
    Req/Sec    23.11k     2.55k   50.64k    87.10%
  45986426 requests in 3.33m, 36.40GB read
  Socket errors: connect 0, read 7046, write 0, timeout 0
Requests/sec: 229817.63
Transfer/sec:    186.30MB
thanks
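The read-error rate for this 3-minute run is still small in relative terms, but the absolute count (7046) grows with test duration, which suggests the errors are not a fixed per-run artifact. The arithmetic:

```python
# Read-error rate for the 3-minute wrk run quoted above.
requests, read_errors = 45986426, 7046

rate_pct = 100.0 * read_errors / requests
print(round(rate_pct, 4))  # ~0.0153 (% of requests)
```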

 - Original Message -
 From: "flamese...@yahoo.co.jp" 
 To: Aleksandar Lazic ; "haproxy@formilux.org" 
 
 Date: 2018/12/7, Fri 09:06
 Subject: Re: Simply adding a filter causes read error
   
Hi,
Thanks for the reply; I thought the mail format was corrupted.
I tried option http-pretend-keepalive; the read error seems gone, but timeout
errors appeared (maybe because of wrk's 1000 connections).
Thanks

 - Original Message -
 From: Aleksandar Lazic 
 To: flamese...@yahoo.co.jp; "haproxy@formilux.org"  
 Date: 2018/12/6, Thu 23:53
 Subject: Re: Simply adding a filter causes read error
   
Hi.

Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp:
> Hi,
> 
> I have a haproxy(v1.8.14) in front of several nginx backends, everything works
> fine until I add compression in haproxy.

There is a similar thread about this topic.

https://www.mail-archive.com/haproxy@formilux.org/msg31897.html 

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive

Regards
Aleks

> My config looks like this:
> 
> ### Config start #
> global
>     maxconn         100
>     daemon
>     nbproc 2
> 
> defaults
>     retries 3
>     option redispatch
>     timeout client  60s
>     timeout connect 60s
>     timeout server  60s
>     timeout http-request 60s
>     timeout http-keep-alive 60s
> 
> frontend web
>     bind *:8000
> 
>     mode http
>     default_backend app
> backend app
>     mode http
>     #filter compression
>     #filter trace 
>     server nginx01 10.0.3.15:8080
> ### Config end #
> 
> 
> Lua script used in wrk:
> a.lua:
> 
> local count = 0
> 
> request = function()
>     local url = "/?count=" .. count
>     count = count + 1
>     return wrk.format(
>     'GET',
>     url
>     )
> end
> 
> 
> 01. wrk test against nginx: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8080 
> Running 10s test @ http://10.0.3.15:8080 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    34.83ms   17.50ms 260.52ms   76.48%
>     Req/Sec    12.85k     2.12k   17.20k    62.63%
>   255603 requests in 10.03s, 1.23GB read
> Requests/sec:  25476.45
> Transfer/sec:    125.49MB
> 
> 
> 02. Wrk test against haproxy, no filters: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    73.58ms  109.48ms   1.33s    97.39%
>     Req/Sec     7.83k     1.42k   11.95k    66.15%
>   155843 requests in 10.07s, 764.07MB read
> Requests/sec:  15476.31
> Transfer/sec:     75.88MB
> 
> 03. Wrk test against haproxy, add filter compression: read error
> 
> Change
> 
>     #filter compression
> ===>
>     filter compression
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    60.43ms   42.63ms   1.06s    91.54%
>     Req/Sec     7.86k     1.40k   10.65k    67.54%
>   157025 requests in 10.11s, 769.87MB read
>   Socket errors: connect 0, read 20, write 0, timeout 0
> Requests/sec:  15530.67
> Transfer/sec:     76.14MB
> 
> 04. Wrk test against haproxy, add filter trace, and update flt_trace.c:
> 
> static int
> trace_attach(struct stream *s, struct filter *filter)
> {
>         struct trace_config *conf = FLT_CONF(filter);
>         // add below
>        // ignore this filter to avoid the performance hit from its many prints
>         return 0; 
> 
> And change
>     #filter compression
>     #filter trace
> ===>
>     #filter compression
>     filter trace
> 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    64.88ms   77.91ms   1.09s    98.26%
>     Req/Sec     7.84k     1.47k   11.57k    67.71%
>   155800 requests in 10.05s, 763.86MB read
>   Socket errors: connect 0, read 21, write 0, timeout 0
> Requests/sec:  15509.93
> Transfer/sec:     76.04MB
> 
> 
> Is there any config error? Am I doing something wrong?
> 
> Thanks
> 



   
 

   


Re: Simply adding a filter causes read error

2018-12-06 Thread flamesea12
Hi,
Thanks for the reply; I thought the mail format was corrupted.
I tried option http-pretend-keepalive; the read error seems gone, but timeout
errors appeared (maybe because of wrk's 1000 connections).
Thanks

 - Original Message -
 From: Aleksandar Lazic 
 To: flamese...@yahoo.co.jp; "haproxy@formilux.org"  
 Date: 2018/12/6, Thu 23:53
 Subject: Re: Simply adding a filter causes read error
   
Hi.

Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp:
> Hi,
> 
> I have a haproxy(v1.8.14) in front of several nginx backends, everything works
> fine until I add compression in haproxy.

There is a similar thread about this topic.

https://www.mail-archive.com/haproxy@formilux.org/msg31897.html 

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive

Regards
Aleks

> My config looks like this:
> 
> ### Config start #
> global
>     maxconn         100
>     daemon
>     nbproc 2
> 
> defaults
>     retries 3
>     option redispatch
>     timeout client  60s
>     timeout connect 60s
>     timeout server  60s
>     timeout http-request 60s
>     timeout http-keep-alive 60s
> 
> frontend web
>     bind *:8000
> 
>     mode http
>     default_backend app
> backend app
>     mode http
>     #filter compression
>     #filter trace 
>     server nginx01 10.0.3.15:8080
> ### Config end #
> 
> 
> Lua script used in wrk:
> a.lua:
> 
> local count = 0
> 
> request = function()
>     local url = "/?count=" .. count
>     count = count + 1
>     return wrk.format(
>     'GET',
>     url
>     )
> end
> 
> 
> 01. wrk test against nginx: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8080 
> Running 10s test @ http://10.0.3.15:8080 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    34.83ms   17.50ms 260.52ms   76.48%
>     Req/Sec    12.85k     2.12k   17.20k    62.63%
>   255603 requests in 10.03s, 1.23GB read
> Requests/sec:  25476.45
> Transfer/sec:    125.49MB
> 
> 
> 02. Wrk test against haproxy, no filters: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    73.58ms  109.48ms   1.33s    97.39%
>     Req/Sec     7.83k     1.42k   11.95k    66.15%
>   155843 requests in 10.07s, 764.07MB read
> Requests/sec:  15476.31
> Transfer/sec:     75.88MB
> 
> 03. Wrk test against haproxy, add filter compression: read error
> 
> Change
> 
>     #filter compression
> ===>
>     filter compression
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    60.43ms   42.63ms   1.06s    91.54%
>     Req/Sec     7.86k     1.40k   10.65k    67.54%
>   157025 requests in 10.11s, 769.87MB read
>   Socket errors: connect 0, read 20, write 0, timeout 0
> Requests/sec:  15530.67
> Transfer/sec:     76.14MB
> 
> 04. Wrk test against haproxy, add filter trace, and update flt_trace.c:
> 
> static int
> trace_attach(struct stream *s, struct filter *filter)
> {
>         struct trace_config *conf = FLT_CONF(filter);
>         // add below
>        // ignore this filter to avoid the performance hit from its many prints
>         return 0; 
> 
> And change
>     #filter compression
>     #filter trace
> ===>
>     #filter compression
>     filter trace
> 
> Running 10s test @ http://10.0.3.15:8000 
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    64.88ms   77.91ms   1.09s    98.26%
>     Req/Sec     7.84k     1.47k   11.57k    67.71%
>   155800 requests in 10.05s, 763.86MB read
>   Socket errors: connect 0, read 21, write 0, timeout 0
> Requests/sec:  15509.93
> Transfer/sec:     76.04MB
> 
> 
> Is there any config error? Am I doing something wrong?
> 
> Thanks
> 



   


Re: Simply adding a filter causes read error

2018-12-06 Thread Aleksandar Lazic
Hi.

Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp:
> Hi,
> 
> I have a haproxy(v1.8.14) in front of several nginx backends, everything works
> fine until I add compression in haproxy.

There is a similar thread about this topic.

https://www.mail-archive.com/haproxy@formilux.org/msg31897.html

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive

Regards
Aleks

> My config looks like this:
> 
> ### Config start #
> global
>     maxconn         100
>     daemon
>     nbproc 2
> 
> defaults
>     retries 3
>     option redispatch
>     timeout client  60s
>     timeout connect 60s
>     timeout server  60s
>     timeout http-request 60s
>     timeout http-keep-alive 60s
> 
> frontend web
>     bind *:8000
> 
>     mode http
>     default_backend app
> backend app
>     mode http
>     #filter compression
>     #filter trace 
>     server nginx01 10.0.3.15:8080
> ### Config end #
> 
> 
> Lua script used in wrk:
> a.lua:
> 
> local count = 0
> 
> request = function()
>     local url = "/?count=" .. count
>     count = count + 1
>     return wrk.format(
>     'GET',
>     url
>     )
> end
> 
> 
> 01. wrk test against nginx: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8080
> Running 10s test @ http://10.0.3.15:8080
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    34.83ms   17.50ms 260.52ms   76.48%
>     Req/Sec    12.85k     2.12k   17.20k    62.63%
>   255603 requests in 10.03s, 1.23GB read
> Requests/sec:  25476.45
> Transfer/sec:    125.49MB
> 
> 
> 02. Wrk test against haproxy, no filters: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    73.58ms  109.48ms   1.33s    97.39%
>     Req/Sec     7.83k     1.42k   11.95k    66.15%
>   155843 requests in 10.07s, 764.07MB read
> Requests/sec:  15476.31
> Transfer/sec:     75.88MB
> 
> 03. Wrk test against haproxy, add filter compression: read error
> 
> Change
> 
>     #filter compression
> ===>
>     filter compression
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    60.43ms   42.63ms   1.06s    91.54%
>     Req/Sec     7.86k     1.40k   10.65k    67.54%
>   157025 requests in 10.11s, 769.87MB read
>   Socket errors: connect 0, read 20, write 0, timeout 0
> Requests/sec:  15530.67
> Transfer/sec:     76.14MB
> 
> 04. Wrk test against haproxy, add filter trace, and update flt_trace.c:
> 
> static int
> trace_attach(struct stream *s, struct filter *filter)
> {
>         struct trace_config *conf = FLT_CONF(filter);
>         // add below
>        // ignore this filter to avoid the performance hit from its many prints
>         return 0; 
> 
> And change
>     #filter compression
>     #filter trace
> ===>
>     #filter compression
>     filter trace
> 
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    64.88ms   77.91ms   1.09s    98.26%
>     Req/Sec     7.84k     1.47k   11.57k    67.71%
>   155800 requests in 10.05s, 763.86MB read
>   Socket errors: connect 0, read 21, write 0, timeout 0
> Requests/sec:  15509.93
> Transfer/sec:     76.04MB
> 
> 
> Is there any config error? Am I doing something wrong?
> 
> Thanks
> 




Re: Simply adding a filter causes read error

2018-12-06 Thread flamesea12
Sorry, please ignore this one with bad style. I will send another one.


 - Original Message -
 From: "flamese...@yahoo.co.jp" 
 To: "haproxy@formilux.org"  
 Date: 2018/12/6, Thu 23:20
 Subject: Simply adding a filter causes read error
   
Hi,
I have a haproxy (v1.8.14) in front of several nginx backends; everything works
fine until I add compression in haproxy.
My config looks like this:
### Config start #
global
    maxconn         100
    daemon
    nbproc 2

defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s

frontend web
    bind *:8000

    mode http
    default_backend app
backend app
    mode http
    #filter compression
    #filter trace 
    server nginx01 10.0.3.15:8080
### Config end #


Lua script used in wrk:
a.lua:

local count = 0

request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end

01. wrk test against nginx: everything is OK

wrk -c 1000 -s a.lua http://10.0.3.15:8080
Running 10s test @ http://10.0.3.15:8080
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.83ms   17.50ms 260.52ms   76.48%
    Req/Sec    12.85k     2.12k   17.20k    62.63%
  255603 requests in 10.03s, 1.23GB read
Requests/sec:  25476.45
Transfer/sec:    125.49MB


02. Wrk test against haproxy, no filters: everything is OK

wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.58ms  109.48ms   1.33s    97.39%
    Req/Sec     7.83k     1.42k   11.95k    66.15%
  155843 requests in 10.07s, 764.07MB read
Requests/sec:  15476.31
Transfer/sec:     75.88MB

03. Wrk test against haproxy, add filter compression: read error

Change

    #filter compression
===>
    filter compression

wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB

04. Wrk test against haproxy, add filter trace, and update flt_trace.c:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        // ignore this filter to avoid the performance hit from its many prints
        return 0; 

And change

    #filter compression
    #filter trace
===>
    #filter compression
    filter trace

Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB

Is there any config error? Am I doing something wrong?
Thanks


   


Simply adding a filter causes read error

2018-12-06 Thread flamesea12
Hi,
I have a haproxy (v1.8.14) in front of several nginx backends; everything works
fine until I add compression in haproxy.
My config looks like this:
### Config start #
global
    maxconn         100
    daemon
    nbproc 2

defaults
    retries 3
    option redispatch
    timeout client  60s
    timeout connect 60s
    timeout server  60s
    timeout http-request 60s
    timeout http-keep-alive 60s

frontend web
    bind *:8000

    mode http
    default_backend app
backend app
    mode http
    #filter compression
    #filter trace 
    server nginx01 10.0.3.15:8080
### Config end #


Lua script used in wrk:
a.lua:

local count = 0

request = function()
    local url = "/?count=" .. count
    count = count + 1
    return wrk.format(
    'GET',
    url
    )
end

01. wrk test against nginx: everything is OK

wrk -c 1000 -s a.lua http://10.0.3.15:8080
Running 10s test @ http://10.0.3.15:8080
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.83ms   17.50ms 260.52ms   76.48%
    Req/Sec    12.85k     2.12k   17.20k    62.63%
  255603 requests in 10.03s, 1.23GB read
Requests/sec:  25476.45
Transfer/sec:    125.49MB


02. Wrk test against haproxy, no filters: everything is OK

wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.58ms  109.48ms   1.33s    97.39%
    Req/Sec     7.83k     1.42k   11.95k    66.15%
  155843 requests in 10.07s, 764.07MB read
Requests/sec:  15476.31
Transfer/sec:     75.88MB

03. Wrk test against haproxy, add filter compression: read error

Change

    #filter compression
===>
    filter compression

wrk -c 1000 -s a.lua http://10.0.3.15:8000
Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.43ms   42.63ms   1.06s    91.54%
    Req/Sec     7.86k     1.40k   10.65k    67.54%
  157025 requests in 10.11s, 769.87MB read
  Socket errors: connect 0, read 20, write 0, timeout 0
Requests/sec:  15530.67
Transfer/sec:     76.14MB

04. Wrk test against haproxy, add filter trace, and update flt_trace.c:

static int
trace_attach(struct stream *s, struct filter *filter)
{
        struct trace_config *conf = FLT_CONF(filter);
        // ignore this filter to avoid the performance hit from its many prints
        return 0; 

And change

    #filter compression
    #filter trace
===>
    #filter compression
    filter trace

Running 10s test @ http://10.0.3.15:8000
  2 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.88ms   77.91ms   1.09s    98.26%
    Req/Sec     7.84k     1.47k   11.57k    67.71%
  155800 requests in 10.05s, 763.86MB read
  Socket errors: connect 0, read 21, write 0, timeout 0
Requests/sec:  15509.93
Transfer/sec:     76.04MB

Is there any config error? Am I doing something wrong?
Thanks