Re: Compare two variables in acl

2022-06-23 Thread Seena Fallah
Yes, exactly, that was what I wanted.
Thanks. :)

On Thu, 23 Jun 2022 at 15:50, Tim Düsterhus  wrote:

> Seena,
>
> On 6/22/22 19:57, Seena Fallah wrote:
> > I'm trying to compare two variables in an ACL, but it seems the one on the
> > right side is not evaluated and is treated as a literal string.
> > Is there an example of how I can compare two variables in HAProxy ACLs?
>
> Your question is very light on details (e.g. what you have attempted),
> but likely the `strcmp` converter does what you need.
>
> Best regards
> Tim Düsterhus
>
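
For reference, a minimal sketch of comparing two variables with the `strcmp`
converter, as Tim suggests. The variable names and the Host header check are
illustrative assumptions, not taken from the thread:

```
frontend fe_example
bind :8080
# store both values in transaction-scoped variables (illustrative names)
http-request set-var(txn.req_host) req.hdr(host),lower
http-request set-var(txn.expected_host) str(example.com)
# strcmp(<var>) compares the input sample against the variable's content
# and returns 0 when the two strings are identical
acl hosts_match var(txn.req_host),strcmp(txn.expected_host) eq 0
http-request deny unless hosts_match
```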


Compare two variables in acl

2022-06-22 Thread Seena Fallah
Hi,

I'm trying to compare two variables in an ACL, but it seems the one on the
right side is not evaluated and is treated as a literal string.
Is there an example of how I can compare two variables in HAProxy ACLs?

Testing on haproxy v2.6

Thanks.


Re: Rate Limiting with token/leaky bucket algorithm

2022-06-07 Thread Seena Fallah
Got it!
Thanks. Works like a charm =)

On Tue, 7 Jun 2022 at 17:50, Willy Tarreau  wrote:

> On Tue, Jun 07, 2022 at 01:51:06PM +0200, Seena Fallah wrote:
> > I also tried this one, but it gives me 20 req/s of 200 OK and the rest
> > 429 Too Many Requests:
> > ```
> > listen test
> > bind :8000
> > stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
> > acl exceeds_limit src_http_req_rate gt 100
> > http-request track-sc0 src unless exceeds_limit
> > http-request deny deny_status 429 if exceeds_limit
> > http-request return status 200 content-type "text/plain" lf-string "200 OK"
> > ```
> >
> > Maybe the "1s" isn't handled correctly? When I fetch the current value
> > of http_req_rate it is 100, so it makes sense that other requests get 429,
> > but actually only 20 req/s are getting "200" because http_req_rate is not
> > decreasing at the correct intervals!
>
> There is a reason for this, which is subtle: the counter is updated when
> the track action is performed. As such, each new request refreshes the
> counter and the counter reports the total number of *received* requests
> and not the number of accepted requests.
>
> There are different ways to deal with this, usually they involve a check
> *before* the track. With your config it's trivial since you're already
> using src_http_req_rate, which performs its own lookup. Just move the
> track-sc rule to the end and it should be OK.
>
> Willy
>
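
For reference, a minimal sketch of the config with Willy's suggestion applied:
the deny rule runs before the track rule, so only accepted requests refresh
the counter (untested, reconstructed from the thread above):

```
listen test
bind :8000
stick-table type ip size 100k expire 30s store http_req_rate(1s)
acl exceeds_limit src_http_req_rate gt 100
# check and deny first, using the table lookup done by src_http_req_rate
http-request deny deny_status 429 if exceeds_limit
# track afterwards, so only accepted requests increase http_req_rate
http-request track-sc0 src
http-request return status 200 content-type "text/plain" lf-string "200 OK"
```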


Re: Rate Limiting with token/leaky bucket algorithm

2022-06-07 Thread Seena Fallah
I also tried this one, but it gives me 20 req/s of 200 OK and the rest
429 Too Many Requests:
```
listen test
bind :8000
stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
acl exceeds_limit src_http_req_rate gt 100
http-request track-sc0 src unless exceeds_limit
http-request deny deny_status 429 if exceeds_limit
http-request return status 200 content-type "text/plain" lf-string "200 OK"
```

Maybe the "1s" isn't handled correctly? When I fetch the current value of
http_req_rate it is 100, so it makes sense that other requests get 429, but
actually only 20 req/s are getting "200" because http_req_rate is not
decreasing at the correct intervals!

On Fri, 3 Jun 2022 at 17:44, Seena Fallah  wrote:

> Do you see any diff between my conf and the one in the link? :/
>
> On Fri, 3 Jun 2022 at 17:37, Aleksandar Lazic  wrote:
>
>> Hi.
>>
>> On Fri, 3 Jun 2022 17:12:25 +0200
>> Seena Fallah  wrote:
>>
>> > When using the below config for 100 req/s rate limiting, after passing
>> > 100 req/s all of the requests are denied, not only the requests above 100 req/s!
>> > ```
>> > listen test
>> > bind :8000
>> > stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
>> > http-request track-sc0 src
>> > http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
>> > http-request return status 200 content-type "text/plain" lf-string "200 OK"
>> > ```
>> >
>> > Is there a way to deny only the requests above 100, not all of them?
>> > For example, if we have 1000 req/s, can 100 reqs get "200 OK" and the rest
>> > of them (900 reqs) get "429"?
>>
>> Yes.
>>
>> Here are some examples with explanation.
>> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
>>
>> Here are some search results; maybe some of the examples will help you too.
>> https://html.duckduckgo.com/html?q=haproxy%20rate%20limiting
>>
>> Regards
>> Alex
>>
>


Re: Rate Limiting with token/leaky bucket algorithm

2022-06-03 Thread Seena Fallah
Do you see any diff between my conf and the one in the link? :/

On Fri, 3 Jun 2022 at 17:37, Aleksandar Lazic  wrote:

> Hi.
>
> On Fri, 3 Jun 2022 17:12:25 +0200
> Seena Fallah  wrote:
>
> > When using the below config for 100 req/s rate limiting, after passing
> > 100 req/s all of the requests are denied, not only the requests above 100 req/s!
> > ```
> > listen test
> > bind :8000
> > stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
> > http-request track-sc0 src
> > http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
> > http-request return status 200 content-type "text/plain" lf-string "200 OK"
> > ```
> >
> > Is there a way to deny only the requests above 100, not all of them?
> > For example, if we have 1000 req/s, can 100 reqs get "200 OK" and the rest
> > of them (900 reqs) get "429"?
>
> Yes.
>
> Here are some examples with explanation.
> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
>
> Here are some search results; maybe some of the examples will help you too.
> https://html.duckduckgo.com/html?q=haproxy%20rate%20limiting
>
> Regards
> Alex
>


Rate Limiting with token/leaky bucket algorithm

2022-06-03 Thread Seena Fallah
When using the below config for 100 req/s rate limiting, after passing
100 req/s all of the requests are denied, not only the requests above 100 req/s!
```
listen test
bind :8000
stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
http-request return status 200 content-type "text/plain" lf-string "200 OK"
```

Is there a way to deny only the requests above 100, not all of them?
For example, if we have 1000 req/s, can 100 reqs get "200 OK" and the rest of
them (900 reqs) get "429"?


Re: Too many response errors

2020-10-18 Thread Seena Fallah
I used "show errors -1 response" in haproxy socket to see these errors but
nothing found!
Is there any way I can see the errors?

On Thu, Oct 15, 2020 at 9:48 PM Seena Fallah  wrote:

> Based on this comment, is this related to the client, with no
> problem on the server side?
> https://github.com/haproxy/haproxy/blob/master/include/haproxy/channel-t.h#L68
>
> On Wed, Oct 14, 2020 at 3:29 PM Seena Fallah 
> wrote:
>
>> Hi.
>>
>> I'm seeing many response errors from my backends, and I have checked the
>> logs, but there were no 5xx errors for these response errors! It seems I'm
>> hitting this section of code, and because I use http-server-close it counts
>> them as failed_resp:
>> https://github.com/haproxy/haproxy/blob/master/src/http_ana.c#L1648-L1667
>> Can you please explain why keep-alive connections don't count this, and
>> what this error actually is?
>>
>> Using haproxy 2.2.4 on docker
>>
>> Thanks.
>>
>


Re: Too many response errors

2020-10-15 Thread Seena Fallah
Based on this comment, is this related to the client, with no problem
on the server side?
https://github.com/haproxy/haproxy/blob/master/include/haproxy/channel-t.h#L68

On Wed, Oct 14, 2020 at 3:29 PM Seena Fallah  wrote:

> Hi.
>
> I'm seeing many response errors from my backends, and I have checked the
> logs, but there were no 5xx errors for these response errors! It seems I'm
> hitting this section of code, and because I use http-server-close it counts
> them as failed_resp:
> https://github.com/haproxy/haproxy/blob/master/src/http_ana.c#L1648-L1667
> Can you please explain why keep-alive connections don't count this, and
> what this error actually is?
>
> Using haproxy 2.2.4 on docker
>
> Thanks.
>


Too many response errors

2020-10-14 Thread Seena Fallah
Hi.

I'm seeing many response errors from my backends, and I have checked the
logs, but there were no 5xx errors for these response errors! It seems I'm
hitting this section of code, and because I use http-server-close it counts
them as failed_resp:
https://github.com/haproxy/haproxy/blob/master/src/http_ana.c#L1648-L1667
Can you please explain why keep-alive connections don't count this, and what
this error actually is?

Using haproxy 2.2.4 on docker

Thanks.


Partial response

2020-10-11 Thread Seena Fallah
Hi. Does HAProxy support partial responses from servers?
In nginx there is a parameter named proxy_read_timeout that defines a
timeout for reading a response from the proxied server. The timeout is set
only between two successive read operations, not for the transmission of
the whole response. If the proxied server does not transmit anything within
this time, the connection is closed.
Does HAProxy have this option too?
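
For reference, the closest HAProxy equivalent is "timeout server": an
inactivity timeout on the server side that covers the gap between two reads
from the server rather than the whole response transfer. A minimal sketch,
with an illustrative backend name and address:

```
backend app
# server-side inactivity timeout, roughly comparable to nginx's
# proxy_read_timeout; it is reset each time the server sends data
timeout server 30s
server s1 127.0.0.1:8080
```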


Response time by http method

2020-04-22 Thread Seena Fallah
Hi all.

I think there is a real gap in the Prometheus exporter: there is no
response time metric broken down by HTTP method. Monitoring response
times properly requires this metric. Any plan to add it?
Issue: https://github.com/haproxy/haproxy/issues/580

Thanks,


Prometheus service

2020-02-27 Thread Seena Fallah
Hi all.
I have upgraded to HAProxy 2.0.13 and enabled the Prometheus service on it.
In the previous version (1.8.8) I used haproxy_exporter, and I had
haproxy_server_check_duration_milliseconds and new_session_rate for
each server, but in the HAProxy v2.0.13 Prometheus service I don't see these
metrics.
How can I get them?
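
For reference, a minimal sketch of how the built-in Prometheus exporter is
typically exposed in HAProxy 2.0+ (the bind port and metrics path are
illustrative; in 2.0 the exporter also has to be compiled in). Whether it
exports the same per-server metrics as the external haproxy_exporter is a
separate question:

```
frontend prometheus
bind :8404
# serve the built-in exporter's metrics on /metrics
http-request use-service prometheus-exporter if { path /metrics }
```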