Hi Vel

From what you describe, the example below using the tarpit feature may help you,
taken from here:
https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/

frontend fe_api_ssl
  bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
  default_backend be_api

  tcp-request inspect-delay 5s

  acl document_request path_beg -i /v2/documents
  acl is_upload hdr_beg(Content-Type) -i multipart/form-data
  acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
  acl mark_seen sc0_inc_gpc0 gt 0

  stick-table type string size 100k store gpc0_rate(60s)

  tcp-request content track-sc0 hdr(Authorization) if METH_POST document_request is_upload

  use_backend be_429_slow_down if mark_seen too_many_uploads_by_user

backend be_429_slow_down
  timeout tarpit 2s
  # http-request tarpit answers with a 500 by default, so the 500 errorfile
  # is overridden here to serve the 429 page instead
  errorfile 500 /etc/haproxy/errorfiles/429.http
  http-request tarpit
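
For completeness, an HAProxy errorfile is a raw HTTP response served verbatim. The path and wording below are assumptions (only the path appears in the config above); a minimal sketch of /etc/haproxy/errorfiles/429.http could look like:

```
HTTP/1.0 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>429 Too Many Requests</h1>
You have sent too many requests in a given amount of time.
</body></html>
```

Note the blank line separating headers from the body is required, since HAProxy sends the file bytes as-is.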



Andrew Smalley

Loadbalancer.org Ltd.
www.loadbalancer.org <https://www.loadbalancer.org/?gclid=ES2017>

+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org


On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy <dvel....@gmail.com>
wrote:

> Hi Lukas,
> Thanks for your lengthy response. As I mentioned earlier, I was not
> aware that the people from the discourse forum and this email list are the
> same. I am 100% new to HAProxy.
>
> Let me explain my current situation in detail in this email thread. Kindly
> check if you or other people from the group can guide me.
>
> Our requirement for HAProxy is NOT to load balance back-end (WebLogic
> 12c) servers; we have a single backend instance (ex: PIA1). Our server
> capacity is not high enough to handle heavy traffic during peak load. The
> peak load occurs only twice a year, which is the reason we are not scaling
> up our server resources, as they would be idle the majority of the time.
>
> We would like to use HAProxy to throttle HTTP/TCP connections during the
> peak load, so that the WebLogic backend will not go into an Out-Of-Memory
> state and PeopleSoft will not crash.
>
> To achieve HTTP throttling: when maxconn is set on the back-end, HAProxy
> queues further connections and releases them once active HTTP connections
> become idle. However, the way WebLogic works is that once the PeopleSoft
> URL is accessed and the user is authenticated, a cookie is inserted into
> the browser and remains active for 20 minutes by default. This means that
> even if the user does not navigate or do anything inside the application,
> the session state is retained in the WebLogic Java heap. WebLogic
> allocates a small amount of memory to retain each active session (though
> the memory allocation increases/decreases dynamically based on various
> business functionality). As per current capacity, WebLogic can retain only
> 100 session states, which means I don't want to forward any further
> connections to WebLogic until some of those 100 sessions are released (by
> default a session is released when the user explicitly clicks the signout
> button or the inactivity timeout of 20 minutes is reached).
>
> According to my understanding, maxconn on the back-end throttles
> connections and releases them to the back-end as and when the TCP
> connection status changes to idle. But even though connections are idle,
> no logout/signout has occurred from PeopleSoft, so the session state is
> still maintained in WebLogic, is not released, and further connections
> cannot be handled.
>
> That is the reason I am setting maxconn on the front end and keeping the
> HTTP keep-alive option ON, so that I can throttle connections at the front
> end itself. According to my POC, setting maxconn on the front-end behaves
> differently than setting it on the back-end: when it is on the front-end,
> further connections are held in the kernel, and once existing HTTP
> connections are closed, further connections are allowed in. With this I
> don't see any performance issue for existing connections.
>
> For your information, HAProxy and WebLogic reside in the same single VM.
>
> Please let me know if my above understanding of maxconn is correct. Is
> there any gap in my understanding? Is there any way to achieve my
> requirement differently?
>
> Having decided to use maxconn on the front-end, queuing connections for a
> few milliseconds or seconds is OK, but when connections are queued for
> minutes I would like to emit some meaningful message to the user. That is
> the reason I asked if there is any way to display a custom message while
> connections are queued in the Linux kernel.
>
> To answer Lukas's question: WebLogic does not log the user out when the
> TCP connection is closed. WebLogic creates new connections as and when
> required.
>
>
>
> Best Wishes,
> Vel
>
> On Wed, Jun 28, 2017 at 9:47 AM, Lukas Tribus <lu...@gmx.net> wrote:
>
>> Hello Andrew,
>>
>>
>> On 28.06.2017 at 02:06, Andrew Smalley wrote:
>> > Lukas
>> >
>> > Why is this a triple posting? Surely he asked questions in a nice way
>> in more than one location and deserves the right answer, not a flaming,
>> here.
>> >
>> > It is about helping people after all I hope!
>>
>> Questions have been answered in a lengthy thread some 10 days ago:
>> http://discourse.haproxy.org/t/regarding-maxconn-parameter-i
>> n-backend-for-connection-queueing/1320/9
>>
>> No followup questions there.
>>
>>
>> Then a new thread today, no specific question that hasn't already
>> been answered in the previous thread, no followup responses (to my
>> request to clarify the question) either:
>> http://discourse.haproxy.org/t/custom-display-message-when-s
>> etting-maxconn-in-front-end-listen-block/1382/2
>>
>>
>> Then he moves the discussion to the mailing list, not mentioning the
>> conversations on discourse (which would have prevented people - in this
>> case Jarno - from trying to explain the same thing all over again).
>>
>>
>> It's about helping people out, but that doesn't work in the long term
>> when people deliberately spread questions about the same topic
>> across different channels (mailing list, discourse).
>>
>>
>>
>> Lukas Tribus:
>> > Is there anything that has been answered 3 times already, or
>> > do you just like to annoy other people?
>>
>> This should have been:
>> Is there anything that has *not* been answered 3 times already?
>>
>>
>>
>> Velmurugan Dhakshnamoorthy:
>> > Apologize,  my intent is not to annoy anyone
>> > [...]
>> > I am not aware this email group and discourse forum are same.
>>
>> The point is: please keep the discussion of a single topic/question
>> in a single thread (on the mailing list or discourse), unless you
>> don't get any responses.
>>
>> If something is unclear, you ought to ask for clarification, not
>> rephrase the question and ask somewhere else.
>>
>>
>> Lukas
>>
>>
>
