Hi all,

On Thu, Jun 29, 2017 at 11:23 AM, Velmurugan Dhakshnamoorthy <
dvel....@gmail.com> wrote:

> Thanks much, Andrew. I will definitely explore this.
>
> Thanks again.
>
> On Jun 28, 2017 22:03, "Andrew Smalley" <asmal...@loadbalancer.org> wrote:
>
>> Hi Vel
>>
>> From what you describe, the example using the tarpit feature may help
>> you, taken from here:
>> https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/
>>
>> frontend fe_api_ssl
>>   bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
>>   default_backend be_api
>>
>>   tcp-request inspect-delay 5s
>>
>>   acl document_request path_beg -i /v2/documents
>>   acl is_upload hdr_beg(Content-Type) -i multipart/form-data
>>   acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
>>   acl mark_seen sc0_inc_gpc0 gt 0
>>
>>   stick-table type string size 100k store gpc0_rate(60s)
>>
>>   tcp-request content track-sc0 hdr(Authorization) if METH_POST document_request is_upload
>>
>>   use_backend be_429_slow_down if mark_seen too_many_uploads_by_user
>>
>> backend be_429_slow_down
>>   timeout tarpit 2s
>>   errorfile 500 /etc/haproxy/errorfiles/429.http
>>   http-request tarpit
>>
>>
>>
>> Andrew Smalley
>>
>> Loadbalancer.org Ltd.
>> www.loadbalancer.org
>> +1 888 867 9504 / +44 (0)330 380 1064
>> asmal...@loadbalancer.org
>>
>> On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy <dvel....@gmail.com>
>> wrote:
>>
>>> Hi Lukas,
>>> Thanks for your detailed response. As I mentioned earlier, I was not
>>> aware that the people on the discourse forum and this mailing list are
>>> the same. I am 100% new to HAProxy.
>>>
>>> Let me explain my current situation in detail in this email thread.
>>> Kindly check whether you or other people from the group can guide me.
>>>
>>> Our requirement for HAProxy is NOT to load balance back-end (Weblogic
>>> 12c) servers; we have a single backend instance (e.g. PIA1). Our server
>>> capacity is not high enough to handle the heavy traffic during peak
>>> load, and the peak load occurs only twice a year, which is why we are
>>> not scaling up our server resources: they would be idle the majority of
>>> the time.
>>>
>>> We would like to use HAProxy to throttle http/tcp connections during
>>> the peak load, so that the weblogic backend will not go Out-Of-Memory
>>> and PeopleSoft will not crash.
>>>
>>> To achieve http throttling: when maxconn is set on the back end,
>>> HAProxy queues up further connections and releases them once the active
>>> http connections become idle. However, the way weblogic works is that
>>> once the PeopleSoft URL is accessed and the user is authenticated, a
>>> cookie is inserted into the browser and stays active for 20 minutes by
>>> default, which means that even if the user does not navigate or do
>>> anything inside the application, the session state is retained in the
>>> weblogic java heap. Weblogic allocates a small amount of memory for each
>>> active session (though the allocation grows and shrinks dynamically
>>> depending on the business functionality used). With the current
>>> capacity, weblogic can retain only 100 session states, which means I
>>> don't want to forward any further connections to weblogic until some of
>>> those 100 sessions are released (by default a session is released when
>>> the user explicitly clicks the sign-out button or the 20-minute
>>> inactivity timeout is reached).
>>>
>>> According to my understanding, maxconn on the back-end throttles
>>> connections and releases them to the back-end as soon as a tcp
>>> connection becomes idle. But even though the connections are idle, no
>>> logout/signout has occurred in PeopleSoft, so the session states are
>>> still held in weblogic and not released, and weblogic cannot handle
>>> further connections.
>>>
>>> That is the reason I am setting maxconn on the front end and keeping
>>> the HTTP keep-alive option ON, so that I can throttle connections at
>>> the front end itself. According to my POC, setting maxconn on the
>>> front-end behaves differently from setting it on the back-end: on the
>>> front-end it holds further connections in the kernel and only lets new
>>> connections in once existing http connections are closed, and with this
>>> I don't see any performance issue for existing connections.
>>>
>>> For your information, HAProxy and Weblogic reside on the same single
>>> VM.
>>>
>>> Please let me know if my understanding of maxconn above is correct.
>>> Is there any gap in my understanding? Is there any way to achieve my
>>> requirement differently?
>>>
>>> Having decided to use maxconn on the front-end, connection queuing for
>>> a few milliseconds or seconds is OK, but when connections are queued
>>> for minutes I would like to emit some meaningful message to the user.
>>> That is why I asked whether there is any way to display a custom
>>> message while connections are queued in the Linux kernel.
>>>
>>> To answer Lukas's question, weblogic does not log the user out when
>>> the tcp connection is closed. Weblogic creates new connections as and
>>> when required.
>>>
>>>
>>>
>>> Best Wishes,
>>> Vel
>>>
>>> On Wed, Jun 28, 2017 at 9:47 AM, Lukas Tribus <lu...@gmx.net> wrote:
>>>
>>>> Hello Andrew,
>>>>
>>>>
>>>> Am 28.06.2017 um 02:06 schrieb Andrew Smalley:
>>>> > Lukas
>>>> >
>>>> > Why is this triple posting? Surely he asked questions in a nice way
>>>> > in more than one location and deserves the right answer and not a
>>>> > flame down here.
>>>> >
>>>> > It is about helping people after all I hope!
>>>>
>>>> Questions have been answered in a lengthy thread some 10 days ago:
>>>> http://discourse.haproxy.org/t/regarding-maxconn-parameter-in-backend-for-connection-queueing/1320/9
>>>>
>>>> No followup questions there.
>>>>
>>>>
>>>> Then a new thread today, no specific question that hasn't already
>>>> been answered in the previous thread, no followup responses (to my
>>>> request to clarify the question) either:
>>>> http://discourse.haproxy.org/t/custom-display-message-when-setting-maxconn-in-front-end-listen-block/1382/2
>>>>
>>>>
>>>> Then he moves the discussion to the mailing list, not mentioning the
>>>> conversations on discourse (which would have prevented people - in this
>>>> case Jarno - from trying to explain the same thing all over again).
>>>>
>>>>
>>>> It's about helping people out, but that doesn't work in the long term
>>>> when we have people deliberately spreading questions about the same topic
>>>> across different channels (mailing list, discourse).
>>>>
>>>>
>>>>
>>>> Lukas Tribus:
>>>> > Is there anything that has been answered 3 times already, or
>>>> > do you just like to annoy other people?
>>>>
>>>> This should have been:
>>>> Is there anything that has *not* been answered 3 times already?
>>>>
>>>>
>>>>
>>>> Velmurugan Dhakshnamoorthy:
>>>> > Apologize,  my intent is not to annoy anyone
>>>> > [...]
>>>> > I am not aware this email group and discourse forum are same.
>>>>
>>>> The point is: please keep the discussion of a single topic/question
>>>> in a single thread (on the mailing list or discourse), unless you
>>>> don't get any responses.
>>>>
>>>> If something is unclear, you ought to ask for clarification, not
>>>> rephrase the question and ask somewhere else.
>>>>
>>>>
>>>> Lukas
>>>>
>>>>
>>>
>>
Although tarpit will work, and so will the simpler solution Lukas
suggested (http-server-close), the problem as I see it is this (I quote
from the previous OP email):

*weblogic can retain only 100 session states, which means I don't want to
forward any further connections to weblogic until some of those 100
sessions are released (by default a session is released when the user
explicitly clicks the sign-out button or the 20-minute inactivity timeout
is reached).*

which means he has an issue with the memory of Weblogic and not with the
load capacity connection-wise. This also means a user might wait up to 20
minutes in the browser, in which time the browser itself would have timed
out and closed the connection (assuming no keep-alive) if the user hadn't
already done so (and assuming keep-alive is on, it is hard to imagine
that someone will sit in front of a page that loads for 20 minutes). So
it looks to me like the first thing to do in the case of overload would
be to cut the Weblogic session time down to a minute if not seconds.

Once the limit of 100 sessions is reached, and note we are talking about
*100 sessions in Weblogic* and *NOT 100 connections to the backend*, what
is the Weblogic server going to do? We need to understand what happens on
the Weblogic side once the 101st session is accepted. Do you get an error
500 straight away, or does something else happen? Maybe nothing, and the
request gets dropped after sitting in the Weblogic queue for some time?

The bottom line is that HAProxy cannot know the state of the memory or
the number of sessions in Weblogic. There might be 5 connections at a
given time but already 100 sessions in Weblogic.

If the number of connections == the number of sessions, then the OP is
right: keep-alive should be on for sure, to avoid the opposite effect
where the number of WL sessions is much lower than the number of
connections. However, the best solution, or rather the most accurate one,
I can think of is writing an external agent check (I use this approach
for backend server CPU load) that reports on the number of WL sessions,
something like this:

    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 100 maxqueue 100 weight 100 agent-check agent-port <some-random-port> agent-inter 30s
    server tomcat 127.0.0.1:8080 check observe layer7

and have haproxy automatically adjust the weight down to 0 when the
agent-check daemon (you need to write this, maybe in Jython from what I
remember from the days I used to work with WL :-) ) listening on
<some-random-port> reports that the number of sessions has reached 100.
Then you can throw a backup server into the mix (I do it with nodejs
http-server):

    server localhost 127.0.0.1:8081 maxconn 500 backup

that can serve whatever you like to show to the customers.
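
To make the agent idea a bit more concrete, here is a rough sketch of
such a daemon in plain Python (purely illustrative, not Jython: the port,
the 100-session limit and get_weblogic_session_count() are placeholders
you would replace with your own logic, e.g. a WLST/JMX query). HAProxy
connects to the agent port, reads one line, and treats a percentage reply
as the server's new effective weight, so 0% stops new traffic and 100%
restores it:

    #!/usr/bin/env python
    # Minimal HAProxy agent-check responder (sketch only).
    import socket

    AGENT_PORT = 12345       # must match agent-port in the haproxy config
    SESSION_LIMIT = 100      # the Weblogic capacity discussed in this thread

    def get_weblogic_session_count():
        """Placeholder: return the current number of active WL sessions."""
        raise NotImplementedError

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", AGENT_PORT))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            try:
                sessions = get_weblogic_session_count()
                # A percentage reply scales the configured weight:
                # 0% stops new traffic to the server, 100% restores it.
                reply = "0%\n" if sessions >= SESSION_LIMIT else "100%\n"
                conn.sendall(reply.encode("ascii"))
            finally:
                conn.close()

    if __name__ == "__main__":
        main()

Note that a weight of 0 only drains the server rather than marking it
down, so if you want the backup server above to take over for sure, the
agent can reply "down" instead of "0%".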

Igor
