[
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334140#comment-14334140
]
Adam W. Dace edited comment on TS-3118 at 2/24/15 12:21 AM:
------------------------------------------------------------
Meh. Sorry for my reluctance, but honestly I'm not sure whether I'd be violating any
legal non-disclosure terms I agreed to by talking about this too much.
To directly answer your question, I believe the logic (from the load-balancer's side)
went something like this:
1) A client HTTP request comes in. We hold the socket open, as the load-balancer
is also acting as a socket-layer endpoint (for security reasons).
2) Find a server that can handle the request.
3) Server isn't listening on the required port? No problem, fail over to the
next node (various failover schemes, round-robin, etc.). Note, no errors here,
just delays.
4) Deliver the HTTP request to an active node, with the socket layer coming from
the load-balancer (again, security).
5) Fulfill the request.
6) Deliver the HTTP response to the actual client, which is waiting on the
other socket.
7) HTTP request complete.
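The steps above can be sketched roughly as follows. This is a hypothetical simulation, not the actual load-balancer code; the node names and the `try_connect` helper are invented for illustration. The point is step 3: a dead node costs the client only a delay, never an error.

```python
# Simulated round-robin failover: dead nodes cause delay, not errors.
NODES = ["node-a", "node-b", "node-c"]   # hypothetical backend pool
LIVE = {"node-c"}                        # only one node is listening

def try_connect(node):
    """Stand-in for opening a backend socket; fails if the node isn't listening."""
    if node not in LIVE:
        raise ConnectionRefusedError(node)
    return node

def dispatch(request, start=0):
    """Walk the pool round-robin until a live node accepts (steps 2-4)."""
    skipped = []
    for i in range(len(NODES)):
        node = NODES[(start + i) % len(NODES)]
        try:
            backend = try_connect(node)   # step 4: deliver to an active node
        except ConnectionRefusedError:
            skipped.append(node)          # step 3: no error surfaces, just delay
            continue
        return backend, skipped
    raise RuntimeError("no live nodes")

backend, skipped = dispatch("GET / HTTP/1.1")
print(backend, skipped)   # node-c ['node-a', 'node-b']
```

The client-facing socket never sees the two failed attempts; it just waits slightly longer for the response, which matches the "no errors here, just delays" behavior described above.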
FWIW, I hope this helps. This was my understanding of the behavior of the
load-balancer itself.
Not impossible in coding terms, but definitely impressive. :-)
> Feature to stop accepting new connections
> -----------------------------------------
>
> Key: TS-3118
> URL: https://issues.apache.org/jira/browse/TS-3118
> Project: Traffic Server
> Issue Type: New Feature
> Reporter: Miles Libbey
> Labels: A
> Fix For: 5.3.0
>
>
> When taking an ATS machine out of production, it would be nice to have ATS
> stop accepting new connections without affecting the existing client
> connections to minimize client disruption.
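A minimal sketch of the requested behavior, assuming a plain TCP server rather than ATS internals: closing the listening socket stops new connections from being accepted, while sockets that were already accepted keep working until they finish.

```python
import socket

# Set up a listener on an ephemeral port and accept one "existing" client.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))  # existing connection
conn, _ = listener.accept()

listener.close()  # "stop accepting new connections": new connects now fail

# The already-established connection is unaffected and can still be served.
conn.sendall(b"still serving existing client\n")
print(client.recv(64))
client.close()
conn.close()
```

In ATS terms the equivalent would be closing (or ceasing to poll) the accept sockets while leaving established client sessions to drain naturally; the sketch above only demonstrates that the OS-level semantics make this possible.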
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)