Having finally received, in email #37 of this thread, the details of what
the OP is actually doing, I was struck by a simple thought. I have
re-read the whole thread, and hope I am not about to say anything
completely stupid.
We develop software that routes millions of requests
Chris,
On 11/14/12 11:02 AM, chris derham wrote:
My simple thought was that it sounds like your code isn't working.
You have more load than one tomcat instance can handle, which
overloads that instance. You are trying to write code to handle
this situation, and seem convinced that the only solution
In addition, those clients will then get exactly the behaviour that
you are complaining about: a successful connection and then a
'connection reset' when doing I/O.
No, they will not get an ACK to the SYN packet, which is much better.
No, Asankha, you are completely wrong about this. Clients
On 11/9/2012 1:41 PM, Christopher Schultz wrote:
Closing the listening socket, as you seem to be now suggesting, is
a very poor idea indeed: what happens if some other process grabs
the port in the meantime: what is Tomcat supposed to do then?
I haven't been following this thread closely
Hi Esmond
To reiterate what Christopher said, if you close the listening socket
because you think you can't service one extra client, you will lose all the
connections on the backlog queue, which could be hundreds of clients, that
you *can* service.
I do not see a problem here. We develop
Asankha C. Perera asan...@apache.org wrote:
I do not understand the negativity here..
You are trying to solve a problem no-one here recognises.
You are ignoring and/or dismissing people that ask difficult questions or point
out flaws in your proposal.
Your posts - this one in particular -
Hi All,
I'm new to this mailing list. I read the above mail thread but am not
able to grasp concepts like load balancing and keep-alive. Can you
please give me links where I can read about them?
On Tue, Nov 13, 2012 at 6:32 AM, Mark Thomas ma...@apache.org wrote:
Asankha C. Perera
From: selvakumar netaji [mailto:vvekselva...@gmail.com]
Subject: Re: Handling requests when under load - ACCEPT and RST vs non-ACCEPT
Hi Chris
processing 1 connection through completion
(there are 99 others still running), re-binding, accepting a single
connection into the application plus 100 others into the backlog, then
choking again and dropping 100 connections, then processing another
single connection. That's a huge
On 08/11/2012 03:35, Asankha C. Perera wrote:
Hi Esmond
Closing the listening socket, as you seem to be now suggesting, is a
very poor idea indeed:
I personally do not think there is anything at all bad about turning it
off. After all, if you are not ready to accept more, you should be clear
Hi Mark
what happens if some other process grabs the port in the meantime:
what is Tomcat supposed to do then?
In reality I do not know of a single client production deployment that
would allocate the same port to possibly conflicting services, that may
grab another's port when it's suffering
Hi Mark
maxThreads limits the number of concurrent threads available for
processing requests. connection != concurrent request, primarily
because of HTTP keep-alive.
maxConnections can be used to limit the number of connections.
Thanks for this insight.. I initially missed this when I went
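Mark's distinction between maxThreads, maxConnections and acceptCount can be sketched as a server.xml Connector fragment. The attribute names are real Connector attributes; the values below are purely illustrative, not recommendations:

```xml
<!-- Illustrative values only: maxThreads caps concurrent request
     processing, maxConnections caps open sockets (keep-alive included),
     and acceptCount is the OS listen backlog used once maxConnections
     has been reached. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="1000"
           acceptCount="100"
           connectionTimeout="20000" />
```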
Hi Esmond
That wouldn't have any different effect to not calling accept() at all
in blocking mode
Clearly there is a difference.
There isn't a difference. All that deregistering OP_ACCEPT does is prevent
the application from calling accept(). It has exactly the same effect as
Asankha
I haven't said a word about your second program, that closes the listening
socket. *Of course* that causes connection refusals, it can't possibly not,
but it isn't relevant to the misconceptions about what OP_ACCEPT does that
you have been expressing here and that I have been addressing.
That wouldn't have any different effect to not calling accept() at all in
blocking mode, or to thread starvation such that the accept thread didn't
get a run. It wouldn't make any difference to whether the client got a
connection refused/reset. The backlog queue would still fill up in exactly
the
Hi Esmond
That wouldn't have any different effect to not calling accept() at all in
blocking mode
Clearly there is a difference. Please see the samples in [1] [2] and
execute them to see this. The TestAccept1 below allows one to open more
than one connection at a time, even when only one
Hi Chris
My expectation from the backlog is:
1. Connections that can be handled directly will be accepted and work
will begin
2. Connections that cannot be handled will accumulate in the backlog
3. Connections that exceed the backlog will get connection refused
There are caveats, I would
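The first two expectations can be demonstrated with plain Java sockets (a minimal sketch, assuming a loopback interface and a Linux-like TCP stack): the handshake is completed by the kernel into the backlog even though accept() is never called. The third expectation is more platform-dependent; Linux, for instance, tends to drop excess SYNs rather than send an outright refusal.

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    // Returns true if a client can complete the TCP handshake even
    // though the server never calls accept(): the kernel completes
    // the connection into the listen backlog on the server's behalf.
    static boolean connectWithoutAccept() throws Exception {
        try (ServerSocket server =
                 new ServerSocket(0, 2, InetAddress.getLoopbackAddress());
             Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                        server.getLocalPort())) {
            return client.isConnected();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("connected without accept(): "
                           + connectWithoutAccept());
    }
}
```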
Asankha C. Perera asan...@apache.org wrote:
My testing has been primarily on Linux / Ubuntu.
With which version of Tomcat?
Mark
Mark Thomas ma...@apache.org wrote:
Asankha C. Perera asan...@apache.org wrote:
My testing has been primarily on Linux / Ubuntu.
With which version of Tomcat?
And which connector implementation?
Mark
On 11/07/2012 11:55 AM, Mark Thomas wrote:
Mark Thomas ma...@apache.org wrote:
Asankha C. Perera asan...@apache.org wrote:
My testing has been primarily on Linux / Ubuntu.
With which version of Tomcat?
And which connector implementation?
Tomcat 7.0.29 and possibly 7.0.32 too, but I believe
Hi Chris / Mark
Or you could just read the configuration documentation for the
connector. Hint: acceptCount - and it has been there since at
least Tomcat 4.
The acceptCount WAS being used, but was not being honored as an end user
would expect in reality (See the configurations I've shared at
Hi Chris
First, evidently, acceptCount almost does not appear in the Tomcat
source. Its real name is backlog if you want to do some searching.
It's been in there forever.
Yes, I found it too; but saw that it didn't perform what an 'end user'
would expect from Tomcat.
Second, all three
Hi Esmond
You are correct. As I recently found out, Tomcat and Java are not causing
this explicitly, as I first thought. So there is no 'bug' to be fixed.
But I believe there is an elegant way to refuse further connections when
under load by turning off just the 'accepting' of new connections,
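The idea being proposed here, pausing only the 'accepting' of new connections, can be sketched with plain NIO (a hypothetical AcceptToggle helper, not Tomcat code). Note that this does not refuse clients: the socket stays bound, so the kernel keeps completing handshakes into the backlog, which is exactly the objection raised later in the thread.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class AcceptToggle {
    private final SelectionKey key;

    AcceptToggle(Selector selector) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(0)); // ephemeral port
        key = server.register(selector, SelectionKey.OP_ACCEPT);
    }

    // Stop being notified of new connections; the socket stays bound,
    // so the kernel still queues handshakes into the backlog.
    void pauseAccepting() { key.interestOps(0); }

    // Resume accepting once load drops.
    void resumeAccepting() { key.interestOps(SelectionKey.OP_ACCEPT); }

    int interestOps() { return key.interestOps(); }
}
```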
Hi Chris
I was connecting to the same node over the local interface, both on
EC2 and locally.
Since this went unresolved for some time, I investigated it
a bit myself, first looking at the Coyote source code, and then
experimenting with plain Java sockets. It seems like
Asankha
What you are looking at is TCP platform-dependent behaviour. There is a
'backlog' queue of inbound connections that have been completed by the TCP
stack but not yet accepted by the application via the accept() API. This is
the queue whose length is specified in the 'C' listen() method
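In Java, the queue length Esmond describes is exposed as the backlog argument of the ServerSocket constructor (Tomcat's acceptCount is passed through the same way); a minimal sketch:

```java
import java.net.InetAddress;
import java.net.ServerSocket;

public class BacklogParam {
    // Binds on loopback with an explicit backlog; the value is passed
    // through to the OS listen() call, which may round it up or clamp
    // it (e.g. to net.core.somaxconn on Linux).
    static ServerSocket listenWithBacklog(int backlog) throws Exception {
        return new ServerSocket(0, backlog, InetAddress.getLoopbackAddress());
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket s = listenWithBacklog(100)) {
            System.out.println("listening on port " + s.getLocalPort());
        }
    }
}
```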
Asankha,
On 10/29/12 9:20 AM, Asankha C. Perera wrote:
During some performance testing I've seen that Tomcat resets
accepted TCP connections when under load. I had seen this
previously too [1], but was not able to analyze the scenario in
detail
Hi Chris
<Connector port="9000"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="2"
           redirectPort="8443"
           maxKeepAliveRequests="1"
           processorCache="1"
           acceptCount="1"
           maxThreads="1"/>
I used
Hi Chris
Sorry, also what is your OS (be as specific as possible) and what JVM
are you running on?
Locally for the Wireshark capture I ran this on:
asankha@asankha-dm4:~$ uname -a
Linux asankha-dm4 3.2.0-31-generic #50-Ubuntu SMP Fri Sep 7 16:16:45 UTC
2012 x86_64 x86_64 x86_64 GNU/Linux