I just want to point out here that it is not fair to assume that, just because 
a one-way void method is called, the connection won't be re-used.

It is perfectly valid for a client to issue multiple one-way void method calls 
over the same connection, and that connection should respect the server-side 
read timeout (i.e. the client may wait up to the server's read timeout between 
one-way void calls if it so desires).
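
To make that concrete, here is a rough Java sketch of a client doing exactly 
that. The service name "MyService", its oneway method "logEvent", and port 
9090 are made up for illustration; substitute your own generated client:

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class ReuseConnectionExample {
        public static void main(String[] args) throws Exception {
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();
            MyService.Client client =
                    new MyService.Client(new TBinaryProtocol(transport));

            // Several oneway calls over the same connection; the server must
            // not assume the client is finished after the first one.
            for (int i = 0; i < 10; i++) {
                client.logEvent("event-" + i);  // oneway void method
                Thread.sleep(1000);             // pausing between calls is fine,
                                                // up to the server's read timeout
            }

            transport.close();  // the explicit close the server should react to
        }
    }

(Depending on how your server is configured you may need a framed transport or 
a different protocol; the point is only that one open transport carries several 
oneway calls before the client closes it.)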

Any patch to implement early-closing on the server side should NOT depend upon 
the type of method invoked; it should be based purely upon detecting an 
explicit socket close from the client.
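
As a very rough sketch of what I mean (plain java.net code, not actual Thrift 
server internals): the server's per-connection loop should keep reading 
requests until the read returns end-of-stream or the read timeout fires, 
regardless of what kind of method the previous call was.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    // Illustrative only: EOF on the socket is the signal that the client is done.
    void serveConnection(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        while (true) {
            int first = in.read();   // blocks until data, EOF, or the read timeout
            if (first == -1) {
                break;               // explicit close from the client
            }
            // ... read the rest of the message and dispatch the call as usual ...
        }
        socket.close();              // now it is safe for the server to close too
    }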

Generally, I agree with others on this thread that it sounds like the 
application/load use case isn't designed quite right. With this many requests 
at such a rate, you probably DO want to be reusing longer-lived connections for 
multiple requests or batching your requests somehow, not early-closing. Opening 
and tearing down 500 unique TCP connections per second is a fair amount of 
network churn, and, as others have pointed out, if your application can't 
actually service all 500 requests in that amount of time, the sockets are going 
to stack up regardless. Are these 500 connections all coming from unique client 
machines/processes?

-----Original Message-----
From: Ted Dunning [mailto:[email protected]] 
Sent: Thursday, February 11, 2010 11:15 AM
To: [email protected]
Subject: Re: Java Server Does Not Close Sockets immediately for oneway methods

It sounds like you need an extra queuing layer.  The Thrift call can insert
the request into the queue (and thereby close the socket).  It should be
possible to insert 500 transactions per second into a large queue with no
difficulty at all.
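
A minimal sketch of such a queuing layer (all names here are hypothetical; the 
oneway handler just calls submit() and returns, while a fixed pool of workers 
drains the queue):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueuedHandler {
        private final BlockingQueue<String> queue =
                new LinkedBlockingQueue<String>(100000);

        public QueuedHandler(int workers) {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int i = 0; i < workers; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        while (!Thread.currentThread().isInterrupted()) {
                            try {
                                process(queue.take());  // the slow (~4 s) work happens here
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    }
                });
            }
        }

        // Called from the oneway handler; enqueuing 500 requests/s is cheap.
        public void submit(String request) {
            queue.offer(request);
        }

        private void process(String request) {
            // ... the actual expensive work ...
        }
    }

If the bounded queue fills up, offer() simply drops the request; whether to 
drop, block, or log in that case is a policy decision for the application.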

The real problem here is not the late close.  It is the fact that you can't
handle your peak volume without a huge backlog.  If you design a layer that
can absorb the backlog, then the problem will go away.  You still have the
problem that if your peak is 4x higher than the rate that you can process,
then you are probably getting close to the situation where you can't keep up
with your average load either.
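
To put rough numbers on it, using the figures quoted further down purely for 
illustration: if requests arrive at 500/s while 500 concurrent workers each 
take about 4 seconds per request, the service rate is only 500 / 4 = 125 
requests per second, so the backlog grows by roughly 375 requests every second 
for as long as that peak lasts.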

On Thu, Feb 11, 2010 at 11:10 AM, Utku Can Topçu <[email protected]> wrote:

> Ted,
>
> I do agree with you; however, having 500*x more threads for handling the
> concurrency will eventually push the processing time for one request from 4
> seconds to 4*x seconds.
> At the end of the day, the open connection count will be the same ;)
>
> Best,
> Utku
>
> On Thu, Feb 11, 2010 at 9:06 PM, Ted Dunning <[email protected]>
> wrote:
>
> > This is the key phrase that I was looking at.  If you only allow 500x
> > concurrency and you are getting 500 transactions per second, then you are
> > going to back up pretty seriously.
> >
> > On Thu, Feb 11, 2010 at 10:58 AM, Utku Can Topçu <[email protected]>
> > wrote:
> >
> > > time. Say each request needs 4 seconds to complete in 500 concurrency.
> >
> >
> >
> >
> > --
> > Ted Dunning, CTO
> > DeepDyve
> >
>



-- 
Ted Dunning, CTO
DeepDyve
