Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
Ok,

After digging in, the problem is related to open sockets: a closed connection
stays in TIME_WAIT status for a while before it is really released.
So if you flood the server with connections, you eventually get this error
once there are no free ports left.

To prevent this, the standard Java Socket class implements the method
setReuseAddress. Unfortunately I think it is not exposed by Thrift's TSocket.
Is there any solution other than abandoning Thrift?

thanks
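For illustration, a minimal sketch of what this would look like, assuming the
Java TSocket exposes its underlying java.net.Socket via getSocket() (as the
snippets later in this thread suggest); the host, port, and libthrift import
path are placeholders, not a confirmed Thrift API:

    import org.apache.thrift.transport.TSocket;

    public class ReuseAddressSketch {
        // Open a Thrift socket with SO_REUSEADDR set on the underlying
        // java.net.Socket. The option must be set before the socket is
        // connected to have any effect.
        public static TSocket openWithReuse(String host, int port) throws Exception {
            TSocket tSocket = new TSocket(host, port);
            tSocket.getSocket().setReuseAddress(true);
            tSocket.open();
            return tSocket;
        }
    }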




Re: MultiThread Client problem with thrift

2009-12-22 Thread Jaakko
Hi,

I don't know the particulars of the Java implementation, but if it works
the same way as the Unix native socket API, then I would not recommend
setting linger to zero.

The SO_LINGER option with a zero value will cause the TCP connection to be
aborted immediately as soon as the socket is closed. That is: (1) any
remaining data in the send buffer is discarded, (2) there is no proper
disconnect handshake, and (3) the receiving end gets a TCP reset.

Sure, this will avoid the TIME_WAIT state, but TIME_WAIT is our friend: it
is there to prevent packets from an old connection being delivered to a new
incarnation of the connection. Instead of avoiding the state, the
application should be changed so that TIME_WAIT is not a problem. How many
open files do you see when the exception happens? It might be that you're
out of file descriptors.

-Jaakko


On Tue, Dec 22, 2009 at 8:17 PM, Richard Grossman richie...@gmail.com wrote:
 Hi,
 For anyone interested: I've found a solution that seems to work, although it is
 not recommended. When opening a socket, set this:
    tSocket.getSocket().setReuseAddress(true);
    tSocket.getSocket().setSoLinger(true, 0);
 It prevents a lot of connections from sitting in the TIME_WAIT state, but again,
 it is not recommended.
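To make the trade-off described above concrete, here is a minimal sketch
(plain java.net.Socket, nothing Thrift-specific) of the two close behaviours
being discussed; the host and port are placeholders:

    import java.net.Socket;

    public class LingerSketch {
        public static void main(String[] args) throws Exception {
            // Default close(): normal FIN handshake; the side that closes
            // first keeps the port in TIME_WAIT for a while (2*MSL).
            Socket graceful = new Socket("localhost", 9160);
            graceful.close();

            // SO_LINGER enabled with a zero timeout: close() discards any
            // unsent data and sends a TCP reset, so no TIME_WAIT -- but the
            // peer sees an aborted connection, exactly as described above.
            Socket abortive = new Socket("localhost", 9160);
            abortive.setSoLinger(true, 0);
            abortive.close();
        }
    }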



Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
I agree it solves my problem, but it can create a bigger one.
The problem is that I can't manage to avoid opening a lot of connections.




Re: MultiThread Client problem with thrift

2009-12-22 Thread Ran Tavory
Would connection pooling work for you?
This Java client http://code.google.com/p/cassandra-java-client/ has
connection pooling.
I haven't put the client under stress yet, so I can't vouch for it, but it may
be a good solution for you.




Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
Yes, of course, but have you updated it to Cassandra 0.5.0-beta2?




Re: MultiThread Client problem with thrift

2009-12-22 Thread Ran Tavory
I don't have a 0.5.0-beta2 version, no. It's not too difficult to add, but I
haven't done so myself; I'm using 0.4.2.




Re: MultiThread Client problem with thrift

2009-12-22 Thread Ran Tavory
Not an expert in this field, but I think what you want is to use a connection
pool and NOT close the connections - reuse them. Only idle connections are
released, after, say, 1 sec. Also, with a connection pool it's easy to
throttle the application: you can tell the pool to block if all 50
connections (or however many you allow) are in use.
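As an illustration only (this is not the cassandra-java-client API), here is a
minimal sketch of a bounded, blocking pool along these lines; the class name,
generic type, and factory interface are made-up placeholders:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class SimplePool<T> {
        // Placeholder for whatever opens a new connection (e.g. a Thrift client).
        public interface Factory<C> {
            C create() throws Exception;
        }

        private final BlockingQueue<T> idle;

        // Eagerly open `size` connections; they are reused, never closed per request.
        public SimplePool(int size, Factory<T> factory) throws Exception {
            idle = new ArrayBlockingQueue<T>(size);
            for (int i = 0; i < size; i++) {
                idle.add(factory.create());
            }
        }

        // Blocks when all connections are checked out, which throttles callers.
        public T borrow() throws InterruptedException {
            return idle.take();
        }

        // Hand the connection back for reuse instead of closing it.
        public void release(T connection) {
            idle.offer(connection);
        }
    }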

On Tue, Dec 22, 2009 at 4:01 PM, Richard Grossman richie...@gmail.comwrote:

 So I can't use it.

 But I've already made my own connection pool. That doesn't fix anything,
 because the problem is at a lower level than Java. The socket is closed, and
 Java considers it closed, but the system keeps it in the TIME_WAIT state, so
 the port it used is actually still in use.

 So my question is: has anyone managed to open many connections and get rid of
 the TIME_WAIT problem, no matter in which language (PHP, Python, etc.)?

 Thanks




Re: MultiThread Client problem with thrift

2009-12-22 Thread matthew hawthorne

I did something very similar to this.  A difference in my approach is
that I did not release idle connections after a specific time period;
instead, I performed a liveness check on each connection after
obtaining it from the pool, like this:

// get client connection from pool
Cassandra.Client client =

try {
  client.getInputProtocol().getTransport().flush();
} catch (TTransportException e) {
  // connection is invalid, obtain new connection
}

It seemed to work during my testing; I'm not sure if the Thrift specifics
are 100% correct (meaning I'm not sure the catch block will cover all
situations involving stale or expired connections).

-matt
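A slightly fuller, hedged version of the same idea, working at the transport
level; the ClientPool interface here is a made-up placeholder rather than a
real library, and, as noted above, the flush() probe may not catch every kind
of stale connection:

    import org.apache.thrift.transport.TTransport;
    import org.apache.thrift.transport.TTransportException;

    public class LivenessCheck {
        // Placeholder abstraction over whatever pool is in use.
        public interface ClientPool {
            TTransport borrow() throws Exception;
            void discard(TTransport transport);
        }

        // Returns a transport that passed the cheap flush() probe, discarding
        // any that fail and borrowing again.
        public static TTransport borrowLive(ClientPool pool) throws Exception {
            while (true) {
                TTransport transport = pool.borrow();
                try {
                    transport.flush();
                    return transport;
                } catch (TTransportException e) {
                    pool.discard(transport);
                }
            }
        }
    }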





Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
OK, I get this. Of course the problem can be solved by lowering the load on
the server in whatever way you want: a connection pool, or less aggressive
thread management. But that's not my goal; I would like to keep the server
under high pressure.

Finally I managed to find a solution by lowering the TIME_WAIT duration on my
machine. The adjustment needs to be made on every machine, and it's
system-specific; on Mac OS X it's described here:
http://www.brianp.net/2008/10/03/changing-the-length-of-the-time_wait-state-on-mac-os-x/

On Linux the equivalent setting is easier to find.

Thanks




Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
The problem is not on the server side but on the client side.
The connections are not open; they are closed but in TIME_WAIT status, so they
still hold a port.

On Tue, Dec 22, 2009 at 7:50 PM, Ran Tavory ran...@gmail.com wrote:

 I don't know how keeping the connections open affects things at scale. I
 suppose if you have a 10 to 1 ratio of Cassandra clients to Cassandra servers
 (probably a typical ratio), then you may be using too many server resources.




RE: MultiThread Client problem with thrift

2009-12-22 Thread Brian Burruss
I don't close the connection to a server unless I get exceptions, and when I
close the connection I try a new server in the cluster, just to keep the
connections spread across the cluster.

Should I be closing them? If the connection is closed by the client or server,
I'll just reconnect.
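For illustration, a minimal sketch of that pattern at the transport level; the
host list, the port (9160), and the class itself are placeholders, not part of
any real client library:

    import java.util.List;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;
    import org.apache.thrift.transport.TTransportException;

    public class FailoverTransport {
        private final List<String> hosts;
        private final int port;
        private int next = 0;
        private TTransport transport;

        public FailoverTransport(List<String> hosts, int port) {
            this.hosts = hosts;
            this.port = port;
        }

        // Reuse the open connection; only reconnect (to the next host in the
        // list) after a failure, so connections stay spread across the cluster.
        public synchronized TTransport get() throws TTransportException {
            if (transport == null || !transport.isOpen()) {
                String host = hosts.get(next);
                next = (next + 1) % hosts.size();
                transport = new TSocket(host, port);
                transport.open();
            }
            return transport;
        }

        // Call when an operation throws, so the next get() connects to a
        // different node.
        public synchronized void invalidate() {
            if (transport != null) {
                transport.close();
                transport = null;
            }
        }
    }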





Re: MultiThread Client problem with thrift

2009-12-22 Thread Richard Grossman
When I try to reuse a socket to make multiple Thrift operations from multiple
threads, I always get exceptions.
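One likely cause: a Thrift-generated client and its protocol/transport are not
safe to share between threads. A minimal sketch of giving each thread its own
connection instead; the host and port are placeholders, and the generated
Cassandra client class appears only in a comment because its package differs
across Cassandra versions:

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransportException;

    public class PerThreadConnection {
        // One open socket per thread, created lazily on first use.
        private static final ThreadLocal<TSocket> SOCKET = new ThreadLocal<TSocket>() {
            @Override
            protected TSocket initialValue() {
                TSocket socket = new TSocket("localhost", 9160);
                try {
                    socket.open();
                } catch (TTransportException e) {
                    throw new RuntimeException(e);
                }
                return socket;
            }
        };

        // Each thread builds and keeps its own client from its own transport, e.g.
        //   Cassandra.Client client = new Cassandra.Client(PerThreadConnection.protocol());
        public static TBinaryProtocol protocol() {
            return new TBinaryProtocol(SOCKET.get());
        }
    }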


Re: MultiThread Client problem with thrift

2009-12-22 Thread Jonathan Ellis
On Tue, Dec 22, 2009 at 1:28 PM, Brian Burruss bburr...@real.com wrote:
 I don't close the connection to a server unless I get exceptions, and when I
 close the connection I try a new server in the cluster, just to keep the
 connections spread across the cluster.

Right, that is the sane way to do it, rather than imposing the Thrift
connection overhead on each operation.

-Jonathan