Under heavy load, this could be the result of the server not accept()ing
fast enough, causing the number of pending connections to exceed the listen
backlog size in the kernel.

I believe Cassandra uses the default of 50 backlogged connections.

This is one of the reasons why a persistent connection pool is a good idea.
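
As a rough illustration only (a toy Python sketch, not Cassandra's actual
server code), the backlog is the argument to listen(): once that many
connections are waiting to be accept()ed, further connect() attempts queue
up in the kernel and can time out on the client side. Reusing pooled
connections avoids the connect storm that fills that queue in the first
place.

    import socket

    # Toy server illustrating the kernel listen backlog (not Cassandra code).
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 9999))
    server.listen(50)  # at most 50 pending, not-yet-accept()ed connections
    while True:
        conn, addr = server.accept()  # if this loop falls behind under load,
        conn.close()                  # the backlog fills and clients time out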

-Dave
On Jan 30, 2011 1:14 AM, "aaron morton" <aa...@thelastpickle.com> wrote:
> I'm assuming these are client-side timeouts; you could increase the
> client-side timeout when the TSocket is created. Are you using a
> higher-level library or raw Thrift?
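
For the raw Python Thrift client, something along these lines should work (a
sketch; the host/port, the framed vs. buffered transport choice, and the name
of the generated Cassandra module are assumptions about your setup):

    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from cassandra import Cassandra  # thrift-generated bindings (assumed name)

    sock = TSocket.TSocket('127.0.0.1', 9160)
    sock.setTimeout(30000)  # client-side socket timeout, in milliseconds
    transport = TTransport.TFramedTransport(sock)  # or TBufferedTransport, per server config
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = Cassandra.Client(protocol)
    transport.open()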
>
> Alternatively, you may be overloading the cluster. Are there any WARN
> messages in the cluster logs about Dropped Messages?
>
> Aaron
>
> On 30 Jan 2011, at 14:19, Courtney Robinson wrote:
>
>> It may also be worth checking the node's memory usage. I encountered this
>> on a few occasions and simply killed any unneeded processes that were
>> eating away at the node's memory. In each instance it worked fine once
>> there was about 300MB of free memory.
>>
>> From: Patricio Echagüe
>> Sent: Sunday, January 30, 2011 12:46 AM
>> To: user@cassandra.apache.org
>> Subject: Re: TSocket timing out
>>
>> The recommendation is to wait a few milliseconds and retry.
>>
>> For example, if you use Hector (I don't think it is your case), Hector
>> will retry against different nodes in your cluster, and the retry
>> mechanism is tunable as well.
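
If you're on the raw Python client rather than Hector, a minimal
retry-with-short-pause wrapper could look like this (a sketch only; the
exception types caught here come from the thrift library and the generated
cassandra.ttypes module, and the batch_mutate usage line is just an example):

    import time
    from thrift.transport.TTransport import TTransportException
    from cassandra.ttypes import TimedOutException, UnavailableException

    def with_retries(fn, attempts=3, delay=0.05):
        # Call fn(), pausing briefly and retrying on timeout-style errors.
        for i in range(attempts):
            try:
                return fn()
            except (TTransportException, TimedOutException, UnavailableException):
                if i == attempts - 1:
                    raise             # out of retries, surface the error
                time.sleep(delay)     # wait a few milliseconds before retrying
                delay *= 2            # back off a bit more each time

    # e.g. with_retries(lambda: client.batch_mutate(mutations, consistency_level))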
>>
>> On Sat, Jan 29, 2011 at 2:20 PM, buddhasystem <potek...@bnl.gov> wrote:
>>
>> When I do a lot of inserts into my cluster (>10k at a time) I get timeouts
>> from Thrift, the TSocket.py module.
>>
>> What do I do?
>>
>> Thanks,
>>
>> Maxim
>>
>>
>
