> On Jul 25, 2019, at 6:51 AM, Alberto Gomez <alberto.go...@est.tech> wrote:
>> 
>> The C++ client connection pooling should behave a lot like the old client, 
>> so you probably want to enable thread local connections on the C++ client. 
>> It will eat more sockets but should behave better. Try and give feedback on 
>> how it addresses your file handles issue. Again load conditioning is 
>> probably fighting you if all you want is single-hop partition behavior. 
>> Check client connection idle timeout settings.
> I am using here the default value for idle timeout.
> 
> Using thread local connections keeps the number of connections from growing 
> beyond a fixed bound (164 in a test with two servers and 80 client threads).
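> 
> For reference, enabling it when creating the pool looks roughly like this 
> (a simplified sketch; the locator address and pool name are placeholders, 
> not my actual setup):
> 
>     #include <chrono>
>     #include <geode/Cache.hpp>
>     #include <geode/CacheFactory.hpp>
>     #include <geode/PoolManager.hpp>
> 
>     using namespace apache::geode::client;
> 
>     int main() {
>       auto cache = CacheFactory().create();
> 
>       // Give each application thread its own connection. This uses more
>       // sockets but avoids the pooling behavior that was fighting me.
>       auto pool = cache.getPoolManager()
>                       .createFactory()
>                       .addLocator("localhost", 10334)  // placeholder locator
>                       .setThreadLocalConnections(true)
>                       // I am keeping the default idle timeout; it could be
>                       // tuned here with, e.g.:
>                       //   .setIdleTimeout(std::chrono::seconds(5))
>                       .create("examplePool");  // placeholder pool name
>       return 0;
>     }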
> 
> Regarding the closing of idle connections in the C++ client, I found a 
> possible bug in the code. In ThinClientPoolDM::cleanStaleConnections, a 
> variable of type size_t is used to store a value that can be negative; the 
> unsigned wrap-around was preventing any idle connection from being closed.
> 
> After changing that variable to int, I see connections being closed when 
> the client is idle, as well as behavior similar to the Java client's under 
> high load (continuous opening and closing of connections).
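> 
> To illustrate the pattern (a minimal standalone sketch with made-up names 
> and values, not the actual ThinClientPoolDM code):
> 
>     #include <cstdio>
> 
>     int main() {
>       long idleTimeoutMs = 5000;
>       long idleSoFarMs = 7000;  // connection already past the timeout
> 
>       // Buggy: the remaining idle time should be -2000, but assigning it
>       // to a size_t wraps it to a huge positive value, so the connection
>       // always looks "still fresh" and is never closed.
>       size_t remaining = idleTimeoutMs - idleSoFarMs;
>       if (remaining > 0) {
>         std::printf("buggy: keep connection, remaining=%zu ms\n", remaining);
>       }
> 
>       // Fixed: a signed type preserves the negative result, so the stale
>       // connection is correctly picked for closing.
>       int remainingFixed = static_cast<int>(idleTimeoutMs - idleSoFarMs);
>       if (remainingFixed <= 0) {
>         std::printf("close stale connection (remaining=%d ms)\n",
>                     remainingFixed);
>       }
>       return 0;
>     }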
> 
> I will write a JIRA about this problem.
> 
Awesome find! Thanks for digging in and writing up a JIRA.