Hi again,

Please, see my comments below:

On 23/7/19 18:37, Jacob Barrett wrote:


On Jul 23, 2019, at 9:29 AM, Alberto Gomez <alberto.go...@est.tech> wrote:

I am using today's version of the develop branch. According to your answer, 
I should have the fix applied on the Java client.

How many threads can be considered "lots"? My client uses around 40 threads and 
is limited to run on 4 CPUs.

That might be “lots” of threads in your scenario. In our geode-benchmarks we saw 
issues at thread counts of more than 8 times the number of CPUs. We have not 
benchmarked the C++ client to see how it behaves under heavy thread load.


Another thing I have noticed is that the C++ client does not close connections 
when they are idle, while the Java client eventually does. That would explain 
why there is closing of connections in the Java client but not in the C++ one. 
Nevertheless, I still do not see why connections would be closed in the Java 
client due to idleness when a high volume of operations is continually being 
sent.

The clients do different things around connection management right now.
The Java client should only be closing idle connections after they reach their 
timeout. What is your idle timeout? They can also close during load 
conditioning. If you are only doing partitioned region operations you should 
just disable load conditioning. It has known issues in which it will fight 
single-hop connection pool balancing.

I am using the default value for idleTimeout, which is 5 seconds. If I disable 
load conditioning, I do not see big changes.

The C++ client connection pooling should behave a lot like the old client, so 
you probably want to enable thread local connections on the C++ client. It will 
eat more sockets but should behave better. Try it and give feedback on how it 
addresses your file handles issue. Again, load conditioning is probably fighting 
you if all you want is single-hop partitioned region behavior. Check the client 
connection idle timeout settings.

Here I am also using the default value for idle timeout.

Using thread local connections caps the number of connections (at 164 in a case 
with two servers and 80 threads).
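
In case it is useful, this is roughly how the pool can be configured to try 
that combination. The method names follow the geode-native PoolFactory API on 
develop (setIdleTimeout, setLoadConditioningInterval, setThreadLocalConnections), 
but treat this as an untested sketch rather than my exact setup; the locator 
endpoint, the pool name and the -1 value to disable load conditioning (as 
documented for the Java client) are assumptions:

#include <chrono>

#include <geode/CacheFactory.hpp>
#include <geode/PoolManager.hpp>

using namespace apache::geode::client;

int main() {
  auto cache = CacheFactory().create();

  // Hypothetical locator endpoint; substitute your own cluster.
  auto poolFactory = cache.getPoolManager().createFactory();
  poolFactory.addLocator("localhost", 10334);

  // Make the default 5 second idle timeout explicit so it is easy to tune.
  poolFactory.setIdleTimeout(std::chrono::milliseconds(5000));

  // Disable load conditioning (assuming -1 disables it, as in the Java
  // client) so it cannot fight single-hop connection pool balancing.
  poolFactory.setLoadConditioningInterval(std::chrono::milliseconds(-1));

  // One dedicated connection per client thread: more sockets, less churn.
  poolFactory.setThreadLocalConnections(true);

  auto pool = poolFactory.create("default");
  cache.close();
  return 0;
}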

Regarding the closing of idle connections in the C++ client, I found a possible 
bug in the code. In ThinClientPoolDM::cleanStaleConnections, a variable of type 
size_t is used to store possibly negative numbers, which was preventing any 
idle connection from being closed.

Once I changed that variable to int, I see connections being closed when the 
client is idle, as well as behavior similar to the Java client under high load 
(continuous opening and closing of connections).
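
For illustration, here is a simplified, self-contained example of the failure 
mode (hypothetical code, not the actual ThinClientPoolDM logic): once the 
remaining-time arithmetic goes negative, storing the result in a size_t wraps 
it to a huge positive value, so the staleness check never fires and the 
connection is never closed:

#include <cstddef>
#include <iostream>

int main() {
  // Hypothetical numbers: the connection has been idle longer than
  // the 5 second idle timeout, so "remaining" should be negative.
  long idleTimeoutMs = 5000;
  long idleForMs = 7000;

  size_t remaining = idleTimeoutMs - idleForMs;  // -2000 wraps to a huge value
  if (remaining > 0) {
    // Always taken: an unsigned value is never negative, so the
    // connection looks "not yet stale" and is never closed.
    std::cout << "keep connection (remaining=" << remaining << ")\n";
  }

  long remainingFixed = idleTimeoutMs - idleForMs;  // signed: stays -2000
  if (remainingFixed <= 0) {
    std::cout << "close stale connection\n";  // reachable after the fix
  }
  return 0;
}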

I will write a JIRA about this problem.

-Jake
-Alberto
