darinspivey commented on issue #24879:
URL: https://github.com/apache/pulsar/issues/24879#issuecomment-3473483468

   > Did you set brokerClient_connectionsPerBroker=15
   
   I made a mistake.  I DID set it in `configData` of the broker's values.yaml; however, upon further inspection, I could not find the setting in broker.conf.  After more digging, I realized I was not using the helm chart correctly: settings that are not already defined in values.yaml must be prefixed with `PULSAR_PREFIX_`, as noted in the [helm chart comment](https://github.com/apache/pulsar-helm-chart/blob/e9f7f1d2285e2795961bc6a9b18dfc12c9ef9c85/charts/pulsar/values.yaml#L1162-L1165).
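   
   For anyone else who hits this, a minimal sketch of the `values.yaml` fragment (key name assumed to follow the convention described in the helm chart comment linked above; verify against your chart version):
   ```yaml
   broker:
     configData:
       # Keys that are not already present in the chart's values.yaml must
       # carry the PULSAR_PREFIX_ prefix so the startup script appends them
       # to broker.conf. The chart strips the prefix when writing the file.
       PULSAR_PREFIX_brokerClient_connectionsPerBroker: "15"
   ```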
   
   After doing that, I see that it has been applied:
   ```
   > kc logs pod/pulsar-broker-1 -n pulsar | grep connectionsPerBroker
   Defaulted container "pulsar-broker" out of: pulsar-broker, 
wait-zookeeper-ready (init), wait-bookkeeper-ready (init)
   [conf/broker.conf] Adding config brokerClient_connectionsPerBroker = 15
   ```
   
   > If there are many partitioned-topics gc at the same time, it will also 
cause connection pool deadlock.
   
   Yes, because of our use of partitions, many topics are deleted at the same time, which can cause a thundering herd.  Mostly this is due to our test suite, which creates 100+ topics per run and stops using them all at once when it finishes.  As a result, we have roughly 600 partitions deleting *around* the same time.  With a bigger pool of `15`, is it possible that the retries will succeed even if all 15 connections are in use?  I would think some connections free up as deletions complete, so retries may work?  Or should we just consider moving to the non-pooled value of `0`?
    
   Thank you.  I will continue to monitor.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]