makssent commented on issue #37569:
URL: https://github.com/apache/shardingsphere/issues/37569#issuecomment-3697664026

   **In short, the setup is as follows:**
   
   **200 sysbench threads**
   
   - **Single**
     - 200 frontend connections → up to 200 backend connections to a single 
database (HikariPool).
     - All 200 concurrent requests are handled by one node, which may lead to 
resource contention.
   
   - **Cluster via Proxy**
     - 200 frontend connections to the Proxy.
     - Backend connections are created according to the pool configuration:
   
     - With `maxPoolSize = 200` and `minPoolSize = 200`
       - The Proxy creates up to 200 backend connections **per node** (400 
total for two nodes).
       - This provides sufficient connection headroom and avoids 
connection-pool queuing.
       - With even request distribution across the shards, the workload is split
         evenly between the two nodes, and total throughput can approach
         **~2× that of the single setup**.
   
     - With `maxPoolSize = 100` and `minPoolSize = 100`
       - The Proxy creates up to 100 backend connections per node (200 total).
       - In this case, the cluster’s total parallelism is comparable to the
         single setup.
       - Performance will be roughly the same as single, or lower if requests
         are not distributed close to 50/50 (queues may form on the more
         heavily loaded node).
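
   For reference, the per-node pool sizes discussed above map onto the Proxy's
   data-source configuration (the `maxPoolSize` / `minPoolSize` properties of
   each storage unit). A minimal sketch for the 200-per-node case — the database
   name, host names, and credentials below are placeholders, not taken from the
   issue:

```yaml
databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:postgresql://node1:5432/demo_ds_0   # placeholder host/db
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    maxPoolSize: 200
    minPoolSize: 200
  ds_1:
    url: jdbc:postgresql://node2:5432/demo_ds_1   # placeholder host/db
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    maxPoolSize: 200
    minPoolSize: 200
```

   With both pools sized to 200, either node can absorb all 200 frontend
   connections even under a fully skewed routing split.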
   
   It is also important to note that **having 400 backend connections does not 
automatically produce a 2× performance gain**.  
   The Proxy only creates connection pools on each node; **how many of those 
connections are actually active** depends on routing and the current workload.  
   With 200 connections per node, there is simply enough headroom so the 
connection pool does not become a bottleneck and requests do not wait for a 
free connection.
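
   The headroom argument can be reduced to a toy calculation. The sketch below
   uses my own simplifying assumption that each in-flight request holds exactly
   one backend connection, so queuing begins as soon as the requests routed to a
   node exceed its `maxPoolSize`:

```python
# Toy model of connection-pool headroom: how many requests must wait
# for a free backend connection on the busier node, given the frontend
# thread count, the routing split, and the per-node maxPoolSize.
# Assumption (mine, for illustration): one backend connection per
# in-flight request, no connection sharing.

def queued_requests(threads: int, split: float, pool_size: int) -> int:
    """Requests waiting on the busier node.

    threads   -- total concurrent frontend threads (e.g. 200 sysbench threads)
    split     -- fraction of requests routed to the busier node (0.5 = even)
    pool_size -- maxPoolSize configured per node
    """
    busier_node_load = round(threads * split)
    return max(0, busier_node_load - pool_size)

# 200 threads, even 50/50 split, maxPoolSize = 200 per node: no waiting.
print(queued_requests(200, 0.5, 200))   # 0

# 200 threads, 60/40 skew, maxPoolSize = 100 per node: 20 requests queue.
print(queued_requests(200, 0.6, 100))   # 20
```

   This is why 400 total backend connections are headroom rather than a
   guaranteed 2× gain: the pools only remove queuing as a bottleneck, while the
   achieved throughput still depends on how evenly routing spreads the load.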

