makssent commented on issue #37569:
URL: https://github.com/apache/shardingsphere/issues/37569#issuecomment-3696855043

   Okay, as I understand it, the number of client threads stays the same for both the proxy and the single setup, just as I initially assumed. Most likely I simply mixed up connections and threads: with sysbench at 200 threads, a test against the single instance runs 200 client threads, and a test against the proxy also runs 200 client threads.
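
   To make the thread point concrete, this is roughly the kind of sysbench invocation I have in mind (host, port, credentials, table count and table size are placeholders, not my exact command): `--threads=200` fixes the client-side concurrency regardless of whether the target is the single instance or the Proxy endpoint.

   ```bash
   # Illustrative sysbench run against the proxy; host/port, credentials,
   # table count and table size are placeholders.
   # --threads=200 sets the number of client threads on the sysbench side,
   # independently of how many backend connections the proxy opens later.
   sysbench oltp_read_write \
     --mysql-host=proxy-host --mysql-port=3307 \
     --mysql-user=root --mysql-password=root \
     --mysql-db=sharding_db \
     --tables=10 --table-size=1000000 \
     --threads=200 --time=300 --report-interval=10 \
     run
   ```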
   
   What I don’t understand next is what exactly is meant by “even distribution across nodes” and how it relates to the HikariPool configuration. In my tests, performance gains appeared only once I explicitly set the connection pool size. The situation was as follows: in the single setup, with `threads = 200` and `HikariPool = 200`, I had 200 connections to a single database. When working through the Proxy with the same `threads = 200` but `HikariPool = 200` on each node, I ended up with 400 connections to the backends in total. In that scenario requests were indeed sent in parallel to the different shards (`ds_0`, `ds_1`), each node handled its own share of the load, and I observed a performance gain close to ×2. Am I correct that this is exactly what is meant by load distribution?
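
   For concreteness, the per-node pool size I am referring to is the `maxPoolSize` setting on each data source in the Proxy's `config-sharding.yaml`. The fragment below is only an illustrative sketch (URLs, credentials, and database names are placeholders, not my real config); with `maxPoolSize: 200` on each of the two data sources, the Proxy can open up to 400 backend connections in total.

   ```yaml
   # Illustrative config-sharding.yaml fragment; URLs, credentials and names
   # are placeholders. With maxPoolSize: 200 on each data source, the proxy
   # can hold up to 400 backend connections in total (200 per shard).
   databaseName: sharding_db

   dataSources:
     ds_0:
       url: jdbc:mysql://db0-host:3306/demo_ds_0?useSSL=false
       username: root
       password: root
       connectionTimeoutMilliseconds: 30000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 1
     ds_1:
       url: jdbc:mysql://db1-host:3306/demo_ds_1?useSSL=false
       username: root
       password: root
       connectionTimeoutMilliseconds: 30000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 1
   ```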
   

