XingyuFu opened a new issue, #13291:
URL: https://github.com/apache/dubbo/issues/13291

   <!-- If you need to report a security issue please visit 
https://github.com/apache/dubbo/security/policy -->
   
   - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) 
of this repository and believe that this is not a duplicate.
   
   ### Environment
   
   * Dubbo version: 3.2.7
   * Java version: 8
   
   ### Steps to reproduce this issue
   Set up one Consumer and one Provider.
   1. The Provider exposes only the triple protocol, with a thread pool size of 1200.
   
![image](https://github.com/apache/dubbo/assets/19774938/d62a11d0-9e58-45e3-aff6-96b8c6852d71)
   The interface-level registry entry for the service shows: side=provider, threads=1200
   
![image](https://github.com/apache/dubbo/assets/19774938/2ef6d2bd-2c93-43d1-ad4c-087b1a6f8286)
   2. Redeploy the Consumer and make several invocations.
   3. Set a breakpoint in org.apache.dubbo.rpc.protocol.tri.TripleInvoker and inspect streamExecutor in the debugger; its core pool size is 1200.
   
![image](https://github.com/apache/dubbo/assets/19774938/d1dcf2cb-f020-4808-aff4-eccdf524b927)
   
   
Tracing the code, you can see in org.apache.dubbo.common.threadpool.manager.DefaultExecutorRepository#createExecutorIfAbsent that the thread pool is created from the URL.
   
![image](https://github.com/apache/dubbo/assets/19774938/62b7451e-2d44-4f8f-a21e-5874dbba23ec)
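The caching step above can be sketched as follows. This is a minimal illustration of the idea (one pool lazily created and cached per URL key, sized by a parameter carried on that URL), not Dubbo's actual DefaultExecutorRepository code; the class name and URL key are made up:

```java
import java.util.concurrent.*;

// Minimal sketch (not Dubbo's real implementation): an executor repository
// that lazily creates and caches one thread pool per URL key. Whatever
// thread count the URL carries (e.g. threads=1200) decides the pool size.
public class ExecutorRepositorySketch {
    private final ConcurrentMap<String, ExecutorService> pools = new ConcurrentHashMap<>();

    public ExecutorService createExecutorIfAbsent(String urlKey, int threads) {
        return pools.computeIfAbsent(urlKey,
                k -> new ThreadPoolExecutor(threads, threads,
                        0L, TimeUnit.MILLISECONDS,
                        new LinkedBlockingQueue<>()));
    }

    public static void main(String[] args) {
        ExecutorRepositorySketch repo = new ExecutorRepositorySketch();
        ThreadPoolExecutor pool = (ThreadPoolExecutor)
                repo.createExecutorIfAbsent("provider-host:20880", 1200);
        System.out.println(pool.getCorePoolSize()); // prints 1200
        pool.shutdown();
    }
}
```

Because the pool is both keyed and sized by the URL, a consumer that builds its executor from the provider's URL inherits the provider's threads value.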
   
   
Deeper down, the thread pool parameters in org.apache.dubbo.common.threadpool.support.fixed.FixedThreadPool are read from the URL's threads parameter.
   
![image](https://github.com/apache/dubbo/assets/19774938/4d61dd4a-97bb-4f9d-bc8e-4143bf339dc4)
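Conceptually, what FixedThreadPool does is read "threads" off the URL and use it as both core and maximum pool size. The following is a hedged sketch of that behavior, not Dubbo's actual code; the URL string is illustrative, and 200 is Dubbo's documented default thread count:

```java
import java.util.concurrent.*;

// Sketch of the FixedThreadPool idea: parse the "threads" parameter from a
// URL's query string and size a fixed pool with it (core == max).
public class FixedPoolFromUrl {
    static int threadsParam(String url, int defaultThreads) {
        for (String kv : url.substring(url.indexOf('?') + 1).split("&")) {
            String[] parts = kv.split("=", 2);
            if (parts.length == 2 && parts[0].equals("threads")) {
                return Integer.parseInt(parts[1]);
            }
        }
        return defaultThreads;
    }

    static ThreadPoolExecutor fixedPool(String url) {
        int threads = threadsParam(url, 200); // 200 is Dubbo's default
        return new ThreadPoolExecutor(threads, threads,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    public static void main(String[] args) {
        // The consumer builds its pool from the provider's URL, so the
        // provider-side threads=1200 leaks into the consumer process.
        ThreadPoolExecutor pool = fixedPool(
                "tri://provider:50051/com.example.DemoService?side=provider&threads=1200");
        System.out.println(pool.getCorePoolSize()); // prints 1200
        pool.shutdown();
    }
}
```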
   
   
   This raises a problem: the core pool size of this Consumer-side thread pool depends on the Provider's URL configuration.
   
The Provider's workload can differ from the Consumer's, and so can machine specs and instance counts. A thread count that is reasonable on the Provider's side may be unreasonable on the Consumer's side.
   In our usage, this poses an OOM risk.
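To make the OOM concern concrete, a rough back-of-the-envelope estimate (assuming HotSpot's common 1 MiB default thread stack, -Xss1m; actual reservation and usage vary by platform and settings):

```java
// Rough stack-reservation estimate for N platform threads.
public class StackEstimate {
    static long estimateStackMiB(int threads, long stackMiBPerThread) {
        return threads * stackMiBPerThread;
    }

    public static void main(String[] args) {
        // 1200 threads at an assumed 1 MiB stack each.
        System.out.println(estimateStackMiB(1200, 1) + " MiB"); // prints "1200 MiB"
    }
}
```

So on a small consumer machine, the inherited threads=1200 alone can account for over a gigabyte of stack reservation before any heap usage.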


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
