saleson opened a new issue, #10046:
URL: https://github.com/apache/dubbo/issues/10046

   Production environment:
   The consumer fails with "Data length too large:xxx". The URL printed in the error belongs to Service App1, but the IP it points to turns out to be Service App2's, and it is indeed an interface of Service App2 that returned the oversized response. Service App2 has a reasonable payload limit configured, yet the consumer never picks it up and falls back to the default 8M, so the call ultimately fails. Tracing the code showed that the Channel in the consumer that should correspond to Service App2 was still bound to Service App1's URL. Further analysis finally yielded a reproduction path: because of IP pooling, Service App1's IP was reclaimed when it was redeployed, and when Service App2 was deployed it obtained ip-a, the address Service App1 had previously used. The consumer was neither redeployed nor restarted during this window, so once the Service App2 instance on ip-a finished starting, DubboProtocol.getSharedClient() still returned the Channel object associated with Service App1.
   
   ### Environment
   
   * Dubbo version: 3.0.5
   * Java version: 1.8
   
   ### Steps to reproduce this issue
   
   1. Start the provider with payload set to 100.
   2. Start the consumer and call the provider once.
   3. Change the provider's payload threshold to 8M and restart the provider.
   4. Have the consumer call the provider again.
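   For steps 1 and 3, the provider-side payload limit is set on the protocol configuration. A sketch of what the two provider configurations might look like (port and exact placement of the `payload` attribute are assumptions and may differ per setup):

   ```xml
   <!-- Step 1: artificially low limit to trigger "Data length too large" -->
   <dubbo:protocol name="dubbo" port="20880" payload="100"/>

   <!-- Step 3: restart the provider with the limit raised to 8M (in bytes) -->
   <dubbo:protocol name="dubbo" port="20880" payload="8388608"/>
   ```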
   
   Pls. provide [GitHub address] to reproduce this issue.
   
   ### Expected Behavior
   The consumer's first call fails with "Data length too large:xxx".
   After the provider's payload is raised and the provider is restarted, the consumer's calls complete normally.
   
   
   ### Actual Behavior
   The consumer's first call fails with "Data length too large:xxx".
   After the provider's payload is raised and the provider is restarted, the consumer's calls still fail with "Data length too large:xxx".
   
   
   
Tracing the code shows that ReferenceCountExchangeClient.close(), after closing the delegated Client, replaces its client field with a LazyConnectExchangeClient, which recreates an ExchangeClient on the next invocation.
Moreover, the referenceClientMap in DubboProtocol (the instance variable that caches clients) never removes the ReferenceCountExchangeClient entry, and the cache key is just ip:port.
So when a new instance that reuses the same ip:port later starts up and is discovered by the consumer, the cached ReferenceCountExchangeClient and the URL information held by its chain of underlying Clients are never refreshed; the stale ReferenceCountExchangeClient is handed directly to the new DubboInvoker.
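The core of the bug can be illustrated with a minimal, hypothetical model of the cache (class and field names below are stand-ins, not Dubbo's real implementation): because the key is only ip:port, a URL captured at first connect sticks to that address even after a different service takes over the IP.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal model of DubboProtocol's referenceClientMap.
public class StaleClientCache {
    // Stand-in for ReferenceCountExchangeClient: remembers the URL it was built with.
    static class CachedClient {
        final String url;
        CachedClient(String url) { this.url = url; }
    }

    private final Map<String, CachedClient> referenceClientMap = new ConcurrentHashMap<>();

    // Mirrors getSharedClient(): the key is the address only, so on a cache
    // hit the URL passed in (and its payload setting) is silently ignored.
    CachedClient getSharedClient(String address, String url) {
        return referenceClientMap.computeIfAbsent(address, a -> new CachedClient(url));
    }

    public static void main(String[] args) {
        StaleClientCache protocol = new StaleClientCache();
        // Service App1 owned the address first; its URL carries payload=100.
        CachedClient first = protocol.getSharedClient("10.0.0.7:20880",
                "dubbo://10.0.0.7:20880/App1Service?payload=100");
        // App1 is torn down, the IP pool hands the address to Service App2,
        // which advertises a larger payload. The consumer asks again...
        CachedClient second = protocol.getSharedClient("10.0.0.7:20880",
                "dubbo://10.0.0.7:20880/App2Service?payload=8388608");
        // ...and gets the same stale object back, still bound to App1's URL.
        System.out.println(first == second);
        System.out.println(second.url);
    }
}
```

Running this prints `true` followed by the App1 URL: the second lookup returns the cached client unchanged, which is exactly why the consumer keeps applying Service App1's settings against Service App2.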
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

