yangxiaohui-coll opened a new issue, #7948:
URL: https://github.com/apache/rocketmq/issues/7948

   ### Before Creating the Bug Report
   
   - [X] I found a bug, not just asking a question, which should be created in 
[GitHub Discussions](https://github.com/apache/rocketmq/discussions).
   
   - [X] I have searched the [GitHub Issues](https://github.com/apache/rocketmq/issues) and [GitHub Discussions](https://github.com/apache/rocketmq/discussions) of this repository and believe that this is not a duplicate.
   
   - [X] I have confirmed that this bug belongs to the current repository, not 
other repositories of RocketMQ.
   
   
   ### Runtime platform environment
   
   Not related to the operating system.
   
   ### RocketMQ version
   
   Present in both 4.x and 5.x.
   
   ### JDK Version
   
   Not related to the JDK version.
   
   ### Describe the Bug
   
   Calling the queryMessage interface makes the broker's memory spike -> old-generation GC -> client connections to the RocketMQ broker time out.
   
   ### Steps to Reproduce
   
   Call mqAdminExt.queryMessage(topic, key, size, start, end) with size set to an extremely large value (admittedly an extreme case), and you will see the RocketMQ JVM memory spike.
   
   ### What Did You Expect to See?
   
   My message keys are unique: there may be many messages in a given time window, but only one message carries a given key. I would therefore expect a large size value not to make RocketMQ's memory spike.
   
   ### What Did You See Instead?
   
   The broker allocates a new list sized directly from the caller-supplied maxNum. Wouldn't it be better to move the allocation after the Math.min() call?
   
        public QueryOffsetResult queryOffset(String topic, String key, int maxNum, long begin, long end) {
            // The list is pre-allocated with the raw, caller-supplied maxNum ...
            List<Long> phyOffsets = new ArrayList<Long>(maxNum);
            long indexLastUpdateTimestamp = 0;
            long indexLastUpdatePhyoffset = 0;
            // ... and only afterwards is maxNum clamped to the configured batch limit.
            maxNum = Math.min(maxNum, this.defaultMessageStore.getMessageStoreConfig().getMaxMsgsNumBatch());
            try {
                this.readWriteLock.readLock().lock();
                if (!this.indexFileList.isEmpty()) {
            ...
        }
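   
   A minimal sketch of the suggested reordering, using the same names as the snippet above (a proposed change, not a tested patch): clamp maxNum first, then size the list from the clamped value.
   
        public QueryOffsetResult queryOffset(String topic, String key, int maxNum, long begin, long end) {
            // Clamp the caller-supplied maxNum to the configured batch limit first ...
            maxNum = Math.min(maxNum, this.defaultMessageStore.getMessageStoreConfig().getMaxMsgsNumBatch());
            // ... so the list is sized by the clamped value, not by raw client input.
            List<Long> phyOffsets = new ArrayList<Long>(maxNum);
            long indexLastUpdateTimestamp = 0;
            long indexLastUpdatePhyoffset = 0;
            try {
                this.readWriteLock.readLock().lock();
                if (!this.indexFileList.isEmpty()) {
            ...
        }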
   
   
   
   ### Additional Context
   
   _No response_

