[jira] [Commented] (ROCKETMQ-273) return an expression when a function has no write operations
[ https://issues.apache.org/jira/browse/ROCKETMQ-273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137927#comment-16137927 ]

ASF GitHub Bot commented on ROCKETMQ-273:
-----------------------------------------

Github user coveralls commented on the issue:

    https://github.com/apache/incubator-rocketmq/pull/150

    [![Coverage Status](https://coveralls.io/builds/12949746/badge)](https://coveralls.io/builds/12949746)
    Coverage increased (+0.04%) to 39.068% when pulling **45f692cff6382483e2d9ef4ac481ca3523e0816c on kevin-better:develop** into **ca14a2d474b6c71143944ec95f7c28e23e15632d on apache:develop**.

> return an expression when a function has no write operations
> ------------------------------------------------------------
>
>             Key: ROCKETMQ-273
>             URL: https://issues.apache.org/jira/browse/ROCKETMQ-273
>         Project: Apache RocketMQ
>      Issue Type: Improvement
>      Components: rocketmq-store
> Affects Versions: 4.2.0-incubating
>        Reporter: wangkai
>        Assignee: yukon
>        Priority: Minor
>          Labels: Improvement
>         Fix For: 4.2.0-incubating
>
> Original Estimate: 2h
> Remaining Estimate: 2h
>
> return an expression when a function has no write operations.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
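The refactor named in the issue title is the common inspection: when a local variable is assigned once and never written again, return the expression directly instead of going through the variable. A minimal illustrative sketch (the class and method names here are invented for this example, not taken from the RocketMQ source):

```java
// Illustrative only: shows the "return an expression" refactor pattern,
// not actual RocketMQ code.
public class ReturnExpressionExample {

    // Before: the local variable has no write operations after its
    // initial assignment, so it adds nothing.
    static int beforeRefactor(int a, int b) {
        int result = a + b;
        return result;
    }

    // After: the expression is returned directly, one line shorter
    // with identical behavior.
    static int afterRefactor(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(beforeRefactor(2, 3)); // 5
        System.out.println(afterRefactor(2, 3));  // 5
    }
}
```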
[jira] [Commented] (ROCKETMQ-265) when os crash for some reasons, the broker consume queue’s data maybe repeat, consumer can’t pull the latest message, cause message lag
[ https://issues.apache.org/jira/browse/ROCKETMQ-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137768#comment-16137768 ]

ASF GitHub Bot commented on ROCKETMQ-265:
-----------------------------------------

Github user coveralls commented on the issue:

    https://github.com/apache/incubator-rocketmq/pull/146

    [![Coverage Status](https://coveralls.io/builds/12948530/badge)](https://coveralls.io/builds/12948530)
    Coverage decreased (-0.04%) to 38.735% when pulling **c2ecd16fd0cc183b0c2fbb35727e2bbcdcabb931 on fuyou001:ROCKETMQ-265** into **2ddb744b3157604ec87a82143c3100728589c6ec on apache:master**.

> when os crash for some reasons, the broker consume queue's data maybe repeat, consumer can't pull the latest message, cause message lag
> --------------------------------------------------------------------------------------------------------------------------------------
>
>             Key: ROCKETMQ-265
>             URL: https://issues.apache.org/jira/browse/ROCKETMQ-265
>         Project: Apache RocketMQ
>      Issue Type: Bug
>      Components: rocketmq-store
> Affects Versions: 4.0.0-incubating, 4.1.0-incubating
>        Reporter: yubaofu
>        Assignee: yukon
>        Priority: Critical
>          Labels: bug
>         Fix For: 4.2.0-incubating
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> When the OS crashes for some reason, the broker's consume queue data may be duplicated; the consumer then cannot pull the latest messages, causing message lag.
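One way to picture the direction of a fix for the duplicated consume queue entries is: on recovery after an abnormal shutdown, the commit log is the source of truth, so consume queue entries that duplicate earlier entries or point past the commit log's valid end should be discarded. The following is a heavily simplified standalone sketch of that idea, not the actual RocketMQ recovery code; here a consume queue entry is modeled as just a commit-log offset (`long`), which is not how the real store represents entries:

```java
import java.util.Arrays;

// Hypothetical model: each "consume queue entry" is just a commit-log offset.
public class ConsumeQueueRecoverySketch {

    // Keep only entries that (a) lie within the commit log's valid range
    // (commitLogMaxOffset is treated as an exclusive end) and (b) are strictly
    // increasing, so entries duplicated before the crash are dropped rather
    // than served to consumers twice.
    static long[] truncate(long[] entries, long commitLogMaxOffset) {
        long last = -1;
        int kept = 0;
        long[] out = new long[entries.length];
        for (long offset : entries) {
            if (offset <= last || offset >= commitLogMaxOffset) {
                continue; // duplicate or beyond the commit log: discard
            }
            out[kept++] = offset;
            last = offset;
        }
        return Arrays.copyOf(out, kept);
    }

    public static void main(String[] args) {
        // 200 is duplicated and 900 points past the commit log end (800).
        long[] recovered = truncate(new long[]{100, 200, 200, 300, 900}, 800);
        System.out.println(Arrays.toString(recovered)); // [100, 200, 300]
    }
}
```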
[jira] [Commented] (ROCKETMQ-257) name server address and web server address should be specified at least one
[ https://issues.apache.org/jira/browse/ROCKETMQ-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136683#comment-16136683 ]

ASF GitHub Bot commented on ROCKETMQ-257:
-----------------------------------------

Github user qqeasonchen closed the pull request at:

    https://github.com/apache/incubator-rocketmq/pull/144

> name server address and web server address should be specified at least one
> ---------------------------------------------------------------------------
>
>             Key: ROCKETMQ-257
>             URL: https://issues.apache.org/jira/browse/ROCKETMQ-257
>         Project: Apache RocketMQ
>      Issue Type: Bug
>      Components: rocketmq-client
> Affects Versions: 4.1.0-incubating
>     Environment: test and production
>        Reporter: Eason Chen
>        Assignee: Xiaorui Wang
>        Priority: Minor
>         Fix For: 4.2.0-incubating
>
> If neither the name server address nor the web server address is specified, the client cannot fetch the right name server list and will fail to start, because the default wsAddr=http://jmenv.tbsite.net:8080/rocketmq/nsaddr is not reachable.
> {code:java}
> // name server address and web server address should be specified at least one
> if (null == this.clientConfig.getNamesrvAddr() && MixAll.getWSAddr().equals(MixAll.WS_ADDR)) {
>     throw new MQClientException("name server address and web server address should be specified at least one.", null);
> } else if (null == this.clientConfig.getNamesrvAddr()) {
>     this.mQClientAPIImpl.fetchNameServerAddr();
> }
> {code}
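The quoted condition boils down to: fail fast only when the user supplied neither an explicit name server address nor a web server address different from the unreachable default. A simplified standalone restatement of that condition (the class, method, and constant names below are illustrative; only the quoted snippet above is actual client code):

```java
// Illustrative sketch of the "at least one must be specified" check,
// not the real MQClientInstance code.
public class NamesrvCheckSketch {

    // Default wsAddr mentioned in the report as unreachable outside Alibaba's network.
    static final String DEFAULT_WS_ADDR = "http://jmenv.tbsite.net:8080/rocketmq/nsaddr";

    // Returns true when the client has at least one usable source of name
    // server addresses: an explicit namesrvAddr, or a wsAddr overridden
    // away from the unreachable default. If this returns false, startup
    // should fail fast with a clear error instead of hanging on fetches.
    static boolean hasNameServerSource(String namesrvAddr, String wsAddr) {
        boolean explicit = namesrvAddr != null && !namesrvAddr.isEmpty();
        boolean customWs = wsAddr != null && !wsAddr.equals(DEFAULT_WS_ADDR);
        return explicit || customWs;
    }

    public static void main(String[] args) {
        // Neither configured: the client should refuse to start.
        System.out.println(hasNameServerSource(null, DEFAULT_WS_ADDR));             // false
        // An explicit name server address is enough on its own.
        System.out.println(hasNameServerSource("127.0.0.1:9876", DEFAULT_WS_ADDR)); // true
    }
}
```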
[jira] [Commented] (ROCKETMQ-272) The config `syncFlushTimeout` doesn't work for SYNC_MASTER
[ https://issues.apache.org/jira/browse/ROCKETMQ-272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136453#comment-16136453 ]

Yu Kaiyuan commented on ROCKETMQ-272:
-------------------------------------

Relevant log:

*Producer*
{code:java}
2017-08-22 14:26:25,420 INFO main com.rmq.example.TestProducer 28: Message content size = 512028
2017-08-22 14:26:25,887 INFO main com.rmq.example.TestProducer 35: sendResult = SendResult [sendStatus=FLUSH_SLAVE_TIMEOUT, msgId=0A05121B278412A3A3806CAB9B7D, offsetMsgId=0ADD49222A9F11B26A50, messageQueue=MessageQueue [topic=TopicTestBigMessage, brokerName=broker-a, queueId=0], queueOffset=1]
{code}

*Broker*
{code:java}
2017-08-22 14:26:25 WARN GroupTransferService - transfer messsage to slave timeout, 297196971
2017-08-22 14:26:25 ERROR SendMessageThread_1 - do sync transfer other node, wait return, but failed, topic: TopicTestBigMessage tags: null client address: 10.5.18.27
{code}

> The config `syncFlushTimeout` doesn't work for SYNC_MASTER
> ----------------------------------------------------------
>
>             Key: ROCKETMQ-272
>             URL: https://issues.apache.org/jira/browse/ROCKETMQ-272
>         Project: Apache RocketMQ
>      Issue Type: Bug
>      Components: rocketmq-broker
> Affects Versions: 4.1.0-incubating
>        Reporter: Yu Kaiyuan
>        Assignee: yukon
>
> It is quite common to get `sendStatus=FLUSH_SLAVE_TIMEOUT` when sending big messages (>500 KB) in a SYNC_MASTER/SLAVE setup.
> As far as I can tell, the timeout used by the sync process is the config `syncFlushTimeout`, whose default value is 5000 milliseconds.
> But the producer gets the `FLUSH_SLAVE_TIMEOUT` result in less than 1 second.
> So why does the config not work as expected?
> Relevant code:
> {code:java}
> // CommitLog.java
> public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
>     // ...
>     // Synchronous write double
>     if (BrokerRole.SYNC_MASTER == this.defaultMessageStore.getMessageStoreConfig().getBrokerRole()) {
>         HAService service = this.defaultMessageStore.getHaService();
>         if (msg.isWaitStoreMsgOK()) {
>             // Determine whether to wait
>             if (service.isSlaveOK(result.getWroteOffset() + result.getWroteBytes())) {
>                 if (null == request) {
>                     request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
>                 }
>                 service.putRequest(request);
>                 service.getWaitNotifyObject().wakeupAll();
>                 boolean flushOK = // TODO
>                     request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
>                 if (!flushOK) {
>                     log.error("do sync transfer other node, wait return, but failed, topic: " + msg.getTopic()
>                         + " tags: " + msg.getTags() + " client address: " + msg.getBornHostString());
>                     putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_SLAVE_TIMEOUT);
>                 }
>             }
>             // Slave problem
>             else {
>                 // Tell the producer, slave not available
>                 putMessageResult.setPutMessageStatus(PutMessageStatus.SLAVE_NOT_AVAILABLE);
>             }
>         }
>     }
>     return putMessageResult;
> }
> {code}
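The `request.waitForFlush(syncFlushTimeout)` call above is, in essence, a bounded wait on a completion signal. A minimal stdlib sketch of that contract follows (a simplified stand-in for `GroupCommitRequest`, not the real class); it also shows why a producer can see `FLUSH_SLAVE_TIMEOUT` well before the configured 5000 ms elapses: if the signalling side gives up and reports failure early, the wait returns `false` immediately rather than waiting out the full timeout.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Simplified stand-in for GroupCommitRequest; not the actual RocketMQ class.
public class GroupCommitRequestSketch {
    private final CountDownLatch countDownLatch = new CountDownLatch(1);
    private volatile boolean flushOK = false;

    // Called by the flush/HA thread once replication finishes (or is abandoned).
    public void wakeup(boolean ok) {
        this.flushOK = ok;
        this.countDownLatch.countDown();
    }

    // Waits up to timeoutMillis for the wakeup; returns false either when the
    // wait times out or when the signalling thread reported failure early.
    public boolean waitForFlush(long timeoutMillis) throws InterruptedException {
        boolean signalled = this.countDownLatch.await(timeoutMillis, TimeUnit.MILLISECONDS);
        return signalled && this.flushOK;
    }

    public static void main(String[] args) throws InterruptedException {
        GroupCommitRequestSketch request = new GroupCommitRequestSketch();
        // Another thread signals failure right away: waitForFlush returns
        // false long before the 5000 ms bound is reached.
        new Thread(() -> request.wakeup(false)).start();
        System.out.println(request.waitForFlush(5000)); // false
    }
}
```

Under this reading, the broker WARN from `GroupTransferService` in the log above would point at the signalling side giving up early, rather than at `syncFlushTimeout` being ignored outright; the sketch is only one plausible model of the reported behaviour.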
[jira] [Created] (ROCKETMQ-272) The config `syncFlushTimeout` doesn't work for SYNC_MASTER
Yu Kaiyuan created ROCKETMQ-272:
--------------------------------

             Summary: The config `syncFlushTimeout` doesn't work for SYNC_MASTER
                 Key: ROCKETMQ-272
                 URL: https://issues.apache.org/jira/browse/ROCKETMQ-272
             Project: Apache RocketMQ
          Issue Type: Bug
          Components: rocketmq-broker
    Affects Versions: 4.1.0-incubating
            Reporter: Yu Kaiyuan
            Assignee: yukon


It is quite common to get `sendStatus=FLUSH_SLAVE_TIMEOUT` when sending big messages (>500 KB) in a SYNC_MASTER/SLAVE setup.

As far as I can tell, the timeout used by the sync process is the config `syncFlushTimeout`, whose default value is 5000 milliseconds. But the producer gets the `FLUSH_SLAVE_TIMEOUT` result in less than 1 second. So why does the config not work as expected?

Relevant code:
{code:java}
// CommitLog.java
public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
    // ...
    // Synchronous write double
    if (BrokerRole.SYNC_MASTER == this.defaultMessageStore.getMessageStoreConfig().getBrokerRole()) {
        HAService service = this.defaultMessageStore.getHaService();
        if (msg.isWaitStoreMsgOK()) {
            // Determine whether to wait
            if (service.isSlaveOK(result.getWroteOffset() + result.getWroteBytes())) {
                if (null == request) {
                    request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
                }
                service.putRequest(request);
                service.getWaitNotifyObject().wakeupAll();
                boolean flushOK = // TODO
                    request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                if (!flushOK) {
                    log.error("do sync transfer other node, wait return, but failed, topic: " + msg.getTopic()
                        + " tags: " + msg.getTags() + " client address: " + msg.getBornHostString());
                    putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_SLAVE_TIMEOUT);
                }
            }
            // Slave problem
            else {
                // Tell the producer, slave not available
                putMessageResult.setPutMessageStatus(PutMessageStatus.SLAVE_NOT_AVAILABLE);
            }
        }
    }
    return putMessageResult;
}
{code}