[ https://issues.apache.org/jira/browse/ARTEMIS-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422938#comment-17422938 ]

Ekta commented on ARTEMIS-3500:
-------------------------------

Regarding: What exactly "goes in infinite loop and keeps retrying to process 
the message"? The broker itself _rejects_ the message, so I assume that 
something is trying to resend the message over and over. Can you clarify this?

 

It is kind of hard to say... it could be that the message coming from a 
microservice has retry logic, so the message is retried a few times before the 
retries are exhausted. However, I believe we tested this locally and could see 
the broker go into an infinite loop trying to process the rejected message. At 
the same time we could see the broker logging the "too many open files" and 
large-message exceptions below. 

 
{noformat}
Caused by: javax.jms.JMSException: AMQ149005: Message of 564,590 bytes is 
bigger than the max record size of 501,760 bytes. You should try to move large 
application properties to the message body. [condition = failed]{noformat}
{noformat}
2021-09-22 07:37:01,092 WARN [org.apache.activemq.artemis.core.server] 
AMQ222086: error handling packet ReplicationSyncFileMessage(LARGE_MESSAGE, 
id=*********) for replication: java.io.FileNotFoundException: 
/app/test/large-messages/*********.msg (Too many open files) {noformat}
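
The AMQ149005 text above itself suggests a client-side workaround: move large 
application properties into the message body. A minimal sketch of that idea, 
using a plain Map in place of real javax.jms message properties (the names 
PropertyGuard, MAX_PROPERTY_BYTES, and the propertiesInBody marker are 
hypothetical; the 501,760-byte limit is taken from the log above):

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a real producer would apply the same check to
// javax.jms.Message string properties before calling send().
class PropertyGuard {

    // Assumed limit, mirroring the "max record size of 501,760 bytes" in the log.
    static final int MAX_PROPERTY_BYTES = 501_760;

    /** Approximate UTF-8 size of all application properties (keys plus values). */
    static int propertyBytes(Map<String, String> props) {
        int total = 0;
        for (Map.Entry<String, String> e : props.entrySet()) {
            total += e.getKey().getBytes(StandardCharsets.UTF_8).length;
            total += e.getValue().getBytes(StandardCharsets.UTF_8).length;
        }
        return total;
    }

    /**
     * If the properties would exceed the record-size limit, append them to the
     * body and keep only a small marker property, so the broker never sees an
     * oversized header. The key=value body format here is illustrative.
     */
    static Map<String, String> guard(Map<String, String> props, StringBuilder body) {
        if (propertyBytes(props) <= MAX_PROPERTY_BYTES) {
            return props; // small enough: leave the properties as headers
        }
        for (Map.Entry<String, String> e : props.entrySet()) {
            body.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        }
        Map<String, String> slim = new LinkedHashMap<>();
        slim.put("propertiesInBody", "true");
        return slim;
    }
}
```

This keeps the header under the journal record limit on the producer side; it 
does not change anything on the broker, which is what the improvement request 
below asks for.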

> Route messages to broker with large header
> ------------------------------------------
>
>                 Key: ARTEMIS-3500
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3500
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.18.0
>         Environment: PROD
>            Reporter: Ekta
>            Priority: Major
>
> Below is what our application and messaging architecture looks like. 
> {noformat}
> Microservice --> nlb --> qpid ---> amq brokers (Master/slave){noformat}
> We recently saw a scenario where a Java microservice application pushed a few 
> messages to our brokers with a larger-than-normal header. This caused a major 
> outage and brought our whole broker environment down. To avoid cases like 
> these, is there a way to discard such a message or handle this situation at 
> the broker so that it does not end up in a big mess on our brokers? Qpid is 
> basically acting as a proxy and simply routes the traffic to the brokers. 
> Although the broker rejects this large-header message, it still stores it in 
> memory and goes into an infinite loop retrying to process the message, which 
> causes overhead on the brokers, and soon all the brokers shut down.
> After filtering the headers down to a smaller size, we could see the issue 
> was resolved. If there is a way to handle this a little better on the broker, 
> or even on the Qpid layer, we would consider that as well.
> We would appreciate feedback from anyone who has faced this issue before, as 
> it could happen again in the future. Below are some of the exceptions we saw 
> related to the above issue. 
> {noformat}
> Caused by: javax.jms.JMSException: AMQ149005: Message of 564,590 bytes is 
> bigger than the max record size of 501,760 bytes. You should try to move 
> large application properties to the message body. [condition = 
> failed]{noformat}
> {noformat}
> 2021-09-22 07:37:01,092 WARN [org.apache.activemq.artemis.core.server] 
> AMQ222086: error handling packet ReplicationSyncFileMessage(LARGE_MESSAGE, 
> id=*********) for replication: java.io.FileNotFoundException: 
> /app/test/large-messages/*********.msg (Too many open files){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
