[ https://issues.apache.org/jira/browse/ARTEMIS-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423007#comment-17423007 ]

Justin Bertram commented on ARTEMIS-3500:
-----------------------------------------

bq. ...it could be that the message coming from a microservice may have a retry 
logic where the message will retry a couple of times before it is exhausted...

Is this something you could investigate and clarify conclusively? I assume a 
retry could be in the microservice or the Qpid layer as well.

bq. I believe we tested this from local and could see that the broker goes into 
an infinite loop where it tries to process the rejected message.

Could you provide a test that reproduces this behavior, or at least elaborate on 
what was looping/processing? It doesn't make sense to me that a broker would, 
by itself, continue trying to process a message that it had rejected, but bugs 
do happen, so I'm keen to understand more about how this situation might arise.

As [~brusdev] noted, the AMQ222086 issue should already be resolved via 
ARTEMIS-3467. Furthermore, the AMQ149005 issue is the broker rightfully 
rejecting the message. I'm really just curious about the "infinite loop" issue 
you've described, but more information is needed to investigate further.
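If the retry does turn out to live in the microservice or Qpid layer, the usual fix is a bounded retry that treats a size rejection as permanent. A minimal sketch of that idea, assuming a plain {{Exception}} in place of {{javax.jms.JMSException}}; the class name {{BoundedRetrySender}} and the attempt cap are hypothetical, not anything from the broker:

```java
// Hypothetical sketch: a size rejection like AMQ149005 can never succeed
// on a later attempt, so it should fail fast instead of looping forever.
// A plain Exception stands in for javax.jms.JMSException here.
public class BoundedRetrySender {

    @FunctionalInterface
    interface SendOp { void send() throws Exception; }

    static final int MAX_ATTEMPTS = 3; // assumed cap, not a broker setting

    // Returns true once the send succeeds, false if the cap is exhausted;
    // rethrows immediately on a non-retryable size rejection.
    static boolean sendWithRetry(SendOp op) throws Exception {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                op.send();
                return true;
            } catch (Exception e) {
                String msg = String.valueOf(e.getMessage());
                if (msg.contains("AMQ149005")) {
                    throw e; // oversized record: retrying cannot help
                }
                // transient failure: fall through and try again
            }
        }
        return false;
    }
}
```

A loop without that non-retryable check, sitting in front of a broker that always rejects the message, would look very much like the "infinite loop" described below.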

> Route messages to broker with large header
> ------------------------------------------
>
>                 Key: ARTEMIS-3500
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3500
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.18.0
>         Environment: PROD
>            Reporter: Ekta
>            Priority: Major
>
> Below is what our application and messaging architecture looks like. 
> {noformat}
> Microservice --> nlb --> qpid ---> amq brokers (Master/slave){noformat}
> We recently saw a scenario where a Java microservice application pushed a few 
> messages to our brokers with much larger headers than normal. This caused our 
> whole broker environment to crash. To avoid cases like these, is there a way 
> to discard such a message or otherwise handle the situation at the broker so 
> that it does not end up in a big mess on our brokers? Qpid is basically acting 
> as a proxy and simply routes the traffic to the brokers. Although the broker 
> rejects the large-header message, it still stores it in memory and goes into 
> an infinite loop, retrying to process the message; this causes overhead on the 
> brokers, and they all immediately shut down.
> After filtering the headers down to a smaller size, we could see the issue was 
> resolved. If there is a way to handle this better on the broker, or even on 
> the Qpid layer, we would consider that as well.
> We would appreciate feedback from anyone who has faced this issue before, as 
> it could happen again in the future. Below are some of the exceptions we saw 
> related to the issue. 
> {noformat}
> Caused by: javax.jms.JMSException: AMQ149005: Message of 564,590 bytes is 
> bigger than the max record size of 501,760 bytes. You should try to move 
> large application properties to the message body. [condition = 
> failed]{noformat}
> {noformat}
> 2021-09-22 07:37:01,092 WARN [org.apache.activemq.artemis.core.server] 
> AMQ222086: error handling packet ReplicationSyncFileMessage(LARGE_MESSAGE, 
> id=*********) for replication: java.io.FileNotFoundException: 
> /app/test/large-messages/*********.msg (Too many open files){noformat}
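The AMQ149005 text above names the mitigation directly: move large application properties into the message body. A minimal client-side sketch of that idea, assuming a plain {{Map}} in place of the JMS property set; {{HeaderSlimmer}}, {{evictLargeProperties}}, and the per-property limit are hypothetical, and the 501,760-byte record limit from the exception is only the upper bound being avoided:

```java
// Hypothetical sketch: strip oversized application properties out of the
// header map before sending, so the caller can serialize them into the
// message body instead (the mitigation AMQ149005 itself suggests).
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class HeaderSlimmer {

    // Evicts every property whose value exceeds maxValueChars and returns
    // the evicted entries; the caller moves those into the message body.
    static Map<String, String> evictLargeProperties(Map<String, String> props,
                                                    int maxValueChars) {
        Map<String, String> moved = new HashMap<>();
        Iterator<Map.Entry<String, String>> it = props.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> e = it.next();
            if (e.getValue().length() > maxValueChars) {
                moved.put(e.getKey(), e.getValue());
                it.remove(); // keep the header well under the record limit
            }
        }
        return moved;
    }
}
```

Guarding the producer this way keeps oversized payloads out of the header path entirely, so the broker-side rejection (and whatever retry loop sits in front of it) is never triggered.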



--
This message was sent by Atlassian Jira
(v8.3.4#803005)