[ 
https://issues.apache.org/jira/browse/ARTEMIS-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-3500:
------------------------------------
    Description: 
Below is what our application and messaging architecture looks like. 
{noformat}
Microservice --> nlb --> qpid ---> amq brokers (Master/slave){noformat}

We recently saw a scenario where a Java microservice application pushed a few 
messages to our brokers with headers much larger than the normal header size. 
This led to a very big disaster and caused our whole broker environment to 
crash. To avoid cases like these, is there a way to discard such a message or 
otherwise handle this situation at the broker so that it does not end up in a 
big mess on our brokers? Qpid is basically acting as a proxy and simply routes 
the traffic to the brokers. Although the broker rejects the large-header 
message, it still stores it in memory and enters an infinite loop, retrying 
the message over and over. This overhead on the brokers causes all of them to 
shut down almost immediately.
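One way to reduce the blast radius is a producer-side guard that bounds the total header size before the message is ever sent. Below is a minimal sketch: the 501,760-byte limit is taken from the AMQ149005 error quoted later in this report, while the per-entry encoding overhead is an assumption for illustration, not the actual Artemis wire format.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical client-side guard: estimate the serialized size of message
// properties and refuse to send when they would exceed the broker's journal
// record limit (501,760 bytes per the AMQ149005 error in this report).
public class HeaderSizeGuard {
    // Limit taken from the AMQ149005 message itself.
    static final int MAX_PROPERTY_BYTES = 501_760;

    // Rough estimate: key and value lengths in UTF-8, plus an assumed small
    // per-entry overhead for type markers and length prefixes.
    static int estimatePropertyBytes(Map<String, String> props) {
        int total = 0;
        for (Map.Entry<String, String> e : props.entrySet()) {
            total += e.getKey().getBytes(StandardCharsets.UTF_8).length;
            total += e.getValue().getBytes(StandardCharsets.UTF_8).length;
            total += 8; // assumed per-entry encoding overhead
        }
        return total;
    }

    // Returns false when the properties alone would blow the record limit,
    // so the caller can drop or trim the message instead of sending it.
    static boolean fitsInJournalRecord(Map<String, String> props) {
        return estimatePropertyBytes(props) < MAX_PROPERTY_BYTES;
    }
}
```

A producer would call fitsInJournalRecord before send() and reject or truncate the offending headers; this mirrors the header filtering that resolved the issue for us, but moves the check before the message reaches the broker.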

After filtering the headers down to a smaller size, we could see the issue was 
resolved. If there is a way to handle this a little better on the broker, or 
even at the Qpid layer, we would consider that as well.
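On the broker side, one mechanism worth exploring is an incoming interceptor: Artemis can be configured in broker.xml to pass incoming packets through a custom class before they are processed, which could inspect header sizes and reject oversized messages early. A minimal sketch of the wiring, where the interceptor class name is hypothetical and would need to be implemented and placed on the broker's classpath:

```xml
<!-- broker.xml: register a (hypothetical) interceptor that inspects
     incoming messages and rejects those whose headers exceed a safe size -->
<remoting-incoming-interceptors>
   <class-name>com.example.HeaderSizeLimitInterceptor</class-name>
</remoting-incoming-interceptors>
```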

We would appreciate any feedback from anyone who has faced this issue before, 
as it could happen again in the future. Below are some of the exceptions we 
saw related to the above issue. 
{noformat}
Caused by: javax.jms.JMSException: AMQ149005: Message of 564,590 bytes is 
bigger than the max record size of 501,760 bytes. You should try to move large 
application properties to the message body. [condition = failed]{noformat}
{noformat}
2021-09-22 07:37:01,092 WARN [org.apache.activemq.artemis.core.server] 
AMQ222086: error handling packet ReplicationSyncFileMessage(LARGE_MESSAGE, 
id=*********) for replication: java.io.FileNotFoundException: 
/app/test/large-messages/*********.msg (Too many open files){noformat}

  was:
Hello,

Below is what our application and messaging architecture looks like. 

We have  a Microservice --> nlb --> qpid ---> amq brokers (Master/slave)

We recently saw a scenario where a java microservice application pushed a few 
messages to our amq brokers where the message was containing a larger header 
than the normal size header, which led to a very big disaster and caused our 
whole amq env to crash. To avoid such cases like these, is there a way to 
discard a message or handle this situation at the amq layer so that it does not 
endup in a big mess on our brokers as qpid for us is basically acting like a 
proxy and simply routes the traffic to brokers. Though the broker rejects this 
large header message but still stores it in its memory and goes in infinite 
loop and keeps retrying to process the message which causes over head on the 
brokers and immediately all the brokers go in shutdown state.

After filtering out the headers to a smaller size, we could see the issue is 
resolved. If there is a way to handle this little better on amq or even if it 
is possible on qpid layer, we would also consider that.

Appreciate any feedback if anyone have faced this issue before as it could 
happen again in future. Below are some of the exceptions we saw related to 
above issue. 

Caused by: javax.jms.JMSException: AMQ149005: Message of 564,590 bytes is 
bigger than the max record size of 501,760 bytes. You should try to move large 
application properties to the message body. [condition = failed]

2021-09-22 07:37:01,092 WARN [org.apache.activemq.artemis.core.server] 
AMQ222086: error handling packet ReplicationSyncFileMessage(LARGE_MESSAGE, 
id=*********) for replication: java.io.FileNotFoundException: 
/app/test/large-messages/*********.msg (Too many open files)

Thanks 


> Route messages to broker with large header
> ------------------------------------------
>
>                 Key: ARTEMIS-3500
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3500
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.18.0
>         Environment: PROD
>            Reporter: Ekta
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
