[ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
-----------------------------------
    Description: 
SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. Every 60 seconds, 
3 durable messages and 3 non-durable messages are produced (almost 
simultaneously) on the 1st broker.

We produce large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created for them in the `large-messages` directory, as expected.

After the configured amount of time, the messages expire and the original 
MSG/TMP files are removed, both as expected.

For monitoring, we have a simple extra consumer on the address `ExpiryQueue`, 
connected to a 2nd broker in the same cluster.
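
For reference, the relevant part of the broker configuration looks roughly 
like the minimal broker.xml sketch below. The directory path, the `match` 
pattern, and the 60000 ms expiry delay are assumptions for illustration, not 
necessarily our exact values:

```xml
<core xmlns="urn:activemq:core">
  <!-- directory where the broker stores the MSG/TMP files for large messages -->
  <large-messages-directory>data/large-messages</large-messages-directory>

  <address-settings>
    <!-- hypothetical match pattern covering the test queue -->
    <address-setting match="#">
      <!-- expired messages are routed to the ExpiryQueue address -->
      <expiry-address>ExpiryQueue</expiry-address>
      <!-- assumed expiry delay, in milliseconds -->
      <expiry-delay>60000</expiry-delay>
    </address-setting>
  </address-settings>
</core>
```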

OBSERVATION:

MSG/TMP files are left behind on the disk of the 2nd broker, also every 60 
seconds. This is unexpected. No related logfile lines are seen on either broker.
The content of the MSG/TMP files is (based on its size) related to the original 
MSG/TMP files. These files have different names, likely because they were 
recreated in the context of the ExpiryQueue address. The files are slightly 
larger, likely because of the addition of a few expiry-related headers.

  was:
DRAFT! text below is not complete yet...



SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

We are producing large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

After the configured amount of time, the messages expire as expected. The 
original MSG/TMP files are removed as expected.

OBSERVATION:
For monitoring, we have a simple consumer on the address `ExpiryQueue` on a 2nd 
broker in the same cluster:

3 TMP files are left on the disk of the 2nd broker every 60 seconds. This is 
unexpected. No related logfile lines are seen on either broker.
The content of the TMP files is (based on its size) related to the original 


> on-disk files for large messages are not always removed on expiry
> -----------------------------------------------------------------
>
>                 Key: ARTEMIS-4781
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4781
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: Clustering
>    Affects Versions: 2.33.0
>            Reporter: Erwin Dondorp
>            Priority: Major
>
> SETUP:
> Using a broker-cluster.
> The tests are executed with durable and non-durable messages. 3 durable 
> messages and 3 non-durable messages are produced every 60 seconds (almost) at 
> the same time on the 1st broker.
> We are producing large AMQP messages and leave them on a durable queue. 
> MSG/TMP files are created in directory `large-messages` for this as expected.
> After the configured amount of time, the messages expire as expected. The 
> original MSG/TMP files are removed as expected.
> For monitoring, we have a simple extra consumer on the address `ExpiryQueue` 
> connected to a 2nd broker in the same cluster.
> OBSERVATION:
> The MSG/TMP files are left on the disk of the 2nd broker also every 60 
> seconds. This is unexpected. No related logfile lines are seen on either 
> broker.
> The content of the MSG/TMP files is (based on its size) related to the 
> original MSG/TMP files. These files have different names, likely because 
> they have been recreated in the context of the ExpiryQueue address. The files 
> are slightly larger, likely because of the addition of a few expiry-related 
> headers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
For further information, visit: https://activemq.apache.org/contact
