Hello,

I'm trying to narrow down the cause of a behavior we observed on our
Artemis brokers during a recent test.

We deliberately put one broker into disk quota overload by creating big
temp files on the filesystem holding its store. The broker is configured
with a 75% <max-disk-usage> limit and its queues are set to paging; we got
the expected log statement:

AMQ222210: Free storage space is at 26.2GB of 105.1GB total. Usage rate
is 75.1% which is beyond the configured <max-disk-usage>. System will
start blocking producers.
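
For context, here is a minimal sketch of how the 75% limit and the paging
policy can be expressed on an embedded broker through the Java Configuration
API. The acceptor URLs, the "#" address match and the class name are
placeholders, not our actual values, and our real brokers are configured
from the webapp rather than hard-coded like this:

import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;
import org.apache.activemq.artemis.core.settings.impl.AddressFullMessagePolicy;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

public class EmbeddedBrokerSketch {

    public static EmbeddedActiveMQ startBroker() throws Exception {
        Configuration config = new ConfigurationImpl()
                .setPersistenceEnabled(true)
                .setSecurityEnabled(false)
                // block producers once the store's disk usage exceeds 75%
                .setMaxDiskUsage(75)
                // external clients use STOMP, internal services use CORE
                .addAcceptorConfiguration("stomp", "tcp://0.0.0.0:61613?protocols=STOMP")
                .addAcceptorConfiguration("core", "tcp://0.0.0.0:61616?protocols=CORE");

        // page to disk instead of blocking or dropping when an address fills up
        AddressSettings paging = new AddressSettings()
                .setAddressFullMessagePolicy(AddressFullMessagePolicy.PAGE);
        config.addAddressesSetting("#", paging);

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
        return broker;
    }
}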

After some time, the temp files were removed and the broker was allowed to
recover, which it did, resuming processing of new messages sent by clients.

However, after the simulated incident, the broker was slower than before.
The average processing time of an incoming message in the consumers went
from 3-6 ms to 10-13 ms. The extra time was narrowed down to the
session.commit() that occurs at the end of processing each incoming
message.
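
To be concrete about where the time goes, the server-side consumers follow
roughly this pattern (simplified; the in-VM URL, the queue name and the
class names are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerSketch {

    public static void consume() throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://0");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // transacted session: the work only becomes visible on commit
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("example.queue");
            MessageConsumer consumer = session.createConsumer(queue);
            while (true) {
                Message message = consumer.receive(1000);
                if (message == null) {
                    continue;
                }
                process(message);  // application processing, normally 3-6 ms in total
                session.commit();  // this is the call where the extra time shows up
            }
        }
    }

    private static void process(Message message) {
        // application-specific handling (placeholder)
    }
}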

Worse, this behavior occurred on all three brokers the clients were hitting
during the test, not only the first one, which had been pushed into the
blocking state. The brokers are embedded (each in a different deployment of
the same webapp, on different servers).

Nominal processing performance is recovered by restarting the brokers, and
only the brokers: restarting the server-side consumers changes nothing,
while stopping and starting the broker (within the same running,
uninterrupted webapp, application server and JVM runtime) makes the problem
disappear.
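
The restart that clears the problem is just a stop/start of the embedded
instance inside the running JVM, essentially the following (the handle name
is a placeholder for whatever reference the webapp keeps):

import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class BrokerRestartSketch {

    // restart the embedded broker without touching the webapp or the JVM
    public static void restartBroker(EmbeddedActiveMQ broker) throws Exception {
        broker.stop();   // shuts the Artemis server down in-process
        broker.start();  // brings it back up; commit times return to normal
    }
}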

The clients are connected via STOMP; the internal connections to the
brokers from the application services use CORE/JMS.

Any ideas?

Regards
