BIG Update:

I've just completed the performance work to make the MAPPED Journal suitable
to be used as a first-class Journal type for any kind of load (concurrent
and durable loads in particular).

To make it suitable for high loads of concurrent persistent-message
requests, I've performed these optimisations (both sketched below):
1) smart batching of concurrent/high-rate sync requests (different from NIO
and ASYNCIO: it is high throughput *and* low latency, falling back to
honoring a configurable latency SLA only when it can't achieve both)
2) load-adaptive page prefetching/zeroing, to make write requests more
sympathetic to the Unix/Windows page-cache policies and the hard disk
sector sizes
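
To give an idea of what the smart batching in (1) means in practice, here is a
minimal, hypothetical sketch of the general pattern (it is not the code on the
branch, and SmartBatchFlusher, SyncRequest and maxSyncLatencyNanos are made-up
names): concurrent producers enqueue sync requests and a single flusher thread
pays one force() for whatever has accumulated, so the batch size grows with the
load; the real batching heuristic around the latency SLA is omitted here.

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

final class SmartBatchFlusher implements Runnable {

   // One pending durability request from a producer thread.
   static final class SyncRequest {
      final CountDownLatch done = new CountDownLatch(1);
   }

   private final LinkedBlockingQueue<SyncRequest> pending = new LinkedBlockingQueue<>();
   private final FileChannel channel;
   // Configurable latency SLA: used here only to bound the blocking wait; the
   // heuristic that trades batch size against it is left out of the sketch.
   private final long maxSyncLatencyNanos;

   SmartBatchFlusher(FileChannel channel, long maxSyncLatencyNanos) {
      this.channel = channel;
      this.maxSyncLatencyNanos = maxSyncLatencyNanos;
   }

   // Producers call this after appending their record, then await request.done.
   SyncRequest requestSync() {
      final SyncRequest request = new SyncRequest();
      pending.add(request);
      return request;
   }

   @Override
   public void run() {
      final List<SyncRequest> batch = new ArrayList<>();
      while (!Thread.currentThread().isInterrupted()) {
         try {
            final SyncRequest first = pending.poll(maxSyncLatencyNanos, TimeUnit.NANOSECONDS);
            if (first == null) {
               continue;
            }
            batch.add(first);
            // Drain everything that piled up while the previous force() was in
            // flight: one force() is amortised over the whole batch under load,
            // while an isolated request still gets its own immediate force().
            pending.drainTo(batch);
            channel.force(false);
            for (SyncRequest request : batch) {
               request.done.countDown();
            }
            batch.clear();
         } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
         } catch (IOException e) {
            throw new IllegalStateException("journal sync failed", e);
         }
      }
   }
}

And a rough illustration of the prefetching/zeroing idea in (2), again
hypothetical and deliberately non-adaptive: zero the region ahead of the write
position in OS-page-aligned chunks so that later appends land on pages that are
already resident and already zeroed. An adaptive version would grow or shrink
how far ahead it zeroes based on the observed write rate.

import java.nio.MappedByteBuffer;

final class PageZeroer {

   private static final int OS_PAGE_SIZE = 4096;   // assumed; real code would detect it

   // Zeroes the mapped region ahead of 'position' for roughly 'length' bytes,
   // working on whole OS pages so each page is touched (faulted in) exactly once.
   static void zeroAhead(MappedByteBuffer mapped, int position, int length) {
      final byte[] zeroPage = new byte[OS_PAGE_SIZE];
      final int start = (position / OS_PAGE_SIZE) * OS_PAGE_SIZE;
      final int limit = Math.min(mapped.capacity(), position + length);
      for (int pageStart = start; pageStart < limit; pageStart += OS_PAGE_SIZE) {
         final int chunk = Math.min(OS_PAGE_SIZE, mapped.capacity() - pageStart);
         mapped.position(pageStart);
         mapped.put(zeroPage, 0, chunk);   // zero-fills and pre-touches the page
      }
   }
}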

The change is huge, and I'm thinking (feedback needed!!) of keeping a
separate implementation for PAGING (the old one would be good for that):
the two serve different purposes and require different optimisations.

I prefer not to publish benchmark results, and I've deliberately not updated
the Artemis journal benchmark tool: it can't easily show the improvements,
given the kind of optimisations done to make the journal faster on the
common execution paths.
On the other hand, I've reused the tuning that the tool performs at Artemis
startup to configure the journal's latency SLA and write buffer size: a user
will only need to set the Journal type in broker.xml and everything will
work as in the ASYNCIO case (using the same properties, maxAIO excluded).
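
For illustration, an assumed broker.xml excerpt (the values are just
examples; the point is that switching the journal type is the only
deliberate change compared to an ASYNCIO configuration):

      <journal-type>MAPPED</journal-type>
      <!-- same buffer properties as ASYNCIO; journal-max-io is not used -->
      <journal-buffer-timeout>500000</journal-buffer-timeout>
      <journal-buffer-size>501760</journal-buffer-size>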

I'd suggest trying it on real loads, with the disk HW write cache disabled
(e.g. hdparm -W 0 /dev/sda on Linux), to get consistent results: it is more
sensitive than the ASYNCIO journal to these hardware features, in particular
because of the adaptive prefetching feature I've put in.

The branch with its latest version is here:
https://github.com/franz1981/activemq-artemis/tree/batch_buffer