Hello Antoine,
Yes, it is expected that a new Connection is made for each enqueue and
dequeue.
The relevant code is
org.apache.qpid.server.store.AbstractJDBCMessageStore#getConnection
which is called from multiple places.
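As an illustration only (this is not the actual store code, and the ConnectionSource interface here is hypothetical), the cost of that pattern can be sketched: if getConnection() is called once on the enqueue path and once on the dequeue path, N messages cost roughly 2N connection requests unless the DataSource behind it pools and reuses connections.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionPerOperation {
    // Stand-in for the DataSource behind getConnection(); the real code
    // returns a java.sql.Connection -- here we only count the requests.
    interface ConnectionSource {
        void getConnection();
    }

    /** Simulates one connection request per enqueue and one per dequeue. */
    static int connectionsRequested(int messages) {
        AtomicInteger opened = new AtomicInteger();
        ConnectionSource source = opened::incrementAndGet;
        for (int i = 0; i < messages; i++) {
            source.getConnection(); // enqueue path
            source.getConnection(); // dequeue path
        }
        return opened.get();
    }

    public static void main(String[] args) {
        // 1,000 messages -> 2,000 connection requests; scaled to
        // 1,000,000 messages, this is why pooling matters so much.
        System.out.println(connectionsRequested(1_000)); // prints 2000
    }
}
```

With a pooling DataSource (BoneCP in your test) those requests are served from a small set of physical connections instead of opening a TCP connection to Oracle each time.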
We do our performance testing using the BDB store. We do not performance
test other store types.
Therefore, it is possible that the JDBC path is not as well tuned as the
BDB one.
I do not know of any obvious performance bottlenecks in the JDBC code
(if they were obvious, we would probably have fixed them already).
I currently do not have the capacity to investigate this, but feel free
to investigate yourself and ideally provide a patch :)
When it comes to performance, your exact setup and configuration are
important. Comparing a local BDB store against JDBC over the network is
of dubious value.
Here are a few things to consider when investigating performance:
* Are you using persistent or transient messages?
* Are you using transacted or auto-ack sessions?
* Are the messages published sequentially by a single producer or
multiple producers in parallel?
If you are publishing/consuming in parallel you might want to try
tuning the connection pool size.
* Are you consuming at the same time or are you first publishing and
then consuming?
* Are the messages being flowed to disk (check the broker logs for BRK-1014)?
This can happen under low-memory conditions and is detrimental to
performance because messages need to be reloaded from disk.
* Is the broker enforcing producer-side flow control?
This can happen when running out of disk space and is obviously
detrimental to performance.
* ...
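On the pool-size point above, here is a minimal sketch (plain java.util.concurrent, no broker or BoneCP code; the "pool" is just a Semaphore standing in for pooled connections) of why a bounded pool caps the effective parallelism of concurrent producers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSizeSketch {
    /** Runs `producers` concurrent tasks against a pool of `poolSize`
     *  permits and returns the maximum concurrency actually observed. */
    static int maxObservedConcurrency(int producers, int poolSize) throws Exception {
        Semaphore pool = new Semaphore(poolSize);
        AtomicInteger inUse = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        ExecutorService exec = Executors.newFixedThreadPool(producers);
        for (int i = 0; i < producers; i++) {
            exec.submit(() -> {
                try {
                    pool.acquire();                      // borrow a "connection"
                    int now = inUse.incrementAndGet();
                    maxSeen.accumulateAndGet(now, Math::max);
                    Thread.sleep(5);                     // simulated DB work
                    inUse.decrementAndGet();
                    pool.release();                      // return it to the pool
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return null;
            });
        }
        exec.shutdown();
        exec.awaitTermination(10, TimeUnit.SECONDS);
        return maxSeen.get();
    }

    public static void main(String[] args) throws Exception {
        // 16 parallel producers but only 4 pooled connections:
        // no more than 4 can touch the database at once.
        System.out.println(maxObservedConcurrency(16, 4));
    }
}
```

If your benchmark publishes and consumes with many parallel sessions, a pool smaller than the number of concurrent sessions will serialize them on connection checkout, so tuning the pool size to your concurrency level is worth trying.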
I hope this somewhat helps with your investigation.
Kind regards,
Lorenz
On 05/01/17 13:45, Antoine Chevin wrote:
Hello,
I ran a benchmark using Qpid java broker 6.0.4 and the JDBC message
store with an Oracle database.
I tried to send and read back 1,000,000 messages, but was not able to
finish the benchmark as there was a StoreException caused by a
java.net.ConnectException (the full stack trace is attached).
I suspected a very high number of connections.
I tried using JDBC with BoneCP and the benchmark finished. I could get
the BoneCP statistics and for 1,000,000 messages, there were 3,000,000
DB connections requested.
It looks like the broker requests a connection when enqueuing and when
dequeuing each message with the JDBC store. Is this normal behavior?
Also, the benchmark showed that the JDBC store with Oracle was slower
than the BDB store (an average throughput of 2.8K msg/s vs 5.4K msg/s).
I expected some degradation, since the Oracle database is located on a
separate machine and the broker goes over the network to persist the
messages, but not that much.
Do you know if there is a possible improvement in the JDBC message
store code to narrow the gap?
Thank you in advance,
Best regards,
Antoine
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
---------------------------------------------------------------------