On 6 January 2017 at 14:06, Lorenz Quack <[email protected]> wrote:
> Hello Antoine,
>
> Yes, it is expected that a new Connection is made for each enqueue and
> dequeue.
> The relevant code is
> org.apache.qpid.server.store.AbstractJDBCMessageStore#getConnection,
> which is called from multiple places.

To be clear - the semantic behaviour is that each "transaction" gets a new
connection ... if you use transactions in the client to combine multiple
enqueues/dequeues, then these will all happen on one connection. Obviously
this strategy doesn't work well if you are actually opening a new TCP
connection each time (for in-memory Derby, which the code was originally
written for, it doesn't matter too much that Qpid isn't pooling in any way,
since the connections don't have much overhead), so using a
connection-caching provider is pretty much mandatory if you want to use an
external RDBMS.

-- Rob

> We do our performance testing using the BDB store. We do not performance
> test other store types.
> It is therefore possible that the JDBC path is not as well tuned as the
> BDB one.
> I do not know of any obvious performance bottlenecks in the JDBC code (if
> they were obvious we would probably have fixed them already).
> I currently do not have the capacity to investigate this, but feel free to
> investigate yourself and, ideally, provide a patch :)
>
> When it comes to performance, your exact setup and configuration are
> important.
> Comparing local BDB with over-the-network JDBC is dubious.
> Here are a couple of things to always consider when investigating
> performance:
> * Are you using persistent or transient messages?
> * Are you using transacted or auto-ack sessions?
> * Are the messages published sequentially by a single producer, or by
>   multiple producers in parallel?
>   If you are publishing/consuming in parallel, you might want to try
>   tuning the connection pool size.
> * Are you consuming at the same time, or are you first publishing and then
>   consuming?
> * Are the messages being flowed to disk (check the broker logs for BRK-1014)?
>   This might happen in low-memory conditions and is detrimental to
>   performance because messages need to be reloaded from disk.
> * Is the broker enforcing producer-side flow control?
>   This might happen when running out of disk space and is obviously
>   detrimental to performance.
> * ...
>
> I hope this somewhat helps with your investigation.
>
> Kind regards,
> Lorenz
>
>
> On 05/01/17 13:45, Antoine Chevin wrote:
>> Hello,
>>
>> I ran a benchmark using the Qpid Java broker 6.0.4 and the JDBC message
>> store with an Oracle database.
>> I tried to send and read 1,000,000 messages through the broker, but was
>> not able to finish the benchmark because of a StoreException caused by a
>> java.net.ConnectException (the full stack trace is attached).
>>
>> I suspected a very high number of connections.
>>
>> I tried using JDBC with BoneCP and the benchmark finished. I could get
>> the BoneCP statistics, and for 1,000,000 messages there were 3,000,000 DB
>> connections requested.
>>
>> It looks like the broker requests a connection when enqueuing and again
>> when dequeuing each message with the JDBC store. Is this normal
>> behaviour?
>>
>> Also, the benchmark showed that the JDBC store with Oracle was slower
>> than the BDB store (an average throughput of 2.8K msg/s vs 5.4K msg/s).
>> I expected some degradation, since the Oracle store is located on a
>> separate machine and the broker goes over the network to persist the
>> messages - but not that much.
>> Do you know of a possible improvement in the JDBC message store code to
>> narrow the gap?
>>
>> Thank you in advance,
>> Best regards,
>> Antoine
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
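For anyone puzzled by the 3,000,000-connections figure (roughly one per enqueue transaction and one per dequeue transaction), the pattern Rob describes can be sketched as below. This is an illustrative toy, not Qpid's actual store code: `FakeConnection`, `NaiveProvider` and `PoolingProvider` are stand-ins invented for the example, modelling a raw provider that opens a physical connection per request versus a caching provider (as BoneCP or similar would give you) that reuses returned connections.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only - NOT Qpid's implementation.
public class Main {

    // Stand-in for a physical JDBC connection; opening one over TCP is costly.
    static class FakeConnection {
        static int physicalOpens = 0;            // counts "TCP opens"
        FakeConnection() { physicalOpens++; }
    }

    // Naive provider: every transaction opens a fresh physical connection,
    // which is roughly what happens without a connection-caching provider.
    static class NaiveProvider {
        FakeConnection getConnection() { return new FakeConnection(); }
        void returnConnection(FakeConnection c) { /* discarded */ }
    }

    // Caching provider: returned connections are kept idle and reused,
    // the way a real pool hands back cached JDBC connections.
    static class PoolingProvider {
        private final Deque<FakeConnection> idle = new ArrayDeque<>();
        FakeConnection getConnection() {
            FakeConnection c = idle.poll();
            return (c != null) ? c : new FakeConnection();
        }
        void returnConnection(FakeConnection c) { idle.push(c); }
    }

    public static void main(String[] args) {
        int messages = 1000;

        // Without pooling: one enqueue tx + one dequeue tx per message
        // => two physical opens per message.
        FakeConnection.physicalOpens = 0;
        NaiveProvider naive = new NaiveProvider();
        for (int i = 0; i < messages; i++) {
            naive.returnConnection(naive.getConnection()); // enqueue tx
            naive.returnConnection(naive.getConnection()); // dequeue tx
        }
        System.out.println("naive opens: " + FakeConnection.physicalOpens);

        // With pooling: the same request pattern reuses a single connection.
        FakeConnection.physicalOpens = 0;
        PoolingProvider pool = new PoolingProvider();
        for (int i = 0; i < messages; i++) {
            pool.returnConnection(pool.getConnection()); // enqueue tx
            pool.returnConnection(pool.getConnection()); // dequeue tx
        }
        System.out.println("pooled opens: " + FakeConnection.physicalOpens);
    }
}
```

The point of the sketch: the per-transaction request pattern is unchanged in both cases; only the provider differs, which is why plugging in a caching provider is enough to collapse millions of physical opens down to a handful, without touching the store logic.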
