Thank you for the reply. Yes, it really helped.
I spent more time investigating the metrics and found that in my
benchmark the throughput is lower at the same time as the latency is
higher.
The higher latency can be explained by the network indirection.
Each producer sends its messages sequentially, so the increased latency
directly affects the throughput.
I reran the benchmark with 8 producers (instead of 4) and obtained
throughput results closer to the ones I used to have (BDB: 5.4K msg/s,
JDBC: 4.9K msg/s).
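To illustrate why latency caps throughput for synchronous producers, here is a rough back-of-the-envelope model. The latency figure below is hypothetical, picked only so the numbers resemble those above; it is not a measurement from the benchmark:

```java
// Rough model: each producer sends synchronously (one message in flight),
// so aggregate throughput ~= number of producers / per-message latency.
// The latency value is hypothetical, for illustration only.
public class ThroughputModel {
    static double throughputMsgPerSec(int producers, double latencySec) {
        return producers / latencySec;
    }

    public static void main(String[] args) {
        double latencySec = 0.00143; // ~1.43 ms per message (hypothetical)
        System.out.printf("4 producers: %.0f msg/s%n",
                throughputMsgPerSec(4, latencySec));
        System.out.printf("8 producers: %.0f msg/s%n",
                throughputMsgPerSec(8, latencySec));
    }
}
```

Under this model, doubling the producers doubles the aggregate throughput for the same per-message latency, which matches what I observed when going from 4 to 8 producers.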

One question remains: I tried to configure the connection pool (BoneCP)
but did not manage to change partitionCount, maxConnectionsPerPartition,
or minConnectionsPerPartition. I could not find any documentation for
that. Do you know how I can set those values?
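For completeness, this is the shape of the configuration I have been trying. My best guess from reading the store code is that the qpid.jdbcstore.bonecp.* settings belong in the virtual host's context map, but that placement is an assumption I have not confirmed, and the connection URL below is a placeholder:

```json
{
  "type" : "JDBC",
  "connectionUrl" : "jdbc:oracle:thin:@//dbhost:1521/SERVICE",
  "connectionPoolType" : "BONECP",
  "context" : {
    "qpid.jdbcstore.bonecp.partitionCount" : "2",
    "qpid.jdbcstore.bonecp.minConnectionsPerPartition" : "1",
    "qpid.jdbcstore.bonecp.maxConnectionsPerPartition" : "4"
  }
}
```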

Thank you,
Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: Monday, 9 January 2017 15:49
To: users@qpid.apache.org
Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance issues

Just to cover this part:

*I don't know the code in detail, but is there a reason for not using a
single connection for the JDBC message store lifecycle?*

Since SQL/JDBC can have at most one open transaction for any given
connection, we would have to have (for AMQP 0-x) one JDBC connection open
per session (i.e. potentially multiple per AMQP connection).  This would
likely lead to vastly more connections to the database being opened than
would be necessary. For AMQP 1.0 the situation is worse since the protocol
allows multiple open transactions.  In practice it makes more sense for us
to use a pool of connections and to pull a connection out of the pool when
we want to begin transactional work.  (Also note that even if you are
not using transactions in AMQP, we need to use them at the store level -
if a message is published to an exchange and is routed to multiple
queues, this must happen atomically.  Similarly if you acknowledge
multiple messages in a single command, this must happen in a database
txn.)
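To make the borrow-per-transaction pattern concrete, here is a minimal, self-contained sketch. It is illustrative only - the broker's actual store code hands out java.sql.Connection objects from a real pool, not strings:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of borrow-per-transaction pooling (not the broker's code; a
// real pool would hold java.sql.Connection objects, not strings).
public class PoolPerTransaction {
    private final BlockingQueue<String> pool;

    PoolPerTransaction(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add("conn-" + i);
        }
    }

    int available() {
        return pool.size();
    }

    // One store transaction = borrow a connection, do the work, return it.
    void runInTransaction(Runnable work) throws InterruptedException {
        String conn = pool.take();   // blocks if all connections are in use
        try {
            work.run();              // e.g. enqueue to several queues, then commit
        } finally {
            pool.put(conn);          // returned to the pool, not closed
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PoolPerTransaction store = new PoolPerTransaction(2);
        for (int i = 0; i < 5; i++) {
            store.runInTransaction(() -> { /* atomic multi-queue enqueue */ });
        }
        System.out.println("available after 5 transactions: " + store.available());
    }
}
```

The point is that the number of open database connections is bounded by the pool size, independent of how many AMQP sessions or in-flight transactions exist.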

Hope this helps,
Rob

On 9 January 2017 at 14:13, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Hello,
>
> Thank you for your answers. It's a lot clearer to me now how the JDBC
> store behaves. You can find the answers to your questions inline below.
>
> Regards,
> Antoine
>
> -----Original Message-----
> From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
> Sent: Friday, 6 January 2017 15:52
> To: users@qpid.apache.org
> Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance
> issues
>
> On 6 January 2017 at 14:06, Lorenz Quack <quack.lor...@gmail.com> wrote:
>
> > Hello Antoine,
> >
> > Yes, it is expected that a new Connection is made for each enqueue
> > and dequeue.
> > The relevant code is
> > org.apache.qpid.server.store.AbstractJDBCMessageStore#getConnection,
> > which is called from multiple places.
> >
> >
> To be clear - the semantic behaviour is that each "transaction" gets a
> new connection ...  if you use transactions in the client to combine
> multiple enqueues/dequeues then this will all happen on one connection.
>
> *I don't know the code in detail, but is there a reason for not using
> a single connection for the JDBC message store lifecycle?*
>
> Obviously this strategy doesn't work well if you are actually opening
> a new TCP connection each time (for in-memory Derby, which the code
> was originally written for, it doesn't matter too much if Qpid isn't
> pooling in any way, since the connections don't have much overhead),
> so using a connection caching provider is pretty much mandatory if you
> want to use an external RDBMS.
>
> -- Rob
>
>
>
> > We do our performance testing using the BDB store. We do not
> > performance test other store types.
> > Therefore, it is possible that the JDBC path is not as well tuned as
> > the BDB one.
> > I do not know of any obvious performance bottlenecks in the JDBC
> > code (if they were obvious we would have probably fixed them).
> > I currently do not have the capacity to investigate this but feel
> > free to investigate yourself and ideally provide a patch :)
>
> *I'm in the analysis phase, but performance will be on the critical
> path and I'll be happy to contribute :-)*
> >
> > When it comes to performance your exact setup and configuration is
> > important.
> > To compare local BDB with over the network JDBC is dubious.
> > Here are a couple of things to always consider when investigating
> > performance
> >  * Are you using persistent or transient messages?
> *The messages are set to be persistent and the broker queue is
> durable.*
> >  * Are you using transacted or auto-ack sessions?
> *It's an AUTO_ACK session*
> >  * Are the messages published sequentially by a single producer or
> > multiple producers in parallel?
> *4 producers are publishing the messages in parallel (they are 4
> processes)*
> >    If you are publishing/consuming in parallel you might want to try
> > tuning the connection pool size.
>
>
>
> *Currently, I'm using the defaults (partitionCount: 4, 10 connections
> per partition max). I tried to change these options in the
> configuration but did not succeed. I tried to set, for instance,
> { partitionCount: 2, minConnectionsPerPartition: 1,
> maxConnectionsPerPartition: 4 } in the VirtualHost configuration.*
> *I looked at the code and also tried
> qpid.jdbcstore.bonecp.maxConnectionsPerPartition instead of
> maxConnectionsPerPartition (same for min).*
>
> *Do you know how I can change the BoneCP configuration?*
>
> >  * Are you consuming at the same time or are you first publishing
> > and then consuming?
> *Publishing and consuming at the same time.*
> >  * Are the messages being flowed to disk (check broker logs for
> >    BRK-1014)?
> *I did not see this log and I think there is enough memory (16 GB)*
> >    This might happen in low memory conditions and is detrimental to
> > performance because messages need to be reloaded from disk.
> >  * Is the broker enforcing producer-side flow control?
> *I don't think so but how can I check? Should I see a log too?*
>
> >    This might happen when running out of disk space and is obviously
> > detrimental to performance.
> >  * ...
> >
> >
> > I hope this somewhat helps with your investigation.
> >
> > Kind regards,
> > Lorenz
> >
> >
> >
> >
> > On 05/01/17 13:45, Antoine Chevin wrote:
> >
> >> Hello,
> >>
> >> I ran a benchmark using Qpid java broker 6.0.4 and the JDBC message
> >> store with an Oracle database.
> >> I tried to send and read 1,000,000 messages to the broker but was
> >> not able to finish the benchmark as there was a StoreException
> >> caused by a java.net.ConnectException (full stack is attached).
> >>
> >> I suspected a very high number of connections.
> >>
> >> I tried using JDBC with BoneCP and the benchmark finished. I could
> >> get the BoneCP statistics and for 1,000,000 messages, there were
> >> 3,000,000 DB connections requested.
> >>
> >> It looks like the broker requests a connection when enqueuing and
> >> dequeuing each message with the JDBC store. Is this the normal
> >> behavior?
> >>
> >> Also, the benchmark showed that the JDBC store with Oracle was
> >> slower than the BDB store (an average throughput of 2.8K msg/s vs
> >> 5.4K msg/s).
> >> I suspected some degradation, as the Oracle database is located on
> >> a separate machine and the broker goes over the network to persist
> >> the messages - but not that much.
> >> Do you know if there is a possible improvement in the JDBC message
> >> store code to narrow the gap?
> >>
> >> Thank you in advance,
> >> Best regards,
> >> Antoine
> >>
> >>
> >> -------------------------------------------------------------------
> >> -- To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For
> >> additional commands, e-mail: users-h...@qpid.apache.org
> >>
> >
> >
> >
> >
>
