This is where the Java world starts to get complicated: no two people use 
the same tools in the same way, so all our viewpoints differ.

HikariCP is considered by many to be one of the best, if not the best, of 
the available connection pools. This can cause problems if you use articles 
focused on HikariCP as the foundation for configuring other solutions. That 
said, while I do not know much about the default Apache option, the 
benchmarks on the main HikariCP page 
(https://github.com/brettwooldridge/HikariCP) show that it does not perform 
the same way as HikariCP: each solution handles the connection cycle very 
differently. That difference may well be visible via the profiler.
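If you did want to try HikariCP for comparison, a minimal setup is only a 
few lines. This is a sketch only: the JDBC URL, credentials and pool size 
below are hypothetical placeholders, not values from this thread.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource newPool() {
        HikariConfig config = new HikariConfig();
        // Hypothetical local MySQL settings -- substitute your own.
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        config.setUsername("app");
        config.setPassword("secret");
        // A fixed-size pool, in line with HikariCP's pool-sizing advice:
        // keep it small and constant rather than letting it grow under load.
        config.setMaximumPoolSize(10);
        config.setMinimumIdle(10);
        return new HikariDataSource(config);
    }
}
```

The resulting `HikariDataSource` can then be handed to jOOQ's `DSLContext` 
like any other `javax.sql.DataSource`.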

Are you able to post the segment of Java that you use to create, populate 
and execute the statement shown in your log example? jOOQ has a range of 
ways to do things, which Lukas can comment on with ease. As with any 
library, there can also be a need to 'pin-hole' optimise critical code to 
get the best performance. Any query against an SQL database that is 
described as running at "300 concurrent" is a good target for optimisation, 
or at least for a strong understanding of how it operates. I say this with 
my DBA hat on: I have often seen the DB server handle such requests with 
ease, but only by committing all its resources to do so.
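For reference, here is one plausible way the logged query might look in 
jOOQ. This is an assumption on my part: I do not know whether you use 
generated classes, so this sketch uses plain table/field references, and 
the method and parameter names are invented for illustration.

```java
import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

import java.sql.Timestamp;
import org.jooq.DSLContext;
import org.jooq.Record2;

public class SessionLookup {
    // "ctx" is assumed to be configured elsewhere with the pooled DataSource.
    static Record2<String, Timestamp> find(DSLContext ctx, String id, Timestamp created) {
        // Mirrors the SQL from the log:
        // SELECT session.id, session.created_date FROM session
        // WHERE session.id = ? AND session.created_date = ?
        return ctx.select(field("session.id", String.class),
                          field("session.created_date", Timestamp.class))
                  .from(table("session"))
                  .where(field("session.id", String.class).eq(id)
                     .and(field("session.created_date", Timestamp.class).eq(created)))
                  .fetchOne();
    }
}
```

Seeing your actual version of this code would make it much easier to 
comment on where the time is going.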

As Lukas has already noted, enabling logging brings its own overhead, 
greatly impacting performance; in this case it may also be causing 
connections to be held open far longer than non-logged code would, so 
requiring the pool size to be grown. The log itself shows the problem. It 
records the single request, with a single returned record, as taking 
6.523ms, but the step from "Fetched result" to "Result fetched" took 
1.349ms, and most of that is likely to have been the output of the returned 
record to the log. This also raises the question of which logging sub-system 
you have deployed, as some are faster than others. This issue of how to get 
log visibility into high-performance areas is the reason I went off and 
learned about the metrics library.
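To give a flavour of what I mean: with Dropwizard Metrics you can wrap the 
query in a `Timer` and get histograms of its latency instead of per-call 
log lines. The class and metric names here are made up for illustration.

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import java.util.concurrent.Callable;

public class QueryTimings {
    private static final MetricRegistry registry = new MetricRegistry();
    // One timer per interesting phase; percentile histograms come for free.
    private static final Timer sessionQuery =
            registry.timer(MetricRegistry.name("db", "session", "lookup"));

    static <T> T timed(Callable<T> query) throws Exception {
        // Timer.Context records the elapsed time when closed.
        try (Timer.Context ignored = sessionQuery.time()) {
            return query.call();
        }
    }
}
```

A `Reporter` (console, JMX, Graphite, etc.) then publishes the stats 
periodically, so the hot path only pays for a clock read rather than for 
string formatting and log I/O.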

Roger


On Friday, April 7, 2017 at 6:53:31 PM UTC+1, [email protected] wrote:
>
> Roger,
>
> Thanks for taking the time to respond. Pool sizing was done based on this 
> article: 
> https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing. We 
> may be off base on that decision. We're using the standard database 
> connection pool from tomcat that ships with dropwizard. I've increased the 
> db connection pool size and run the same tests and see less db pool 
> exhausted errors. 
>
> However, my concerns remain about how the performance degrades. Given that 
> we're running against a db that's running locally (dev setup), I would not 
> expect the database connections to be getting held up for milliseconds. 
> Even then, at the rate we're sending requests, it seems like it should be 
> able to handle the queue of requests that are trying to pull a connection 
> from the pool, given they're allowed to wait for 1000ms. That's why I 
> turned on the JOOQ execute logging to see how long the queries were taking. 
> I was surprised to see the numbers I was seeing (going against the local 
> db). So wanted to get an idea from others about what sort of perf impact we 
> can expect from using JOOQ and how it would degrade under load.
>
> There may be other things at play here. It's possible it's an issue with 
> dropwizard or my setup. I'll keep digging on this and try to isolate jOOQ 
> more.
>
> Will give the metrics API a go as well. Did not know about profilesql; 
> that's an interesting tool that I'll use in the future as well.
>
> Udam
>
> On Friday, April 7, 2017 at 4:20:41 AM UTC-7, Roger Thomas wrote:
>>
>> Hi,
>>
>> Which db connection pool manager are you using, and why have you set the 
>> number of connections to such a low number? If you are expecting/testing 
>> hundreds of concurrent requests per second, such a small pool could become 
>> exhausted unless each request can be handled in a millisecond or two. Even 
>> if not exhausted, you are likely to see a high level of contention, and so 
>> delays, at the pool level. You seem to indicate contention and then 
>> exhaustion, as you say that it starts off quick, then slows down 
>> (contention) and then times out (exhaustion/timeout). Unless you have any 
>> other restrictions, do a test run with the pool set to 50 and see what 
>> happens.
>>
>> If you have the time to read up on yet another Java library you may find 
>> java metrics useful 
>>
>>         http://metrics.dropwizard.io/3.2.2/
>>
>> This allows you to reduce the logging level and instead insert code that 
>> records stats, with detail reported via histograms. It can make things 
>> like DB connections and query processing over time far easier to 
>> understand.
>>
>> Another nice tool you may want to look at is http://www.profilesql.com/ 
>> as this provides a free in-line profiler for MySQL. By deploying this you 
>> will be able to confirm how connections are being made/dropped and the time 
>> it takes the database to process the request passed to it - the values you 
>> see here should be far nearer to the values you see within mysql workbench, 
>> but it's worth checking. In past projects I have found this very helpful as 
>> it has allowed me to discover configuration issues with the db connection 
>> pool, such as lack of reuse (lots of opens/closes) or lots of clean up 
>> between reuse (re-issuing of default information).
>>
>> I hope my comments are helpful. 
>>
>> Roger
>>
>> On Thursday, April 6, 2017 at 7:20:53 PM UTC+1, [email protected] 
>> wrote:
>>>
>>> Wanted to get an idea of what the performance impact is of using JOOQ 
>>> and if there were ways to improve it.
>>>
>>> Currently we're seeing pretty slow performance when using JOOQ for doing 
>>> even very simple tasks. See output below from the StopWatch:
>>>
>>> 2017-04-06 10:45:25:428 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Initialising             : Total: 0.003ms
>>> 2017-04-06 10:45:25:428 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Rendering query          : Total: 0.231ms, +0.228ms
>>> 2017-04-06 10:45:25:428 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Executing query          : SELECT `session`.`id`, 
>>> `session`.`created_date` FROM `session` WHERE (`session`.`id` = ? AND 
>>> `session`.`created_date` = ?)
>>> 2017-04-06 10:45:25:429 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] -> with bind values      : SELECT `session`.`id`, 
>>> `session`.`created_date` FROM `session` WHERE (`session`.`id` = 'abcd' AND 
>>> `session`.`created_date` = {TS '2017-04-06 10:45:25.417'})
>>> 2017-04-06 10:45:25:429 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Query rendered           : Total: 1.166ms, +0.935ms
>>> 2017-04-06 10:45:25:429 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Preparing statement      : Total: 1.391ms, +0.225ms
>>> 2017-04-06 10:45:25:429 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Statement prepared       : Total: 1.645ms, +0.253ms
>>> 2017-04-06 10:45:25:430 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Binding variables        : Total: 1.814ms, +0.168ms
>>> 2017-04-06 10:45:25:430 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Variables bound          : Total: 2.113ms, +0.299ms
>>> 2017-04-06 10:45:25:430 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Executing query          : Total: 2.331ms, +0.217ms
>>> 2017-04-06 10:45:25:431 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Query executed           : Total: 3.486ms, +1.154ms
>>> 2017-04-06 10:45:25:432 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Fetching results         : Total: 4.052ms, +0.566ms
>>> 2017-04-06 10:45:25:432 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Fetching result          : Total: 4.304ms, +0.252ms
>>> 2017-04-06 10:45:25:432 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Fetching record          : Total: 4.576ms, +0.271ms
>>> 2017-04-06 10:45:25:432 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Record fetched           : Total: 4.796ms, +0.22ms
>>> 2017-04-06 10:45:25:433 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Fetched result           : 
>>> +------------------------------------+-----------------------+
>>> 2017-04-06 10:45:25:433 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ]                          : |id                     
>>>              |created_date           |
>>> 2017-04-06 10:45:25:433 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ]                          : 
>>> +------------------------------------+-----------------------+
>>> 2017-04-06 10:45:25:434 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ]                          : | abcd |2017-04-06 
>>> 10:45:25.417|
>>> 2017-04-06 10:45:25:434 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ]                          : 
>>> +------------------------------------+-----------------------+
>>> 2017-04-06 10:45:25:434 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Result fetched           : Total: 6.146ms, +1.349ms
>>> 2017-04-06 10:45:25:434 TRACE [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Results fetched          : Total: 6.374ms, +0.228ms
>>> 2017-04-06 10:45:25:434 DEBUG [dw-28 - POST /service/nonuilogin] 
>>> [JooqLogger] [SdDYuJEQ] Finishing                : Total: 6.523ms, +0.148ms
>>>
>>> This is run on a quad-core MacBook Pro with 16GB against a MySQL 
>>> database running on the same machine. The same query run using mysql 
>>> workbench would take roughly (duration/fetch time): 0.00074 sec / 0.000036 
>>> sec.
>>>
>>> It seems like executing the query via jOOQ takes several orders of 
>>> magnitude more time. This causes db connection pool starvation (timeouts 
>>> waiting for an open connection) and terrible latencies when running with 
>>> any concurrent traffic (even as low as 20 concurrent requests). 
>>>
>>> I'd like to think that we're doing something wrong here with the way 
>>> we're using jOOQ (we use dropwizard/jooq for our service). All these 
>>> steps that are taking so long seem CPU-bound, and I would have expected 
>>> them to run many orders of magnitude faster. If anyone can provide any 
>>> hints about how to get these latencies back down to decent levels, I 
>>> would very much appreciate it.
>>>
>>> Thank you!
>>> Udam
>>>
>>
