There are a lot of things that might hamper concurrency in this scenario, 
but it's unlikely they have anything specific to do with Dropwizard (or 
Jetty).

Some things to consider:
- what data source / connection provider / connection pool manager are you 
using?  is it tuned to accommodate very long connection checkouts?
- what is the transaction isolation level on the queries / contention for 
database objects?
- what is the contention for limited database server resources (i/o, cpu)?
- have you looked into strategies to reduce or split the workload, anything 
from table partitions to materialized views, a columnar store, a caching 
layer, etc.?  have you checked database statistics for missing (or unused) 
indexes, page fragmentation, etc.?
- any explicit synchronization in application code (synchronized blocks, 
mutexes, latches, semaphores, barriers, etc.)?
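
On the first bullet: if you're using Dropwizard's bundled connection pool 
(dropwizard-db's DataSourceFactory), pool sizing and checkout wait live in 
the service YAML.  A sketch, with illustrative values only -- the right 
numbers depend entirely on your workload:

```yaml
# Illustrative DataSourceFactory settings -- values are examples, not
# recommendations.
database:
  driverClass: org.postgresql.Driver   # example driver; substitute your own
  url: jdbc:postgresql://localhost/mydb
  user: app
  maxSize: 32                 # upper bound on pooled connections
  minSize: 8                  # connections kept warm
  maxWaitForConnection: 60s   # how long a request blocks waiting for a free
                              # connection; with 45-second checkouts, a small
                              # pool plus a short wait here looks exactly like
                              # "new requests don't go in at all"
```

If maxSize is small relative to the number of concurrent long-running 
requests, later callers queue on the pool, not on Jetty.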

How scaled-out is the backend?  If 20 clients all show up at once, does it 
take 45 seconds to service all their requests, or more like 15 minutes?  If 
the latter, it won't be long before you bump into timeouts of all different 
flavors.

45 seconds is a long time, and if other approaches prove fruitless, one 
might want to change the application design so that the client submits its 
query, the server puts that query in a queue and gives the client a GUID, 
and the client can come back and ask if its job is done.  The server could 
even give the client an estimated wait time depending on queue depth, 
server load, etc.
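
That submit-then-poll design can be sketched with plain JDK classes.  The 
names here (JobTracker, etc.) are mine, not a Dropwizard API; in a real 
service, submit() and poll() would back two resource endpoints, and you'd 
want eviction for abandoned tickets:

```java
import java.util.UUID;
import java.util.concurrent.*;

// Hypothetical sketch: client submits work, immediately gets a ticket
// (UUID), and polls later for the result.  The request thread is never
// tied up for the duration of the query.
public class JobTracker {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final ConcurrentMap<UUID, Future<String>> jobs =
            new ConcurrentHashMap<>();

    // Server enqueues the work and hands back a ticket right away.
    public UUID submit(Callable<String> work) {
        UUID ticket = UUID.randomUUID();
        jobs.put(ticket, pool.submit(work));
        return ticket;
    }

    // Client comes back and asks if its job is done; null means "not yet".
    public String poll(UUID ticket) throws Exception {
        Future<String> f = jobs.get(ticket);
        if (f == null || !f.isDone()) return null;
        jobs.remove(ticket);           // one-shot: result is handed out once
        return f.get();
    }

    public void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        JobTracker tracker = new JobTracker();
        UUID ticket = tracker.submit(() -> {
            Thread.sleep(200);         // stand-in for the 45-second query
            return "result";
        });
        String r;
        while ((r = tracker.poll(ticket)) == null) Thread.sleep(50);
        System.out.println(r);         // prints "result"
        tracker.shutdown();
    }
}
```

The queue-depth estimate mentioned above falls out naturally: the server 
knows how many jobs are pending ahead of a given ticket.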

Things like @Priority and registration order can affect filter behavior, so 
it's hard to say whether the filter is giving an accurate picture.

--Steve

On Thursday, July 20, 2017 at 10:30:15 AM UTC-4, Kristian Rink wrote:
>
> Folks;
>
> are there any known pitfalls, helps, howtos, ... on how to get parallel 
> long-running requests to work well? In one of our services we do have 
> resources that include SQL database backend calls and some of these 
> requests take up to 45 seconds to finish. We also do have a servlet filter 
> registered to see requests arriving at the server and responses being sent 
> out. In these situations, it *seems* that whenever such a long-running 
> request is working, new incoming requests to the same resource are 
> extremely delayed or don't go in at all until the first "running" request 
> has finished. Arguably DB requests taking that long to respond aren't a 
> good thing but shouldn't those be processed in parallel after all? Any common 
> starting points where to look here?
> TIA and all the best,
> Kristian
>

-- 
You received this message because you are subscribed to the Google Groups 
"dropwizard-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.