Folks;

are there any known pitfalls, tips, howtos, ... on how to get parallel 
long-running requests to work well? In one of our services we have 
resources that involve SQL database backend calls, and some of these 
requests take up to 45 seconds to finish. We also have a servlet filter 
registered to see requests arriving at the server and responses being sent 
out. In these situations, it *seems* that whenever such a long-running 
request is being processed, new incoming requests to the same resource are 
extremely delayed or don't get in at all until the first "running" request 
has finished. Arguably, DB requests taking that long to respond aren't a 
good thing, but shouldn't they be processed in parallel anyway? Any common 
starting points on where to look here?
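
For context, here's roughly the shape of such a resource; the class and DAO 
names are made up for illustration, not our actual code. Each call blocks a 
Jetty worker thread for the duration of the SQL query, so I'd expect 
parallelism to be limited only by the server's worker thread pool and the DB 
connection pool, not by the resource itself:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical sketch: a resource whose GET handler blocks on a
    // long-running SQL query (up to ~45 seconds in our case).
    @Path("/report")
    @Produces(MediaType.APPLICATION_JSON)
    public class ReportResource {

        private final ReportDao dao; // hypothetical DAO wrapping the SQL call

        public ReportResource(ReportDao dao) {
            this.dao = dao;
        }

        @GET
        public Report fetch() {
            // Blocking call; ties up the handling thread until the DB responds.
            return dao.runLongQuery();
        }
    }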
TIA and all the best,
Kristian
