clintropolis opened a new pull request, #14184:
URL: https://github.com/apache/druid/pull/14184

   ### Description
   This PR fixes an issue that could occur when `druid.query.scheduler.numThreads` is configured and an exception is thrown after `QueryScheduler.run` has been called to create a `Sequence`. In that case the total and/or lane-specific locks were acquired, but because the sequence was never actually evaluated, the "baggage" that normally releases these locks never ran. One way this can happen is if a group-by 'having' filter, which wraps and transforms this sequence, happens to explode while wrapping it. The end result is that the locks are acquired but never released, eventually halting the ability to execute any queries.
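
   To make the failure mode concrete, here is a minimal, self-contained sketch. It is not Druid's actual API: `Semaphore` stands in for the scheduler's lane lock, `IntSupplier` stands in for a lazy `Sequence`, and the names `demo`/`sequence` are illustrative. It shows that when the release logic lives in "baggage" that only runs on evaluation, an exception thrown before evaluation leaks the permit:

   ```java
   import java.util.concurrent.Semaphore;
   import java.util.function.IntSupplier;

   // Minimal sketch (not Druid's actual API) of the leak: the release lives
   // in "baggage" that only runs when the sequence is evaluated, but the
   // lock is taken eagerly at run() time.
   public class LeakSketch {
       static int demo() {
           Semaphore laneLock = new Semaphore(1);

           laneLock.acquireUninterruptibly();   // acquired when run() is called
           IntSupplier sequence = () -> {
               try {
                   return 42;                   // actually evaluate the results
               } finally {
                   laneLock.release();          // "baggage": runs only on evaluation
               }
           };

           try {
               // a downstream wrapper (e.g. a group-by having-filter transform)
               // throws before anyone gets a chance to evaluate `sequence`
               throw new RuntimeException("boom while wrapping the sequence");
           } catch (RuntimeException ignored) {
           }

           // `sequence` was never evaluated, so the permit is still held
           return laneLock.availablePermits();  // 0: the lock has leaked
       }

       public static void main(String[] args) {
           System.out.println("permits available after failure: " + demo());
       }
   }
   ```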
   
   I've modified `QueryScheduler.run` to use `Sequence.wrap` with a full `SequenceWrapper` implementation that acquires the locks in `before` and releases them in `after`, to better ensure that we always release what we take, no matter what. I'm not really sure why I wasn't doing it like this before...
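
   The fixed pattern can be sketched like so. Again this is a simplification, not Druid's real `Sequence`/`SequenceWrapper` interfaces: `evaluate` mirrors the wrap semantics, and `Semaphore` stands in for the lane lock. The point is that `after` sits in a `finally`, so the permit is returned even when evaluation throws:

   ```java
   import java.util.concurrent.Semaphore;
   import java.util.function.IntSupplier;

   // Minimal sketch (not Druid's actual Sequence/SequenceWrapper API) of the
   // fixed pattern: acquire in `before`, release in `after` via try/finally,
   // so the lock is returned even when evaluation throws.
   public class WrapperSketch {
       interface SequenceWrapper {
           void before();          // runs just before the sequence is evaluated
           void after(boolean ok); // always runs once, after evaluation ends
       }

       // Evaluate `body` under the wrapper, mirroring the wrap semantics.
       static int evaluate(SequenceWrapper wrapper, IntSupplier body) {
           wrapper.before();
           boolean ok = false;
           try {
               int result = body.getAsInt();
               ok = true;
               return result;
           } finally {
               wrapper.after(ok);  // released even if body throws
           }
       }

       static int demo() {
           Semaphore laneLock = new Semaphore(1);
           SequenceWrapper lockWrapper = new SequenceWrapper() {
               @Override public void before() { laneLock.acquireUninterruptibly(); }
               @Override public void after(boolean ok) { laneLock.release(); }
           };
           try {
               evaluate(lockWrapper, () -> { throw new RuntimeException("boom"); });
           } catch (RuntimeException ignored) {
               // evaluation failed, but `after` still released the permit
           }
           return laneLock.availablePermits();  // 1: nothing leaked
       }

       public static void main(String[] args) {
           System.out.println("permits available: " + demo());
       }
   }
   ```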
   
   This PR has:
   
   - [x] been self-reviewed.
      - [x] using the [concurrency checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
   - [x] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [x] been tested in a test Druid cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

