franz1981 commented on a change in pull request #44:
URL: https://github.com/apache/qpid-jms/pull/44#discussion_r745530285
##########
File path:
qpid-jms-client/src/main/java/org/apache/qpid/jms/JmsConnectionFactory.java
##########
@@ -368,6 +377,19 @@ protected static URI createURI(String name) {
return null;
}
+    protected Supplier<Holder<ExecutorService>> getCompletionExecutorServiceFactory() {
+        if (this.completionThreads == 0) {
+            return null;
+        }
+        synchronized (this) {
+            if (completionExecutorServiceFactory == null) {
+                QpidJMSForkJoinWorkerThreadFactory fjThreadFactory = new QpidJMSForkJoinWorkerThreadFactory("completion thread pool", true);
+                completionExecutorServiceFactory = sharedRefCnt(() -> new ForkJoinPool(completionThreads, fjThreadFactory, null, false), ThreadPoolUtils::shutdown);
Review comment:
> Given that, the mechanism still all seems rather overcomplicated. This feels like a relatively simple case, an 'if there is an existing pool, then use that, otherwise create one' check coupled with the opposing cleanup. One that should be relatively infrequently used. It seems like even a simple synchronized block with a count inside could do?
Not sure; there's still a problem related to disposing it:
1. should the shared/common pool be allocated once and live forever?
2. if the answer to 1 is no, how/what is going to trigger disposing it?
The mechanism I've implemented handles this use-case with reference counting, but in order to do that it requires someone to be the first owner, while ensuring correct/deterministic release of resources that could otherwise cause the whole application/class-loader to leak.
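To make the trade-off concrete, here is a minimal sketch of the reference-counting mechanism described above: the first `acquire` creates the shared pool, and the last matching `release` shuts it down deterministically. The class name, method names, and use of a plain synchronized counter are assumptions for illustration, not the actual `sharedRefCnt`/`Holder` implementation in this PR.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical reference-counted holder for a shared resource such as an
// ExecutorService. Not the qpid-jms implementation; a sketch of the idea.
final class RefCountedShared<T> {
    private final Supplier<T> factory;   // creates the resource on first acquire
    private final Consumer<T> disposer;  // disposes it on the last release
    private T instance;
    private int refCount;

    RefCountedShared(Supplier<T> factory, Consumer<T> disposer) {
        this.factory = factory;
        this.disposer = disposer;
    }

    // The first caller becomes the owner that triggers creation.
    synchronized T acquire() {
        if (refCount == 0) {
            instance = factory.get();
        }
        refCount++;
        return instance;
    }

    // Deterministic cleanup: the last release disposes the resource,
    // so the pool cannot outlive its users and pin the class loader.
    synchronized void release() {
        if (refCount == 0) {
            throw new IllegalStateException("release without matching acquire");
        }
        refCount--;
        if (refCount == 0) {
            disposer.accept(instance);
            instance = null;
        }
    }
}
```

With this shape, "allocated once and live forever" simply means no caller ever issues the final `release`; deterministic disposal means every owner pairs `acquire` with `release`, e.g. a connection releasing the pool on close.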
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]