On Aug 21, 2012, at 11:27 AM, David Blevins wrote:

> 
> On Aug 21, 2012, at 2:18 AM, Bjorn Danielsson wrote:
> 
>> In my opinion this is a bit overkill, at least the options for
>> using DelayQueue and PriorityBlockingQueue. They require the
>> Runnable queue elements to implement the Delayed and Comparable
>> interfaces, respectively. I don't see how to make use of that in
>> an EJB method call.
> 
> Agreed.  Though now I'm wondering whether we should tag the runnables with 
> the number of retries and have them support higher-priority retries.  That's 
> not applicable to the @Asynchronous support, but it is a feature of the 
> @Timeout/EJB Timer stuff, which uses an identical executor.  (I'm reworking 
> that code too to get everything as consistent as possible.)
> 
> We might even be able to support a @Priority(0.5) annotation or something for 
> @Asynchronous methods.  Could be interesting.
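
Purely to sketch that retry idea (nothing like this exists in the code yet; the class and field names are made up), a retry-tagged task that a PriorityBlockingQueue could order might look like:

    // Hypothetical wrapper: tags a task with its retry count so that a
    // PriorityBlockingQueue<RetryTask> hands out much-retried work first.
    public class RetryTask implements Runnable, Comparable<RetryTask> {

        private final Runnable delegate;
        private final int retries;

        public RetryTask(final Runnable delegate, final int retries) {
            this.delegate = delegate;
            this.retries = retries;
        }

        public void run() {
            delegate.run();
        }

        public int compareTo(final RetryTask other) {
            // more retries sorts first, i.e. closer to the head of the queue
            return Integer.compare(other.retries, this.retries);
        }
    }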
> 
>> I looked at the Sun ThreadPoolExecutor docs and have now played
>> a little with it in a standalone program. I honestly think the
>> design is a bit bizarre. I would expect the thread pool to expand
>> to max capacity before tasks are put in the wait queue. For the
>> purpose of optimizing CPU core utilization, the number of threads
>> in the operating system run-queue is the only important number,
>> not the number of provisioned threads in a JVM thread pool.
>> But this is slightly off-topic here.
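
That queue-before-threads behavior is easy to reproduce in a standalone test (illustrative only, not code from this thread):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // With an unbounded queue the pool never grows past corePoolSize:
    // extra threads are only created when the queue rejects the offered task.
    public class QueueBeforeThreads {
        public static void main(String[] args) {
            final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 10, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

            for (int i = 0; i < 100; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }

            // Prints 2, not 10: the queued tasks never trigger additional threads
            System.out.println("pool size = " + pool.getPoolSize());
            pool.shutdown();
        }
    }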
>> 
>> I believe it's enough to have just one more configuration
>> property for TomEE: AsynchronousPool.QueueSize. If this is 0,
>> let the container use a SynchronousQueue. Otherwise use a
>> LinkedBlockingQueue with the specified capacity. That way it's
>> possible to have any of the three queueing strategies mentioned
>> in the ThreadPoolExecutor javadocs. I can't imagine a situation
>> where using an ArrayBlockingQueue instead of a bounded
>> LinkedBlockingQueue in TomEE really makes a big difference,
>> but I could be wrong about that.
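
Just to make that single-option suggestion concrete, a sketch (the helper name is hypothetical):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;

    // Hypothetical helper: QueueSize == 0 means direct hand-off,
    // anything else means a bounded queue of that capacity.
    public class QueueFactory {
        public static BlockingQueue<Runnable> create(final int queueSize) {
            return queueSize == 0
                    ? new SynchronousQueue<Runnable>()
                    : new LinkedBlockingQueue<Runnable>(queueSize);
        }
    }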
> 
> Hacking again, will repost and see what you think.  As mentioned above, also 
> cleaning up the EJB Timer @Timeout execution queue and any others I can find.

Work in progress, but here's an idea:

 https://gist.github.com/3418341

Can still yank ARRAY and PRIORITY, but the idea is to make the defaults as 
smart as possible based on what you have explicitly configured.

Some of the more interesting aspects:

AllowCoreThreadTimeOut=true essentially makes CorePoolSize and MaximumPoolSize 
the same thing, in the sense that either way you end up with a thread pool size 
that can shrink.  Core and Maximum become largely interchangeable at that point; 
probably you only need Core.
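
For illustration only (plain ThreadPoolExecutor, not the code in the gist), that combination amounts to roughly this:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Core == Maximum and core threads may time out, so "20" is simply the
    // ceiling: the pool grows to 20 under load and shrinks back when idle.
    public class CoreOnlyPool {
        public static void main(String[] args) {
            final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    20, 20, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
            pool.allowCoreThreadTimeOut(true);

            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("ran on " + Thread.currentThread().getName());
                }
            });
            pool.shutdown();
        }
    }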

The idea behind the OfferRejectedExecutionHandler is basically to throttle the 
input so it matches the output, i.e. slow down the people producing work and 
give the side doing the work a chance to catch up.  Theoretically this is a much 
smarter queue.  Whether you're 100th in line to add to a SynchronousQueue or 
100th in line in a bounded queue, you're still 99 jobs away from getting your 
work done.  If you're actually waiting on a Future.get() then it should 
theoretically come out the same.
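
The gist has the actual class; as a rough sketch of the throttling idea, the handler can block on queue.offer() with a timeout so that a producer waits for space instead of failing fast (the timeout parameter here is just for the sketch):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.RejectedExecutionHandler;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch only -- see the gist for the real version.  When the pool and
    // queue are both full, the submitting thread blocks on offer() for a
    // bounded time instead of getting an immediate RejectedExecutionException.
    public class OfferRejectedExecutionHandler implements RejectedExecutionHandler {

        private final long timeout;
        private final TimeUnit unit;

        public OfferRejectedExecutionHandler(final long timeout, final TimeUnit unit) {
            this.timeout = timeout;
            this.unit = unit;
        }

        public void rejectedExecution(final Runnable r, final ThreadPoolExecutor pool) {
            if (pool.isShutdown()) {
                throw new RejectedExecutionException("Pool is shut down");
            }
            try {
                final BlockingQueue<Runnable> queue = pool.getQueue();
                if (!queue.offer(r, timeout, unit)) {
                    throw new RejectedExecutionException("Timed out waiting to enqueue task");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while waiting to enqueue task", e);
            }
        }
    }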

The default queue size is really the one I am least certain about.

It would seem that with AllowCoreThreadTimeOut=true, CorePoolSize=20 and 
MaximumPoolSize=CorePoolSize, the goal of QueueSize would be to provide the 
tiniest buffer that keeps utilization as close to 100% as possible without 
letting the "work input" side get too far ahead of the "work output" side.


-David

>> David Blevins <david.blev...@gmail.com> wrote:
>>> On Aug 20, 2012, at 10:55 AM, Romain Manni-Bucau wrote:
>>> 
>>>> that's because we use a linked blocking queue
>>>> 
>>>> maybe we should make it configurable, not sure...
>>> 
>>> 
>>> Made it configurable.  Code is basically:
>>> 
>>>   public static AsynchronousPool create(AppContext appContext) {
>>>       final Options options = appContext.getOptions();
>>> 
>>>       final String id = appContext.getId();
>>>       final int corePoolSize = options.get("AsynchronousPool.CorePoolSize", 10);
>>>       final int maximumPoolSize = Math.max(options.get("AsynchronousPool.MaximumPoolSize", 20), corePoolSize);
>>>       final Duration keepAliveTime = options.get("AsynchronousPool.KeepAliveTime", new Duration(60, TimeUnit.SECONDS));
>>>       final BlockingQueue queue = options.get("AsynchronousPool.QueueType", QueueType.LINKED).create(options);
>>> 
>>>       return new AsynchronousPool(id, corePoolSize, maximumPoolSize, keepAliveTime, queue);
>>>   }
>>> 
>>>   private static enum QueueType {
>>>       ARRAY,
>>>       DELAY,
>>>       LINKED,
>>>       PRIORITY,
>>>       SYNCHRONOUS;
>>> 
>>>       public BlockingQueue create(Options options) {
>>>           switch (this) {
>>>               case ARRAY: {
>>>                   return new ArrayBlockingQueue(options.get("AsynchronousPool.QueueSize", 100));
>>>               }
>>>               case DELAY: {
>>>                   return new DelayQueue();
>>>               }
>>>               case LINKED: {
>>>                   return new LinkedBlockingQueue(options.get("AsynchronousPool.QueueSize", Integer.MAX_VALUE));
>>>               }
>>>               case PRIORITY: {
>>>                   return new PriorityBlockingQueue();
>>>               }
>>>               case SYNCHRONOUS: {
>>>                   return new SynchronousQueue(options.get("AsynchronousPool.QueueFair", false));
>>>               }
>>>               default: {
>>>                   // The Options class will throw an error if the user supplies an unknown enum string.
>>>                   // The only way we can reach this is if we add a new QueueType element and forget
>>>                   // to implement it in the above switch statement.
>>>                   throw new IllegalArgumentException("Unknown QueueType type: " + this);
>>>               }
>>>           }
>>>       }
>>>   }
> 
