On Sep 2, 2007, at 11:23 AM, Manu George wrote:
Hi David,
Thanks for the explanation.
In the case of waitWhenBlocked=true, what is the expected behaviour if
I set the pool size to 1? Currently, while debugging, I see that the calling
thread gets parked and never gets resumed if one thread is already
executing. Any idea why this would be happening?
I suspect that the first thread is submitting work which needs a
second thread, or for the first one to complete. In other words it
won't work with waitWhenBlocked = true.
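For illustration, a minimal, self-contained sketch of that deadlock (this is not the Geronimo code itself; the handler below just mimics a wait-when-blocked policy by calling put() on the executor's queue):

    import java.util.concurrent.*;

    public class WaitWhenBlockedDeadlockDemo {
        public static void main(String[] args) {
            // Pool of exactly one thread, SynchronousQueue hand-off, and a
            // handler that blocks the caller until a worker accepts the task.
            final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 1, 60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>(),
                    new RejectedExecutionHandler() {
                        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
                            try {
                                e.getQueue().put(r);   // park the calling thread
                            } catch (InterruptedException ie) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });

            pool.execute(new Runnable() {
                public void run() {
                    // The only worker is busy running this task, so the nested
                    // execute() is rejected and the handler blocks in put()
                    // forever: the consumer it waits for is this same thread.
                    pool.execute(new Runnable() {
                        public void run() { System.out.println("never runs"); }
                    });
                }
            });
        }
    }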
Secondly, the API docs say that we shouldn't do
executor.getQueue().put(r);
Will this not create problems with unexpected behaviour?
I didn't see this documentation, can you point me to it? I don't see
how this could create unexpected problems, but I might have missed
something.
thanks
david jencks
Thanks
Manu
On 9/2/07, David Jencks <[EMAIL PROTECTED]> wrote:
I don't think the current implementation is actually wrong under
normal use (where you just configure the gbean in xml and don't
change its settings at runtime).
I think it would be better to set up the executor in its constructor
(keeping the waitWhenBlocked as a constructor parameter). There's a
constructor for ThreadPoolExecutor we can use that lets us set
everything at once.
I think the wait when blocked configuration is correct as it stands.
I suggest that using the LinkedBlockingQueue is appropriate when
waitWhenBlocked is false but not when it's true.
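One way to read that suggestion in code (a rough sketch only; the parameter names come from this thread, not from the actual gbean attributes, and the blocking handler is assumed to behave like the existing wait-when-blocked policy):

    import java.util.concurrent.*;

    public class ThreadPoolSetupSketch {
        public static ThreadPoolExecutor create(int poolSize, int queueSize,
                                                long keepAliveTime,
                                                boolean waitWhenBlocked) {
            if (waitWhenBlocked) {
                // Direct hand-off plus a handler that parks the caller until
                // a worker becomes free.
                return new ThreadPoolExecutor(
                        poolSize, poolSize,
                        keepAliveTime, TimeUnit.MILLISECONDS,
                        new SynchronousQueue<Runnable>(),
                        Executors.defaultThreadFactory(),
                        new RejectedExecutionHandler() {
                            public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
                                try {
                                    e.getQueue().put(r);
                                } catch (InterruptedException ie) {
                                    Thread.currentThread().interrupt();
                                    throw new RejectedExecutionException(ie);
                                }
                            }
                        });
            }
            // Bounded queue plus the default AbortPolicy, which rejects work
            // once both the queue and the pool are full.
            return new ThreadPoolExecutor(
                    poolSize, poolSize,
                    keepAliveTime, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<Runnable>(queueSize));
        }
    }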
IMO the caller-runs policy is not appropriate for use in J2CA, since
the work manager can notify you if the work is rejected. Tying up
your own thread eliminates the possibility of the caller taking
corrective action.
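For context, the JCA work-management contract already lets the submitter hear about rejection through a WorkListener, so it can back off or retry instead of silently running the work on its own thread. A minimal sketch (not the Geronimo work manager code, just the standard listener interface):

    import javax.resource.spi.work.WorkEvent;
    import javax.resource.spi.work.WorkListener;

    // Reacts to rejected work; what "corrective action" means is up to the
    // caller (retry, back off, log, ...).
    public class LoggingWorkListener implements WorkListener {
        public void workAccepted(WorkEvent e)  { }
        public void workStarted(WorkEvent e)   { }
        public void workCompleted(WorkEvent e) { }
        public void workRejected(WorkEvent e)  {
            System.err.println("Work rejected: " + e.getException());
        }
    }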
thanks
david jencks
On Sep 2, 2007, at 8:46 AM, Manu George wrote:
Hi,
I was investigating why setting the resourceAdapter poolsize
to 1 and using it in an MDB for sequential message processing was
failing, and found that the org.apache.geronimo.pool.ThreadPool class
in Geronimo contains a ThreadPoolExecutor instance created with the
constructor
    new ThreadPoolExecutor(
            poolSize,                         // core size
            poolSize,                         // max size
            keepAliveTime, TimeUnit.MILLISECONDS,
            new SynchronousQueue());
The default behaviour is to reject the supplied Runnables when the
number of active threads equals the pool size.
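A standalone illustration of that default behaviour (not the Geronimo class itself):

    import java.util.concurrent.*;

    public class DefaultRejectionDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 1, 0L, TimeUnit.MILLISECONDS,
                    new SynchronousQueue<Runnable>());
            // Occupies the single worker thread.
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
                }
            });
            // With the worker busy and no queue capacity, the default
            // AbortPolicy throws RejectedExecutionException here.
            pool.execute(new Runnable() {
                public void run() { }
            });
        }
    }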
Now the ThreadPool class has a setWaitWhenBlocked method which, when
called, makes it wait. Setting this makes the pool enter a
deadlock. The reason for this is the WaitWhenBlockedPolicy used to
process rejection. In this class there is a call to
executor.getQueue().put(r);
Now the API docs mention that once you hand off the queue to a
ThreadPoolExecutor we should not directly modify the queue, as it may
result in unpredictable behaviour. So this is a bug.
As a solution for this what I did was
    ThreadPoolExecutor p = new ThreadPoolExecutor(
            poolSize,                         // core size
            poolSize,                         // max size
            keepAliveTime, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(queueSize));
Now this allows up to queueSize Runnables to be queued for
submission, of which poolSize will be the number of parallel
executions. Anything beyond the queue size will be rejected.
We can also set the rejection handler to the CallerRunsPolicy for
when the queue size is exceeded and the waitWhenBlocked flag is true.
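For example, the handler can be passed straight to the constructor (a sketch reusing the names from the fragment above):

    ThreadPoolExecutor p = new ThreadPoolExecutor(
            poolSize,                         // core size
            poolSize,                         // max size
            keepAliveTime, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(queueSize),
            new ThreadPoolExecutor.CallerRunsPolicy());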
This gets the scenario working for me. I am not yet sure why
the SynchronousQueue was used in this case, which is why I changed
the queuing strategy. So, is this an acceptable approach for fixing
this issue?
Secondly, do we want the behaviour of rejecting work items when the
queue is full? Setting the CallerRunsPolicy actually results in graceful
degradation when load exceeds capacity, by increasing the backlog at the
TCP/IP layer. I would be happy if some of the experts would comment on
this.
Thanks
Manu