Hi Pinaki

I wrapped the block in a try/finally and call threadPool.shutdown() in the finally block.

That seems to work.  100k transactions, no problem now.

You nailed it right on with the excessive pool creation in every flush().

I am not sure a CachedThreadPool would have solved the problem.  More
flush() calls keep coming along the way -- then we would need to define a
RejectedExecutionHandler, and we can neither abort nor discard.

Since the pool is gonna go out of scope by the time flush() is done, we
probably need to shut it down before it goes out of scope (so that expired
threads no longer hang around).
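
For anyone following along, here is a minimal sketch of that pattern -- the class and task names are hypothetical, not from the actual Slice code; the point is just the shutdown in finally:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FlushSketch {
    static AtomicInteger completed = new AtomicInteger();

    // Hypothetical flush(): still creates a pool per call (as in the
    // original code), but guarantees shutdown before the pool goes out of
    // scope, so worker threads do not linger and leak memory.
    static void flush(List<Runnable> tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            for (Runnable t : tasks) {
                pool.submit(t);
            }
        } finally {
            // shutdown() stops new submissions but lets already-submitted
            // tasks finish; awaitTermination() blocks until they do.
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        flush(List.of(
                completed::incrementAndGet,
                completed::incrementAndGet,
                completed::incrementAndGet));
        System.out.println(completed.get());
    }
}
```

Submitted tasks still complete after shutdown() is called, so nothing is lost -- which is why no RejectedExecutionHandler is needed on this path.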

I will keep stressing the stack and see if there are more problems.

Cheers,
Simon
-- 
View this message in context: 
http://openjpa.208410.n2.nabble.com/Spring-3-0-2-OpenJPA-2-0-Slice-OutOfMemoryError-shortly-after-pounding-1000-threads-to-the-system-tp5000822p5005891.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.
