Actually, I'm testing out another option: fixing up QueryImpl so that it
does not do aggressive locking (at least in the isUnique method). I
think that will at least remove the current blocking issue.
But if you get a chance to read the last email I sent, I think that if
you use ParallelExecutor, you HAVE to use Multithreaded=true. Behind the
scenes you are executing several different queries on multiple threads,
and they may be touching/using shared resources lower down. This is why
I think I'm seeing all of those corrupted queries, which went away once
I turned off ParallelExecutor.
So we are currently between a rock and a hard place: we have to turn
off ParallelExecutor to prevent corrupted queries, until we can fix
things up to run properly with Multithreaded=true.
Pinaki Poddar wrote:
The easy (and escapist) way out is to fall back on sequential query
execution when
openjpa.Multithreaded is set.
Shall we look at dismantling the ParallelExecutor from Slices for now
(and take the slower performance)?
1. Slower performance shows that parallel execution of queries across
slices is a good thing.
2. It also tells us that we should not dismantle parallel execution
unconditionally. What I proposed is
a) execute queries in parallel when openjpa.Multithreaded=false (which
is the default)
and b) execute queries sequentially when openjpa.Multithreaded=true.
...until we find a smarter/better way.
This can be done without perturbing things too much within
DistributedStoreQuery.
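The conditional fallback proposed above could be sketched roughly as
below. This is not OpenJPA code; the class and method names
(SliceQueryRunner, runAcrossSlices) are invented for illustration, with
each slice's query stood in by a Callable:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the proposal: fan out across slices in
// parallel only when the application is NOT declared multithreaded.
public class SliceQueryRunner {
    static List<String> runAcrossSlices(List<Callable<String>> sliceQueries,
                                        boolean multithreaded) throws Exception {
        List<String> results = new ArrayList<>();
        if (multithreaded) {
            // openjpa.Multithreaded=true: run each slice query
            // sequentially so concurrent execution cannot corrupt
            // shared state lower down.
            for (Callable<String> q : sliceQueries) {
                results.add(q.call());
            }
        } else {
            // Single-threaded application (the default): safe to
            // execute the per-slice queries in parallel.
            ExecutorService pool =
                Executors.newFixedThreadPool(sliceQueries.size());
            try {
                // invokeAll returns futures in submission order
                for (Future<String> f : pool.invokeAll(sliceQueries)) {
                    results.add(f.get());
                }
            } finally {
                pool.shutdown();
            }
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        List<Callable<String>> queries =
            List.of(() -> "slice1:ok", () -> "slice2:ok");
        // Both paths return results in slice order.
        System.out.println(runAcrossSlices(queries, true));   // [slice1:ok, slice2:ok]
        System.out.println(runAcrossSlices(queries, false));  // [slice1:ok, slice2:ok]
    }
}
```

Either way the caller sees the same ordered result list; only the
degree of concurrency changes, which is why the switch could live in
one place such as DistributedStoreQuery.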