I am actually considering a read-only case here. So no modifications.

If the objects need to be modified, they have to be transferred to a peer 
ObjectContext using 'localObject'. Which sorta makes sense even now: contexts 
with a local cache are often shared and hence de facto have to be read-only, 
while contexts that track modifications are user-, request-, or method-scoped.
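For illustration, here is a minimal sketch of that split in plain Java. This is a toy model, not the Cayenne API: Item, SharedCache, and WritableContext are made-up names, and only the 'localObject' idea mirrors Cayenne. The point is that edits happen on a local copy in the writable context, leaving the shared read-only cache untouched.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not Cayenne): a shared, effectively read-only context and a
// per-request writable context. An object must be localized into the
// writable context before modification, so the shared copy stays untouched.
class Item {
    final int id;
    String name;
    Item(int id, String name) { this.id = id; this.name = name; }
}

class SharedCache {
    private final Map<Integer, Item> objects = new HashMap<>();
    void register(Item item) { objects.put(item.id, item); }
    Item get(int id) { return objects.get(id); }
}

class WritableContext {
    private final Map<Integer, Item> local = new HashMap<>();

    // Mirrors the 'localObject' idea: copy a cached object into this
    // context so edits are tracked here, not in the shared cache.
    Item localObject(Item shared) {
        return local.computeIfAbsent(shared.id,
                id -> new Item(shared.id, shared.name));
    }
}
```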

A.

On Nov 4, 2013, at 10:42 AM, Aristedes Maniatis <a...@maniatis.org> wrote:

> On 26/10/2013 3:09am, Andrus Adamchik wrote:
> 
> 
>> 2. Queue-based approach… Place each query-result merge operation in an 
>> operation queue for a given DataContext. The polling end of the queue will 
>> categorize the operations by "affinity" and assign each op to a worker 
>> thread, selected from a thread pool based on that "affinity". Ops that 
>> may potentially update the same objects are assigned to the same worker and 
>> are processed serially. Ops that have no chance of conflicting with each 
>> other are assigned to separate workers and are processed in parallel. 
> 
> This queue needs to keep both SELECT and modify operations in some sort of 
> order? So let's imagine you get a queue like this:
> 
> 1. select table A
> 2. select table B
> 3. select table A
> 4. modify table B
> 5. select table B
> 6. select table A
> 
> Is the idea here that you would dispatch 1, 2, 3 and 6 to three worker 
> threads to be executed in parallel? Then 4 would be queued behind 2, and 5 
> would also wait until 4 was complete.
> 
> Is that the idea?
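The routing rule being asked about can be sketched as follows. This is a toy model, not Cayenne code: AffinityDispatcher and shutdownAndWait are made-up names, and "affinity" is assumed here to be the table an op touches. Ops with the same key share one single-threaded worker (serial, FIFO); ops with different keys may run in parallel on other workers.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch of affinity-based dispatch: hash the affinity key to pick a
// worker, so ops that might conflict always land on the same
// single-threaded executor and run serially in submission order.
public class AffinityDispatcher {
    private final ExecutorService[] workers;

    public AffinityDispatcher(int poolSize) {
        workers = new ExecutorService[poolSize];
        for (int i = 0; i < poolSize; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Same affinity -> same worker -> serial; different affinity may
    // land on a different worker and run in parallel.
    public Future<?> submit(String affinity, Runnable op) {
        int slot = Math.floorMod(affinity.hashCode(), workers.length);
        return workers[slot].submit(op);
    }

    public void shutdownAndWait() throws InterruptedException {
        for (ExecutorService w : workers) {
            w.shutdown();
            w.awaitTermination(5, TimeUnit.SECONDS);
        }
    }
}
```

In the six-op example above, ops 2, 4 and 5 (table B) would serialize behind one another on one worker, while ops 1, 3 and 6 (table A) queue on a different one — which also shows the cost Ari raises: identical selects on the same table serialize too.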
> 
> 
> I can see some situations where this would result in worse behaviour than we 
> have now. If operations 1 and 3 were the same query, then today we get to 
> take advantage of a query cache.
> 
> 
> Am I getting the general idea right?
> 
> 
> Ari
> 
> 
> 
> 
> -- 
> -------------------------->
> Aristedes Maniatis
> GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A
> 