On Mon, 2013-06-03 at 22:17 +0200, Lukas Zeller wrote:
> > How about yet another approach: the store is allowed to return a new
> > error code, LOCERR_AGAIN, for add/update/delete operations. The engine
> > then remembers all operations with that return code and calls them again
> > (with the exact same parameters and without freeing resources!) at a
> > suitable time, for example the end of the current message. In the second
> > call, the store must execute or finish the operation.
> > 
> > The engine must not call an operation on an item for which it
> > already has something pending. If that happens, the engine has to
> > complete the first operation first. That way the store can keep a
> > list of pending operations with the ItemID as key.
> 
> Sounds like a good solution indeed. I like it better than late
> handling of return codes, especially because it avoids any risk of
> breaking existing datastores (and possible unsafe assumptions in their
> implementations): for those, nothing would change outside *and*
> inside the engine.
> 
> It's also somewhat similar to the <finalisationscript> mechanism,
> which also keeps items (or parts of them) in memory for using them
> again at the end of the session, usually to resolve cross references
> (parent/child tasks for example). Maybe some of the mechanisms can be
> re-used for LOCERR_AGAIN. 
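
For reference, the proposed contract from the store's perspective would
look roughly like this (a sketch only; LOCERR_AGAIN itself is from the
proposal, but the class, helpers and error values are made up and are
not the real libsynthesis datastore API):

#include <map>
#include <string>

// placeholder values, not the real libsynthesis error codes
enum TSyError { LOCERR_OK = 0, LOCERR_AGAIN = 1 };

struct PendingOp {
  std::string itemData; // parameters of the first call, kept alive
};

class MyStore {
  // at most one pending operation per item, keyed by ItemID
  std::map<std::string, PendingOp> pending;

public:
  TSyError updateItem(const std::string &itemID, const std::string &data) {
    auto it = pending.find(itemID);
    if (it == pending.end()) {
      // first call: kick off the operation asynchronously and ask the
      // engine to call again later with the exact same parameters
      pending[itemID] = PendingOp{data};
      startAsyncUpdate(itemID, data); // hypothetical helper
      return LOCERR_AGAIN;
    }
    // second call: the operation must now be executed or finished
    waitForAsyncUpdate(itemID); // hypothetical helper
    pending.erase(it);
    return LOCERR_OK;
  }

private:
  void startAsyncUpdate(const std::string &, const std::string &) {}
  void waitForAsyncUpdate(const std::string &) {}
};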

I found a different mechanism: TSyncSession::processSyncOpItem() can
decide to delay execution of the command by setting a flag. Currently
this is done when processing the message has already taken too long.

I have changed the command and item processing call chain and my backend
so that the backend can kick off the operation and continue it when the
chain is invoked again. For that, I am setting aQueueForLater=true to
use the existing queuing mechanism for SyncML commands.
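
Roughly, the decision looks like this (a simplified sketch; the real
processSyncOpItem() has a different signature and much more logic, this
only illustrates the aQueueForLater idea):

// Simplified sketch of the delay decision; not the actual
// TSyncSession::processSyncOpItem() code.
bool processSyncOpItemSketch(bool backendStillBusy, bool &aQueueForLater) {
  if (backendStillBusy) {
    // the backend kicked off the operation but has not finished it:
    // queue the SyncML command so this chain is invoked again later
    aQueueForLater = true;
    return true; // no error, execution is merely deferred
  }
  aQueueForLater = false; // done, status for the command can be sent
  return true;
}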

Now I have found one problem with that: after the first of several Add
or Update commands gets queued, all following commands are queued as
well. What I'd like to see instead is that they all get processed.

Then, at the level above the engine, right before sending the response,
I would gather all pending operations and combine them into a batched,
asynchronous add or update operation. The batching is expected to be
much more efficient with EDS. It also allows overlapping local
processing in the PIM storage with network I/O. But right now I only
ever get one item to batch, because everything else is still in the
queue for later processing.
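
The flush step I have in mind looks roughly like this (all names are
hypothetical; the batched EDS call in particular is only indicated, not
the real EDS API):

#include <string>
#include <vector>

struct PendingChange {
  std::string itemID;
  std::string data;
};

// hypothetical stand-in for one batched, asynchronous EDS call
// (e.g. adding/modifying several contacts in one round trip)
void submitBatchToEDS(const std::vector<PendingChange> &) {}

// Called right before sending the SyncML response: combine all
// operations that were delayed with aQueueForLater into one batched
// call into the PIM storage instead of one call per item, so that
// local processing can overlap with network I/O.
void flushPendingChanges(std::vector<PendingChange> &pending) {
  if (pending.empty())
    return;
  submitBatchToEDS(pending);
  pending.clear();
}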

Is the "command received after other commands needed to be delayed ->
must be delayed, too" something which is imposed by SyncML?

I tried to think of situations where the engine needs to enforce
completion of a pending operation before triggering another one, but
couldn't come up with one. My expectation is that either the backend
will properly serialize the item changes, or the item changes are
independent (update "foo", delete "bar").
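
What does need to hold is the rule from the quoted proposal: no two
in-flight operations on the same item. A sketch of that per-item check
(hypothetical names):

#include <set>
#include <string>

// Before starting a new operation, finish any pending operation on
// the same ItemID so per-item ordering is preserved; operations on
// different items may overlap freely.
class PendingTracker {
  std::set<std::string> inFlight;

public:
  void beforeOperation(const std::string &itemID) {
    if (inFlight.count(itemID)) {
      completePending(itemID); // hypothetical: block until finished
      inFlight.erase(itemID);
    }
    inFlight.insert(itemID);
  }

  void operationDone(const std::string &itemID) { inFlight.erase(itemID); }

private:
  void completePending(const std::string &) {}
};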

But what could happen is that "update foo" gets started, does not
complete, and then "delete bar" gets processed right away. That would
reorder the status messages such that the status for the later command
gets sent first. That's because my backend currently can only do inserts
and updates asynchronously, not deletes.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


