Hello Patrick,

FinalizeLocalID came into existence as a workaround for the iOS client datastores, in particular the address book, which did not generate proper IDs until a very expensive "save all" method was called. I tried to implement it as generically as possible, but without taking any extra risk (or development time, which I didn't have then) beyond what was needed to fix that problem, and that problem existed for binfile based clients only.

For the same reason (less risk of breaking something), the FinalizeLocalID implementation must be safe against being called with an already finalized ID. The idea is that calls to FinalizeLocalID can be placed wherever a final ID is required, without any need to track whether the same item will get finalized again later. So making the FinalizeLocalID mechanism work for non-binfile clients and for the server should be not much more than placing the call at a few more places. For the server, it's probably only when processing Map items, and before persisting them at the end of the request.
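To illustrate, here is a minimal sketch of the store-side bookkeeping this amounts to: a map from temporary to final IDs, where finalizing an already-final ID simply falls through. The names and the signature are made up for this example and are not the actual libsynthesis/SDK interface.

  // Hypothetical store-side helper; names and signature are made up for this
  // sketch and do not correspond to the real libsynthesis/SDK interface.
  #include <map>
  #include <string>

  class TTempIDMapper {
    // maps temporary local IDs (handed out before the expensive "save all")
    // to the final IDs assigned by the database
    std::map<std::string, std::string> fTempToFinal;

  public:
    // remember the final ID once the database has assigned it
    void recordFinalID(const std::string &aTempID, const std::string &aFinalID) {
      fTempToFinal[aTempID] = aFinalID;
    }

    // Rewrites aLocalID and returns true if it was a temporary ID.
    // Calling it again with the already-final ID is a no-op, so the call
    // can be placed wherever a final ID is required, without bookkeeping.
    bool finalizeLocalID(std::string &aLocalID) const {
      std::map<std::string, std::string>::const_iterator it = fTempToFinal.find(aLocalID);
      if (it == fTempToFinal.end())
        return false; // not temporary (or already finalized): leave unchanged
      aLocalID = it->second;
      return true;
    }
  };

The call site never needs to know whether an ID was already final, which is exactly what makes it safe to place the call at a few more spots.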
> How about yet another approach: the store is allowed to return a new
> error code, LOCERR_AGAIN, for add/update/delete operations. The engine
> then remembers all operations with that return code and calls them again
> (with the exact same parameters and without freeing resources!) at a
> suitable time, for example the end of the current message. In the second
> call, the store must execute or finish the operation.
>
> The engine must not call an operation on an item for which it has
> already something pending. If that happens, it has to complete the first
> operation first. That way the store can keep a list of pending
> operations with the ItemID as key.

That sounds like a good solution indeed. I like it better than late handling of return codes, especially because it avoids any risk of breaking existing datastores (and possibly unsafe assumptions in their implementations): for those, nothing would change, neither outside *nor* inside the engine.

It is also somewhat similar to the <finalisationscript> mechanism, which also keeps items (or parts of them) in memory so they can be used again at the end of the session, usually to resolve cross references (parent/child tasks, for example). Maybe some of those mechanisms can be re-used for LOCERR_AGAIN.
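Just to make sure we are talking about the same thing, here is a rough sketch of what the store side of that could look like. This is illustrative C++ only: TBatchingStore, the numeric value of LOCERR_AGAIN and the flushPending() hook are made-up names for this example, not existing engine or SDK API.

  // Illustrative only: TBatchingStore, the LOCERR_AGAIN value and flushPending()
  // are made-up names for this sketch, not existing engine or SDK API.
  #include <map>
  #include <string>

  typedef unsigned short TSyError;
  const TSyError LOCERR_OK    = 0;
  const TSyError LOCERR_AGAIN = 112; // hypothetical value, for illustration only

  struct TPendingOp {
    std::string data;     // item data, kept alive until the batch is executed
    bool        done;     // true once flushPending() has run
    TSyError    status;   // result of the real database write
  };

  class TBatchingStore {
    // one pending operation per local ID; the engine guarantees that it will
    // not issue a second operation on an item that still has one pending
    std::map<std::string, TPendingOp> fPending;

  public:
    // called by the engine for an add (update/delete would look the same):
    // queue the operation and ask to be called again later
    TSyError insertItem(const std::string &aLocalID, const std::string &aData) {
      std::map<std::string, TPendingOp>::iterator it = fPending.find(aLocalID);
      if (it != fPending.end() && it->second.done) {
        // second call after the flush: hand back the recorded per-item status
        TSyError sta = it->second.status;
        fPending.erase(it);
        return sta;
      }
      TPendingOp op;
      op.data = aData; op.done = false; op.status = LOCERR_AGAIN;
      fPending[aLocalID] = op;
      return LOCERR_AGAIN; // engine remembers the call and repeats it later
    }

    // called at a suitable time, e.g. the end of the current message:
    // execute everything queued so far as one batch and record the results
    void flushPending() {
      for (std::map<std::string, TPendingOp>::iterator it = fPending.begin();
           it != fPending.end(); ++it) {
        if (!it->second.done) {
          it->second.status = writeToDatabase(it->first, it->second.data);
          it->second.done = true;
        }
      }
    }

  private:
    // placeholder for the actual batched database write
    TSyError writeToDatabase(const std::string &aLocalID, const std::string &aData) {
      (void)aLocalID; (void)aData;
      return LOCERR_OK;
    }
  };

flushPending() would be triggered by the engine at whatever time it chooses (the end of the current message, for example); when the engine then repeats the original calls, the store simply returns the recorded status for each item.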
So, from my side: just go ahead :-)

Best Regards,
Lukas

On 03.06.2013, at 15:12, Patrick Ohly <[email protected]> wrote:

> Hello!
>
> I'm trying to batch local database adds together to increase performance
> in SyncEvolution (https://bugs.freedesktop.org/show_bug.cgi?id=52669 and
> https://bugs.freedesktop.org/show_bug.cgi?id=64838#c5).
>
> My hope was that I could postpone the actual modification and return a
> temporary local ID, then later, or in parallel, combine multiple items
> together in a batch add. Finally, the temporary ID was meant to be
> replaced by the final one in FinalizeLocalID.
>
> But I hit the first snag pretty early: FinalizeLocalID() doesn't seem to
> get called on the server side, where I need this feature primarily (for
> PBAP, CalDAV and ActiveSync, the server side is the one with the local
> database).
>
> dsFinalizeLocalID, the wrapper around the store's FinalizeLocalID, gets
> called in three functions:
>
> (gdb) b sysync::TBinfileImplDS::changeLogPostflight
> Breakpoint 2 at 0x106f956: file
> /home/pohly/syncevolution/libsynthesis/src/sysync/binfileimplds.cpp, line 462.
> (gdb) b sysync::TBinfileImplDS::SaveAdminData
> Breakpoint 3 at 0x10739fd: file
> /home/pohly/syncevolution/libsynthesis/src/sysync/binfileimplds.cpp, line 2389.
> (gdb) b sysync::TLocalEngineDS::engGenerateMapItems
> Breakpoint 4 at 0x109c79c: file
> /home/pohly/syncevolution/libsynthesis/src/sysync/localengineds.cpp, line 6635.
>
> None of these get called on the server side during a refresh-from-client,
> slow or normal two-way sync. That makes sense: map items are only relevant
> on the client, and the server does not use the binfile class.
>
> But shouldn't the server also call FinalizeLocalID somewhere? Where would
> be the right place for it?
>
> Regarding FinalizeLocalID() in the client, I see some conceptual issues
> with it, at least for what I am trying to achieve. My main concern is
> error handling.
>
> FinalizeLocalID() can return an error code, but dsFinalizeLocalID does not
> check for errors. It merely checks for "ID modified". So in case of an
> error, the local store itself has to remember that writing failed and "do
> something".
>
> What that "something" could be is not clear. It could return an error in
> EndDataWrite(), in which case I would expect the engine to enforce a slow
> sync in the next sync session. I can't think of other realistic options
> right now.
>
> I think a better solution would be to delay handling the result of a
> database write until the entire SyncML message has been processed, and
> then give the store a chance to finalize the ID of each item that was
> touched as part of that message. The store can use that opportunity to
> flush the pending operations and return error codes for each item, just as
> it would normally do in the synchronous write operations. The engine then
> continues with the same error handling that it already does.
>
> The advantage, besides better error handling, is that the store gets a
> natural place to flush pending operations. Without the intermediate
> FinalizeLocalID() calls, a new "max pending operations" setting,
> independent of the SyncML message size, would be needed.
>
> Comments?
>
> --
> Best Regards, Patrick Ohly
>
> The content of this message is my personal opinion only and although I am
> an employee of Intel, the statements I make here in no way represent
> Intel's position on the issue, nor am I authorized to speak on behalf of
> Intel on this matter.

_______________________________________________
os-libsynthesis mailing list
[email protected]
http://lists.synthesis.ch/mailman/listinfo/os-libsynthesis
