Re: [SyncEvolution] mapping between individual X- extensions and grouped extensions

2014-04-26 Thread Lukas Zeller
Hello Patrick,

On 25.04.2014, at 21:47, Patrick Ohly patrick.o...@intel.com wrote:

 [...]
 
 The bigger problem will be on the Evolution side. I don't see how I can
 teach libsynthesis that an IMPP entry whose protocol (encoded as part of
 the value!) is "xmpp" maps to X-JABBER.

I see no direct way either.

 Should I keep the traditional JABBER_HANDLE array and move entries back
 and forth between it and the IMPP array? This could be done with
 incoming/outgoing resp. afterread/beforewrite scripts.

I guess that's the way to go. In particular, I added the incoming/outgoing
scripts on the datatypes (a looong time ago :-) for very similar problems, after
realizing that the formerly purely declarative way to describe vCard/vCalendar
types was getting more and more complex and still not catching all cases at
hand.

In the libsynthesis based iOS clients, incoming/outgoing scripts are used a lot 
for exactly this type of normalizing data for the internal representation. No 
surprise that the scripting engine got regex support then...

The afterread/beforewrite scripts could do such a conversion as well; however,
for normalizing data they are executed too late on the server side for the
normalized data to be used in slow-sync matching, so it will be more complicated
to correctly match and merge records.

So I'd think doing this in incoming/outgoing scripts would be better. As you 
said, JABBER_HANDLE would need to be kept as a temporary storage area for the 
parser to put X-JABBER values into, but the IMPP array would be the actual 
internal representation for the data.
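
Just to illustrate the idea, here is a very rough sketch of what the incoming
script could look like - the field names (JABBER_HANDLE, IMPP) and the exact
script syntax/functions are assumptions that would need to be checked against
the config reference and the actual field list:

  <incomingscript><![CDATA[
    // move X-JABBER handles (parsed into the temporary JABBER_HANDLE array)
    // into the IMPP array used as the internal representation
    INTEGER i;
    i = 0;
    while (JABBER_HANDLE[i] != EMPTY) {
      // assumes IMPP is still empty at this point, so the same index can be used
      IMPP[i] = "xmpp:" + JABBER_HANDLE[i];
      i = i + 1;
    }
  ]]></incomingscript>

The outgoing script would do the reverse for peers that expect X-JABBER.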

Best Regards,

Lukas



Re: [SyncEvolution] mapping between individual X- extensions and grouped extensions

2014-04-26 Thread Lukas Zeller
Hi Emiliano,

On 26.04.2014, at 12:31, Emiliano Heyns emiliano.he...@iris-advies.com wrote:

 Hello! Where do I find more on these scripts? This could be very interesting 
 for me.

The script language is documented in the libsynthesis config reference, see 
http://cgit.freedesktop.org/SyncEvolution/libsynthesis/tree/doc/SySync_config_reference.pdf

To see what scripts are available and when they are run within a sync session, 
http://cgit.freedesktop.org/SyncEvolution/libsynthesis/tree/doc/SySync_script_call_flow.pdf
 is helpful. It documents the server side only, but the client flow of calls is 
not fundamentally different.

Note that SyncEvolution has already been making use of these scripts for a long
time; it's nothing new.

Lukas



Re: [SyncEvolution] mapping between individual X- extensions and grouped extensions

2014-04-26 Thread Lukas Zeller
On 26.04.2014, at 21:11, Patrick Ohly patrick.o...@intel.com wrote:

 On Sat, 2014-04-26 at 12:30 +0200, Lukas Zeller wrote:
 The afterread/beforewrite script could do such a conversion as well,
 however for normalizing data these are executed too late on the server
 side for normalized data to be used in slow sync matching, so it'll be
 more complicated to correctly match and merge records.
 
 What do you mean with that? The afterread script gets called after
 reading and before using the field list, right? So whatever
 transformation is necessary (for example, X-JABBER -> IMPP) can be done
 in time before the engine processes the IMPP field.

It's the opposite direction I was thinking about: assuming you chose the IMPP
array to internally represent Jabber handles, and further assuming a sync with a
peer that uses the X-JABBER format, incoming items would be parsed and compared
with existing items (in a slow sync, for example) without the beforewrite script
being run. So you'll compare/merge non-normalized items (those coming from the
peer) with normalized ones (those coming from the database).

This might not be relevant in the specific SyncEvolution case because of the 
vCard-based database backend, i.e. the fact that both ends are vCard 
representations of the data.

But for a classical libsynthesis SyncML app with an ODBC or SQLite backend:
incoming/outgoing scripts are the place to handle variations in the
*representation* of the same data content in vCard, whereas afterread/beforewrite
just implement the non-trivial parts of the 1:1 *mapping* of the internal data
content to a database backend.

Best Regards,

Lukas



Re: [SyncEvolution] sync timeouts for syncevo-http-server + nokia e51 phone

2013-05-13 Thread Lukas Zeller
Hello Patrick,

On 23.04.2013, at 18:17, Patrick Ohly patrick.o...@intel.com wrote:

 On Wed, 2013-04-17 at 17:12 +0200, Lukas Zeller wrote:
 One more question, to you and/or Christof: in your experience, how
 quickly do phones give up? In other words, what is the recommended value
 of requestmaxtime?
 
 I'm currently considering to make one minute the default for all HTTP
 clients.

Years ago, the device that caused me to implement this was the Ericsson T68i,
which had no more than 30 seconds of patience.

Looking at the default config of the server, it seems that we settled on 120
seconds for the global requestmaxtime. However, it is possible to set a
per-device requestmaxtime in the remote rule. Curiously, the T68i in the
sample config does not set 30 seconds, though. Maybe later revisions had a more
generous timeout, or nobody ever used that device again...
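
For illustration, such a per-device override could look roughly like this
(only a sketch - the exact matching-criteria elements and their placement
should be taken from the sample config / config reference):

  <requestmaxtime>120</requestmaxtime>   <!-- global default, in seconds -->

  <remoterule name="Ericsson_T68i">
    <manufacturer>Ericsson</manufacturer> <!-- matching criterion, assumed -->
    <requestmaxtime>30</requestmaxtime>   <!-- per-device override -->
  </remoterule>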

Best Regards,

Lukas




Re: [SyncEvolution] sync timeouts for syncevo-http-server + nokia e51 phone

2013-05-13 Thread Lukas Zeller
Hello Patrick,

On 13.05.2013, at 17:55, Patrick Ohly patrick.o...@intel.com wrote:

 Years ago, the device that caused me implementing this was the
 Ericsson T68i which had no more than 30 seconds patience. 
 
 Looking at the default config of the server, it seems that we settled
 on 120 seconds for the global requestmaxtime.
 
 I'm using the same default now.
 
 While testing, I found some issues in the code (compile problems,
 valgrind warnings) when running with multithreading and separate logs.
 The fixes for these problems are now in the master branch on
 freedesktop.org.

Thanks!

 Also there are some fixes for problems found earlier when running
 Klocwork on the code. At this point I have only analyzed and fixed some
 warnings from Klocwork; expect more of these kind of fixes once I find
 some time (and motivation) again ;-}

:-) Neither time nor motivation is unlimited...

 BTW, did you remove the mailing list intentionally?

No, once again just me pressing the wrong buttons in my mailer :-(

Best Regards,

Lukas


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] sync timeouts for syncevo-http-server + nokia e51 phone

2013-02-07 Thread Lukas Zeller
On 07.02.2013, at 14:14, Patrick Ohly patrick.o...@intel.com wrote:

 On Thu, 2013-02-07 at 13:46 +0100, Christof Schulze wrote:
 I started with a one-way-sync from phone to syncevo-http-server.
 This caused syncevolution to remove all data from owncloud, which is
 perfectly fine. However doing so takes about one second for each entry
 causing the phone to report a timeout after a while.
 
 What is your take on that? Shouldn't syncevo-http-server try to keep the
 session to the phone open?
 
 The session is kept open; it's the phone that decides it is waiting too
 long for a reply and therefore aborts. I know that the Synthesis SyncML
 engine can send empty messages to inform the client that it is still
 alive, but I am not familiar with how that works in detail.
 
 Lukas, can you say more about this feature? When does it kick in?

This only works while loading the sync set, and only if libsynthesis is
configured to allow threads (the multithread option set). Then the initial
loading operation (performStartSync()) is run in a separate thread, while the
main thread issues an (empty) response every requestmaxtime.

However, the deletes that seem to cause the timeout are probably happening 
during implStartDataWrite()/apiZapSyncSet(), which does not have the option to 
get executed in a separate thread.

 Maybe a better approach would be to clean all
 files at once on the webdav server to circumvent the timeout in the
 first place?
 
 Did you do a refresh sync?
 
 There are ways to let a backend wipe out data more efficiently. The
 WebDAV backend doesn't support that at the moment, leading to a lot of
 DELETE requests. As you said, this should be added.

Yes, a more efficient delete-all operation can usually solve this problem. The
libsynthesis plugin API has the option to implement a delete-all operation in
DeleteSyncSet(); only if it is not implemented will the engine issue a series of
single-item deletes.
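
Roughly, a plugin-side implementation could look like this (just a sketch - the
context struct and the bulk-delete helper are made up; the entry point and
error codes are as I remember them from the plugin SDK headers):

  /* sketch only - MyContext and my_db_delete_all() are hypothetical */
  TSyError DeleteSyncSet( CContext aContext )
  {
    MyContext *ctx = (MyContext *)aContext;
    /* one bulk operation instead of one DeleteItem() call per record */
    if (!my_db_delete_all(ctx->db, ctx->folderKey))
      return DB_Fatal;  /* generic database error */
    return LOCERR_OK;   /* success */
  }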

Lukas


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] PBAP: one-way sync modes

2012-09-05 Thread Lukas Zeller
Hello Patrick,

On 31.08.2012, at 15:49, Patrick Ohly patrick.o...@intel.com wrote:

 [...] let's focus on one-way syncing for PBAP.
 
 At the moment, the engine cannot know whether it is meant to mirror the
 data and thus it will leave the extra items on the server unchanged. I
 intend to add a mirror config option similar to readonly. If
 set, the engine will not only avoid sending changes to the client, it
 will also remove those extra local items.
 
 I've done it a bit differently: instead of adding a config option,
 there's now only a macro call. The full code is in the pbap branch. I
 also pushed the patches which use this feature into the pbap
 SyncEvolution branch.
 
 I'm attaching the libsynthesis patches for review. Lukas, does this make
 sense?

Yes. I looked through the patches and everything looks ok to me. 

 Can it get included upstream?

Yes. As I thought it was about time to clean up and update master as well,
I merged everything into my working branch and rebased it on luz_rebased,
as a release candidate for updating master real soon now...

 If yes, then I would rebase onto your latest code first. I held back
 with that because I wanted to release 1.3 with minimal changes.

luz_rebased is a candidate for master, you can take that, or create your
own rebased version, whatever fits better. Once this has settled,
I'll see how to update master.

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] one-way sync + sync tokens not updated

2012-08-22 Thread Lukas Zeller
Hello Patrick,

On 21.08.2012, at 09:28, Patrick Ohly patrick.o...@intel.com wrote:

 SyncEvolution's ActiveSync backend is the first one which uses the
 string tokens in StartDataRead() and EndDataWrite(). The backend always
 runs inside a binfile-based SyncML client.
 
 In one particular test I am using a one-way-from-server sync,
 initiated by the client, and noticed a problem: the new token created by
 the backend in EndDataWrite() was not passed to the StartDataRead() in
 the next sync.
 
 The backend cannot handle that, because the tokens come directly from
 ActiveSync, which only allows reusing old tokens in a few limited cases.
 In this case, the server rejected the obsolete token, causing an
 unexpected slow sync.
 
 [...]
 
 I added that debug output. It confirms that the keeping old sync token
 branch is taken.
 
 What is the rationale here? Is the goal perhaps that if the client
 switches back to a two-way sync, all changes since the last two-way sync
 get sent, despite the intermediate one-way sync?

Exactly!

Changes made on the updated-only side must not be forgotten, so the reference
point for syncing BACK data is the last sync where data was sent to the remote
(hence fPreviousTOREMOTESyncIdentifier). An update-from-remote is not a
to-remote sync, so the identifier is not updated.

 Those changes include the changes made on behalf of the server during
 the one-way-from-server sync. Is that filtered out?

The binfile implementation does some filtering of changes done by sync vs. real
changes (the chgl_modbysync flag), although I'm not 100% sure it would cover this
case correctly; it was implemented for suspend/resume, which has a similar
challenge (partial syncs bring in new data, but the token cannot be updated
until the sync has completed).

But that particular code you were looking at long predates binfile. The
original idea was that it is better to reflect back too many changes to the
server than to lose client data or get out of sync when switching from
update-device back to two-way.

 OTOH, a user might decide to use an ActiveSync server as remote backup,
 in which case one-way syncing makes sense again. Would it be acceptable
 to always take the updating sync token branch above?

Yes, I guess that could be added as an option. To make it clean, an application
using that option should make sure that a switch back to two-way also forces a
slow sync; otherwise the resulting state could be out of sync (changes left
alone on one side indefinitely, even after a seemingly complete and successful
two-way sync).

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] Can syncevolution be used to sync Palm PIlot and Evolution?

2012-07-14 Thread Lukas Zeller
Hello Adam and Patrick,

On 14.07.2012, at 21:09, Patrick Ohly wrote:

 On Fri, 2012-07-13 at 21:07 -0400, Adam Stein wrote:
 I thought I had come across something that sounds liked I could use
 syncevolution to sync up my Palm Pilot with Evolution.  Since Evolution
 no longer supports the Gnome conduits, I am trying to find another way
 to sync my Palm and Evolution.  Searched but couldn't find anything
 definite.
 
 I'm sorry, that is not possible. A sufficiently motivated developer
 could write the necessary code, but it doesn't exist today.

As long as your Palm has TCP connectivity (WiFi, or via serial using helper
tools like Softick PPP), that code does exist in the form of a SyncML client for
PalmOS. I wrote it a zillion years ago as far as I remember :-) It is based on
the code that later became libsynthesis, and thus should work fine with
SyncEvolution.

The product is still available from Synthesis - check out
http://www.synthesis.ch/dl_client.php?lang=e&lay=desk&bp=CPDA&pv=PALM#dlds_CPDA1
for a free trial version.

Best Regards,

Lukas


Re: [SyncEvolution] Recurrences with Nth weekday of the month

2012-06-22 Thread Lukas Zeller
Hi Patrick,

On 22.06.2012, at 11:32, Patrick Ohly wrote:

 The original Evolution event has this:
 RRULE:BYDAY=TU;BYSETPOS=2;COUNT=2;FREQ=MONTHLY;WKST=SU
 
 [...]
 
 Are these type of recurrences supported with Evolution?
 
 In Evolution, yes, in libsynthesis, no. I can reproduce the problem and
 noticed that the code (src/sysync/rrules.cpp, RRULE2toInternal()) bails
 out because it doesn't recognize the BYSETPOS keyword.
 
 Lukas, was that a conscious decision, perhaps because it cannot be
 supported in combination with vCalendar 1.0, or just an oversight? What
 would it take to support that?

I don't really know, because the iCalendar 2.0 RRULE parser was a contribution 
from someone years ago.

But for certain, many things from iCalendar 2.0 RRULEs can't be mapped to
vCalendar 1.0, and probably also not to the current internal representation (the
RR_xxx fields). I guess the 2.0 implementation was done more or less along the
lines of what the 1.0 implementation had already offered.

However, second tuesday in month type rules are supported, albeit in a 
slightly different form:

  RRULE:FREQ=MONTHLY;BYDAY=2TU;COUNT=2;
  (second tuesday in month)

It also works relative to the end of the month, e.g.

  RRULE:FREQ=MONTHLY;BYDAY=-1MO;COUNT=2;
  (last monday in month)

For full BYSETPOS support (which also allows picking the Nth match from groups,
like "last workday in month") the RR_xxx model would need to be extended, and
the result would no longer be convertible into vCalendar 1.0.

But maybe it would already help to add limited BYSETPOS support to catch the
cases that actually are only different forms of describing recurrences that ARE
supported in the RR_xxx model and vCalendar 1.0 - which IMHO is specifically
BYSETPOS lists in conjunction with a single-entry BYDAY (like in the RRULE from
the original posting).
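
For example (just spelling out that limited mapping - this is not something the
engine does today), the rule from the original posting

  RRULE:BYDAY=TU;BYSETPOS=2;COUNT=2;FREQ=MONTHLY;WKST=SU

could be rewritten into the already supported form

  RRULE:FREQ=MONTHLY;BYDAY=2TU;COUNT=2;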

Best Regards,

Lukas Zeller



Re: [SyncEvolution] Sync spinnt - X-SYNTHESIS-RESTART + Funambol

2012-03-26 Thread Lukas Zeller
Hi Patrick,

On Mar 26, 2012, at 9:07 , Patrick Ohly wrote:

 I should add that this X-SYNTHESIS-RESTART was introduced in the 1.3
 pre-releases. You can downgrade to the latest stable 1.2.2 without
 problems for syncing to avoid the 500 internal server error for the time
 being.

Just to make sure: am I right that X-SYNTHESIS-RESTART (which now causes
problems with Funambol) is only included in devInf when the datastore config
has <canrestart>yes</canrestart>? That would mean that general libsynthesis
usage (without the SyncEvolution-specific config) is not affected and works with
Funambol?

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] 409 item merged in client + multiple sync cycles in a single session

2012-03-12 Thread Lukas Zeller
Hi Patrick,

On Mar 9, 2012, at 14:30 , Patrick Ohly wrote:

 On Tue, 2012-03-06 at 14:50 +0100, Patrick Ohly wrote:
 I haven't look into this yet, but still have it on my radar.
 
 Done in the meego.gitorious.org master branch. I found that checking for
 collisions is hard (not all records are in memory), so I settled for
 making the chance of collisions smaller in the string case by including
 a running count.

I guess this is way safe enough.

The worst that can happen is that two (at that time, by definition, already
obsolete) server items will get a mapping to the same client ID when a fake-ID
generation collision should occur (which now can only happen with
suspend/resume, where libsynthesis is re-instantiated in between).

If so, the client will send two deletes for the same clientID to the server in 
a subsequent sync.

Depending on the server implementation, either the first delete wipes all items
mapped to that ID and the second delete will get a 404, which is fine; or the
first delete wipes just the first item that maps to that ID, and the second
delete then wipes the second - correct as well.

Even if a super-smart server would merge the two items upon receiving a map to
the same clientID, the end result would be correct (although the merged item
would likely make no sense - but as it is doomed at the time of the merge
already, that would be an acceptable intermediate state for a case that is
extremely unlikely to occur at all).

Best Regards,

Lukas 


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] Update on work to port syncevo-dbus-server to GIO GDBus

2011-11-18 Thread Lukas Zeller
On Nov 17, 2011, at 18:04 , Patrick Ohly wrote:

 It turned that the 101 status in these failures is a remote error sent
 by the plan44.ch SyncML server, which is the server that these real
 syncs are meant to be done with. I've switched to Memotoo as peer for
 the D-Bus tests for now.
 
 Lukas, any idea about the reason for the 101 error?

Yes, it means service unavailable - too many sync sessions in parallel.

I have now fixed that (it was an artificial limitation in the server, which was
once intended for commercial installations and was accidentally turned on in the
plan44.ch server).

Note that while doing so I also built the plan44.ch server from the newest
sources (version 3.4.0.38). It identifies itself differently now (as
"plan44.ch"), so if you did match those ID strings in testing, you might need
to change them.

Best Regards,

Lukas Zeller


Re: [SyncEvolution] [os-libsynthesis] super data store: trying to read to check for existance

2011-10-25 Thread Lukas Zeller
Hello Patrick,

On Oct 24, 2011, at 16:55 , Patrick Ohly wrote:

 Why is it necessary to read before trying to delete? If the item exists,
 then reading it is a fairly expensive test.

Unfortunately, with some backends this is the only test that reliably works.
For example, some SQL databases (or their ODBC driver) can't return a correct
number of affected rows for DELETE statements. So reading before deleting was
the only way to detect whether any of the subdatastores really had that item and
to correctly return 404 or 200.

 So far, my backends were written with the expectation that they have to
 cope with delete requests for items which are already deleted. This
 follows from the inherent race condition between syncing and some other
 process which might delete items while a sync runs.
 
 Are all backends expected to return 404 when an item doesn't exist?

Not all backends (see above, the built-in ODBC can't), but yes, for plugin
backends returning proper error codes is specified, also for delete. Still, if a
plugin did not conform to that in its implementation of delete, it would
probably have gone unnoticed so far.

Of course, the test is a bit expensive - if that's a concern, one could
differentiate between plugin datastores and the others (all the built-in ones,
including SQL/ODBC), and use the expensive read test only for the latter. For
example, a virtual dsDeleteDetectsItemPresence() which returns false by default,
but can be overridden in plugin datastores to return true.

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] [os-libsynthesis] super data store: trying to read to check for existance

2011-10-25 Thread Lukas Zeller
Hello Patrick,

On Oct 25, 2011, at 9:36 , Patrick Ohly wrote:

 Are these expectations documented somewhere?

Probably not as clearly as it should be. Beat's doc/SDK_manual.pdf is all we 
have.

 I'm still a bit fuzzy about
 which error codes are expected. For example, should a delete of a
 non-existent item succeed (200) or fail (404)?

It should return 404. The basic rule is: at the API level, the errors should be 
as direct and honest as possible.

If the engine tries to delete something that does not exist, the plugin should 
return an error. It's the task of the engine to disguise that error towards the 
SyncML peer (and in fact, there is an option for that: deletinggoneok, on by 
default, which means that failed deletes don't show up as 404 towards the peer, 
but go through as 200/OK).
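
In config terms that corresponds to something like the following (sketch only -
the exact element name and placement should be checked in the config reference):

  <deletinggoneok>yes</deletinggoneok>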

 Still, if a plugin would not conform to that in its implementation of
 delete, that would probably have gone unnoticed so far.
 
 Right. SyncEvolution has hardly ever (never?) returned 404 and that has
 not been an issue until now ;-)

Because deletinggoneok is on by default, it made no difference. However (and
that's the reason for that expensive test in the first place, and for the config
option), the SCTS (SyncML Conformance Test Suite) did a test which failed when
deleting a non-existing item returned 200. As passing SCTS was a prerequisite
for being allowed to go to the IOT at the SyncFests back in the early 2000s, the
demands of SCTS had to be met even if the opposite made more sense for
real-world operation...

 Of course, the test is a bit expensive - if that's a concern, one
 could differentiate between plugin datastores and others (all the
 builtin ones, including SQL/ODBC), and use the expensive read test
 only for the latter. Like a virtual dsDeleteDetectsItemPresence()
 which returns false by default, but can be overridden in plugin
 datastores to return true.
 
 That sounds like the right compromise. I'm much more comfortable
 returning 404 in a delete and not reporting that to the user as an
 error, compared to not reporting a 404 in a ReadItem call (as I have to
 do now).

Now I don't understand. Why do you have to *not* report a 404 in ReadItem now??

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] [os-libsynthesis] super data store: trying to read to check for existance

2011-10-25 Thread Lukas Zeller
Hello Patrick,

On Oct 25, 2011, at 10:46 , Patrick Ohly wrote:

 Now I don't understand. Why do you have to *not* report a 404 in
 ReadItem now??
 
 Reporting to the engine is fine, reporting to the user is the
 problematic part. Let me explain.
 
 The SyncEvolution output, the one that is visible to end-users, contains
 INFO and ERROR messages about operations happening in the backend. Each
 attempt by the Synthesis engine to read a non-existent item results in a
 user-visible error. Or rather, two of them in the case of a super data
 store combining calendar and todo:
 
 [ERROR] error code from SyncEvolution fatal error (local, status 10500): 
 calendar: retrieving item: 20111023T082825Z-2322-1001-1969-0@lulu-rid: 
 Objektpfad des Kalenders kann nicht abgerufen werden: Objekt nicht gefunden
 [ERROR] error code from SyncEvolution fatal error (local, status 10500): 
 todo: retrieving item: 20111023T082825Z-2322-1001-1969-0@lulu-rid: Objektpfad 
 des Kalenders kann nicht abgerufen werden: Objekt nicht gefunden
 
 At the point where that logging happens it is unknown whether the error
 is really a problem or can be ignored. Only the Synthesis engine itself
 knows. When I started to use the Synthesis engine, I tried to make the
 logging happening inside the engine visible to users, but it simply
 wasn't meant to be used for that and so I gave up on that approach. 
 
 Admittedly the ERROR message above is not very informative either.

Ok, that is the bit of context that was needed to understand the initial statement :-)

But if the superdatastore was changed from probing with ReadItem to directly
probing/deleting with DeleteItem, the error reporting would look similar, as it
would show failed deletes for those subdatastores not containing the object in
question.

I mean, for deleting a todo item from the calendar superdatastore you'd see
something like "calendar: failed deleting item xy" and "todo: deleted item xy
ok" (I don't know if success is shown as well). Would this be any better / more
understandable?

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] regular expression library: boost Xpressive?

2011-10-23 Thread Lukas Zeller
BTW: pcre is already in use in libsynthesis, so you'll not create a new 
dependency when using it...

On Oct 18, 2011, at 20:25 , Patrick Ohly wrote:

 On Tue, 2011-10-18 at 17:10 +0100, Peter Robinson wrote:
 What about pcre? Its fairly low level and used by quite a few OS
 features (at least in Fedora) and the configure for syncevolution
 already explicitly checks for it so it looks like syncevolution
 already uses it somewhere.
 
 I thought it was limited to only matching. Checking again I find, at
 least in the C++ bindings, also a Replace() operation. It's a bit more
 limited than boost::Xpressive (cannot provide a callback for the
 replace, only supports \0-9 in the replacement string). One point in
 favor of pcre is UTF-8 support.
 
 Thanks for the reminder, pcre indeed might be suitable.
 
 -- 
 Best Regards, Patrick Ohly
 



Re: [SyncEvolution] sqlite backend enhancement

2011-09-26 Thread Lukas Zeller
Hi Roger,

On Sep 26, 2011, at 14:24 , Roger KeIrad wrote:

 Hi Lukas,
 I've got 2759, which is correct, in the first and second call; it is the
 right size of the thumbnail.
 The problem is that when I write the buffer to a file I only get 4 bytes
 (the size of the buffer).
 The file is dumped in the PHOTO field in the incoming item.
 Is there a mistake in the manner I use the extracted buffer? Because I try to
 write the buffer to a file character by character.

That is binary data. If you have that processed by a character API, probability 
is high that processing will stop at the first 0x00 / NUL character, because in 
text data this is usually the terminating character, while in binary it is a 
part of the data like all other values from 0x00..0xFF.

So I assume that the data you get into your buffer is fine, but has a 0x00 as 
the fifth byte, which then stops the code you are using to write that to disk.
Plain fwrite should work, but not fputs for example, as this stops at the first 
0x00 encountered.
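
For example (plain standard C - save_blob() is just an illustration name; the
data pointer and length would be the buffer and valsize obtained from
GetValueByID):

  #include <stdio.h>

  /* write len raw bytes (e.g. the PHOTO buffer) to a file; fwrite() takes an
     explicit length, so embedded 0x00 bytes are no problem */
  static int save_blob(const char *path, const char *data, size_t len)
  {
    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    size_t written = fwrite(data, 1, len, f); /* fputs() would stop at the first NUL */
    fclose(f);
    return written == len;
  }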

Best Regards,

Lukas



 2011/9/26 Lukas Zeller l...@plan44.ch
 Hello Roger,
 
 the code looks fine, exactly the way I am doing it to get the PHOTO data in 
 my own plugin I have for iOS address book.
 So there's something else that must be wrong.
 
 You say you get 2759 as size. Is this valsize in the first call?
 And then in the second call you get only 4 bytes back? What does valsize 
 return for the second call? Also 4 or 2759?
 
 Do you have a logfile where you see the incoming item dumped? If so, what 
 does it show for the PHOTO field?
 
 Best Regards,
 
 Lukas
 
 
 On Sep 26, 2011, at 11:32 , Roger KeIrad wrote:
 
  Hi Lukas
  Thanks for the response. I followed your tips but It didn't work here is my 
  code :
 
   TSyError res = this->ui.GetValueByID(this,
                                        aItemKey,
                                        valueId,
                                        idx,
                                        sysync::VALTYPE_BUF,
                                        NULL,
                                        0,
                                        valsize);
   if (!res && valsize) {
     data = SharedBuffer(new char[valsize*1024*1024 + 1], valsize);
     data[valsize] = 0;
     res = this->ui.GetValueByID(this,
                                 aItemKey,
                                 valueId,
                                 idx,
                                 sysync::VALTYPE_BUF,
                                 data.get(),
                                 valsize + 1,
                                 valsize);
   }
 
  I ve got size = 2759 but i always extract 4 bytes.
  Can you help me please?
  Thank you in advance
  Regards
 
  2011/9/22 Lukas Zeller l...@plan44.ch
  Hello Roger,
 
  On Sep 22, 2011, at 16:25 , Roger KeIrad wrote:
 
   Hello,
   It is me again.
   I have some troubles with PHOTO for synchronised contact.
   I used the same idea to extract encoded value to decode it.
   I tried with GetValue and  GetValueByID but no chance.
 
  Both should work (the PHOTO is not an array field).
 
  Just pass VALTYPE_BUF as type, and provide a buffer large enough to get the 
  binary photo data.
 
  To find out in advance how large the buffer needs to be, just call GetValue 
  once with a NULL pointer as buffer or zero buffer size. This will return 
  the needed size in the aValSize parameter, which you can use to allocate a 
  buffer large enough to get the entire PHOTO.
 
  If you pass a buffer that is too small for the entire PHOTO data, you'll 
  get LOCERR_TRUNCATED status.
 
  The data you get is binary data, usually a JPG. If you save the data 1:1 
  into a file, any image viewer program should be able to show it.
 
  Best Regards,
 
  Lukas

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] sqlite backend enhancement

2011-09-16 Thread Lukas Zeller
Hello Roger,

On Sep 16, 2011, at 16:28 , Roger KeIrad wrote:

 hello,
 any help please?
 thanks in advance.


There are a number of GetValueXXX variants. GetValue() is only for single
values.

For arrays, you need to use GetValueByID(), which has an arrIndex parameter. To
get the ID for a name, use GetValueID(). You can get the size of the array by
appending VALSUFF_ARRSZ to the array field name and calling GetValue() with
that - you'll get the number of elements back.

See sysync_SDK/Sources/enginemodulebase.h for more detailed description of all 
the Get/SetValue functions, and engine_defs.h for the VALNAME_xxx and 
VALSUFF_xxx explanations.

Pseudo code to get all elements of an array would look like:

// get size of array
uInt16 arraySize;
memsize n;
GetValue(arrayFieldName VALSUFF_ARRSZ, VALTYPE_INT16,
         &arraySize, sizeof(arraySize), &n);

// get elements of array
long arrayFieldID = GetValueID(arrayFieldName);
for (int i=0; i<arraySize; i++) {
  GetValueByID(arrayFieldID, i, VALTYPE_XXX, buffer, maxSize, &n);
  // do something with the value
}



Below is a more elaborate routine, which prints out all values for a given 
aItemKey, including array fields:


/* Show all fields/variables of item represented by aItemKey */
static void ShowFields(DB_Callback cb, appPointer aItemKey)
{
  const stringSize maxstr = 128;
  appChar fnam[maxstr];
  appChar fval[maxstr];
  appChar ftz[maxstr];
  uInt16 fvaltype;
  uInt16 farrsize;
  uInt16 arridx;
  bool fisarray;
  uInt32 valsize;
  TSyError err;
  uInt32 valueID,nameFlag,typeFlag,arrszFlag,tznamFlag;
  // set desired time mode
  cb->ui.SetTimeMode(cb, aItemKey, TMODE_LINEARTIME+TMODE_FLAG_FLOATING);
  // get flags that can be combined with valueID to get attributes of a value
  nameFlag = cb->ui.GetValueID(cb, aItemKey, ".FLAG.VALNAME");
  typeFlag = cb->ui.GetValueID(cb, aItemKey, ".FLAG.VALTYPE");
  arrszFlag = cb->ui.GetValueID(cb, aItemKey, ".FLAG.ARRAYSIZE");
  tznamFlag = cb->ui.GetValueID(cb, aItemKey, ".FLAG.TZNAME");
  // iterate over all fields
  // - start iteration
  valueID = cb->ui.GetValueID(cb, aItemKey, VALNAME_FIRST);
  while (valueID != KEYVAL_ID_UNKNOWN && valueID != KEYVAL_NO_ID) {
    // get field name
    err = cb->ui.GetValueByID(cb,
      aItemKey,
      valueID + nameFlag,
      0,
      VALTYPE_TEXT,
      fnam,
      maxstr,
      &valsize
    );
    // get field type
    err = cb->ui.GetValueByID(cb,
      aItemKey,
      valueID + typeFlag,
      0,
      VALTYPE_INT16,
      &fvaltype,
      sizeof(fvaltype),
      &valsize
    );
    // check if array, and if array, get number of elements
    err = cb->ui.GetValueByID(cb,
      aItemKey,
      valueID + arrszFlag,
      0,
      VALTYPE_INT16,
      &farrsize,
      sizeof(farrsize),
      &valsize
    );
    fisarray = err==LOCERR_OK;

    if (!fisarray) {
      // single value
      err = cb->ui.GetValueByID(cb, aItemKey, valueID, 0, VALTYPE_TEXT, fval, maxstr, &valsize);
      if (err==LOCERR_OK) {
        if (fvaltype==VALTYPE_TIME64) {
          // for timestamps, get time zone name as well
          cb->ui.GetValueByID(cb, aItemKey, valueID+tznamFlag, 0, VALTYPE_TEXT, ftz, maxstr, &valsize);
          DEBUG_(cb, "- %-20s (VALTYPE=%2hd) = %s timezone=%s",fnam,fvaltype,fval,ftz);
        }
        else
          DEBUG_(cb, "- %-20s (VALTYPE=%2hd) = '%s'",fnam,fvaltype,fval);
      }
      else
        DEBUG_(cb, "- %-20s (VALTYPE=%2hd) : No value, error=%hd",fnam,fvaltype,err);
    }
    else {
      // array
      DEBUG_(cb, "- %-20s (VALTYPE=%2d) = Array with %d elements",fnam,fvaltype,farrsize);
      // show elements
      for (arridx=0; arridx<farrsize; arridx++) {
        err = cb->ui.GetValueByID(cb, aItemKey, valueID, arridx, VALTYPE_TEXT, fval, maxstr, &valsize);
        if (err==LOCERR_OK) {
          if (fvaltype==VALTYPE_TIME64) {
            // for timestamps, get time zone name as well
            cb->ui.GetValueByID(cb, aItemKey, valueID+tznamFlag, arridx, VALTYPE_TEXT, ftz, maxstr, &valsize);
            DEBUG_(cb,"%20s[%3hd] = %s timezone=%s",fnam,arridx,fval,ftz);
          }
          else
            DEBUG_(cb,"%20s[%3hd] = '%s'",fnam,arridx,fval);
        }
        else
          DEBUG_(cb,"%20s[%3hd] : No value, error=%hd",fnam,arridx,err);
      }
    }
    // next value
    valueID = cb->ui.GetValueID(cb, aItemKey, VALNAME_NEXT);
  } // while more values
} /* ShowFields */



Re: [SyncEvolution] [os-libsynthesis] UID matching during add-add conflict

2011-09-15 Thread Lukas Zeller
Hi Patrick,

On Sep 15, 2011, at 11:56 , Patrick Ohly wrote:

 [...]
 The sync then aborts and during flushing the current state for a suspend
 segfaults when trying to read the bogus 0x20 local ID pointer.
 
 This is because of TCustomImplDS::implProcessItem():
 [...]
 ===augmentedItemP = (TMultiFieldItem 
 *)SendDBVersionOfItemAsServer(myitemP);
 [...]
  delete augmentedItemP; augmentedItemP = NULL;
 
 The 207 code path goes through the two marked lines. I suppose setting
 augmentedItemP in the first line is wrong? It's not needed and must not
 be freed.

Exactly, thanks for finding this.

I am a bit spoilt by the retain/release mechanism in ObjC ;-)

In the meantime, I have consolidated my repositories and merged your branches 
into luz and updated it with what was pending from my side so far, including 
this one now.

Best Regards,

Lukas


From d649138efc02e959cb1197a58afa7a6b5c1e959f Mon Sep 17 00:00:00 2001
From: Lukas Zeller l...@plan44.ch
Date: Thu, 15 Sep 2011 14:51:18 +0200
Subject: [PATCH] engine: fixed bad object delete case (Patrick found it) -
SendDBVersionOfItemAsServer() does not pass ownership for
item returned!

---
src/sysync/customimplds.cpp |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/src/sysync/customimplds.cpp b/src/sysync/customimplds.cpp
index 888cde0..f790b05 100755
--- a/src/sysync/customimplds.cpp
+++ b/src/sysync/customimplds.cpp
@@ -2903,7 +2903,8 @@ bool TCustomImplDS::implProcessItem(
           }
           else {
             // augmented version was created in backend, fetch it now and add to list of items to be sent
-            augmentedItemP = (TMultiFieldItem *)SendDBVersionOfItemAsServer(myitemP);
+            // Note: item remains owned by list of items to be sent, so we don't need to dispose it.
+            SendDBVersionOfItemAsServer(myitemP);
           }
         }
         sta = LOCERR_OK; // otherwise, treat as ok
-- 
1.7.5.4+GitX



Re: [SyncEvolution] UID matching during add-add conflict

2011-09-07 Thread Lukas Zeller


The other half:

 So what's missing right now is a simple way to actually do the merge
 in the backend, without re-implementing half of libsynthesis. The
 (hopefully not so) longterm solution would be libvxxx. As a short term
 workaround, which is not exactly super elegant but not dangerous
 either, we could do the following:
 
 * we define another special return code for AddItem, say 419.
  The backend can return it when it detects that the added data 
 SHOULD
  be merged with what the backend already has, but can't do that
  itself.
 
 * the engine would do the same things as (details see commit msg
  in the patch below) for 207 but additionally it would:
   * merge the incoming data with the data fetched from the backend
with the localID returned from AddItem()
   * call UpdateItem() with the merge result in the backend
 
 Let me know what you think...
 
 That sounds like it would solve the problem.
 
 Is this something that you intend to work on or shall I give it a try
 myself?

I am confident I can do that tomorrow.

Best Regards,

Lukas


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch



Re: [SyncEvolution] Memotoo + X-EVOLUTION-FILE-AS

2011-08-18 Thread Lukas Zeller

On Aug 18, 2011, at 12:08 , Patrick Ohly wrote:

 This is provided by Evolution:
 [...]
 
 X-EVOLUTION-FILE-AS:spouse\, name
 
 [...]
 
 This is sent to Memotoo as:
 
 X-EVOLUTION-FILE-AS:spouse, name
 
 [...]
 
 Note that the comma was not considered special by the Synthesis encoder.
 My memory is a bit fuzzy: is escaping it optional in vCard 3.0 optional
 and/or does this depend on whether the property is meant to be a
 comma-separated list? Lukas?

For vCard 2.1, the engine only escapes when the content is really a
comma-separated list.
For vCard 3.0, commas are always escaped.

That is, when actually generating that property normally. It seems, however,
that X-EVOLUTION-FILE-AS was created using the unprocessed mode (X-*), which
does not do any de-escaping or escaping (except for line ends). So if
X-EVOLUTION-FILE-AS did not have the escape on input, it would not be
reproduced with comma escapes on output.

At least it was meant to work like this, and it also does when generating X-* 
properties.

However, when looking at the code I found that the code checking the
*de*-escaping has a bug and IMHO would have converted a sequence like
"foo\,bar" into "foo,,bar", i.e. duplicated the escaped char instead of
retaining the escape. I wonder why that double comma is not seen in the version
from Evolution (after being parsed into a syncitem, of course).

Anyway, here's a patch which fixes the deescaping bug, even if that seems not 
directly related to the issue.

Still, I'd like to understand what exactly happened here. Any ideas?

Best Regards,

Lukas


From ffdaf33a39da012969d3292b885963d8bd533cae Mon Sep 17 00:00:00 2001
From: Lukas Zeller l...@plan44.ch
Date: Thu, 18 Aug 2011 18:12:31 +0200
Subject: [PATCH] engine: MIME-DIR, fixed de-escaping in
 only-de-escape-linefeeds mode

The way the code was before, a sequence like foo\,bar would have
been converted into foo,,bar, i.e. duplicated the escaped char
instead of retaining the escape.
---
 sysync/mimedirprofile.cpp |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/sysync/mimedirprofile.cpp b/sysync/mimedirprofile.cpp
index caaa262..985601e 100644
--- a/sysync/mimedirprofile.cpp
+++ b/sysync/mimedirprofile.cpp
@@ -3689,7 +3689,7 @@ bool TMimeDirProfileHandler::parseValue(
         c=*p;
         if (!c) break; // half escape sequence, ignore
         else if (c=='n' || c=='N') c='\n';
-        else if (aOnlyDeEscLF) val+=c; // if deescaping only for \n, transfer escape char into output
+        else if (aOnlyDeEscLF) val+='\\'; // if deescaping only for \n, transfer this non-LF escape into output
         // other escaped chars are shown as themselves
       }
       // add char
-- 
1.7.5.4+GitX


Re: [SyncEvolution] UID matching during add-add conflict

2011-08-16 Thread Lukas Zeller
 time. I
 have my doubts whether it is always the right choice (for example, one
 cannot add similar contacts that the server deems identical), but for
 iCalendar 2.0 UID it would be.

I see that the actions to be taken after a backend-detected merge make sense to
be added to the engine, as outlined above.

However, actually detecting and merging a duplicate belongs in the backend, as
the search usually must extend beyond what the engine sees as the current sync
set (imagine a date-range-filtered sync, and an invitation added on both sides
which is too far in the future to be in the date-range window: the candidate for
a merge could not be found in the sync set!).

 The problematic part is indeed the client side, because a duplicate
 detected there will temporarily leave client and server out of sync. But
 when assuming that the server does the same duplicate checking, it can
 only happen when a new item was added on the client during the sync
 (otherwise the server would have detected the duplicate).

Agreed. Technically, modifications happening *during* a sync in real time are
considered to have happened *after* the sync (except if the backend cannot
pinpoint the modification dates of all entries modified by the sync exactly to
the sync reference token sampled at the beginning of the sync).

Best Regards,

Lukas Zeller

syncml...@plan44.ch
http://www.plan44.ch/syncmlios.php



Re: [SyncEvolution] UID matching during add-add conflict

2011-08-16 Thread Lukas Zeller
On Aug 16, 2011, at 12:54 , Patrick Ohly wrote:

 SyncEvolution already does that. But because it can't do a real merge of
 the data, some information is lost.
 
 Why can't it do a real merge?
 
 Merely for the practical reasons that you mentioned - there's no code
 which does it. It could be implemented, but right now SyncEvolution is
 not capable of such merging. It would duplicate functionality of the
 Synthesis engine, so we really should get libvxxx ready for such use.

Yes, I guess that would be useful. We'll see...

 But yes, when the sync started, the backend should have reported the
 original item as a new one (before the merge occurred).
 
 Not necessarily as new. In most cases it will be unchanged. Only in the
 case of the add-add conflict will it be new.

Agreed.

 The engine will know about it one way or the other, though.

Yes - it can check in the maps.
I created a patch that enhances support for handling the different
merge cases (see below).

  * item with localID FOO exists
  * item with remotedID BAR is coming in as an add
  * backend merges it with existing item and returns localID FOO with 
 status 207
 
 Agreed. This is what happens right now already. What I don't understand
 is what the backend should be doing differently. You said need to issue
 a delete for the other item - which item? The backend only knows about
 FOO, which continues to exist. It is never passed the BAR remote ID,
 is it?

Correct. I was wrong in the first mail about that. The backend cannot do that, 
the engine can.

 All but the first bullet point are not implemented so far.
 
 I had to resolve the double negation before this sentence made sense to
 me ;-) So the read back bullet item is implemented, the rest isn't.

Correct! That was a case of too many edits ;-)

With the patch below the entire list (plus also handling not just add-add 
conflicts) is implemented (but not yet tested).

 However, actually detecting and merging a duplicate belongs into the
 backend, as the search usually must extend beyond what the engine sees
 as current sync set (imagine a date-range filtered sync, and an
 invitation added on both sides which is to far in the future to be in
 the date range window. The candidate for a merge could not be found in
 the sync set!).
 
 Agreed.

So what's missing right now is a simple way to actually do the merge in the
backend, without re-implementing half of libsynthesis. The (hopefully not so)
long-term solution would be libvxxx. As a short-term workaround, which is not
exactly super elegant but not dangerous either, we could do the following:

* we define another special return code for AddItem, say 419.
  The backend can return it when it detects that the added data SHOULD
  be merged with what the backend already has, but can't do that
  itself.

* the engine would do the same things as for 207 (details: see the commit
  msg in the patch below), but additionally it would:
  * merge the incoming data with the data fetched from the backend
    with the localID returned from AddItem()
  * call UpdateItem() with the merge result in the backend

Let me know what you think...

Best Regards,

Lukas Zeller


--- the patch ---

From a6ff628dc1833c6eacfa307c4fbd1ce7a4757077 Mon Sep 17 00:00:00 2001
From: Lukas Zeller l...@plan44.ch
Date: Tue, 16 Aug 2011 15:02:43 +0200
Subject: [PATCH] server engine: better support for backend doing its own
 duplicate merging (status 207 from API)

So far, the DB backend could return DB_DataMerged (=207) for an add
to signal that the added data was merged with some pre-existing data
and thus the client should be updated with the merge result. This was
intended for merge with external data (augmenting entries with
lookups from other sources), but not yet for duplicate elimination.

This patch adds support for propagating merges that eliminate duplicates
in addition to merges that just add external data.

When a client sends a new item (new = client side ID not yet known
server side), and the DB backend returns status 207 for AddItem,
the following happens:

- current maps are searched for an item with the same localID as
  what the DB backend just returned as the new localID. If such
  an item is found, this means the add was completed by merging
  with an existing item.
- If so, this means that for this single localID, there are now
  two versions with different remoteIDs on the client. The item
  with the remoteID found in the map must be deleted, so a
  delete command is added to the list of changes to be sent to
  the client.
- It might also be that the item merged was just added to the
  server (and not yet known to the client), or had another
  change predating the merge. If so, the list of changes to be sent
  to the client will contain an add or a replace, resp.
  that must NOT propagate to the client, so it is removed
  from the list now.
- Finally, the result

Re: [SyncEvolution] uri + binary PHOTO values (was: Re: [os-libsynthesis] field merging + parameters)

2011-07-22 Thread Lukas Zeller
On Jul 21, 2011, at 14:13 , Patrick Ohly wrote:

 On Do, 2011-07-21 at 12:21 +0200, Patrick Ohly wrote:
 Now there is only one other problem with PHOTO uris. They get encoded as
 binary data:
 
 PHOTO;VALUE=uri:http://example.com/photo.jpg
 =
 PHOTO;ENCODING=B;VALUE=uri:aHR0cDovL2V4YW1wbGUuY29tL3Bob3RvLmpwZw==
 
 Is that because PHOTO is defined as blob?
 
 Would it make sense to be selective in the encoder for blob and only
 use binary encoding if the content contains non-printable characters?
 
 This patch has the desired effect:
 
 [...]
 
 I tried it with both binary PHOTO and VALUE=uri.

Looks fine; however, I'd prefer to restrict that functionality to a new convmode
CONVMODE_BLOB_AUTO instead of changing the behaviour of CONVMODE_BLOB_B64,
which has the B64 encoding as a promise in its name :-).

I added this in the following patch (which fits on top of current 
meego/bmc19661).


From 1161e5f15100432f7edad792bc72fe64ca22bf00 Mon Sep 17 00:00:00 2001
From: Lukas Zeller l...@plan44.ch
Date: Fri, 22 Jul 2011 14:34:24 +0200
Subject: [PATCH] blob fields: Added separate CONVMODE_BLOB_AUTO conversion
 mode for fields that should be rendered as B64 only in case
 they are really non-printable or non-ASCII

This is an addition to 8d5cce896d (blob fields: avoid binary encoding if 
possible) to avoid change of behaviour for CONVMODE_BLOB_B64.
---
 src/sysync/mimedirprofile.cpp |   32 ++--
 src/sysync/mimedirprofile.h   |1 +
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/src/sysync/mimedirprofile.cpp b/src/sysync/mimedirprofile.cpp
index 1499876..c8fe995 100644
--- a/src/sysync/mimedirprofile.cpp
+++ b/src/sysync/mimedirprofile.cpp
@@ -175,6 +175,8 @@ bool TMIMEProfileConfig::getConvMode(cAppCharP aText, sInt16 &aConvMode)
       aConvMode = CONVMODE_MULTIMIX;
     else if (strucmp(aText,"blob_b64",n)==0)
       aConvMode = CONVMODE_BLOB_B64;
+    else if (strucmp(aText,"blob_auto",n)==0)
+      aConvMode = CONVMODE_BLOB_AUTO;
     else if (strucmp(aText,"mailto",n)==0)
       aConvMode = CONVMODE_MAILTO;
     else if (strucmp(aText,"valuetype",n)==0)
@@ -2260,7 +2262,8 @@ sInt16 TMimeDirProfileHandler::generateValue(
       maxSiz = 0; // no size restriction
     bool noTruncate=aItem.getTargetItemType()->getFieldOptions(fid)->notruncate;
     // check for BLOB values
-    if ((aConvDefP->convmode & CONVMODE_MASK)==CONVMODE_BLOB_B64) {
+    sInt16 convmode = aConvDefP->convmode & CONVMODE_MASK;
+    if (convmode==CONVMODE_BLOB_B64 || convmode==CONVMODE_BLOB_AUTO) {
       // no value lists, escaping, enums. Simply set value and encoding
       TItemField *fldP = aItem.getArrayField(fid,aRepOffset,true); // existing array elements only
       if (!fldP) return GENVALUE_EXHAUSTED; // no leaf field -> must be exhausted array (fldP==NULL is not possible here for non-arrays)
@@ -2275,16 +2278,22 @@ sInt16 TMimeDirProfileHandler::generateValue(
       }
       // append to existing string
       fldP->appendToString(outval,maxSiz);
-      // force B64 encoding if non-printable or non-ASCII characters
-      // are in the value
-      size_t len = outval.size();
-      for (size_t i = 0; i < len; i++) {
-        char c = outval[i];
-        if (!isascii(c) || !isprint(c)) {
-          aEncoding=enc_base64;
-          break;
+      if (convmode==CONVMODE_BLOB_AUTO) {
+        // auto mode: use B64 encoding only if non-printable or
+        // non-ASCII characters are in the value
+        size_t len = outval.size();
+        for (size_t i = 0; i < len; i++) {
+          char c = outval[i];
+          if (!isascii(c) || !isprint(c)) {
+            aEncoding=enc_base64;
+            break;
+          }
         }
       }
+      else {
+        // blob mode: always use B64
+        aEncoding=enc_base64;
+      }
       // only ASCII in value: either because it contains only
       // those to start with or because they will be encoded
       aNonASCII=false;
@@ -3658,7 +3667,10 @@ bool TMimeDirProfileHandler::parseValue(
     // find out if value exists (available in source and target)
     if (isFieldAvailable(aItem,fid)) {
       // parse only if field available in both source and target
-      if ((aConvDefP->convmode & CONVMODE_MASK)==CONVMODE_BLOB_B64) {
+      if (
+        (aConvDefP->convmode & CONVMODE_MASK)==CONVMODE_BLOB_B64 ||
+        (aConvDefP->convmode & CONVMODE_MASK)==CONVMODE_BLOB_AUTO
+      ) {
         // move 1:1 into field
         // - get pointer to leaf field
         TItemField *fldP = aItem.getArrayField(fid,aRepOffset);
diff --git a/src/sysync/mimedirprofile.h b/src/sysync/mimedirprofile.h
index 4db8bdf..ed67fbf 100755
--- a/src/sysync/mimedirprofile.h
+++ b/src/sysync/mimedirprofile.h
@@ -49,6 +49,7 @@ namespace sysync {
 #define CONVMODE_VALUETYPE 14 // automatic VALUE parameter e.g. for timestamp

Re: [SyncEvolution] dealing with PHOTO:file:// (BMC #19661)

2011-07-22 Thread Lukas Zeller
Hello Patrick,

thanks for the READ() script function! This sure is a useful extension of the
script toolset.

However, I'd like to mention in this context that the engine has for a long time
contained a mechanism for the general problem of large fields.

Some data (usually opaque binary data like PHOTO or email attachments, but
also possibly very large text fields like NOTE) should be loaded on demand
only, and not together with the syncset and the fields that are needed for ID
and content matching.

So string fields can have a proxy object (a better term would probably be "data
provider"), which is not called before the contents of the field are actually
needed - usually when encoding for the remote peer. It is the "p" mode flag
in the datastore field maps which controls the use of field proxies.

In the ODBC/SQL backend, these proxies are configured with their own SQL
statement which loads the field's data. In the plugin backend, which is used in
SyncEvolution, the single-field-pull mechanism is mapped onto the
ReadBlob/WriteBlob API.

The proxy mechanism was even designed with the idea that really huge data
should never be loaded as a single block, but only streamed through the engine.
However, that was never implemented on the encoding side, as the current SyncML
item chunking mechanism is not ready for streamed generation (the total size
must be known in advance).
But for an SQL-based server like our IOT server, it already helps a lot if
contact images are NOT loaded as part of the syncset loading, but only when
actually needed.

This is JFYI - for the problem at hand, the READ() script solution is surely a
clean and efficient way to go.

Best Regards,

Lukas


On Jul 22, 2011, at 9:37 , Patrick Ohly wrote:

 Hello!
 
 I'm currently working on https://bugs.meego.com/show_bug.cgi?id=19661:
 like N900/Maemo 5 before, MeeGo apps prefer to store URIs of local photo
 files in the PIM storage instead of storing the photo data in EDS
 (better for performance).
 
 When such a contact is synced to a peer, the photo data must be included
 in the vCard. Ove solved that in his N900 port by inlining the data when
 reading from EDS, before handing the data to SyncEvolution/Synthesis.
 This has the downside that the data must be loaded in all cases,
 including those where it is not needed (slow sync comparison of the
 other properties in server mode) and remains in memory much longer.
 
 I'd like to propose a more efficient solution that'll work with all
 backends. Ove, if this goes in, then you might be able to remove the
 special handling of photos in your port.
 
 The idea is that a) the data model gets extended to allow both URI and
 data in PHOTO and b) a file URI gets replaced by the actual file content
 right before sending (but not sooner).
 
 Lukas, can you review the libsynthesis changes? See below.  I pushed to
 the bmc19661 branch in meego.gitorious.org. Some other fixes are also
 included.
 
 ---
 $ git log --reverse -p 92d2f367..bmc19661
 
 commit 01c6ff4f7136d2c72b520818ee1ba89dc53c71f0
 Author: Patrick Ohly patrick.o...@intel.com
 Date:   Fri Jul 22 08:12:05 2011 +0200
 
SMLTK: fixed g++ 4.6 compiler warning
 
g++ 4.6 warns that the rc variable is getting assigned but never
used. smlEndEvaluation() must have been meant to return that error
code instead of always returning SML_ERR_OK - fixed.
 
 diff --git a/src/syncml_tk/src/sml/mgr/all/mgrcmdbuilder.c 
 b/src/syncml_tk/src/sml/mgr/all/mgrcmdbuilder.c
 index ae040a4..601b530 100755
 --- a/src/syncml_tk/src/sml/mgr/all/mgrcmdbuilder.c
 +++ b/src/syncml_tk/src/sml/mgr/all/mgrcmdbuilder.c
 @@ -698,7 +698,7 @@ SML_API Ret_t smlEndEvaluation(InstanceID_t id, MemSize_t 
 *freemem)
 return SML_ERR_WRONG_USAGE;
 
  rc = xltEndEvaluation(id, (XltEncoderPtr_t)(pInstanceInfo->encoderState), 
 freemem);
 -  return SML_ERR_OK;
 +  return rc;
 }
 
 #endif
 
 commit 8d5cce896dcc5dba028d1cfa18f08e31adcc6e73
 Author: Patrick Ohly patrick.o...@intel.com
 Date:   Fri Jul 22 08:36:22 2011 +0200
 
blob fields: avoid binary encoding if possible
 
This change is meant for the PHOTO value, which can contain both
binary data and plain text URIs. Other binary data fields might also
benefit when their content turns out to be plain text (shorter
encoding).
 
The change is that base64 encoding is not enforced if all characters
are ASCII and printable. That allows special characters like colon,
comma, and semicolon to appear unchanged in the content.
 
Regardless whether the check succeeds, the result is guaranteed to
contain only ASCII characters, either because it only contains those
to start with or because of the base64 encoding.
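
For illustration, the check described in this commit message boils down to 
roughly the following standalone sketch - not the actual patch, which works on 
the field value inside TMimeDirProfileHandler; the helper name here is made up:

  #include <cctype>   // isprint(); isascii() is POSIX, from <ctype.h>/<cctype>
  #include <string>

  // true if the value consists of printable ASCII only and can therefore be
  // emitted as-is; false if it contains anything that needs base64 encoding
  static bool canSkipBase64(const std::string &value)
  {
    for (std::string::size_type i = 0; i < value.size(); i++) {
      unsigned char c = static_cast<unsigned char>(value[i]);
      if (!isascii(c) || !isprint(c))
        return false; // non-ASCII or non-printable byte found
    }
    return true; // only printable ASCII, no encoding needed
  }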
 
 diff --git a/src/sysync/mimedirprofile.cpp b/src/sysync/mimedirprofile.cpp
 index 4105d03..1499876 100644
 --- a/src/sysync/mimedirprofile.cpp
 +++ b/src/sysync/mimedirprofile.cpp
 @@ -23,6 +23,7 @@
 
 #include "syncagent.h"
 
 +#include <ctype.h>
 
 using namespace sysync;
 
 

Re: [SyncEvolution] uri + binary PHOTO values (was: Re: [os-libsynthesis] field merging + parameters)

2011-07-22 Thread Lukas Zeller
Hi Patrick,

On Jul 22, 2011, at 15:00 , Patrick Ohly wrote:

 I couldn't think of cases where avoiding binary encoding might have a
 negative effect, but you are right, it is not according to spec and it
 makes sense to not change existing behavior.

Although I guess many clients would bail out when encountering a PHOTO (non-URI 
type) with non-B64 content, I can't imagine photo data possibly being ASCII, so I 
admit this cannot happen.

But that's now probably *my* paranoia - "if it is called B64, it should do B64" - 
or some day I will spend hours debugging a case where I have long forgotten 
about this special feature, wondering why the hell my test data "foobar" is 
not coming out as B64 :-)

Best Regards,

Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch

___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] minimizing WebDAV traffic

2011-06-23 Thread Lukas Zeller
On Jun 22, 2011, at 22:16 , Patrick Ohly wrote:

 I have done some work on minimizing the number of requests sent during a
 WebDAV and/or CalDAV sync. One major improvement is the use of CTag
 (https://trac.calendarserver.org/browser/CalendarServer/trunk/doc/Extensions/caldav-ctag.txt),
  an extensions supported by Google, Yahoo and Apple calendar servers.

Oh, that's interesting! I speculated about sync and the iCloud a few weeks ago 
(http://www.hardturm.ch/luz/2011/06/icloud-sync-speculation/), based on an IETF 
draft proposal from early 2011, co-authored by Cyrus Daboo at Apple: 
http://datatracker.ietf.org/doc/draft-daboo-webdav-sync.

Cyrus Daboo is the author of the Mulberry IMAP client (now open source, 
http://www.mulberrymail.com/download.shtml), who joined Apple some 3 or 4 years 
ago.

Now I see that the CTag extension (from 2007) is also his work. Which makes - 
at least for me - an interesting picture, showing that Apple has been working for 
some years now to make CalDAV, and with the new proposal WebDAV in general, a 
sync-enabled infrastructure. Although iCloud itself is anything but open, it 
will certainly fuel the sync topic in general in the next few months. And if 
it is based on syncable WebDAV, as I believe it is, that would be an important 
step towards sync based on open standards!

I hope such general thoughts on sync are not considered OT for the SyncEvolution 
list :-)

Lukas
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] Sony Ericsson K800 umlaute

2011-02-09 Thread Lukas Zeller
Hello,

On Feb 7, 2011, at 12:48 , Patrick Ohly wrote:

 A user contacted me regarding special characters (German umlaute)
 getting mangled when synchronizing with a Sony Ericsson K800. It turned
 out that the phone, in violation of the SyncML spec, uses ISO-8859-1
 encoding of its vCard data. Worse, it does so without adding the CHARSET
 property.
 
 There is a workaround in libsynthesis. It is possible to override the
 encoding for a specific phone. The attached file can be dropped into
 ~/.config/syncevolution-xml/remoterules/server/00_sony_ericsson.xml and
 the default charset for all Sony Ericsson phones will be ISO-8859-1.
 
 This is not a proper solution, though:
 - not all Sony Ericsson phones are broken like that
 - of those which are, some might use a different local charset

The workaround was built into libsynthesis exactly for SonyEricsson phones, and 
all I encountered used ISO-8859-1 as their local charset.

In fact, for quite a while this was even hard-coded into the engine to treat 
untagged incoming chars as ISO-8859-1 because these SE phones were the only 
real world case with that problem.

Only later, other phones started to send UTF-8 without saying so in CHARSET 
(which is probably acceptable as the SyncML environment is specified as UTF-8), 
so the default was changed to UTF-8 and inputcharset and outputcharset were 
added to allow device-specific behaviour.
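
Roughly, such a device-specific override lives in a remote rule; a sketch only 
(the matching elements are from memory, the shipped remote rule files such as 
Patrick's 00_sony_ericsson.xml are the authoritative examples):

  <remoterule name="sony-ericsson-legacy">
    <!-- match on the manufacturer reported in the devInf -->
    <manufacturer>Sony Ericsson</manufacturer>
    <!-- treat untagged incoming data as Latin-1 and send Latin-1 back -->
    <inputcharset>ISO-8859-1</inputcharset>
    <outputcharset>ISO-8859-1</outputcharset>
  </remoterule>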

Of course there might be Chinese, Greek or Cyrillic versions of legacy SE 
phones around. But maybe these can be distinguished by devInf?

Best Regards,

Lukas


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch

___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] A rather general question about synchronisation (as in syncevolution) compared with what rsync (for example) does

2011-01-09 Thread Lukas Zeller
On Jan 9, 2011, at 18:17 , Chris G wrote:

 Sync peers all have slightly differing ideas of the concept calendar 
 item or contact, so the real challenge is not just to find which side has 
 most recent data, but especially which data is available at one side at all. 
 If you just choose most recent, you'll eventually lose all data but what 
 the dumbest (most limited) sync participant can store.
 
 Just imagine a mobile which stores name and tel numbers, but no postal 
 address. If the most recent update (say, to a telephone number) comes from 
 such a device, there's a whole lot of field by field merging magic needed to 
 avoid losing the postal address a more capable device might have. 
 
 I'm designing (in my mind) the way the data is stored as well as the
 synchronisation system.
 
 If the data is stored as one file per record - e.g. each address/contacts
 entry is a separate file then I see no major issue.  *Both* ends store
 all the data, name, address, phone number, E-Mail and whatever.  Then
 an rsync type approach of always keeping the latest one will work pretty
 well - the only case it won't handle is if both ends are changed between
 synchronisation runs.

I see. However this is more a file-based approach to database replication than 
to PDA sync.

You postulate that all peers need to understand one common format for each data 
type. This is something that I've never seen happen in reality. Not even plain 
text is always exchangeable without conversions (line ends, character set), let 
alone more complex structured data.
So while it is certainly desirable to reduce the conversion needs as much as 
possible by using data exchange formats that are as universally usable as 
possible, a sync architecture IMHO always needs to deal with conversion.

 By the way, no, I can't imagine a mobile which stores only names and
 telephone numbers!  :-)

You're probably too young, then :-) These things have existed and I have some 
lying around here.

Seriously, you're probably right regarding what we today consider PDA data, 
i.e. contacts and calendar entries. We can more or less assume that modern 
back-ends for PDA data are able to store everything for contacts and calendars 
(oops, Outlook IMHO still has a fixed number of fields for telephone 
numbers...).

 Part of the point of synchronising two devices
 (for me anyway) is to have the same data on both, by which I mean *all*
 of the same data for a given person.

And that *all* will continue to expand. Today it's tel+email+postal+IM. But 
what about the most recent blog entry, geo coordinate, Twitter status, hi-res 
photo, GitHub commit? And the relations between all of this? Even simple meeting 
invitations are something today's backends often have a hard time getting right.

All of this will eventually become (if it hasn't already) part of what we today 
call a contact. But capabilities will be added to some systems sooner than to 
others, with the result that we'll have situations quite similar to my 
(admittedly a bit outdated) device-with-no-postal-address scenario, just on the 
next level.

I see no chance for a sync solution that relies on *the* format that can 
represent everything.

Lukas


Lukas Zeller, plan44.ch
l...@plan44.ch - www.plan44.ch

___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] libsynthesis and vformats

2011-01-08 Thread Lukas Zeller
Hello Chris,

On Jan 7, 2011, at 22:04 , Chris Frey wrote:

 I notice that there seems to be a difference in development and licensing
 strategies between OpenSync and libsynthesis.  Anyone can contribute
 to OpenSync and not have to sign anything, while it appears that
 contributing to libsynthesis requires signing a contributor agreement.
 This is off-putting to some developers.

The thing is, there's simply no money except our own (2 people) funding our 
work on libsynthesis, so we need the closed derived work (products we sell) to 
keep it going. That's why we have the contributor agreement. Without it, we 
cannot use contributions to libsynthesis ourselves, which would essentially 
split the project into an internal and a public version. We'll not do that with 
the version we provide the infrastructure for, but of course anyone who feels 
too restrained by a contributor agreement could start maintaining a 
separate fork.

 I have some experience wrestling with vformat.c and incorporating it
 into the Barry plugin, as well as into the Barry library itself now.
 This is wasteful duplication, and it would be nice to see vformat
 support available generically.
 
 Are you interested in any help with this that does not involve signing
 a contributor agreement?

The above explanation holds true for the classic libsynthesis. I can well 
imagine (though I cannot guarantee anything at this time) that a separate vformat 
library could be handled differently. Unless it means directly killing my own 
income, I'm always open for open :-)

Lukas
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] [Opensync-devel] libsynthesis and vformats

2011-01-08 Thread Lukas Zeller
Hello,

On Jan 8, 2011, at 14:12 , Emanoil Kotsev wrote:

 In accordance to the message below I have a feeling that a lot in the concept 
 of the opensync engine is similar to what I have researched in my studies in 
 AI called autonomous agents or agent based architecture, though it's not 
 exactly the same... opensync is somehow like a prototype for a coordinator 
 (the hub). So if we were to borrow some wisdom from OAA (open agent 
 architecture) we were to make the handling of the object types more 
 autonomous or take the sync and merge functionality outside the engine (like 
 default plugins filters or similar). My idea is that an obj can ask who can 
 merge and get answer from the merger that will take over the job ...

Note that this is much closer to how libsynthesis actually works than it might 
seem. Of course it is now all integrated into one engine, but the actual sync 
and merge is NOT done on vformat, but on abstracted data items. With some 
refactoring, sync and merge could be separated from formatting and talking to a 
peer.

Tight integration is needed between the vformat converters and the information 
an engine might have about the current peer, because vformats (especially the 
old, but widespread ones) are too weakly specified and often too sloppily 
implemented to be handled without quite a bit of context about the 
sender/recipient. There's no such thing as a converter that can be fed only a 
vformat and produce sensible, normalized output (or vice versa).

Best Regards,

Lukas
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] [Opensync-devel] OpenSync: fragmentation is harmful

2011-01-05 Thread Lukas Zeller
Hello all,

Coincidentally right now I am looking into what could be the long-term future 
of my efforts that went into libsynthesis, now that I am no longer with 
Synthesis, but again independent and free :-)

I very much welcome efforts to join forces to make sync better, as I must admit 
that despite all efforts in the last 10 years, interoperable PIM sync is still 
only a dream for most users. The only solutions that work for non-geeks who are 
not backed by corporate IT support are the (more or less) proprietary 
end-to-end solutions, which are anything but interoperable.

At the same time I am convinced that standardized sync (far beyond PIM) is a 
concept that is of paramount importance for cloud computing. Selling our souls 
(ahem, data) to single points of (technical, commercial, political) failure 
such as Google, Facebook or even Dropbox can't be a long-term solution. I see 
these as the CompuServes and Datex-P's of today, important to get things 
started. But if cloud computing is to be more than hype, it will need an open, 
reliable, generic, pragmatic sync mechanism eventually, just as inter-computer 
data exchange needed TCP/IP to take off.

So while I'm completely a SyncML guy as far as my actual work (libsynthesis) is 
concerned, the thing that brought me to SyncML in 2000 was the hope for truly 
interoperable sync, and not vice versa. Today, libsynthesis is certainly useful 
for doing SyncML, which will remain a part of the picture for some years, if only 
to support legacy devices. But I'm totally open to ideas that go beyond SyncML, 
and probably to refactoring libsynthesis to make parts of it more generally 
usable (vXXX formatting in particular) for other protocols.

What I'm not sure about at the moment, at the concept level, is whether unifying 
the currently available (IMHO all more or less legacy) sync protocols directly 
into a single engine is really the way to go. This thought feels a bit like 
creating the super-legacy engine. Maybe we need a structure where a really 
next-gen sync mechanism forms the core, and the OpenSyncs, libsynthesises and 
SyncEvolutions are demoted to plugins on an outer ring of that architecture 
to provide legacy sync.

I mean, even if we had perfect integration of SyncML, CalDAV, ActiveSync etc. 
today, I feel it would still not cover the everyday sync needs I have today, let 
alone in the future. With the explosion of endpoints (devices) on one side and 
data sources (services, databases) on the other, I doubt that point-to-point 
sync has a bright future. I have the impression that what we need could be more 
similar to git than to SyncML.

This is of course vague thinking and far from anything realizable in short 
term. But when talking about combining efforts, I think we need also to talk 
about the big context for all this.

Down from that meta level, two comments to messages in this thread:

On Jan 4, 2011, at 17:23 , Georg C. F. Greve wrote:

 SyncML has had interoperability issues due to its loose definition.
 Vendors also got stuck supporting the same old data formats and devices,
 instead of adapting to more modern needs like iCalendar 2.0.
 
 Both points are true. I would also say that SyncML has the conceptual flaw of 
 assuming dumber devices than todays devices actually are.

The devices might be faster and have more memory, but unfortunately some 
backends are still as dumb as ever regarding *the* (IMHO) single most important 
feature for smooth sync, especially when multi-point sync comes into the 
picture: a creator-assigned, 100% persistent UID.

SyncML tried to work around this, and most of the complexity and also of the 
reliability issues in SyncML come from the fact that identity has to be 
*derived* by comparing payload. In consequence, this shifted responsibility 
for success or failure onto the loosely specified vXXX formats, which were 
neither designed for nor reliable enough for that (with the exception maybe of 
iCalendar 2.0, which most implementations don't have).

There's also a conceptual contradiction in this, because SyncML on one hand 
requires identity to be derived from the payload and on the other hand demands 
a great deal of flexibility in handling that payload (very dumb endpoints with 
very few fields vs. sophisticated endpoints with many details).
IMHO it's an either-or: either identity is derived from the payload (SHA checksums 
like in git or Dropbox), which implies that all peers need to store the entire 
payload 1:1, or identity is a (G)UID assigned by the creator of a record and 
guaranteed to remain unchanged over its entire lifetime - then peers might be 
allowed to store only abstractions or subsets of the actual data.

SyncML tried to do both at the same time - the price paid is the ever-fragile 
slow sync with its potential for a great duplicate mess.


On Jan 4, 2011, at 11:36 , Daniel Gollub wrote:

 I'm still open to adapt OpenSync and it's plugin to make more use of common 
 code - e.g. vcard handling of libsynthesis, maybe even 

Re: [SyncEvolution] empty description set to summary (was: Re: SyncEvolution 1.1.1 release candidate)

2010-12-22 Thread Lukas Zeller
Hello,

On Dec 22, 2010, at 9:41 , Patrick Ohly wrote:

 =if (DESCRIPTION==EMPTY) DESCRIPTION=SUMMARY;
 
 I'm not sure what exactly the motivation for that code was. I copied it
 from the Synthesis reference config. I could imagine that it was added
 because some peers depend on having a description. I can ask Lukas.

Exactly. Some old devices (Nokia or Ericsson, I don't remember exactly) had 
a single field for description and no summary field. Normal vCalendars on the 
other hand often have only a summary. So that's why this was added. Note that 
there's also the opposite when receiving (in the incomingscript):

 // eliminate description that is the same as summary
 if (DESCRIPTION==SUMMARY) DESCRIPTION=EMPTY;

This avoids storing a description that is just a duplicate of the summary.

Note that duplicating the SUMMARY into an empty DESCRIPTION could be made 
conditional using the ISAVAILABLE() function, which returns TRUE only if the 
peer explicitly lists a field as supported in its CTCap (it returns FALSE when it 
is not supported, and EMPTY if the peer does not send CTCap, meaning it is 
unknown whether the field is supported or not):

  if (ISAVAILABLE(SUMMARY)!=TRUE && DESCRIPTION==EMPTY) DESCRIPTION=SUMMARY;

This would do the duplication only if it is not known if the peer supports 
SUMMARY.
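
Both snippets live in the scripts of the datatype definition in the XML config; 
schematically it looks like this (a sketch only - the datatype name and the 
surrounding elements are abbreviated, and the exact script each line sits in 
may differ in the stock config):

  <datatype name="vcalendar10" ...>
    ...
    <incomingscript><![CDATA[
      // eliminate a description that merely duplicates the summary
      if (DESCRIPTION==SUMMARY) DESCRIPTION=EMPTY;
    ]]></incomingscript>
    <outgoingscript><![CDATA[
      // duplicate SUMMARY into DESCRIPTION only if peer support is unknown
      if (ISAVAILABLE(SUMMARY)!=TRUE && DESCRIPTION==EMPTY) DESCRIPTION=SUMMARY;
    ]]></outgoingscript>
    ...
  </datatype>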

Best Regards,

Lukas Zeller
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] [os-libsynthesis] sending photo to Nokia phone (N97 mini): wrong CTCap

2010-08-30 Thread Lukas Zeller
Hi Patrick,

On Aug 30, 2010, at 10:43 , Patrick Ohly wrote:

 Speaking of CtCap, do you happen to know what Nokia's involvement was in
 the definition of that part of the spec? Or speaking more generally,
 what was the desired usage of CtCap?
 
 I've just been in another discussion around that, where the Synthesis
 use of CtCap to determine unsupported properties was questioned.
 
 One interpretation of CtCap is limitations for some properties. In
 that interpretation, unlisted properties might still be supported.
 
 The other interpretation is that CtCap has to be complete and accurate,
 and thus anything not covered by it (value too long, unknown property)
 is not stored by the device. This is the interpretation used by
 Synthesis. FWIW, it sounds more plausible to me.

The real world devices that most prominently and strictly supported our 
interpretation were Nokia's.

We had a lot of trouble getting all data out of them without entirely 
omitting CTCap in the beginning, until we found that not only did all properties 
need to be listed (that was the case already in very early versions of our 
server), but also the complete list of possible property parameters. For 
example, a Nokia phone would not send any telephone number unless the CTCap had 
not only TEL but also the possible property parameters WORK, HOME, 
CELL...

And because their clients were for a long time pretty much the only ones 
looking at CTCap at all apart from ours, I'd at least say that this 
interpretation is a de facto standard.
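
To make that concrete, the relevant DevInf 1.2 fragment would have to look 
roughly like this for such a phone (a sketch from memory, not copied from a real 
session log; the exact parameter representation varies between devices):

  <CTCap>
    <CTType>text/x-vcard</CTType>
    <VerCT>2.1</VerCT>
    <Property>
      <PropName>TEL</PropName>
      <PropParam>
        <ParamName>TYPE</ParamName>
        <ValEnum>WORK</ValEnum>
        <ValEnum>HOME</ValEnum>
        <ValEnum>CELL</ValEnum>
      </PropParam>
    </Property>
    <!-- ...and similarly complete entries for every other property -->
  </CTCap>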

But I have no insight into the involvement of Nokia in that part of the specs.

 But is there anything in the standard which supports one or the other
 interpretation?

If there is, I am not aware of it.

 Or perhaps it was part of meeting minutes?

Maybe, I don't know.

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] [os-libsynthesis] sending photo to Nokia phone (N97 mini): wrong CTCap

2010-08-26 Thread Lukas Zeller
Hi Patrick,

it's a known bug in many Nokia phones that they have a maxSize of 255 on all 
fields, including the PHOTO. 

And yes, there is a remoterule option called ignoredevinfmaxsize for that. It 
applies to all fields; however, I agree that those constant 255 maxSizes are 
probably meaningless anyway.
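
In a remote rule that would look roughly like this (a sketch only; check the 
config reference for the exact matching elements and the option's spelling):

  <remoterule name="nokia-photo-maxsize">
    <manufacturer>Nokia</manufacturer>
    <!-- ignore the bogus constant maxSize the phone reports in its CTCap -->
    <ignoredevinfmaxsize>yes</ignoredevinfmaxsize>
  </remoterule>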

Lukas

On Aug 26, 2010, at 17:24 , Patrick Ohly wrote:

 Hello!
 
 I'm investigating why a photo sent to a Nokia device does not show
 properly on the device. It was originally reported for a 5800
 (http://bugs.meego.com/show_bug.cgi?id=5860); I can also reproduce it
 with a N97 mini.
 
 One problem is that the device lies about its capabilities. CTCap
 contains 256 as maximum length for PHOTO. When encoding, the Synthesis
 library dutifully obliges and truncates the field.
 
 The device clearly is capable of handling larger photos - the photo was
 originally attached to the contact *on* the device itself and then
 copied to the server.
 
 Is there a remote rule that I can use to ignore the length limitation of
 the PHOTO property? Should I ignore the whole CTCap, as it is clearly
 not based on reality? I think there is an option for that, but won't
 that have undesirable effects like not preserving local extensions that
 are really not supported by the phone?
 
 Another problem might be the TYPE. Evolution doesn't set JPEG by itself.
 We may have to add that to the outgoing PHOTO.
 
 -- 
 Best Regards, Patrick Ohly
 
 The content of this message is my personal opinion only and although
 I am an employee of Intel, the statements I make here in no way
 represent Intel's position on the issue, nor am I authorized to speak
 on behalf of Intel on this matter.
 
 
 
 ___
 os-libsynthesis mailing list
 os-libsynthe...@synthesis.ch
 http://lists.synthesis.ch/mailman/listinfo/os-libsynthesis

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] How to select the evolution target calendar

2010-06-21 Thread Lukas Zeller
Hello Guy,

That 503 error sounds familiar to me. I don't have detailed insight into OCS, but 
we've done a lot of testing together with the SyncML guys at Oracle. We've seen 
that 503 usually when a previous sync with the same user or device has not 
successfully completed and a session was still pending somehow. Just leaving 
the server alone for 5 minutes and trying again has usually worked.

This situation easily happens when testing new stuff, but in normal operation 
this is rare, so end users don't run into it too often. Still, some do, and we 
have a FAQ entry on the Synthesis site for that: 
http://www.synthesis.ch/faq.php?lang=e (BTW: although that FAQ is for our 
commercial products, it might be helpful regarding compatibility issues with 
libsynthesis based clients as well).

Best Regards,

Lukas Zeller

On Jun 17, 2010, at 22:18 , Guy Stalnaker wrote:

 Patrick,
 
 On 06/17/2010 01:54 AM, Patrick Ohly wrote:
 On Wed, 2010-06-16 at 23:23 +0100, Guy Stalnaker wrote:
 What is meant by Oracle branch?
 
 At one point there was a branch in our source code repository for
 Oracle. It had a new template for Oracle. This has been rolled into the
 main branch and is in the latest releases, so you can use --configure
 --sync-property syncurl=my url  ... oracle.
 
 Thank you very much for your reply. The log of my latest attempt is attached. 
  Here is the situation.
 
 Server: Oracle Calendar OCAS v10.1.2.3.4
 Client: SyncEvolution 1.0beta2a
 Host: Ubuntu Lucid 10.04 32-bit
 
 I have both manually created a profile and (following your explanation) I've 
 used the built-in oracle template. Both result in the exact same 503 error.  
 As you can see from the log, credential authentication is successful.  As is 
 the query for the Sync server info:
 
 quote
 [2010-06-17 14:07:45.819] 'DevInf_Analyze' - Analyzing remote
 devInf [--][++] [-end] [-enclosing]
 # [2010-06-17 14:07:45.820] Device ID='OracleSyncServer', Type='server',
 Model=''
 # [2010-06-17 14:07:45.820] Manufacturer='Oracle', OEM='Oracle'
 # [2010-06-17 14:07:45.820] Softwarevers='10.1.2',
 Firmwarevers='10.1.2', Hardwarevers='10.1.2'
 # [2010-06-17 14:07:45.820] SyncML Version: SyncML/1.2
 # [2010-06-17 14:07:45.820] SyncML capability flags: wantsNOC=Yes,
 canHandleUTC=Yes, supportsLargeObjs=Yes
 # [2010-06-17 14:07:45.820] Detected Oracle OCS Server - suppress
 dynamic X- TYPE params, and filter
 # [2010-06-17 14:07:45.820] OCS with device that
 has not guaranteed
 unique ID - use user+devid hash for Source LocURI
 /quote
 
 SyncEvolution both successfully authenticates a session and it successfully 
 queries and parses the response for devInf. The failure comes next.
 
 quote
 [2010-06-17 14:07:45.827] 'issue' - issuing command,
 Cmd=Status [--][++] [-end] [-enclosing]
 # [2010-06-17 14:07:45.827] Status Code 200 issued for Cmd=Results, (incoming 
 MsgID=1, CmdID=4)
 # [2010-06-17 14:07:45.827] - SourceRef (remoteID) = './devinf12'
 # [2010-06-17 14:07:45.827] Status: issued as (outgoing MsgID=2, CmdID=2), 
 not waiting for status
 # [2010-06-17 14:07:45.827] Deleted command 'Status' (outgoing MsgID=2, 
 CmdID=2)
 # [2010-06-17 14:07:45.827] Outgoing Message size is now 384 bytes
 –[2010-06-17 14:07:45.827] End of 'issue' [-top] [-enclosing]
 o [2010-06-17 14:07:45.828] Deleted command 'Results' (incoming MsgID=1, 
 CmdID=4)
 –[2010-06-17 14:07:45.828] End of 'processCmd' [-top] [-enclosing]
 + [2010-06-17 14:07:45.828] Created command 'Status' (incoming)
 + +
 – [2010-06-17 14:07:45.828] 'processStatus' - Processing incoming Status 
 [--][++] [-end] [-enclosing]
 o [2010-06-17 14:07:45.828] Started processing Command 'Status' (incoming 
 MsgID=1, CmdID=5)
 o [2010-06-17 14:07:45.828] WARNING: RECEIVED NON-OK STATUS 503 for command 
 'Alert' (outgoing MsgID=1, CmdID=3)
 o [2010-06-17 14:07:45.828] - TargetRef (remoteID) = './calendar/tasks'
 o [2010-06-17 14:07:45.828] - SourceRef (localID) = './todo'
 o [2010-06-17 14:07:45.828] - Item data =
 o [2010-06-17 14:07:45.828] Found matching command 'Alert' for Status - 
 Synthesis SyncML Engine 3.4.0.5 Log (p15 of 19)
 o +
 /quote
 
 The abort happens soon after.
 
 You'll know why this Command 'Alert' is being sent to the server? What is 
 the expected server response?
 
 Using Oracle-supplied steps I created a sync profile for SyncEvolution (much 
 like the article about SE and Beehive mentions). I can see that the Sync 
 server records the connection in data for my user account, but that data is 
 minimal as the sync session is terminated after the 503 error (I've included 
 the full directory listing for my data so you can see what the various files 
 look like on the Oracle side of things; the 364A7A16 set are from use of a 
 Palm Synthesis; the IMEI is a Blackberry Nexthaus client; the NL-283134 is 
 NotifyLink Enterprise Server):
 
 quote
 [2:06pm n...@wisccal-sync] ~ sudo ls -l 
 ocas10g/linkdb/calserv/jstalnak,user=/
 wisccal-sync Password:
 total 678
 -rw-r-   1 oracle   dba

Re: [SyncEvolution] deleted contacts come back, sync topology

2010-06-01 Thread Lukas Zeller
Hi Patrick, Hi Anssi,

On May 30, 2010, at 21:43 , Patrick Ohly wrote:

 On Sun, 2010-05-30 at 18:46 +0100, Anssi Saari wrote:
 http://www.modeemi.fi/~as/syncevolution-log_laptop_to_phone_2010-05-28-17-36.html.gz

I found something in this log that seems to be the real cause behind all this 
weird behaviour:

At the places where these questionable deletes are generated, the server also 
calls adjustLocalIDforSize() on the localID of these items to create a GUID 
that does not exceed MaxGUID:

   • [2010-05-28 17:36:18.384] translated 
 realLocalID='20100518t125216z-20287-1000-1...@r500.localdomain-rid' to 
 tempLocalID='#9'

It gets tempLocalID #9 here. Fine so far, but for the next delete it gets #9 
AGAIN for the next (different) realLocalID:

   • [2010-05-28 17:36:18.385] translated 
 realLocalID='20100518t125405z-11480-1000-1-...@r500.localdomain-rid' to 
 tempLocalID='#9'

Now TLocalEngineDS::adjustLocalIDforSize() is implemented to simply prefix 
the current size of the container plus 1 with a '#'. 

The only possible way to make this create the same key multiple times would be 
that the STL map container already contains 8 items, one of them ALREADY 
keyed #9. The only way *this* can happen is by a malfunctioning 
SaveAdminData/UpdateMapItem in a previous session or 
LoadAdminData/ReadNextMapItem in the current session, such that somehow map 
entries get messed up. Messed-up map entries can also explain the deletes with no 
localID/remoteID.
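
As a pure illustration of the collision (not the real TLocalEngineDS code), the 
effect is the same as in this little program: eight entries in the map, one of 
them already keyed "#9", and the size-plus-one scheme hands out "#9" again:

  #include <cstdio>
  #include <map>
  #include <string>

  // same scheme as described above: '#' + (current container size + 1)
  static std::string makeTempID(const std::map<std::string, std::string> &ids)
  {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "#%zu", ids.size() + 1);
    return buf;
  }

  int main()
  {
    std::map<std::string, std::string> ids;
    for (int i = 1; i <= 7; i++)                     // 7 well-formed entries...
      ids["item" + std::to_string(i)] = "...";
    ids["#9"] = "left over from messed-up admin data"; // ...plus a stray "#9"
    std::printf("%s\n", makeTempID(ids).c_str());    // prints "#9" again -> clash
    return 0;
  }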

One possible reason for this could be when a plugin does not properly save/load 
the ident field in MapIDType, or does not allow multiple map entries with the 
same localID, but different ident.

Running syncs with <debugenable option="exotic"/> would log more details 
about the map entries saved and retrieved and thus probably reveal what's going 
wrong.

Best Regards,

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] dynamic discovery of sync service in (W)LAN

2010-04-07 Thread Lukas Zeller
Hi Patrick,

that's an interesting topic! Just a few comments:

On Apr 3, 2010, at 17:34 , Patrick Ohly wrote:

 [...] For synchronization via wireless, there are two
 options:
  * ad-hoc direct between device and PC: would work without any
additional hardware, but uncommon
  * devices log into the same W-LAN: that's how devices typically
connect to the Internet, but they won't know about each other,
so direct sync without manual configuration becomes harder
 
 One possible solution for the second problem is DNS Service Discovery,
 as used by Apple Bonjour and Avahi on Linux: http://www.dns-sd.org/

Just to say - I took a tiny first step in that direction with our iPhone 
apps, as Bonjour is readily available there. The server side announces itself 
via Bonjour, and the URL entry field in the client has a Bonjour browser, so in 
an iPhone-to-iPhone sync the user can just select the other iPhone from a list.

Of course, it's a long way to go from here to real select-and-use. Due to the 
lack of a standardized SyncML service description, I have to announce the 
SyncML server as a web server (_http._tcp), and the browser part has to show 
any web server (which often includes config webpages of network printers etc.). 
And it's only the URL.

 I checked, so far no-one has bothered to add OMA DS/SyncML to the list of
 services which can be discovered like that. There are several
 proprietary sync services listed, though.
 
 There are several data items which would have to be discoverable:
  * IP address of host
  * port
  * path (for HTTP POST)
  * unique SyncML device ID (to distinguish multiple servers)
  * information about URIs
 
 IP address, port and path is something also used by existing services.
 The SyncML specific parts are those that we have to define in more
 detail. For example, how do we describe that URI addressbook is for
 contacts? How do we describe multiple address books, for example one
 for private use and company?
 
 This kind of self-description is not part of SyncML, leading to the
 current complexity involved in configuring clients and servers
 correctly.
 
 Just food for thought... I'm not ready to suggest a specific solution,
 but wanted to write down my thoughts so far.

I wonder if it might be possible for us (Intel, Nokia, Synthesis :-) to 
define such a spec for Bonjour-based SyncML service discovery and try to have 
it standardized? Right now, I don't have the slightest idea whether this could be 
feasible, what standardisation body we'd need to address, and whether that would 
be something only OMA itself can do or could be added from outside etc.

But as the complex config IMHO is the single most important reason for the 
niche existence SyncML still has after all these years, it would make a lot of 
sense to join any forces available. With the open key=value structure of the 
dns-sd I see no reason prohibiting inclusion of the complete config required 
for successful sync.

Given that there's also a wide-area version of Bonjour/dns-sd, that could be 
huge for non-local HTTP as well. I guess none of the internet SyncML service 
providers would hesitate a second to add some extra DNS records to help their 
users set up sync...
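
Purely as a thought experiment - the service type below is NOT registered and 
the TXT keys are invented for illustration only - an Avahi service file for 
such an announcement could look like:

  <?xml version="1.0" standalone='no'?>
  <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
  <service-group>
    <name replace-wildcards="yes">SyncML server on %h</name>
    <service>
      <type>_syncml._tcp</type>          <!-- hypothetical, not registered -->
      <port>9000</port>
      <txt-record>path=/sync</txt-record>
      <txt-record>devid=example-server-id</txt-record>
      <txt-record>contacts=./addressbook</txt-record>
      <txt-record>events=./calendar</txt-record>
      <txt-record>tasks=./calendar/tasks</txt-record>
    </service>
  </service-group>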

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] dynamic discovery of sync service in (W)LAN

2010-04-07 Thread Lukas Zeller
Hello Patrick,

On Apr 7, 2010, at 12:09 , Patrick Ohly wrote:

 Adding a new description is easy, in particular if it is proprietary,
 see the top of:
 http://www.dns-sd.org/ServiceTypes.html

Yes, I saw that. For proprietary stuff (a lot of well-known iPhone apps seem to 
appear in here) it's easy.

 We should definitely define something and register it officially as
 DNS-SD service type. If we pick a neutral name (not SyncML or OMA-DS),
 then I guess we can do it without OMA getting involved.

Good idea :-)

 But we should also drive a discussion with the OMA consortium to get this 
 ratified by
 them.
 
 Intel is a OMA member and I know who our representative is. I suggest we
 write up a proposal and then approach OMA. I can do both sometime next
 week, if you want.

That would be great!

The trickiest part is probably representing the datastore path such that a 
client can figure out which ones to use for which of the datatypes. MIME-type 
to path name mapping alone doesn't do the trick for all the 
events/tasks/combined/weird-ovi cases. We'd probably need an extra variations 
list, saying something like "VTODO" or "VEVENT" or "VTODO,VEVENT" to 
differentiate them.

Anyway, I'm open to discussions and helping to create a proposal.

Best Regards,

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] Syncevolution on Nokia N900 : Force SyncML 1.1 ?

2010-03-01 Thread Lukas Zeller
Hello,

On Mar 1, 2010, at 21:02 , Patrick Ohly wrote:

 So is there a way to force Syncevolution to us SyncML 1.1 ?
 
 Nothing straight-forward. Here's something a bit more complicated which
 might work. If it does, we can think about making it easier.
  * download

 http://git.moblin.org/cgit.cgi/syncevolution/plain/src/syncclient_sample_config.xml?id=syncevolution-0-9-2
  * put it onto the N900 in some directory, as normal user, using
the name syncclient_sample_config.xml
  * edit that file and insert
<defaultsyncmlversion>1.1</defaultsyncmlversion> somewhere after
<client type="plugin"> and before </client>
  * run the syncevolution --run --sync-property loglevel=4 'server
config name' command line in the directory where you edited the
file
  * if it doesn't work, check the session log (see syncevolution
--print-sessions) whether the defaultsyncmlversion part is in
the config logged there
 
 According to the Synthesis docs for that option, the engine should retry
 with a lower protocol version when the 1.2 default doesn't work. I'm not
 sure how that is meant to work exactly, because the Internal Server
 Error gives no indication at all about the root cause and thus should
 be treated as fatal error, with no retries. Perhaps it works if the
 server fails more gracefully.

Yes - every SyncML server (all versions) should respond with status 513 when it 
can't handle the SyncML version of a message sent by a client. libsynthesis 
also checks for 505, 501 and even 400 status codes in addition to 513, and 
performs a retry with the next lower SyncML version if any of these are 
returned in a SyncHdr status. Unfortunately, some servers just break the 
connection or return 500 HTTP-level errors.

But note that libsynthesis provides another way for clients to set the default 
SyncML version to be used with a server apart from 
<defaultsyncmlversion>1.1</defaultsyncmlversion>: In the settings profile, 
there is a key called syncmlvers which stores the SyncML version that should 
be used for the next sync (0=unknown, 1=1.0, 2=1.1, 3=1.2). This is initialized 
with 0 for a new profile, and automatically updated by libsynthesis after a 
successful message exchange with the server (such that the client will use the 
correct SyncML version for subsequent syncs without the need for retries).

This syncmlvers profile key could be exposed in the settings, or otherwise 
influenced by the application (SyncEvolution). In the synthesis PDA clients, 
this is exposed in the settings UI to allow forcing lower SyncML versions for 
servers that don't properly implement the 513 version check/fallback mechanism.
Another option is to decrement the SyncML version using syncmlvers 
automatically when detecting a server responding with errors on the HTTP or 
transport level - with the disadvantage that a real network problem while 
trying to connect to a DS 1.2 capable server can lead to the client using only 
an inferior (1.1 or 1.0) SyncML version with that server. So IMHO a UI setting 
is safer.

Best Regards,

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] Nokia N85 sync status

2010-02-19 Thread Lukas Zeller

Hello,

maybe I misread the patch (on the way home, iPhone...) but for the  
server, subdatastores should not be hidden. For the client, I agree: in  
a session using the superdatastore, the subdatastores should not be  
included in the devInf.


l...@synthesis.ch

On 19.02.2010, at 15:33, Patrick Ohly patrick.o...@intel.com wrote:


On Fr, 2010-02-19 at 12:47 +, Patrick Ohly wrote:

I can imagine that including stores which have not been alerted yet
might be useful, in case that the client sends an Alert for one  
store

first and later for another one, but sending all stores with the same
URI as above doesn't make sense to me.


Attached is a patch which might do the job of suppressing sub- 
datastores
in the DevInf. Untested, because I cannot reproduce the situation  
here.


Jussi, can you apply it and check whether the right thing (= one
datastore for calendar+todo) is sent?

Does it have any effect on the remaining problem with calendar updates
on the device?


--
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.

no-substores.patch

___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] Nokia N85 sync status

2010-02-18 Thread Lukas Zeller
Hello Patrick,

On Feb 18, 2010, at 17:53 , Patrick Ohly wrote:

 On Mi, 2010-02-17 at 17:29 +, Lukas Zeller wrote:
 The big problem is that the device (Nokia phone) usually asks for the
 server devInf BEFORE it sends it's own devInf, but the server can't
 detect what device it is talking to before receiving devInf.
 
 Asking for DevInf before sending its own would probably confuse even
 more devices, right?

Probably. Without checking details, IMHO the standard would allow delaying the 
Results for a Get, but that would imply that the init phase would be 
extended to two message exchanges. Given all the troubles we've seen with 
<Final/> state machine handling, it's likely to cause problems.

 Speaking of confused, I'm a bit confused about CTCap inside DevInf right
 now. In the SyncEvolution client - SyncEvolution server scenario, the
 client sends DevInf including CTCap to the server in its first message.
 The server sends DevInf in its reply, but without any CTCap. I don't
 have showctcapproperties in my config.
 
 I would expect to see the server's CTCap in the session. Any idea why it
 is not sent?

The only reason I can think of right now would be a message size that is too 
small for the devInf to fit. If that happens, CTCap is left out to make the devInf 
sendable, because Results can't be split across messages.

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] Automatic testing with phones

2009-11-23 Thread Lukas Zeller
Hello,

just some ideas from my experience with Nokias.

With the more advanced Nokias (I have an E90 here, which is S60 Ed3) you can 
have multiple OBEX profiles (with "PC Suite" selected, create a new profile, 
and it will even offer to copy the settings from "PC Suite"). The profile will 
have Bluetooth as data bearer and a host address of "PC Suite" - that's the 
string being matched with the SAN, so I guess (haven't tried) you could just 
change that to "TestSuite" or something and trigger it with that string.

If you can't create a second BT/OBEX profile, you could create an HTTP 
profile and trigger that using SMS-based SAN (if you have an SMS gateway that 
won't ruin you with SMS fees...). SMS-based SAN should work on any SyncML 
1.2 enabled Nokia; the older SMS-based SyncML 1.1 "message 0" trigger works with 
even more devices.

That's all S60 stuff however - how much of that works with S40 I don't know.

Best Regards,

Lukas Zeller


On Nov 23, 2009, at 13:26 , Chen Congwu wrote:

 Hello,
 I have just enabled syncing with Nokia 7210c (Symbian S40 5th editon), it
 works basically well with contacts(vcard 2.1), calendar(vcalendar),
 todos(vcalendar) and memo(text/plain) (Server alerted two-way sync).
 
 However I haven't conducted more extensive testing yet. I am asking ideas for
 automate the test process.
 At this scenario, SyncEvolution on Moblin acts as SyncML Server which inited
 the sync with a Nokia phone. The two client based client-test mostly will 
 not
 work here because a Nokia phone typically don't have mutliple profile support
 (There is only PC Suite for Nokia PC Suite usage). Therefore emulating two
 servers at PC side will likely not work.
 
 For example:
 Adding item on PC and SyncEvolution on PC init server alerted two-way sync 
 with phone, how can we reliably detect the item is really accepted at phone
 side?
 Sync the phone with another server will not work because the phone doesn't 
 support
 multiple servers.
 
 What I can think of is:
 a) Hacking on the Phone side so that it can work with multiple servers (or
 emulation?) Not sure wheather it works.
 b) Use Refresh-from-Server sync with the same server (This is very limited).
 
 Any ideas?
 -- 
 Regards,
 
 Chen Congwu
 Moblin China Development
 
 ___
 SyncEvolution mailing list
 SyncEvolution@syncevolution.org
 http://lists.syncevolution.org/listinfo/syncevolution

Lukas Zeller (l...@synthesis.ch)
- 
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevolution and oracle = transport failure

2009-10-12 Thread Lukas Zeller

Hi Andrzej,

On Oct 12, 2009, at 13:51 , Andrzej Wąsowski wrote:

I only use the calendar at the moment - I want to make it work  
first.  But my set up is essentially the same as yours.


Other users of my server also add a query limitiing the scope of  
synchronization.  I heard rumours that unlimited does not work, but  
I do not know if this is a client or a server limitation in their  
cases.


AFAIK Oracle servers just use a default limit (often quite tight, such  
as only 7 days into the future) when none is specified using the /dr(- 
x,y) syntax.


Note also that the Oracle server can be configured to reject any sync  
attempt from clients it does not know (i.e. for which no so-called  
profile exists on the server). This might be the problem - although  
according to Oracle, by default the server does not reject unknown  
devices.


If someone has admin access to the server, it might be worth trying to  
create a profile for SyncEvolution; probably the easiest way would be  
duplicating the Synthesis iPhone client profile (as it is a close  
relative of SyncEvolution) and adapting the model matching strings. See http://download.oracle.com/docs/cd/B25553_01/calendar.1012/b25485/ocas.htm#BABCEGID 
 for documentation - I have no experience myself as I never had admin  
access to an OCS installation. But I have done a lot of IOT with Oracle.


Best Regards,

Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevolution and oracle = transport failure

2009-10-12 Thread Lukas Zeller

Hello Andrzej,

On Oct 12, 2009, at 15:18 , Andrzej Wąsowski wrote:

If this is the reason, perhaps I could hack syncevolution to pretend  
it is an iPhone?


Yes, at least to see if it makes any difference. You could change the  
model and manufacturer settings in the XML config.


The iPhone client identifies itself as:

  Model: SySync Client iPhone Contacts+TodoZ
  Manufacturer : Synthesis AG

You might want to try the PocketPC client, which has been available much longer,  
first, as the profile for the iPhone client was added to OCS not so long ago,  
so it might not yet be known on an older installation:


  Model: SySync Client PocketPC PRO
  Manufacturer : Synthesis AG

Please note however that using these identifiers is not a solution,  
only a temporary hack and testing setup.


Best Regards,

Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] libsynthesis_srv

2009-09-25 Thread Lukas Zeller

Hi Patrick,

On Sep 25, 2009, at 13:49 , Patrick Ohly wrote:


Lukas, do you have an estimate for how much work it would be to make
these compile-time choices at runtime via if() instead of #ifdef?  
These

two methods are not mutually exclusive, to get more compact libraries
the #ifdef could be preserved in addition to the if(). It's probably  
not

nice, but could work like this: [...]


Very interesting idea! This would make a lot of sense for the  
libsynthesis case! The #ifdef stuff is not a problem, but some work is  
needed to make a TSyncAppBase derivative which can act both as a server  
AND as a client.



Enabled features would have to be the same in the client and server
engine for this to work.


Of course, but I see no reason why the options would be different for  
server and client on one particular platform.


Let me explore that in a bit more detail to see what implications it  
might have.


But basically, I think it would simplify a lot of things, so it's  
definitely worth trying!



Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] libsynthesis_srv

2009-09-25 Thread Lukas Zeller

Hi Patrick,

On Sep 25, 2009, at 16:29 , Patrick Ohly wrote:


Ah, progress. In other words, more questions... ;-)


:-)


How do I tell the server which URL to use and how to append (or not to
append) the ?sessionID=... suffix? The serverurl element is only  
valid

for clients.


You don't need to. The client will send it in the Target LocURI of the  
SyncHdr.
The engine takes it from there and adds the sessionID suffix (if  
enabled) to create a RespURI.


If you don't need the respURI, disable it using the new config  
directives (see commit message of  
4b9c739d0a01f20ae85da983a8feb989b8230c51 for details).



You said that OpenSession() accepts a sessionID created by the app. I
could use that when I want to create my own ID. If I don't do that,  
can

I query which session ID was generated by the engine?


Yes, you can query the session key for localSessionID to get it.
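
A sketch in the wrapper style of the snippet further down in this mail (the 
OpenSessionKey wrapper name is an assumption on my side; the underlying engine 
call is documented in the SDK headers):

  // open the session-level settings key and read the generated ID
  SharedKey sessionKey = m_engine.OpenSessionKey(session);
  std::string sessionID = m_engine.GetStrValue(sessionKey, "localSessionID");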

Profiles are gone in the server. How do I control the sync modes  
used by

the server for specific datastores? I'm using
   <plugin_module>[SDK_textdb]</plugin_module>
   <plugin_sessionauth>yes</plugin_sessionauth>
   <plugin_deviceadmin>yes</plugin_deviceadmin>
for the server and my own plugin for each datastore.


The client controls the sync modes. The server just executes what the  
client wants.
There are options to declare a datastore or a session read-only, which  
then prevents writing to the DB (see readonly and SETREADONLY() in  
the config reference).
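
As a rough sketch only (exact placement per the config reference), declaring a 
whole datastore read-only looks like this, while SETREADONLY() does the same 
per session from a script:

  <datastore name="contacts">
    <readonly>yes</readonly>  <!-- server will refuse to write to this DB -->
    ...
  </datastore>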



A related question: I was using the dbname in a target to find the
datastore name related to a progress event. The code looked like this
(with wrapper classes around the underlying handles):
  SharedKey targets = m_engine.OpenKeyByPath(profile, "targets");
  SharedKey target;
  target = m_engine.OpenSubkey(targets, progressInfo.targetID);
  std::string name = m_engine.GetStrValue(target, "dbname");

Is there a better way that works in both client and server?


Note that the server does not have progress events at this time at  
all. They could be added, but that requires some work, and would require  
re-architecting the progress events to some degree, as these are all global  
(engine level) now and would need to be made separate per session.



At the moment I can start a sync session, but SessionStep() returns an
error at some point. Need to look more closely into that before I can
say more...


Just let me know...

Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


[SyncEvolution] libsynthesis_srv

2009-09-24 Thread Lukas Zeller

Hello all,

On Aug 31, 2009, at 22:15 , Patrick Ohly wrote:


The biggest unknown factor for 1.0 are the necessary changes in
libsynthesis [to make a SyncML server version of it].
Lukas has reassured me that he has made good progress
towards those, but they have a business to run and therefore might
have more important work to deal with first. September 4th in the
schedule below is way too early


On Mon, 2009-08-31 at 20:21 +0100, Lukas Zeller wrote:


This way, I think that an alpha libsynthesis_srv until the end of September
could still work.



Ok, some time has passed, but today I could push a working  
libsynthesis_srv (i.e. it runs ok with some non-in-depth local testing  
and a TextDB backend) to synthesis indefero git, luz branch.


What's missing is the automake build; only the plain make -f  
server_engine_linux.mk in src/ works (kind of - output dirs probably  
need to be created manually).


There is also a server config sample in the sysync_SDK/configs  
subdirectory (that's the one I have used in my tests).


I'll have to document the correct way to use it, but it's quite simple  
and very similar to the client case. Rough sketch:


- receive request from client
- create a session with OpenSession()
- put some data into the engine buffer of that session
- start stepping with STEPCMD_GOTDATA
- step until STEPCMD_SENDDATA
- get data from buffer and send it to client
- step until STEPCMD_DONE or STEPCMD_NEEDDATA
- wait for next request if not STEPCMD_DONE
- if done, CloseSession().
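
In code, one request/response cycle of that loop might look roughly like this 
(a sketch only: the transport and buffer helpers are placeholders, error 
handling is omitted, and the STEPCMD_SENTDATA acknowledge code is from memory - 
the real calls and signatures are in the SDK headers):

  // one HTTP request/response cycle on the server side (sketch)
  void handleRequest(EngineWrapper &engine, SessionH session)
  {
    std::string request = receiveFromClient();          // raw SyncML message
    putIntoEngineBuffer(engine, session, request);      // hand it to the engine

    uInt16 stepCmd = STEPCMD_GOTDATA;                   // "data has arrived"
    do {
      engine.SessionStep(session, stepCmd);             // advance state machine
      if (stepCmd == STEPCMD_SENDDATA) {
        sendToClient(getFromEngineBuffer(engine, session)); // answer to client
        stepCmd = STEPCMD_SENTDATA;                     // acknowledge (assumed)
      }
    } while (stepCmd != STEPCMD_DONE && stepCmd != STEPCMD_NEEDDATA);

    if (stepCmd == STEPCMD_DONE)
      engine.CloseSession(session);                     // sync finished
    // on STEPCMD_NEEDDATA: keep the session and wait for the next request
  }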

More information later...

Have fun :-)

Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] schedule SyncEvolution 0.9.1 + 1.0

2009-08-31 Thread Lukas Zeller

Hello Patrick,

Thanks for the update for SyncEvolution 1.0 schedule.

I am aware that progress of libsynthesis_srv is becoming part of the  
critical path.


On Aug 31, 2009, at 15:17 , Patrick Ohly wrote:


The biggest unknown factor for 1.0 are the necessary changes in
libsynthesis. Lukas has reassured me that he has made good progress
towards those, but they have a business to run and therefore might  
have

more important work to deal with first. September 4th in the schedule
below is way too early


Yes.


- more realistic proposals welcome ;-}


I have an external project until mid-September that will probably take  
a lot of time (not entirely clear yet). So it's unrealistic that I can  
make a lot of progress until then. On the other hand, if we can take  
apart "SyncML server API discussed and usable" into "discussed" and  
"usable", we can do the discussion part earlier, so at least everyone  
will have a clear picture of what is coming and can prepare up to the  
point of actually putting it together.


What also might work is that I make a public branch of compilable, but  
not yet usable work in progress early, so environment setup (e.g.  
organizing the build process for the new libsynthesis_srv) can start  
already.


This way, I think that an alpha libsynthesis_srv until the end of September  
could still work.


How does that sound?

Lukas Zeller (l...@synthesis.ch)
-
Synthesis AG, SyncML Solutions   Sustainable Software Concepts
i...@synthesis.ch, http://www.synthesis.ch




___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution