2009/5/14 Damien Elmes <[email protected]>:
>
>>> I've also contacted Damien from Anki, but he does not seem to be too
>>> interested...
>>> (http://groups.google.com/group/ankisrs/browse_thread/thread/410997e06be22302).
>>>
>> We can start by looking at Anki's implementation. If it's good
>> enough and only requires a few changes, we can use it, and Damien
>> might be more interested in joining us.
>> In any case, it makes sense to look at a working implementation
>> first so that we don't reinvent the wheel.
>
> The bulk of it was written over a year ago, so I'm sure there's plenty
> of room for improvement. One thing I need to tackle is the initial
> sync or a full update - it performs quite well for daily syncs, but a
> full sync on a deck of 30,000 cards takes a large amount of memory,
> since it unpacks the JSON into a Python object tree. I will probably
> address this by sending across the compressed deck verbatim, instead
> of bundling it up into a sync message. Of course, this approach would
> never be compatible with other implementations. An alternative would
> be an incremental JSON parser, but that may be complicated to
> implement.
>
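Streaming the compressed deck verbatim could look roughly like this
sketch: compress and copy the deck file in fixed-size chunks so memory
use stays constant regardless of deck size. (The function name,
`deck_path`, and `wfile` are illustrative; `wfile` stands for whatever
writable binary stream the sync transport provides, e.g. an HTTP
response body.)

```python
import gzip
import shutil

def send_full_deck(deck_path, wfile, chunk_size=64 * 1024):
    # Stream the deck file gzip-compressed in fixed-size chunks,
    # so a 30,000-card deck never has to be held in memory at once.
    # Sketch only: names are assumptions, not Anki's actual API.
    with open(deck_path, "rb") as src:
        with gzip.GzipFile(fileobj=wfile, mode="wb") as dst:
            shutil.copyfileobj(src, dst, chunk_size)
```

The trade-off Damien mentions still applies: the receiver gets an
opaque deck file rather than structured sync messages, so other
implementations would have to understand that file format directly.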
I like the idea of using JSON.
What do you think about using one card as the minimal unit of sync
data? That would let us synchronize lazily, one card at a time. On top
of this we can group cards into blocks and/or compress them (cards or
blocks) to save bandwidth, which would make the initial sync of big
decks fast as well.
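A minimal sketch of the card-per-unit idea with block grouping and
compression might look like this (the card fields and function names
are invented for illustration, not Anki's or Mnemosyne's actual
schema):

```python
import json
import zlib

def pack_block(cards):
    # Group a list of card dicts into one compressed sync block.
    # Each card is serialized to JSON on its own line, so the receiver
    # can parse the block back one card at a time rather than building
    # one huge object tree.
    payload = "\n".join(json.dumps(card, sort_keys=True) for card in cards)
    return zlib.compress(payload.encode("utf-8"))

def unpack_block(blob):
    # Inverse of pack_block: yield cards lazily, one at a time.
    for line in zlib.decompress(blob).decode("utf-8").splitlines():
        yield json.loads(line)
```

Because blocks are independent, an initial sync of a big deck could be
split into many such blocks and transferred (or resumed) incrementally.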

-- 
BR,
Ed

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"mnemosyne-proj-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/mnemosyne-proj-users?hl=en
-~----------~----~----~----~------~----~------~--~---
