Hey,

> My latest plan is to basically save a copy of all the relevant
> information in a couch doc after every attempted sync. In this case
> the first operation would still time out, but if the client retried,
> all the relevant doc ids could be retrieved from that document and
> only a small update would have to be applied.
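
Just so I'm sure I'm picturing this right: something along the lines of the sketch below, where the checkpoint doc records the doc ids and the seq you got to, and a retry reads it back and only asks for what changed since? (The doc id, field names and db name here are just my guesses, done over plain HTTP with python-requests.)

    # Sketch only -- doc id, field names and db name are placeholders.
    import requests

    COUCH = "http://localhost:5984/mydb"

    def save_checkpoint(user_id, last_seq, doc_ids):
        """After every attempted sync, record the doc ids involved and the seq reached."""
        checkpoint_id = "sync-checkpoint-%s" % user_id
        doc = {"last_seq": last_seq, "doc_ids": doc_ids}
        existing = requests.get("%s/%s" % (COUCH, checkpoint_id))
        if existing.status_code == 200:
            doc["_rev"] = existing.json()["_rev"]   # supply _rev when updating
        requests.put("%s/%s" % (COUCH, checkpoint_id), json=doc)

    def retry_sync(user_id):
        """On retry, read the checkpoint back and only fetch what changed since last_seq."""
        checkpoint = requests.get("%s/sync-checkpoint-%s" % (COUCH, user_id)).json()
        changes = requests.get("%s/_changes" % COUCH,
                               params={"since": checkpoint["last_seq"]}).json()
        return checkpoint["doc_ids"], [row["id"] for row in changes["results"]]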

Won't the update still be large? E.g. you'd have to get all the documents listed in your checkpoint document plus whatever changes have happened since you last tried.

> Since this is only likely to be a problem when there is a long time between
> syncs, I think it could work ok (definitely not ideal, though). Does this seem sane?


Wouldn't a web cache work better/be simpler? If you're not careful there could be a lot of messages for a person, and the document you copy back into couch could get very, very large. If you did it with a simple cache you could stream the data into the cache and use the seq number and other pieces of information as an etag to expire the cached documents (maybe... more thinking out loud while listening to the government's spending review...). That doesn't seem particularly relaxing though.
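
Very roughly, what I have in mind is something like this (using the db-wide update_seq as the etag is deliberately crude -- any write would expire everything, so you'd probably want something finer-grained):

    # Thinking out loud: cache docs keyed by an etag built from the seq number.
    import requests

    COUCH = "http://localhost:5984/mydb"
    cache = {}  # doc_id -> (etag, doc)

    def get_doc_cached(doc_id):
        seq = requests.get(COUCH).json()["update_seq"]  # crude: db-wide seq as the etag
        etag = str(seq)
        hit = cache.get(doc_id)
        if hit is not None and hit[0] == etag:
            return hit[1]                               # still fresh, serve from cache
        doc = requests.get("%s/%s" % (COUCH, doc_id)).json()
        cache[doc_id] = (etag, doc)
        return doc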

Alternatively, could you not have a database per user? Then the size of _changes would be proportional to the activity of that user, as opposed to the sum of activity across all users, which means _changes should be a bit snappier.
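
Sketch of what I mean, with a made-up "userdb-" naming convention:

    # One database per user; its _changes feed only contains that user's activity.
    import requests

    COUCH = "http://localhost:5984"

    def user_db(user_id):
        return "%s/userdb-%s" % (COUCH, user_id)

    def ensure_user_db(user_id):
        requests.put(user_db(user_id))  # 201 if created, 412 if it already exists

    def user_changes(user_id, since=0):
        resp = requests.get("%s/_changes" % user_db(user_id), params={"since": since})
        return resp.json()["results"]
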
Cheers
Simon
