Hey Dan,

Have you seen the maxRevTreeDepth
<https://github.com/couchbase/couchbase-lite-ios/blob/15e51086b9cedc0963212fb6a2a31f9bc72f811a/Source/API/CBLDatabase.h#L75>
option?

(Somehow this got left out of our API docs
<http://developer.couchbase.com/mobile/develop/references/couchbase-lite/couchbase-lite/database/database/index.html>;
I've filed an internal ticket to fix that.)
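For reference, a minimal sketch of capping the revision-tree depth on the client, assuming the Couchbase Lite iOS 1.x `CBLManager`/`CBLDatabase` API (the database name here is hypothetical; pruning of old revisions takes effect on the next compaction):

```objc
#import <CouchbaseLite/CouchbaseLite.h>

NSError *error;
CBLDatabase *db = [[CBLManager sharedInstance] databaseNamed:@"mydb"
                                                       error:&error];
// Keep at most 20 revisions of each document's history; revisions
// deeper than this are pruned when the database is compacted.
db.maxRevTreeDepth = 20;
[db compact:&error];
```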

On Sun, Oct 19, 2014 at 12:44 PM, Daniel Polfer <[email protected]>
wrote:

> We have a document with >2200 revisions that is causing performance issues
> when synchronized with clients. Occasionally it even appears to stall the
> gateway without necessarily killing it: other requests succeed, but minutes
> can go by without noticeable activity in the logs (the sync does eventually
> finish, sometimes after more than 6 minutes).
>
> As pulled from the gateway, we get a document with a single attachment
> reference and a JSON body just over 25KB. If we pull that same document
> from Couchbase, we'll get a 7.84MB document. Within that document 6.55MB is
> associated with _sync.history.bodies and 1.57MB from
> _sync.history.channels. There are about 187 non-empty entries in
> history.bodies, and the current sequence number for the document is just
> over 1900.
>
> We know there is a high conflict count, but the difficult part is that
> even when we believe we have resolved all conflicts through the client and
> then compacted (basically deleting any conflict revision not on the "main"
> branch) we cannot seem to cut the size on the underlying document. In fact,
> it grows. We have even tried on a test system to set the rev limits and
> then updated the document before compacting the database (and these are
> compactions through the client and gateway, not Couchbase). It appears that
> the branches all have to be maintained at some level within the document
> itself. The end result is the performance hit, but also the inability to use
> the document in Couchbase views without substantially raising the default
> document size limits.
>
> The database in question has been in use through several versions of both
> Couchbase and Couchbase Lite, and is currently hosted on a server running
> Couchbase Server 2.5.1-1083-rel and Couchbase Sync Gateway 1.0.2-9. Tens of
> iOS clients are currently using the database, and complaints of performance
> issues began while still using Sync Gateway 0.81.
>
> I assume we're missing something truly obvious here, and we would love to
> understand the issue and repair the document rather than replace it. There
> are several use cases coming where we expect to have very high revision
> counts with frequent updates from multiple clients, and we'd like to be
> confident we can manage this properly.
>
> Are there use cases we should be careful of that would cause the
> underlying document to grow like this and not be a candidate for reduction
> during a compaction (even when resolving conflicts)? Any other ideas or
> doc pointers would be greatly appreciated. Thanks for an active board.
>
> Dan Polfer
>
> --
> You received this message because you are subscribed to the Google Groups
> "Couchbase Mobile" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/mobile-couchbase/4a9e1b75-1c9d-46f7-b96e-6ffeeb535e7b%40googlegroups.com
> <https://groups.google.com/d/msgid/mobile-couchbase/4a9e1b75-1c9d-46f7-b96e-6ffeeb535e7b%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>

