[
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12971259#action_12971259
]
Adam Kocoloski commented on COUCHDB-968:
----------------------------------------
After mulling it over a bit more this morning, I think I might have hit upon a
simpler solution. The problems I discovered last night arose from the fact
that we take the result of a successful merge and try to merge that branch into
the remaining branches of the tree. I think we might be able to skip that step
because
a) a "double-merge" must be really rare. The only way I could come up with a
successful double-merge would be to lower the _revs_limit and introduce two
distinct edit branches that used to share a common trunk, then raise the
_revs_limit and replicate in a version of the document which has the common
trunk and shares at least one revision with each branch. But more importantly,
b) revision stemming should do the second merge for us. Stemming involves
exploding the tree into a list of paths, taking the last Limit revisions of
each one, and then merging them all back together one by one. I'm pretty sure
the "double-merge" gets covered here, i.e. the final result of the stem will be
fully merged. We always stem before completing a write, so the revision tree
on disk should never be left in a partially merged state.
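To make (b) concrete, here is a toy sketch of the stem step in Python. To be
clear, this is not the real couch_key_tree code: the forest-of-node-sets
representation and the names merge_path/stem are made up for illustration. It
just captures the path-at-a-time merging described above:

    # Toy model: a "tree" is a set of (pos, revid) nodes, a forest is a list
    # of disjoint trees, and a path is a root-to-leaf list of (pos, revid).

    def merge_path(forest, path):
        """Merge one linear edit path into a forest of revision trees.

        Every tree sharing at least one revision with the path gets unioned
        with it, so a path that touches two disconnected trees joins them
        in a single pass, with no recursive re-merge."""
        nodes = set(path)
        merged, rest = set(nodes), []
        for tree in forest:
            if tree & nodes:          # shares a revision with the path
                merged |= tree
            else:
                rest.append(tree)
        return rest + [merged]

    def stem(paths, limit):
        """Truncate each path to its last `limit` revisions, then merge the
        truncated paths back together one at a time."""
        forest = []
        for path in paths:
            forest = merge_path(forest, path[-limit:])
        return forest

    # Two leaf paths that share a trunk stay connected at limit 3 but lose
    # the trunk (and disconnect) at limit 1:
    paths = [[(1, 'a'), (2, 'b'), (3, 'c')],
             [(1, 'a'), (2, 'b'), (3, 'd')]]
    stem(paths, 3)   # -> one tree containing all four revisions
    stem(paths, 1)   # -> two disconnected single-revision trees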
I'll think it over some more and see if I can come up with a diabolical case in
which the recursive merge would work but the path-based merging in the stemmer
would not. I'd prefer not to have to deal with sibling insert branches in the
merge code; the code which merges just a single path into the list of paths is
both cleaner and faster.
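For the record, the nearest thing to a double-merge I could construct in the
toy model above is a forest whose branches have unequal depth and were
disconnected by an earlier stem at a low _revs_limit, plus a replicated
revision that carries the old common trunk. A single merge_path call
reconnects everything, which is exactly the second merge the recursive
approach had to perform explicitly. Again, this is a sketch of the idea, not
the behavior of the actual merge code:

    # Branches of unequal depth, left disconnected by an earlier stem:
    forest = [{(1, 'a')}, {(2, 'b'), (3, 'c')}]

    # A replicated revision arrives with its full history; it contains the
    # common trunk and shares at least one revision with each branch.
    incoming = [(1, 'a'), (2, 'b'), (3, 'c')]

    merge_path(forest, incoming)
    # -> one fully merged tree: {(1, 'a'), (2, 'b'), (3, 'c')}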
> Duplicated IDs in _all_docs
> ---------------------------
>
> Key: COUCHDB-968
> URL: https://issues.apache.org/jira/browse/COUCHDB-968
> Project: CouchDB
> Issue Type: Bug
> Components: Database Core
> Affects Versions: 0.10.1, 0.10.2, 0.11.1, 0.11.2, 1.0, 1.0.1, 1.0.2
> Environment: any
> Reporter: Sebastian Cohnen
> Assignee: Adam Kocoloski
> Priority: Blocker
> Fix For: 0.11.3, 1.0.2, 1.1
>
>
> We have a database which is causing serious trouble with compaction and
> replication (huge memory and CPU usage, often causing CouchDB to crash because
> all system memory is exhausted). Yesterday we discovered that db/_all_docs is
> reporting duplicated IDs (see [1]). Until a few minutes ago we thought there
> were only a few duplicates, but today I took a closer look and found 10 IDs
> that together account for 922 duplicates. Some of them have only 1 duplicate,
> others have hundreds.
> Some facts about the database in question:
> * ~13k documents, with 3-5k revs each
> * all duplicated documents are in conflict (with 1 up to 14 conflicts)
> * compaction is run on a daily basis
> * several thousand updates per hour
> * multi-master setup, with each node pull-replicating from the others
> * delayed_commits=false on all nodes
> * CouchDB versions in use: 1.0.0 and 1.0.x (*)
> Unfortunately the database's contents are confidential and I'm not allowed to
> publish them.
> [1]: Part of http://localhost:5984/DBNAME/_all_docs
> ...
> {"id":"9997","key":"9997","value":{"rev":"6096-603c68c1fa90ac3f56cf53771337ac9f"}},
> {"id":"9999","key":"9999","value":{"rev":"6097-3c873ccf6875ff3c4e2c6fa264c6a180"}},
> {"id":"9999","key":"9999","value":{"rev":"6097-3c873ccf6875ff3c4e2c6fa264c6a180"}},
> ...
> [*]
> There were two (old) servers (1.0.0) in production (already exhibiting the
> replication and compaction issues). Then two servers (1.0.x) were added and
> replication was set up to bring them in sync with the old production servers,
> since the two new servers were meant to replace the old ones (to update
> node.js application code, among other things).