Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "ReleaseNotice1.0.0RepairTool" has been deleted by JoanTouzet:

Migrated to

- This is a page where we document the development of the 1.0.0 data loss 
recovery tool.
- Damien says:
- > We need fixup code to try to find the lost btree roots (by id and by seq) 
and add them to the header and write it.
- > I think there should be a fix-up flag in the ini; when it is set, at startup couchdb will scan all the databases, and for any that don't have a valid header at the end, it scans backward, looking for the roots of the by_id and by_seq btrees. Then it adds those roots to the first header it finds and writes it.
- > It should attempt to load the roots using file:pread_term, stepping back one byte at a time, looking first for the by_seq index root. Most of the attempts will be bad reads or term_to_binary errors and will throw an exception. But some of the reads will produce a valid term, and it will do a pattern match to check successfully loaded structures for the proper content to ensure it's the right tree. Then it looks for the by_id root, doing the same.
- > Then using the most recent header, add the root addresses to the header and 
rewrite it. The database is restored.
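- Damien's backward scan can be illustrated in miniature. The sketch below is a toy model, not CouchDB code: Python's pickle stands in for Erlang's term_to_binary/pread_term, and the record layout and names (`by_seq_root`, `by_id_root`) are invented for the example; only the scanning idea (step back a byte at a time, catch failed decodes, pattern-match the successful ones) mirrors the description above.

```python
import pickle

def build_toy_file():
    """An append-only file whose final header was never written: a doc,
    then a by_seq root, then a by_id root, then a torn write. The record
    layout is invented for this illustration."""
    buf = pickle.dumps({"doc": "some document body"})
    buf += pickle.dumps({"kind": "by_seq_root", "pointer": 0})
    buf += pickle.dumps({"kind": "by_id_root", "pointer": 0})
    buf += b"\x00garbage-from-a-torn-write"
    return buf

def scan_backwards_for_root(buf, kind):
    """Step back one byte at a time from the end of the file, trying to
    deserialize at every offset. Most attempts are bad reads and raise;
    a few yield valid terms, which we pattern-match to make sure we
    really found the wanted root."""
    for offset in range(len(buf) - 1, -1, -1):
        try:
            term = pickle.loads(buf[offset:])
        except Exception:
            continue  # bad read / not a serialized term at this offset
        if isinstance(term, dict) and term.get("kind") == kind:
            return offset, term
    return None

buf = build_toy_file()
offset, seq_root = scan_backwards_for_root(buf, "by_seq_root")
```

- Real recovery would then, as described, add both recovered roots to the most recent header and rewrite it.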
- Chris replies:
- > Once that is working, it should be a straightforward enhancement to trigger 
the compactor (minus deletion of the source file) to run from all (or better 
yet, just those missing a corresponding header) valid btree roots in the file 
(making snapshot databases of any state which might have been lost due to a 
restart). If a couch has been restarting frequently, the recovery might require 
creating a number of snapshots and then using the replicator to merge all the 
snapshots back into one database.
- Jan sums up:
- > There are two scenarios so far we need to cover: a) the user's been using couch, it got restarted, and data is "lost" (i.e. the data is at the end of the file, but the header isn't written); and b) the user has been using couch, data got "lost", and the user kept using couch, so there is maybe data at the end of the file without a valid header, then some valid data, then some more unreferenced data. For a), simply stopping at the first valid header and doing the restore is ok. b) is harder, as it is a potential full db scan and we can't just fix up the tree; in that case mikeal suggested we should copy these docs to a new database to later replicate from.
- == Mailing list discussion ==
- Catching up with progress as you drink your morning coffee? See the first post on the tool and the subsequent thread.
- == Git status ==
- As of 10 August 2010.
- {{attachment:git.png}}
- == Proposed Recovery Procedure (user side of things) ==
-  1. Make a backup of everything.
-  2. Stop CouchDB 1.0.0.
-  3. Install CouchDB 1.0.1.
-    3.1. Point database_dir at the original installation.
-  4. Set in local.ini:
- {{{
-     [couchdb]
-     recovery = true
- }}}
-  5. Start CouchDB 1.0.1.
-  6. Wait until CouchDB says "Time to Relax".
-    6.1. CouchDB will have switched the config to `recovery = false`.
-    6.2. CouchDB will have created `/lost_and_found_$dbname_$timestamp` for every db that has orphaned data.
-  7. Go to `/lost_and_found_$dbname_$timestamp` and check whether any lost data was recovered.
-  8. Replicate `/lost_and_found_$dbname_$timestamp` to `/$dbname` to restore all data.
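- Step 8 can be scripted. CouchDB's replicator is driven by POSTing `{"source": ..., "target": ...}` to `/_replicate`; the helper below only builds that body from a lost-and-found database name. The function name is ours, and it assumes the `$timestamp` suffix is purely numeric, which is a guess about the tool's naming.

```python
import json
import re

def replicate_payload(lost_and_found_db):
    """Derive the original database name from a lost_and_found name
    (step 6.2's pattern) and build the body for POST /_replicate.
    Assumes a purely numeric timestamp suffix."""
    m = re.match(r"lost_and_found_(.+)_\d+$", lost_and_found_db)
    if m is None:
        raise ValueError("not a lost_and_found database: %r" % lost_and_found_db)
    return json.dumps({"source": lost_and_found_db, "target": m.group(1)})

payload = replicate_payload("lost_and_found_my_db_1281448800")
```

- One would then POST the payload to a default install, e.g. `curl -X POST -H 'Content-Type: application/json' -d "$payload" http://localhost:5984/_replicate`.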
- = A generalized repair algorithm =
- == The basic repair algorithm ==
- {{{
-     for Node in BTreeNodes
-         run the compactor to create a lost and found database file
-         that treats Node as a root
-     use the replicator to merge all the lost and found database files
-     into one database
-     Done!
- }}}
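- The loop above can be modelled in a few lines. This is purely illustrative, not CouchDB's actual code: nodes are plain dicts, "compaction" is just reachability from a root, and "replication" is a last-write-wins merge standing in for real revision handling.

```python
# Toy node table: offset -> node. Each node holds docs plus pointers to
# child nodes; nodes 2 and 3 play the part of recovered btree roots.
nodes = {
    1: {"docs": {"a": 1}, "children": []},
    2: {"docs": {"b": 1}, "children": [1]},  # root of an older lineage
    3: {"docs": {"a": 2}, "children": []},   # newer state of doc "a"
}

def compact_from_root(root):
    """'Run the compactor' treating one node as a root: copy every doc
    reachable from it into a fresh lost-and-found db (a plain dict)."""
    lost_and_found, stack = {}, [root]
    while stack:
        node = nodes[stack.pop()]
        lost_and_found.update(node["docs"])
        stack.extend(node["children"])
    return lost_and_found

def merge(lost_and_found_dbs):
    """'Use the replicator' to fold the lost-and-found dbs into one;
    later dbs win conflicts in this toy model."""
    merged = {}
    for db in lost_and_found_dbs:
        merged.update(db)
    return merged

recovered = merge(compact_from_root(r) for r in [2, 3])  # {"a": 2, "b": 1}
```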
- It is an optimization to prune the set BTreeNodes down to the minimum number of nodes that loses no data. If we ran a compaction for every single node in the original file, we'd end up with the same final result as if we pruned optimally; the difference is that we'd use exponentially more disk space in the process.
- [Another space-saving optimization would be to keep only one resident lost+found db around, with one active l+f db per compaction, and replicate the active one into the resident one after each compaction. I don't know how hard it'd be to find the minimal set of nodes for the other optimization, so this might be a reasonable alternative to keep disk usage in bounds, although not as optimal -- Jan]
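- Jan's variant can be sketched the same way: keep one resident lost+found db, create one active db per compaction, and fold the active db in before moving to the next root. Again a toy model; `compact` here is any per-root compaction function, and the stub below is invented for the example.

```python
def repair_with_resident_db(roots, compact):
    """One resident lost+found db; each compaction produces an active
    db that is immediately 'replicated' (merged, in this toy model)
    into the resident db and could then be deleted, bounding disk
    usage at roughly one extra database instead of one per root."""
    resident = {}
    for root in roots:
        active = compact(root)   # one active l+f db per compaction
        resident.update(active)  # replicate it into the resident db
    return resident

# Stub compactor: pretend each root recovers a single doc.
recovered = repair_with_resident_db(["r1", "r2"], lambda r: {r: "doc"})
```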
- == Finding only the btree nodes we need ==
- It is hard to tell roots apart from other nodes. The way to do it is to work backwards through the file, keeping track of the pointers in nodes: if a node matches a pointer you've already seen, you don't need to capture it; if a node has not been pointed to, capture it. This gives you all the roots.
- This means we can also keep track of the pointers in headers: if a root is pointed to by any header, we can assume that any later roots in the file descending from that header already take its state into account. Therefore any root pointed to by a header can be disregarded.
- When we are done, we will have a list of roots that are not reflected by the last valid header in the file (some of these may be after the last valid header), **nor are they reflected by later roots in the same "lineage"**. This is the set of nodes that we need to use as the starting point for compactions.
- **The part in bold is false and means we will collect unneeded roots.** I'm currently looking into a way to distinguish superseded roots from final roots.
- We will end up with as many "lost and found" database files as we have roots here. Once we have them, we can begin the compact-and-replicate routine.
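- The pointer-tracking pass can be sketched concretely on an invented record layout (offset, kind, pointers-to-earlier-offsets): walk the file backwards, skip nodes something later points at, and skip roots a header already reflects. This sketch deliberately does not attempt to solve the superseded-roots problem flagged above.

```python
# File contents in ascending offset order; the layout is invented
# purely for this sketch, not CouchDB's on-disk format.
records = [
    (0, "node", []),
    (1, "node", []),
    (2, "node", [0]),    # a root, but the header at 3 points at it
    (3, "header", [2]),  # last valid header in the file
    (4, "node", [1]),    # a root written after the header was lost
]

def find_needed_roots(records):
    """Walk backwards through the file keeping track of pointers seen
    so far. A node is a needed root only if nothing later points at it
    and no header already reflects it."""
    pointed_to, covered_by_header, needed = set(), set(), []
    for offset, kind, pointers in reversed(records):
        if kind == "header":
            covered_by_header.update(pointers)
        else:
            if offset not in pointed_to and offset not in covered_by_header:
                needed.append(offset)
            pointed_to.update(pointers)
    return needed  # for the table above: [4]
```

- Each offset returned would then seed one compaction in the repair loop described earlier.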
