This question keeps coming up and I feel we should provide an accurate answer, even if the procedure is not "supported".

Any corrections to my current best effort, below?

R = (the highest revision number you want to keep, i.e. the desired new head)

1. Any WCs (or parts of WCs) at a revision > R should first be updated back to a revision <= R; otherwise those WCs will be broken, possibly in subtle ways.
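
For example, with R as above and a hypothetical WC path, in each such WC or subtree:

  $ svn update -r $R /path/to/wc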

2. on the server:
  2a. freeze the repo

Use any external means to prevent write access to the repo.

(You can't run the commands below as the script given to 'svnadmin freeze', because then the 'svnadmin recover' step would fail. I tried.)
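
One possible external means (just a sketch, and only a partial freeze: it blocks new commits but not, say, lock or revprop changes, so stopping the server entirely is more thorough) is a temporary 'start-commit' hook, installed from the repository root, that rejects everything:

  $ [ -e hooks/start-commit ] && mv hooks/start-commit hooks/start-commit.saved
  $ cat > hooks/start-commit <<'EOF'
#!/bin/sh
# Reject every commit attempt while this maintenance is in progress.
echo "Repository temporarily frozen for maintenance." >&2
exit 1
EOF
  $ chmod +x hooks/start-commit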

  2b.

Let's assume you're in the repo dir and running Bash or a similar shell.

Collect some info:

  $ R=998902  # the desired new head revision
  $ OLD_R=$(cat db/current)
  $ OLD_TXNS=$(svnadmin lstxns .)
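
A quick sanity check (my suggestion; this, like the deletion loop below, assumes 'db/current' contains just the youngest revision number):

  $ [ "$R" -lt "$OLD_R" ] || echo "R ($R) is not below the current head ($OLD_R)!" >&2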

* set the 'current' file to R, or (arguably slightly easier/safer) delete it; running 'svnadmin recover' in a later step will recreate it:

  $ rm db/current
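
If you'd rather set it than delete it, and the existing file contains only a revision number (check first; older FSFS formats store extra fields there), then something like:

  $ echo $R > db/current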

* delete the discarded revision file(s):

  $ for F in $(seq $((R+1)) $OLD_R); do rm -f db/revs/*/$F db/revprops/*/$F; done
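
To preview which files that loop will remove before running it (this, like the loop itself, assumes revisions are stored one file each, i.e. the repo has not been packed with 'svnadmin pack'):

  $ for F in $(seq $((R+1)) $OLD_R); do ls db/revs/*/$F db/revprops/*/$F; done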

* clear out any references to discarded revs in 'rep-cache.db'

You'll need the 'sqlite3' command-line utility, which you may need to install from an operating-system package; on Ubuntu the package is named 'sqlite3'.

  $ sqlite3 db/rep-cache.db "delete from rep_cache where revision > $R"
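
(If 'db/rep-cache.db' doesn't exist, rep-sharing is disabled and this step can be skipped.) To see beforehand how many entries will be removed:

  $ sqlite3 db/rep-cache.db "select count(*) from rep_cache where revision > $R"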

* delete any pending transactions

  $ svnadmin rmtxns . $OLD_TXNS
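
If there were no outstanding transactions, $OLD_TXNS is empty; you can guard against that (a minor tweak of mine):

  $ [ -n "$OLD_TXNS" ] && svnadmin rmtxns . $OLD_TXNS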

"Clean up":

  $ svnadmin recover .

(Q: Does 'recover' do any useful magic beyond recreating the 'current' file?)
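
Before un-freezing, a final check is probably worthwhile (bearing in mind the rep-cache caveat under TODO below):

  $ svnadmin verify -q .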

  2c. un-freeze the repo
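
If you used a temporary 'start-commit' hook as sketched under 2a, remove it and restore whatever was there before:

  $ rm hooks/start-commit
  $ [ -e hooks/start-commit.saved ] && mv hooks/start-commit.saved hooks/start-commit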


TODO:

'svnadmin verify' doesn't detect whether the rep-cache still contains references to discarded (too-new) revs; if such entries remain in place they would cause silent corruption during later commits. We should fix that. (I will file an issue.)

'svnadmin recover' could do the rep-cache recovery step. Any reason it should not?

- Julian
