[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980380#action_12980380 ]
Mike Leddy commented on COUCHDB-1021:
-------------------------------------
Thanks once again, Adam. Hopefully I'm getting closer... and learning along
the way:
m...@mike:/usr/src/couchdb$ cat debian/patches/keep_purge_state_on_compaction.patch
--- couchdb-1.0.1/src/couchdb/couch_db_updater.erl	2011-01-11 21:45:32.000000000 +0000
+++ couchdb-1.0.1.new/src/couchdb/couch_db_updater.erl	2011-01-11 22:00:07.000000000 +0000
@@ -847,7 +847,7 @@
     commit_data(NewDb4#db{update_seq=Db#db.update_seq}).

-start_copy_compact(#db{name=Name,filepath=Filepath}=Db) ->
+start_copy_compact(#db{name=Name,filepath=Filepath,header=#db_header{purge_seq=PurgeSeq}}=Db) ->
     CompactFile = Filepath ++ ".compact",
     ?LOG_DEBUG("Compaction process spawned for db \"~s\"", [Name]),
     case couch_file:open(CompactFile) of
@@ -866,9 +866,18 @@
         Retry = false,
         ok = couch_file:write_header(Fd, Header=#db_header{})
     end,
+
     NewDb = init_db(Name, CompactFile, Fd, Header),
+    NewDb2 = if PurgeSeq > 0 ->
+        {ok, PurgedIdsRevs} = couch_db:get_last_purged(Db),
+        {ok, Pointer} = couch_file:append_term(Fd, PurgedIdsRevs),
+        NewDb#db{header=Header#db_header{purge_seq=PurgeSeq,
+                                         purged_docs=Pointer}};
+    true ->
+        NewDb
+    end,
     unlink(Fd),
-    NewDb2 = copy_compact(Db, NewDb, Retry),
-    close_db(NewDb2),
+
+    NewDb3 = copy_compact(Db, NewDb2, Retry),
+    close_db(NewDb3),
     gen_server:cast(Db#db.update_pid, {compact_done, CompactFile}).
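In outline, the added branch copies the old database's purge state into the freshly created .compact file before the document copy starts, so that views keyed on purge_seq do not see it reset to zero. A minimal sketch of that logic as a standalone helper (the function name carry_purge_state is hypothetical; the record fields and the couch_db/couch_file calls follow the patch above):

```erlang
%% Sketch, not the applied code: carry the purge state from the old db
%% into the new compaction target.
carry_purge_state(#db{header=#db_header{purge_seq=PurgeSeq}}=Db,
                  NewDb, Fd, Header) ->
    if PurgeSeq > 0 ->
        %% Fetch the most recently purged id/rev pairs from the old db...
        {ok, PurgedIdsRevs} = couch_db:get_last_purged(Db),
        %% ...append them to the new file, and point the new header at them.
        {ok, Pointer} = couch_file:append_term(Fd, PurgedIdsRevs),
        NewDb#db{header=Header#db_header{purge_seq=PurgeSeq,
                                         purged_docs=Pointer}};
    true ->
        %% Nothing was ever purged; the default header is already correct.
        NewDb
    end.
```

As noted in the issue, this only covers purges that happened before compaction started; a purge issued while the copy is running would still be lost.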
> Compacting a database does not preserve the purge_seq
> -----------------------------------------------------
>
> Key: COUCHDB-1021
> URL: https://issues.apache.org/jira/browse/COUCHDB-1021
> Project: CouchDB
> Issue Type: Bug
> Components: Database Core
> Affects Versions: 1.0.1
> Environment: All platforms
> Reporter: Mike Leddy
> Priority: Minor
>
> On compacting a database the purge_seq becomes zero. As a result subsequently
> accessing any view will cause the view to be rebuilt from scratch. I resolved
> the issue for me by patching start_copy_compact, but this only works if you
> can guarantee there will be no purging done during compaction:
> --- couchdb-1.0.1/src/couchdb/couch_db_updater.erl
> +++ couchdb-1.0.1.new/src/couchdb/couch_db_updater.erl
> @@ -857,7 +857,7 @@
>
>      commit_data(NewDb4#db{update_seq=Db#db.update_seq}).
>
> -start_copy_compact(#db{name=Name,filepath=Filepath}=Db) ->
> +start_copy_compact(#db{name=Name,filepath=Filepath,header=#db_header{purge_seq=PurgeSeq}}=Db) ->
>      CompactFile = Filepath ++ ".compact",
>      ?LOG_DEBUG("Compaction process spawned for db \"~s\"", [Name]),
>      case couch_file:open(CompactFile) of
> @@ -869,7 +869,7 @@
>          couch_task_status:add_task(<<"Database Compaction">>, Name,
>              <<"Starting">>),
>          {ok, Fd} = couch_file:open(CompactFile, [create]),
>          Retry = false,
> -        ok = couch_file:write_header(Fd, Header=#db_header{})
> +        ok = couch_file:write_header(Fd, Header=#db_header{purge_seq=PurgeSeq})
>      end,
>      NewDb = init_db(Name, CompactFile, Fd, Header),
>      unlink(Fd),
> I am sure that there must be a better way of doing this...
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.