On the subject of compaction that cannot keep up with the volume of writes:
can that theory be put to the test, or has it been already? Does anyone know
of a concrete setup, i.e. machine specifications relative to the number of
writes/second, at which this happens?

This is a theoretical obstacle that could use some factual numbers to help
everyone avoid it in their specific setup. I would rather not run into such a
situation in practice, especially if compaction is triggered by a process that
monitors available disk space or some other condition.
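
For anyone who wants to put numbers on this, below is a rough load-test
sketch. It is Python against the documented HTTP API (PUT /{db},
POST /{db}/_bulk_docs, POST /{db}/_compact, GET /{db}); the database name,
payload size, and batch sizes are arbitrary test parameters, and disk_size
is the 1.x-era field name for the file size in the GET /{db} response.

    # Sustain a write load and watch whether compaction finishes.
    # Assumes a CouchDB on localhost:5984; "loadtest", the 1 KB payload
    # and the 1000-doc batches are arbitrary test parameters.
    import uuid
    import requests

    BASE = "http://localhost:5984"
    DB = "loadtest"

    def bulk(n):
        # One _bulk_docs batch of n small documents with random ids.
        docs = [{"_id": uuid.uuid4().hex, "pad": "x" * 1024} for _ in range(n)]
        requests.post(f"{BASE}/{DB}/_bulk_docs", json={"docs": docs})

    requests.put(f"{BASE}/{DB}")   # create the test database
    for _ in range(100):           # seed ~100k docs so compaction has work to do
        bulk(1000)

    requests.post(f"{BASE}/{DB}/_compact",
                  headers={"Content-Type": "application/json"})

    while True:
        bulk(1000)  # keep writes flowing while the compactor runs
        info = requests.get(f"{BASE}/{DB}").json()
        print(info["update_seq"], info["disk_size"], info["compact_running"])
        if not info["compact_running"]:
            break   # compaction finished despite the sustained load

Logging update_seq and disk_size over time at increasing batch rates would
give exactly the kind of machine-vs-writes/second numbers asked for above.
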
-----Original message-----
From: Randall Leeds <[email protected]>
Sent: Wed 26-05-2010 22:36
To: [email protected]; 
Subject: Re: Newbie question: compaction and mvcc consistency?

On Wed, May 26, 2010 at 13:29, Robert Buck <[email protected]> wrote:
> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <[email protected]> 
> wrote:
>> The switch to the new, compacted database won't happen so long as
>> there are references to the old one. (r) will not disappear until (i)
>> is done with it.
>
> Curious, you said "switch to the new [database]". Does this imply that
> compaction works by creating a new database file adjacent to the old
> one?

Yes.

>
> If this is what you are suggesting, I have another question... I also
> read that the compaction process may never catch up with the writes if
> they never let up. So along this specific train of thought, does Couch
> perform compaction by walking through the database in a forward-only
> manner?

If I understand correctly, the answer is 'yes'. Meanwhile, new writes
still hit the old database file as the compactor walks the old tree.
If there are new changes when the compactor finishes, it walks those
new changes starting from the root. Each pass typically has less to
copy than the one before, so on busy databases this process speeds up
until it catches up completely and the switch can be made.
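
To make that loop concrete, here is a self-contained toy model in Python.
It is purely illustrative: CouchDB's real compactor is Erlang, and ToyDb
and every other name in it are made up for the sketch.

    # Toy model of the catch-up loop: copy everything, then re-walk
    # whatever was written during the pass, until a pass finds nothing.
    class ToyDb:
        """A database modeled as an append-only list of (seq, doc) pairs."""
        def __init__(self):
            self.updates = []
            self.update_seq = 0

        def write(self, doc):
            self.update_seq += 1
            self.updates.append((self.update_seq, doc))

        def changes_since(self, seq):
            return [u for u in self.updates if u[0] > seq]

    def compact(old, new, writes_during_pass):
        seq = 0
        while True:
            pending = old.changes_since(seq)
            if not pending:
                return new  # caught up: safe to switch readers over
            for s, doc in pending:
                new.write(doc)
                seq = s
            writes_during_pass(old)  # writers still hit the old file

    old, new = ToyDb(), ToyDb()
    for i in range(1000):
        old.write({"n": i})
    backlog = iter([100, 10, 1])  # each pass leaves less to copy
    compact(old, new,
            lambda db: [db.write({}) for _ in range(next(backlog, 0))])
    assert new.update_seq == old.update_seq

The shrinking backlog is the "faster and faster" part; if writers add more
per pass than a pass can copy, this loop never terminates, which is the
pathological case below.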

That said, you can construct an environment where compaction will
never finish, but I haven't seen reports of it happening in the wild.
