How is it that you couldn't reproduce the scenario with 0.10 and onwards? The 
patch you supplied for the JIRA ticket you mention in the other post doesn't 
seem to be incorporated in 0.10 at all. Are there other useful countermeasures 
in 0.10?

 

Also, on the subject of your ticket, and especially Adam's comment on it: would 
buffering incoming writes in RAM during the wait help a compaction that can't 
keep up with the write load?
 
-----Original message-----
From: Robert Newson <[email protected]>
Sent: Wed 26-05-2010 22:56
To: [email protected]; 
Subject: Re: Re: Newbie question: compaction and mvcc consistency?

I succeeded in preventing compaction from completing back in the 0.9 days,
but I've been unable to reproduce it since 0.10 onwards. Compaction
retries until it succeeds (or you hit the end of the disk). I've not
managed to make it retry more than five times before it succeeds.

B.

On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <[email protected]> wrote:
> On the subject of a compaction that cannot deal with the magnitude of writes, 
> can that theory be (or has it already been) put to the test? Does anyone know 
> of a concrete setup, i.e. machine specifications relative to the number of 
> writes/second, where this happens?
>
>
> This is a theoretical obstacle, and some factual numbers would help everyone 
> avoid it in their specific setup. I'd rather not run into such a situation in 
> practice, especially if compaction is triggered by some process that monitors 
> available disk space or some other condition.
> -----Original message-----
> From: Randall Leeds <[email protected]>
> Sent: Wed 26-05-2010 22:36
> To: [email protected];
> Subject: Re: Newbie question: compaction and mvcc consistency?
>
> On Wed, May 26, 2010 at 13:29, Robert Buck <[email protected]> wrote:
>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <[email protected]> 
>> wrote:
>>> The switch to the new, compacted database won't happen so long as
>>> there are references to the old one. (r) will not disappear until (i)
>>> is done with it.
>>
>> Curious, you said "switch to the new [database]". Does this imply that
>> compaction works by creating a new database file adjacent to the old
>> one?
>
> Yes.
>
>>
>> If this is what you are suggesting, I have another question... I also
>> read that compaction process may never catch up with the writes if
>> they never let up. So along this specific train of thought, does Couch
>> perform compaction by walking through the database in a forward-only
>> manner?
>
> If I understand correctly the answer is 'yes'. Meanwhile, new writes
> still hit the old database file as the compactor walks the old tree.
> If there are new changes when the compactor finishes it will walk the
> new changes starting from the root. Typically this process quickly
> gets faster and faster on busy databases until it catches up
> completely and the switch can be made.
>
> That said, you can construct an environment where compaction will
> never finish, but I haven't seen reports of it happening in the wild.
>
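Randall's description of the catch-up behaviour can be sketched as a toy model (the names, data structures, and the halving write rate are illustrative assumptions, not CouchDB's actual code): each pass copies only the changes that arrived since the previous pass, so on a busy-but-bounded database the work per pass shrinks until the compactor catches up and the switch can be made.

```python
# Toy model of catch-up compaction (hypothetical names, not CouchDB's code).
# Writes keep landing in `db` while the compactor copies; each pass copies
# only the changes made since the last pass.

def compact(db, write_rate_per_pass=0):
    new_file = []                       # stands in for the compacted file
    copied_seq = 0
    passes = 0
    while copied_seq < db["update_seq"]:
        target = db["update_seq"]       # snapshot the current root
        new_file.extend(db["changes"][copied_seq:target])
        copied_seq = target
        passes += 1
        # Writes that arrived during this pass; the next pass copies them.
        for _ in range(write_rate_per_pass):
            db["changes"].append("doc")
            db["update_seq"] += 1
        write_rate_per_pass //= 2       # assume load eases between passes
    return new_file, passes             # caught up: switch to new_file

db = {"update_seq": 100, "changes": ["doc"] * 100}
compacted, passes = compact(db, write_rate_per_pass=16)
print(passes, len(compacted))           # prints "6 131"
```

If `write_rate_per_pass` never shrank (writes never let up relative to copy speed), the loop would never terminate, which is exactly the pathological environment described above.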
