I ran compaction via the button in _utils. When I clicked the
button, the spinner in the UI never stopped, but I did check that
compact_running was "false" for the DB in question, so I assumed it
had finished. I suppose an issue with _utils could instead mean it
never started? Is there some way to distinguish the two cases?
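
One thing I could try is kicking off compaction from the HTTP API
instead and watching /_active_tasks - a running compaction should
show up there, so if it never appears, it never started. A rough
sketch (Python with the requests library; the server URL and DB
name are placeholders, and this assumes CouchDB 1.2+, where
compactions are listed in /_active_tasks):

    import time
    import requests  # third-party HTTP client

    COUCH = "http://localhost:5984"  # placeholder server URL
    DB = "mydb"                      # placeholder database name

    # Kick off compaction; CouchDB answers 202 Accepted if it starts.
    resp = requests.post("%s/%s/_compact" % (COUCH, DB),
                         headers={"Content-Type": "application/json"})
    print("accepted:", resp.status_code)

    # A running compaction appears in /_active_tasks with type
    # "database_compaction"; poll until it is gone from there and
    # the DB info no longer reports compact_running.
    while True:
        tasks = requests.get("%s/_active_tasks" % COUCH).json()
        mine = [t for t in tasks
                if t.get("type") == "database_compaction"
                and t.get("database") == DB]
        info = requests.get("%s/%s" % (COUCH, DB)).json()
        if not mine and not info["compact_running"]:
            break
        time.sleep(1)

    print("disk_size after compaction:", info["disk_size"])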

On Mon, Jun 29, 2015 at 5:49 PM, Adam Kocoloski <[email protected]> wrote:
> Database compaction should absolutely recover that space. Can you share a few 
> more details? Are you sure the compaction completes successfully? Cheers,
>
> Adam
>
>> On Jun 29, 2015, at 8:19 PM, Travis Downs <[email protected]> wrote:
>>
>> I have an issue where I'm posting smallish (~500-byte) documents
>> to CouchDB one at a time, yet the DB size is about 10x larger than
>> expected (i.e., 10x larger than the aggregate size of the documents).
>>
>> Documents are not deleted or modified after posting.
>>
>> It seems that every individual (unbatched) write takes 4K, because
>> the append-only algorithm writes 2 x 2K blocks for each
>> modification, as documented here:
>>
>> http://guide.couchdb.org/draft/btree.html
>>
>> OK, that's fine: at ~500 bytes per document, 4K per write is
>> roughly an 8x overhead, which lines up with the bloat I'm seeing.
>> What I don't understand is why the "compact" operation doesn't
>> recover this space.
>>
>> I do recover the space if I replicate this DB somewhere else: the
>> full copy takes about 10x less space. I would expect compaction to
>> accomplish the same thing in place. Is there some option I'm
>> missing?
>>
>> Note that I cannot use bulk writes since the documents are posted one
>> by one by different clients.
>>
>
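
For reference, this is roughly how I arrived at the ~10x figure,
using the doc_count and disk_size fields from the DB info document
(same placeholder names and requests library as above):

    import requests  # third-party HTTP client

    COUCH = "http://localhost:5984"  # placeholder server URL
    DB = "mydb"                      # placeholder database name

    info = requests.get("%s/%s" % (COUCH, DB)).json()
    per_doc = float(info["disk_size"]) / info["doc_count"]
    print("bytes per doc on disk: %.0f" % per_doc)
    # With ~500-byte documents, a value near 4096 points at the
    # 2 x 2K append blocks rather than the document contents.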
