If documents are too small, compaction cannot reclaim all of the disk
space. See this thread with a similar question:
http://qnalist.com/questions/5836043/couchdb-database-size

The question of why is still open for me, but at least there is a solution.
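
A quick back-of-the-envelope check of the ~10x inflation reported below, assuming each unbatched write costs roughly 4K in the append-only file (2 x 2K blocks, per the btree guide linked in the quoted mail). The document count and exact per-write cost here are illustrative assumptions, not measured values:

```python
# Sketch: expected vs. actual on-disk size for many small unbatched writes.
# Assumption: each unbatched write appends ~4096 bytes (2 x 2K blocks);
# the real overhead varies by CouchDB version and document contents.

DOC_SIZE = 500        # approximate size of each posted document, in bytes
WRITE_COST = 4096     # assumed cost per unbatched write: 2 x 2K blocks
NUM_DOCS = 10_000     # hypothetical number of documents

expected = NUM_DOCS * DOC_SIZE   # aggregate size of the documents themselves
actual = NUM_DOCS * WRITE_COST   # file size under the per-write assumption

print(f"inflation factor: {actual / expected:.1f}x")
```

Under these assumptions the factor comes out to about 8x, in the same ballpark as the ~10x the original poster observed.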
--
,,,^..^,,,


On Tue, Jun 30, 2015 at 3:49 AM, Adam Kocoloski <[email protected]> wrote:
> Database compaction should absolutely recover that space. Can you share a few 
> more details? Are you sure the compaction completes successfully? Cheers,
>
> Adam
>
>> On Jun 29, 2015, at 8:19 PM, Travis Downs <[email protected]> wrote:
>>
>> I have an issue where I'm posting single smallish (~500 bytes)
>> documents to couchdb, yet the DB size is about 10x larger than
>> expected (i.e., 10x larger than the aggregate size of the documents).
>>
>> Documents are not deleted or modified after posting.
>>
>> It seems like what is happening is that every individual (unbatched)
>> write always takes 4K due to the nature of the append-only algorithm,
>> which writes 2 x 2K blocks for each modification, as documented here:
>>
>> http://guide.couchdb.org/draft/btree.html
>>
>> OK, that's fine. What I don't understand is why the "compact"
>> operation doesn't recover this space?
>>
>> I do recover the space if I replicate this DB somewhere else. The full
>> copy takes about 10x less space. I would expect compaction to be able
>> to achieve the same result in place. Is there some option I'm missing?
>>
>> Note that I cannot use bulk writes since the documents are posted one
>> by one by different clients.
>>
>
