On Wed, Mar 24, 2010 at 15:58, Paul Davis wrote:
> On Wed, Mar 24, 2010 at 5:40 AM, Roessner, Silvester
> <[email protected]> wrote:
>> On Wed, Mar 24 2010 at 10:06 Benoit Chesneau wrote:
>>
>> I tried it with consecutive ids as well, but the database is still 4
>> times as big as the payload.
>> I think I must live with this fact.
>>
>>
>> Silvester
>>
>
> The consecutive ids only make a difference when using _bulk_docs,
> because it removes the holes in the append-only btree.
>
> Did you check to see what the file size was after a compaction? It
> should be significantly better.


Hi Paul,

I just compacted the database containing 100 documents with consecutive
ids.
The size of the database stayed the same.
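For reference, compaction was triggered with a plain POST to the database's
_compact endpoint (the standard CouchDB compaction API). A minimal sketch
that only builds the request, assuming the default localhost:5984:

```python
from urllib.request import Request

db = "jobs"

# POSTing an empty JSON body to /<db>/_compact starts compaction;
# the request is constructed here but not sent.
req = Request(
    f"http://localhost:5984/{db}/_compact",
    data=b"",
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# → POST http://localhost:5984/jobs/_compact
```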

Here is what my test looks like.
#document is just a record I use to store metadata.
cma_document:save/1 saves a #document with the given _id in CouchDB.

Silvester


=== Code

%% Write (End - Start + 1) documents with consecutive integer ids,
%% each carrying the same UTF-8 file content as its payload.
test6(Start, End) when End >= Start ->
    Database = "jobs",
    Empty = cma_document:new(Database),
    {ok, UTF8} = file:read_file(
        "V:/Product/CalcEngine/interface/OneC/Version-1.0/test/CzvRx/Output-localhost/13.onec.js"),
    Filled = cma_document:set_content(Empty, UTF8),
    test6_loop(Filled, Start, End).

test6_loop(Document, Start, End) when Start =< End ->
    %% Print a progress counter, ten per line.
    io:format("~p ", [Start]),
    if Start rem 10 =:= 0 ->
           io:format("~n");
       true ->
           ok
    end,
    %% Save one document per iteration, using the counter as the _id.
    {ok, _} = cma_document:save(Document#document{id = integer_to_list(Start)}),
    test6_loop(Document, Start + 1, End);
test6_loop(_, _, _) ->
    io:format("~n").

=== Code
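For comparison, a sketch of the same 100 documents going in as a single
_bulk_docs request instead of 100 individual saves. The "jobs" database
name comes from the test above; the payload shape {"docs": [...]} is the
standard CouchDB bulk-document API, and the content string here merely
stands in for the file payload:

```python
import json

content = "...file contents..."  # placeholder for the 13.onec.js payload

# One document per consecutive integer id, 1 through 100.
docs = [{"_id": str(i), "content": content} for i in range(1, 101)]
payload = json.dumps({"docs": docs}).encode("utf-8")

# POST `payload` to http://localhost:5984/jobs/_bulk_docs with
# Content-Type: application/json to write all documents in one request.
print(len(docs), json.loads(payload)["docs"][0]["_id"])
# → 100 1
```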
