[ https://issues.apache.org/jira/browse/COUCHDB-220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12695940#action_12695940 ]

Paul Joseph Davis commented on COUCHDB-220:
-------------------------------------------

I think we need a slight change in case a caller lies to us about the next allocation size:

From:
+    case NextAlloc of
+       0 -> NewSize = lists:max([MinAlloc, size(Bin)]);
+       _ -> NewSize = NextAlloc
+    end,

To:
+    case NextAlloc of
+       0 -> NewSize = lists:max([MinAlloc, size(Bin)]);
+       _ -> NewSize = lists:max([NextAlloc, size(Bin)])
+    end,

Otherwise we could end up writing beyond the allocated space if a caller under-reports NextAlloc.
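The change above can be sketched as a small standalone function (the function and argument names here are hypothetical, chosen only to illustrate the patch; NextAlloc is the size the writer promised, Bin the binary actually handed in):

```erlang
%% Never trust the caller's promised allocation. When NextAlloc is 0 we
%% fall back to at least MinAlloc; otherwise we still clamp to the actual
%% binary size, so a caller that under-reports NextAlloc cannot make us
%% write past the space we reserved.
next_alloc_size(0, MinAlloc, Bin) ->
    lists:max([MinAlloc, size(Bin)]);
next_alloc_size(NextAlloc, _MinAlloc, Bin) ->
    lists:max([NextAlloc, size(Bin)]).
```

With this, a caller promising 4 bytes but handing over an 80-byte binary still gets 80 bytes allocated rather than 4.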

> Extreme sparseness in couch files
> ---------------------------------
>
>                 Key: COUCHDB-220
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-220
>             Project: CouchDB
>          Issue Type: Bug
>          Components: Database Core
>    Affects Versions: 0.9
>         Environment: ubuntu 8.10 64-bit, ext3
>            Reporter: Robert Newson
>         Attachments: 220.patch, attachment_sparseness.js
>
>
> When adding ten thousand documents, each with a small attachment, the 
> discrepancy between reported file size and actual file size becomes huge:
> ls -lh shard0.couch
> 698M 2009-01-23 13:42 shard0.couch
> du -sh shard0.couch
> 57M   shard0.couch
> On filesystems that do not support write holes, this will cause an order of 
> magnitude more I/O.
> I think it was introduced by the streaming attachment patch as each 
> attachment is followed by huge swathes of zeroes when viewed with 'hd -v'.
> Compacting this database reduced it to 7.8 MB, indicating other sparseness 
> besides attachments.
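The ls/du discrepancy quoted above is the signature of a sparse file, and can be reproduced on any filesystem with hole support (the file name here is arbitrary): seeking past end-of-file and writing a single byte leaves a hole that counts toward the apparent size but not the allocated blocks.

```shell
# Write 1 byte at an offset of 100 MiB, leaving a 100 MiB hole before it.
dd if=/dev/zero of=sparse.dat bs=1 count=1 seek=$((100 * 1024 * 1024)) 2>/dev/null
ls -lh sparse.dat   # apparent size: around 100M
du -sh sparse.dat   # allocated size: a few KB at most
```

On ext3 the hole costs almost nothing on disk, but copying or streaming the file through a tool that does not preserve holes materializes all those zero bytes, which is the order-of-magnitude I/O penalty the report describes.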

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
