On 03/02/11 15:14, Adam Kocoloski wrote:
Hi Wayne, I don't think there's a satisfactory solution to this at the moment, which is why I've
been working with Bob Dionne to add more detailed statistics to help inform that kind of
decision-making. The idea is to add a new field to the response to GET /dbname (and GET
/db/_design/dname/_info) that will report the number of bytes allocated for storage of "user
data", i.e. the latest versions of document bodies and attachments in databases, and the KV pairs
and reductions in view indexes. You could then write a script to trigger compaction if the ratio
data_size / disk_size drops below a threshold.
Bob has a pull request in process for BigCouch; the changes he's making should
apply to CouchDB as well with a little tweaking.
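A minimal sketch of such a compaction script, assuming a local CouchDB on the default port and
assuming the proposed "data_size" field ends up in the GET /dbname response under that exact name
(the _compact endpoint and disk_size field already exist; the threshold value is arbitrary):

    #!/usr/bin/env python
    """Sketch: trigger compaction when data_size / disk_size falls below a threshold."""
    import json
    import urllib.request

    COUCH = "http://localhost:5984"   # assumption: local CouchDB, default port, no auth
    THRESHOLD = 0.5                   # compact when less than half the file is live data

    def db_info(db):
        # GET /dbname returns disk_size today; data_size is the proposed new field
        with urllib.request.urlopen("%s/%s" % (COUCH, db)) as resp:
            return json.load(resp)

    def compact(db):
        # POST /dbname/_compact with a JSON content type starts compaction
        req = urllib.request.Request(
            "%s/%s/_compact" % (COUCH, db),
            data=b"",
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)

    def maybe_compact(db):
        info = db_info(db)
        data_size = info.get("data_size")    # proposed field; may not exist yet
        disk_size = info.get("disk_size", 0)
        if not data_size or not disk_size:
            return                           # can't compute the ratio; skip
        if float(data_size) / disk_size < THRESHOLD:
            compact(db)

    if __name__ == "__main__":
        maybe_compact("dbname")

The same ratio check could be run against GET /db/_design/dname/_info to decide when to compact a
view index, once the corresponding field is reported there.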
Adam, Thanks. We'll use some other metric (probably time) to decide
when to compact a database, until something better is possible. And
thanks for mentioning BigCouch--it's on my list of "things to look at."
Wayne Conrad