Thanks for the reply. The map function can emit anywhere from 20 to 500 rows per document. They look like this: key: ["datapoint.name",2012,2,9,13,0,0], value: <some small number>. The reduce just averages the values, so the output of the reduce function is a single numerical value.
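In case it helps, here's roughly what the functions look like. The field names (doc.timestamp, doc.datapoints) are illustrative rather than our exact schema, and I've sketched the average the rereduce-safe way, carrying sum and count, instead of the bare number we actually return:

// Map: one row per datapoint, keyed by name plus a UTC time breakdown.
// Assumes each doc has a timestamp and a datapoints object of name -> number.
function (doc) {
  var d = new Date(doc.timestamp);
  for (var name in doc.datapoints) {
    emit([name, d.getUTCFullYear(), d.getUTCMonth() + 1, d.getUTCDate(),
          d.getUTCHours(), d.getUTCMinutes(), d.getUTCSeconds()],
         doc.datapoints[name]);
  }
}

// Reduce: average the values. Carrying sum (s) and count (n) keeps the
// rereduce pass correct when CouchDB combines uneven groups; consumers
// read the .avg field.
function (keys, values, rereduce) {
  var s = 0, n = 0, i;
  if (rereduce) {
    for (i = 0; i < values.length; i++) {
      s += values[i].s;
      n += values[i].n;
    }
  } else {
    for (i = 0; i < values.length; i++) s += values[i];
    n = values.length;
  }
  return { s: s, n: n, avg: s / n };
}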
On Sat, Feb 11, 2012 at 12:30 AM, Marcello Nuccio <[email protected]> wrote:
> I fear that it's quite hard to respond without knowing the map and
> reduce functions. Can you share some more details?
>
> Marcello
>
> 2012/2/10 C J <[email protected]>:
> > The view file for my database is growing ten times faster than my
> > database. View compaction recovers much of this used space, but I'd
> > like to minimize how often I run view compaction.
> >
> > Here's some background: I'm attempting to use couchdb as the backend
> > to a metrics and statistics system for our application. It is VERY
> > similar to statsd, if you're familiar with that. What this means is
> > that we send a new document to couch every 10 seconds. We never
> > update existing documents and never delete documents. The documents
> > can contain anywhere from 20 to 500 datapoints. Each datapoint is
> > emitted separately in the format:
> > key: ["datapoint.name",2012,2,9,13,0,0], value: <some small number>.
> >
> > Because we are writing so much data so frequently, I've found that I
> > need to keep our view warm by querying it on an interval (currently
> > every ten seconds). Functionally, this solution works great; when we
> > hit the views for real, they respond quite quickly. The problem is
> > that this view warming causes the view file to grow very quickly.
> >
> > Anyone know a way around this? FWIW, my view does have a reduce
> > function, and for the view warming query, I've tried versions with
> > reduce=true and reduce=false.
> >
> > Thanks in advance,
> > GN
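For reference, the warming itself is nothing fancy, just a scheduled GET against the view. A minimal sketch (Node.js; the database, design doc, and view names here are placeholders, not our real ones):

// Hypothetical warming loop; "metrics", "stats", and "datapoints" are
// made-up names.
var http = require('http');

function warmView() {
  // limit=1 keeps the response tiny; the GET still forces the indexer
  // to fold in any documents written since the last query.
  http.get('http://localhost:5984/metrics/_design/stats/_view/datapoints' +
           '?reduce=false&limit=1', function (res) {
    res.resume(); // drain the body; we only care about the side effect
  });
}

setInterval(warmView, 10000); // every ten seconds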
